--- abstract: 'We report the results of a search for molecular oxygen ([O$_2$]{}) toward the Orion Bar, a prominent photodissociation region at the southern edge of the H$\,$II region created by the luminous Trapezium stars. We observed the spectral region around the frequency of the [O$_2$]{} $\rm N_J =\;$[3$_3\,$–$\,$1$_2$]{} transition at 487 GHz and the [5$_4\,$–$\,$3$_4$]{} transition at 774 GHz using the Heterodyne Instrument for the Far Infrared on the [*Herschel Space Observatory*]{}. Neither line was detected, but the 3$\sigma$ upper limits established here translate to a total line-of-sight [O$_2$]{} column density $<\:$1.5$\,\times\,$10$^{16}$ [cm$^{-2}$]{} for an emitting region whose temperature is between 30$\:$K and 250$\:$K, or $\,<\;$1$\,\times\,$10$^{16}$ [cm$^{-2}$]{} if the [O$_2$]{} emitting region is primarily at a temperature of ${\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle <}{\sim}}$}}\normalsize}\,$100$\:$K. Because the Orion Bar is oriented nearly edge-on relative to our line of sight, the observed column density is enhanced by a factor estimated to be between 4 and 20 relative to the face-on value. Our upper limits imply that the face-on [O$_2$]{} column density is less than 4$\,\times\,$10$^{15}$ [cm$^{-2}$]{}, a value that is below, and possibly well below, model predictions for gas with a density of 10$^4$[$\,$–$\,$]{}10$^5$ [cm$^{-3}$]{} exposed to a far ultraviolet flux 10$^4$ times the local value, conditions inferred from previous observations of the Orion Bar. 
The discrepancy might be resolved if: (1) the adsorption energy of O atoms to ice is greater than 800$\:$K; (2) the total face-on [[*A*]{}$_{\rm V}$]{} of the Bar is less than required for [O$_2$]{} to reach peak abundance; (3) the [O$_2$]{} emission arises within dense clumps with a small beam filling factor; or, (4) the face-on depth into the Bar where [O$_2$]{} reaches its peak abundance, which is density dependent, corresponds to a sky position different from that sampled by our [*Herschel*]{} beams.' --- [**[Herschel]{}$^*$ Search for O$_2$ Toward the Orion Bar**]{}\ Gary J. Melnick$^1$, Volker Tolls$^1$, Paul F. Goldsmith$^2$, Michael J. Kaufman$^3$,\ David J. Hollenbach$^4$, John H. Black$^5$, Pierre Encrenaz$^6$, Edith Falgarone$^7$, Maryvonne Gerin$^7$, Åke Hjalmarson$^5$, Di Li$^8$, Dariusz C. Lis$^9$, René Liseau$^5$, David A. Neufeld$^{10}$, Laurent Pagani$^{6}$,\ Ronald L. Snell$^{11}$, Floris van der Tak$^{12}$, and Ewine F. van Dishoeck$^{13, 14}$ Received$\;$ ------------------------------------------------------------------------ $\,$;     Accepted$\;$ ------------------------------------------------------------------------ $^*$ Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA 1. Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, MS 66, Cambridge, MA 02138, USA\ 2. Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109, USA\ 3. Department of Physics and Astronomy, San José State University, San Jose, CA 95192, USA\ 4. SETI Institute, Mountain View, CA 94043, USA\ 5. Department of Earth & Space Sciences, Chalmers University of Technology, Onsala Space Observatory, SE-439 92 Onsala, Sweden\ 6. LERMA & UMR8112 du CNRS, Observatoire de Paris, 61 Av. de l’Observatoire, 75014 Paris, France\ 7. 
LRA/LERMA, CNRS, UMR8112, Observatoire de Paris & École Normale Supérieure, 24 rue Lhomond, 75231 Paris Cedex 05, France\ 8. National Astronomical Observatories, Chinese Academy of Sciences, A20 Datun Road, Chaoyang District, Beijing 100012, China\ 9. California Institute of Technology, Cahill Center for Astronomy and Astrophysics 301-17, Pasadena, CA 91125, USA\ 10. Department of Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218, USA\ 11. Department of Astronomy, University of Massachusetts, Amherst, MA 01003, USA\ 12. SRON Netherlands Institute for Space Research, P.O. Box 800, 9700 AV, and Kapteyn Astronomical Institute, University of Groningen, Groningen, The Netherlands\ 13. Leiden Observatory, Leiden University, P.O. Box 9513, 2300 RA, Leiden, The Netherlands\ 14. Max-Planck-Institut für Extraterrestrische Physik, Giessenbachstrasse 1, 85748, Garching, Germany\ INTRODUCTION ============ Searches for interstellar [O$_2$]{} have a long history, but their motivation has evolved with time. Prior to the late 1990s, efforts to detect [O$_2$]{} were driven largely by a desire to confirm its predicted role as a major reservoir of elemental oxygen within dense molecular clouds and as the most important gas coolant – after CO – of cold ($T\,{\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle <}{\sim}}$}}\normalsize}\,$30$\:$K), modestly dense ([$n$[(H$_2$)]{}]{}$\,\simeq\,$10$^3$[$\,$–$\,$]{}10$^4$ [cm$^{-3}$]{}) gas [cf. @Goldsmith78; @Neufeld95]. The launch of the [*Submillimeter Wave Astronomy Satellite (SWAS)*]{} in 1998 and [*Odin*]{} in 2001, and the subsequent failure of these observatories to detect [O$_2$]{} toward a large number of sources at levels of a few percent of the abundances predicted by equilibrium gas-phase chemical models, have forced a shift in emphasis to a re-examination of the oxygen chemistry in dense molecular gas. 
Today, interest in [O$_2$]{} no longer lies in its being a significant reservoir of elemental oxygen or in its cooling power. Instead, because the abundance of gas-phase [O$_2$]{} is set by a balance of various formation, destruction, and depletion processes thought to affect the broader chemistry in dense gas – such as gas-phase reactions, grain-surface reactions, thermal sublimation, far-ultraviolet (FUV) photodesorption, cosmic-ray desorption, photodissociation, and freeze out – measures of [O$_2$]{} have become an important test of our current understanding of the relative effectiveness of these processes. The capabilities of the [*Herschel Space Observatory’s*]{} Heterodyne Instrument for the Far-Infrared [HIFI; @deGraauw10] have enabled improved searches for [O$_2$]{} through: (1) its high sensitivity, including at 487 GHz – the frequency of the $\rm N_J =\;$3$_3$[$\,$–$\,$]{}1$_2$ transition observed previously by [*SWAS*]{} and [*Odin*]{}; and, (2) its broad frequency coverage that permits observations of additional [O$_2$]{} submillimeter transitions, some of which are expected to exhibit stronger emission than the [3$_3\,$–$\,$1$_2$]{} line under certain physical conditions. The Open Time Key Program “Herschel Oxygen Project” (HOP; Co-PIs P. Goldsmith and R. Liseau) is designed to survey Galactic sources with the goal of detecting [O$_2$]{} or setting meaningful limits on its abundance within these regions. Because the effectiveness of the processes that determine the [O$_2$]{} column density depends upon the gas density, temperature, and incident FUV flux [$G_{\rm o}$]{} (scaling factor in multiples of the average Habing local interstellar radiation field; @Habing68) among other parameters, testing these models requires that the HOP observations include a range of source types, such as dense quiescent clouds, outflows and shocked gas regions, and FUV-illuminated cloud surfaces [see, for example, @Goldsmith11; @Liseau12]. 
In this paper, we report the results of a deep search for [O$_2$]{} emission toward the Orion Bar, a well known ionization front located approximately 2[$^{\prime}$]{} southeast of the Trapezium stars in Orion at the interface of the H$\,$II region created by these stars and the dense gas associated with the surrounding Orion molecular cloud. The Orion Bar lends itself well to the study of FUV-illuminated molecular gas for several reasons, including its nearly edge-on geometry, its proximity [$\sim\,$420 pc; @Menten07; @Hirota07; @Kim08], its relatively high density ([$n$[(H$_2$)]{}]{}$\,{\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle >}{\sim}}$}}\normalsize}\,$3$\times$10$^4$ [cm$^{-3}$]{}), and the strong ([$G_{\rm o}$]{}$\,\simeq\,$10$^4$[$\,$–$\,$]{}10$^5$) external FUV field irradiating this gas. The Orion Bar, and sources like it, are of particular interest since the dust grains within these regions are predicted to be sufficiently warm that the thermal evaporation of O atoms from the grain surfaces is enhanced, resulting in a higher fraction of O in the gas phase and the increased production of [O$_2$]{} via gas-phase chemical reactions (O$\,+\,$OH$\,\rightarrow\,$O$_2\,+\,$H). Under such circumstances, the [O$_2$]{} column density can be more than a factor of 10 greater than within gas exposed to lower (i.e., [$G_{\rm o}$]{}$\,<\,$500) external FUV fields [cf. @Hollenbach09]. The inclusion of the Orion Bar within the HOP program was intended to test this prediction. The observations and data reduction methods are described in §2 below. In §3, we present the resultant spectra and the upper limits to the [O$_2$]{} integrated intensity. In §4, we review the excitation conditions within the Orion Bar and the derived limits on the line-of-sight [O$_2$]{} column density. In §5, we discuss these limits in the context of recent chemical models that trace the [O$_2$]{} abundance from the FUV-illuminated cloud surface to the deep interior. 
OBSERVATIONS AND DATA REDUCTION =============================== The [*Herschel*]{} HIFI observations presented here were carried out using the HIFI Band 1a receiver for the [3$_3\,$–$\,$1$_2$]{} 487 GHz observations and the HIFI Band 2b receiver for the [5$_4\,$–$\,$3$_4$]{} 774 GHz observations. The 487 GHz observations were conducted on operational day (OD) 291 in spectral scan dual beam switch (DBS) mode, while the 774 GHz observations were conducted on OD 297 in spectral scan DBS mode and on OD 509 in HIFI single point DBS mode. Eight LO settings were used for both the 487 GHz and the 774 GHz spectral scans to enable the spectral deconvolution, and the additional eight single point 774 GHz observations were also taken using eight different LO settings. The total integration time (on-source$\,+\,$off-source) [*for each polarization*]{} was 0.93 hours for the 487 GHz spectral scan, 0.86 hours for the 774 GHz spectral scan, and a total of 4.6 hours for the eight single point 774 GHz observations. The full-width-at-half-maximum (FWHM) beam sizes were 44.7[$^{\prime\prime}$]{} at 487 GHz and 28.2[$^{\prime\prime}$]{} at 774 GHz. The observed position, $\alpha =\,$5h 35m 20.6s, $\delta =\,-$5[$^{\rm o}$]{}$\:$25[$^{\prime}$]{}$\:$14.0[$^{\prime\prime}$]{} (J2000), is shown in Fig. \[finderchart\]. We applied the total observing time allotted to HOP observations of the Orion Bar to a single spatial position – versus multiple positions – in order to achieve the lowest radiometric noise and, thus, the greatest sensitivity to weak [O$_2$]{} emission. In the absence of prior information about the possible [O$_2$]{} spatial distribution, our choice of sky position was guided by the desire to place the 487 GHz and 774 GHz beam centers a distance corresponding to approximately 8 visual magnitudes into the molecular gas measured from the ionization front, in accord with model predictions (see §5 for a full discussion). 
For an [H$_2$]{} density between 5[$\,\times\,$]{}10$^4$ [cm$^{-3}$]{} and 5[$\,\times\,$]{}10$^5$ [cm$^{-3}$]{}, applicable to the interclump medium in the Bar, and [$G_{\rm o}$]{}$\,\simeq\,$10$^4$, this corresponds to a projected angular distance of between 2.4[$^{\prime\prime}$]{} and 24[$^{\prime\prime}$]{} from the ionization front. As shown in Fig. \[finderchart\], the selected position places the beams in the center of this range, while the beam sizes encompass the full range. The sky position parallel to the Orion Bar was selected to coincide with the molecular gas, as delineated by the [$^{13}$CO]{} $J\,=\:$3[$\,$–$\,$]{}2 emission (see Fig. \[finderchart\]), and, for future analysis, one of the positions under present study by another [*Herschel*]{} Key Program. The data were processed using the standard HIFI pipeline software HIPE version 7.3 [@Ott10], spurious signals (“spurs") removed, spectra defringed, spectral scans deconvolved, and all data finally exported to GILDAS-CLASS format. Further processing was performed only on the Wide Band Spectrometer (WBS) spectra (0.5 MHz channel spacing and 1.1 MHz effective spectral resolution) using the IRAM GILDAS software package (http://iram.fr/IRAMFR/GILDAS/), including first-order baseline removal, averaging of the 774 GHz spectral scans and frequency-aligned single point observations, averaging of the H- and V-polarization spectra, and production of separate averages for both frequencies and both sidebands. The frequencies for the line identification were extracted from the JPL and CDMS databases [@Pickett98; @Muller05] as well as [@Drouin10] in the case of [O$_2$]{}. RESULTS ======= A summary of the identified lines in the HIFI Band 1a and Band 2b spectra along with the observing modes, integration times, and Gaussian fit parameters is provided in Table 1. The summed H+V polarization spectra observed in Band 1a are shown in Fig. \[Band1aspectra\], while those observed in Band 2b are shown in Fig. 
\[Band2bspectra\]. With the exception of the H$_2$Cl$^+$ chloronium 485 GHz spectrum, which is a blend of three hyperfine components [cf. @Lis10; @Neufeld11], all of the detected lines appear well fit by single Gaussian profiles with a common LSR line center of 10.68$\,\pm\,$0.14[ km s$^{{-1}}$]{} (1$\sigma$) and individual best-fit FWHM line widths ranging from about 1.8[ km s$^{{-1}}$]{} to 2.5[ km s$^{{-1}}$]{}. The upper limit to the integrated intensity of the [O$_2$]{} [3$_3\,$–$\,$1$_2$]{} and [5$_4\,$–$\,$3$_4$]{} transitions is derived assuming each line is described by a single Gaussian profile, as is the case for the other unblended lines we detect toward this position. The rms noise in the [O$_2$]{} [3$_3\,$–$\,$1$_2$]{} 487 GHz spectrum between LSR velocities of $-$110[ km s$^{{-1}}$]{} and $+$25[ km s$^{{-1}}$]{} – a velocity range within which there is no evidence for any spectral features – is 2.62$\,$mK per 0.5 MHz channel. Similarly, the rms noise in the [O$_2$]{} [5$_4\,$–$\,$3$_4$]{} 774 GHz spectrum between LSR velocities of $-$70[ km s$^{{-1}}$]{} and $+$30[ km s$^{{-1}}$]{} is 2.19$\,$mK per 0.5 MHz channel. The intrinsic [O$_2$]{} line widths along this line of sight are unknown; however, we assume they lie between the extremes of 1.8[ km s$^{{-1}}$]{} and 2.5[ km s$^{{-1}}$]{} (FWHM) measured for the other unblended lines we detect along this line of sight (see Table 1). This leads to 3$\sigma$ upper limits of between 0.0150 and 0.0209 [K$\:$km s$^{-1}$]{} for the [3$_3\,$–$\,$1$_2$]{} 487 GHz line and between 0.0126 and 0.0175 [K$\:$km s$^{-1}$]{} for the [5$_4\,$–$\,$3$_4$]{} 774 GHz line. EXCITATION AND LIMITS ON THE O$_2$ COLUMN DENSITY ================================================= The Orion Bar, like many other photodissociation regions (PDRs), displays emission from a variety of ionic, atomic, and molecular species best fit by a mix of gas densities and temperatures. 
The broad picture to emerge is that of a layer consisting of at least two components: interclump gas with [$n$[(H$_2$)]{}]{}$\sim\,$3[$\,$–$\,$]{}20$\,\times\,$10$^{4}\;$[cm$^{-3}$]{} [@Hogerheijde95; @Wyrowski97; @Simon97; @Marconi98] surrounding clumps with [$n$[(H$_2$)]{}]{} $\sim\,$10$^6$[$\,$–$\,$]{}10$^{7}\;$[cm$^{-3}$]{}[@Lis03; @Owl00], which comprise about 10% of the mass [@Jansen95]. Gas temperature estimates similarly vary, depending on the species observed and the component giving rise to most of the emission. Within the denser well-shielded gas, the gas temperature is thought to range between $\sim\,$50 and 85$\,$K [@Hogerheijde95; @Gorti02]. The gas temperature associated with the interclump medium is estimated to be 85$\,\pm\,$30$\:$K [@Hogerheijde95], with some gas temperatures associated with the surfaces ([[*A*]{}$_{\rm V}$]{}$\,{\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle <}{\sim}}$}}\normalsize}\,$1) of the denser clumps ranging as high as 220$\,$K [@Jansen95; @Batrla03; @Goicoechea11]. There is evidence for an even warmer component (300[$\,$–$\,$]{}700$\,$K) based on emission from pure rotational lines of [H$_2$]{} and far-infrared fine-structure lines of \[O$\,$I\] at 63 and 145[$\:\mu$m]{} and \[C$\,$II\] at 158[$\:\mu$m]{} [@Herrmann97; @Allers05]. This warmer component is believed to arise in the gas between the ionization front and the molecular region traced by [$^{13}$CO]{} emission [@Walmsley00]. 
The strength of the FUV field incident on the Orion Bar has been estimated to be [$G_{\rm o}$]{}$\,\simeq\,$1[$\,$–$\,$]{}4$\times$10$^4$ based upon the total radiation from the Trapezium stars – and the O star $\theta^1$ Ori C in particular – the intensity of the far-infrared \[C$\,$II\] and \[O$\,$I\] fine-structure lines mapped toward the Orion molecular ridge, the strength of several near-infrared lines whose intensities have been ascribed to recombinations to highly excited states of CI, and the strength of near-infrared NI lines excited by the fluorescence of UV lines [@Herrmann97; @Marconi98; @Walmsley00]. Given a density of $\sim\,$10$^5$ [cm$^{-3}$]{} for the bulk of the material and a [$G_{\rm o}$]{} of $\sim\,$10$^4$, models predict that the [O$_2$]{} abundance peaks at [[*A*]{}$_{\rm V}$]{}$\:{\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle >}{\sim}}$}}\normalsize}\:$8 mag. [cf. @Sternberg95; @Hollenbach09]. At these depths into the cloud, the gas temperature is predicted to be 30[$\,$–$\,$]{}40$\:$K [@Hollenbach09]. Thus, in our analysis, we consider the possibility that the [O$_2$]{} emission could arise in gas with temperatures anywhere between 30$\:$K and 250$\:$K. The weak line flux of the [O$_2$]{} magnetic dipole transitions makes them highly likely to be optically thin. 
Under the assumption that the [O$_2$]{} emission uniformly fills the HIFI beam, the observed integrated intensity in a given transition is: $$\int T_{\rm mb}\,dv~=~\frac{h c^3}{8 \pi \nu^2 k}\;\, A_{\rm u\ell}\:N({\rm O_2})\:f_u~=~5.15 \times 10^{-4}\;\frac{A_{\rm u\ell}\:N({\rm O_2})\:f_u} {\nu_{\rm GHz}^2}~~~{\rm (K\;km\:s^{-1})}~,~$$ where $T_{\rm mb}$ is the main beam temperature, $\nu$ is the line frequency (and $\nu_{\rm GHz}$ is the line frequency in GHz), $A_{\rm u\ell}$ is the spontaneous decay rate between the transition upper level, $u$, and lower level, $\ell$, $N$([O$_2$]{}) is the total [O$_2$]{} column density in [cm$^{-2}$]{}, and $f_u$ is the fractional population in the transition upper level. The conversion between main beam and antenna temperature makes use of the efficiencies reported in [@Roelfsema12]. To determine the fractional population of the transition upper state, $f_u$, the excitation of the lowest 36 levels of [O$_2$]{}, corresponding to a maximum upper-level temperature of 1141 K, was computed under the large velocity gradient (LVG) approximation. The spontaneous decay rates are those of [@Drouin10] and the collisional rate coefficients are those calculated by [@Lique10] for He[$\,$–$\,$]{}[O$_2$]{} collisions, multiplied by 1.37 to account for the different reduced mass when [H$_2$]{} is the collision partner. For molecular hydrogen densities $>\,$3$\,\times\,$10$^4$ [cm$^{-3}$]{}, both the [3$_3\,$–$\,$1$_2$]{} and [5$_4\,$–$\,$3$_4$]{} transitions are close to (or in) LTE and the values of $f_u$ depend essentially only on the temperature. Fig. \[contours487\] shows the resulting contours of integrated antenna temperature for the [3$_3\,$–$\,$1$_2$]{} transition as functions of the total [O$_2$]{} column density and gas temperature between 30 and 250 K. Similarly, Fig. \[contours774\] shows the corresponding results for the [5$_4\,$–$\,$3$_4$]{} transition. Of the two [O$_2$]{} lines searched for here, an examination of Figs. 
\[contours487\] and \[contours774\] shows that our measured upper limits to the [5$_4\,$–$\,$3$_4$]{} 774 GHz integrated intensity place a more stringent limit on the maximum [O$_2$]{} column density for [$T_{\rm gas}$]{}$\:>\:$35$\:$K (and comparable limits to that set by the 487 GHz line at [$T_{\rm gas}$]{}$\:\sim\,$30$\:$K). Specifically, assuming the emission fills the beam, the total line-of-sight [O$_2$]{} column density must be less than 1.5$\,\times\,$10$^{16}$ [cm$^{-2}$]{} (3$\sigma$). If the [O$_2$]{} abundance peaks within the cooler well-shielded gas, for which [$T_{\rm gas}$]{}$\;{\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle <}{\sim}}$}}\normalsize}\;$100$\:$K, the upper limit to the total [O$_2$]{} column density is less than 1$\,\times\,$10$^{16}$ [cm$^{-2}$]{} (3$\sigma$). DISCUSSION ========== [O$_2$]{} is produced primarily through the gas-phase reaction O$\,+\,$OH$\,\rightarrow\,$[O$_2$]{}$\,+\,$H and is destroyed by photodissociation for the cloud depths of interest here. Thus, the [O$_2$]{} abundance is expected to peak where the FUV field has been heavily attenuated and where both the gas-phase O and OH abundances are high which, in externally FUV-illuminated clouds, is predicted to occur within a relatively narrow (i.e., a few [[*A*]{}$_{\rm V}$]{} deep) zone centered at an [[*A*]{}$_{\rm V}$]{}  ${\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle <}{\sim}}$}}\normalsize}\,$ 9 mag. from the cloud surface [cf. @Hollenbach09]. The proximity of this zone to the surface and the range of depths over which the peak abundance occurs are governed by several important processes. Near the cloud surface, where the FUV field is largely unattenuated, the equilibrium [O$_2$]{} abundance is low owing to a high photodissociation rate. Beyond a few [[*A*]{}$_{\rm V}$]{} into the cloud, the FUV field is attenuated, the photodissociation rate reduced, and a region of peak [O$_2$]{} (and [H$_2$O]{}) abundance is attained. 
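These line-of-sight column density limits follow from inverting the integrated-intensity relation of §4. As a minimal numerical sketch: the Einstein coefficient below is the approximate literature value for the 487 GHz line, while the upper-level population $f_u$ is an illustrative placeholder, not the LVG result used in the actual analysis.

```python
# Invert the relation of Section 4,
#   int T_mb dv = 5.15e-4 * A_ul * N(O2) * f_u / nu_GHz**2   [K km/s],
# to turn a 3-sigma integrated-intensity limit into a column-density limit.

def n_o2_upper_limit(int_kkms, nu_ghz, a_ul, f_u):
    """Total O2 column density (cm^-2) implied by an integrated-intensity
    upper limit (K km/s), for optically thin emission filling the beam."""
    return int_kkms * nu_ghz**2 / (5.15e-4 * a_ul * f_u)

# 487 GHz 3_3-1_2 line: A_ul ~ 8.7e-9 s^-1; f_u = 0.07 is an illustrative
# placeholder value, NOT the paper's LVG-derived population.
n_487 = n_o2_upper_limit(0.0209, 487.249, 8.7e-9, 0.07)
print(f"N(O2) < {n_487:.1e} cm^-2")   # of order the quoted 1.5e16 limit
```

With these inputs the inversion lands near the quoted 1.5$\,\times\,$10$^{16}$ [cm$^{-2}$]{} limit; the exact value shifts with the temperature-dependent $f_u$, which is why the limit is quoted as a function of [$T_{\rm gas}$]{}.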
Within most clouds with [$G_{\rm o}$]{}$\,<\:$500, the path to [O$_2$]{} formation is believed to start with the formation of water ice, [H$_2$O$_{\rm ice}$]{}, on grains, which occurs when O atoms strike and stick to grains long enough to combine with an accreted H atom to form [OH$_{\rm ice}$]{} and then [H$_2$O$_{\rm ice}$]{}. Within this region the FUV field remains strong enough to photodesorb [H$_2$O]{} from the ice mantles and subsequently photodissociate these molecules, creating sufficient gas-phase O and OH to produce [O$_2$]{} by the gas-phase chemical reaction above. Deeper into the cloud (i.e., greater [[*A*]{}$_{\rm V}$]{}), the FUV field is almost completely attenuated and the gas-phase OH and [H$_2$O]{}  produced through the photodesorption and photodissociation of [H$_2$O$_{\rm ice}$]{} drops significantly; most O atoms that then strike dust grains and form [H$_2$O$_{\rm ice}$]{} remain locked in ice as long as the grain temperature is ${\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle <}{\sim}}$}}\normalsize}\,$100$\:$K. Over time ($\sim\,$10$^5$ years), this process greatly reduces the gas-phase atomic oxygen abundance and suppresses the formation and abundance of [O$_2$]{}. Hence, in the model of [@Hollenbach09], the steady-state abundance profile of [O$_2$]{} (and [H$_2$O]{}) resembles an elevated plateau that peaks at an [[*A*]{}$_{\rm V}$]{}$\:{\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle <}{\sim}}$}}\normalsize}\:$6 for gas with [$n$[(H$_2$)]{}]{}$\:=\:$10$^4$[$\,$–$\,$]{}10$^5$ [cm$^{-3}$]{} and [$G_{\rm o}$]{}$\:{\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle <}{\sim}}$}}\normalsize}\:$500. For regions subject to a [$G_{\rm o}$]{} greater than $\sim\,$500, such as the Orion Bar, the scenario above is altered and, for several reasons, the peak [O$_2$]{} abundance is higher and occurs at a higher [[*A*]{}$_{\rm V}$]{}. 
First, the high FUV field absorbed at the cloud surface leads to a high infrared field that keeps the grains warm, even deep within the cloud. For [$G_{\rm o}$]{}$\:=\:$10$^4$, $T_{\rm gr}\:\approx\;$40$\:$K to [[*A*]{}$_{\rm V}$]{}$\,{\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle >}{\sim}}$}}\normalsize}\,$8, resulting in a significant fraction of the O atoms being thermally desorbed from the grains before they can form [H$_2$O$_{\rm ice}$]{} and leading to an increase in O in the gas phase. Second, the higher grain temperature also reduces the freezeout of such oxygen-bearing species as OH and [O$_2$]{}, further increasing the amount of elemental O in the gas phase. Finally, the attenuated FUV flux at the higher values of [[*A*]{}$_{\rm V}$]{} lowers the photodestruction rates, allowing [O$_2$]{} to survive to greater cloud depths. The combined result of these effects is a peak [O$_2$]{} abundance about 3 times higher, and a total [O$_2$]{} column density more than 10 times greater than for comparably dense gas exposed to [$G_{\rm o}$]{}$\:{\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle <}{\sim}}$}}\normalsize}\:$500. This result is reflected in the detailed calculations presented in [@Hollenbach09] and shown in Fig. \[Go\], which is adapted from their paper. For this reason, the Orion Bar was considered a promising source for our attempts to detect [O$_2$]{} emission. From Fig. \[Go\], it would appear that the upper limits on the total [O$_2$]{} column density established here are not in serious disagreement with the model predictions. However, the results shown in Fig. \[Go\] apply to a gas column perpendicular to the face of a planar cloud. This is not the geometry of the Orion Bar, which has often been described as an edge-on PDR, though its true structure has been the subject of some study and debate. 
For example, based on millimeter and submillimeter line observations, [@Hogerheijde95] and [@Jansen95] propose a model in which the Bar has a tilt angle, $\alpha$, of $\sim\,$3[$^{\rm o}$]{} from edge-on, resulting in an increase in the line-of-sight column density (beyond what would be measured for a face-on geometry) by a factor (sin$\,\alpha$)$^{-1}$, or almost 20. Alternatively, [@Walmsley00] find that a cylindrical model, in which the axis is in the plane of sky and the radius is 0.3$\:$pc, best reproduces the observed spatial distribution of the fluorescent OI 1.317[$\:\mu$m]{} emission. In this scenario, the average geometrical enhancement of the line-of-sight depth into the Bar versus the face-on depth is about 5. Finally, [@Neufeld06] find that a geometrical enhancement factor of $\sim\,$4 is required to reconcile observed and predicted C$^+$ column densities. The 3$\sigma$ upper limit to the [*face-on*]{} [O$_2$]{} column density can thus be inferred from our line-of-sight values to be 1.5[$\,\times\,$]{}10$^{16}\:$sin$\,\alpha$ [cm$^{-2}$]{}, or 1.0[$\,\times\,$]{}10$^{16}\:$sin$\,\alpha$ [cm$^{-2}$]{} for [$T_{\rm gas}$]{}$\:{\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle <}{\sim}}$}}\normalsize}\:$100$\:$K. (We note that these upper limits are derived assuming the intrinsic [O$_2$]{} FWHM line width is 2.5 [ km s$^{{-1}}$]{}; if the intrinsic width is closer to the lower end of the observed range, i.e., 1.8$\:$[ km s$^{{-1}}$]{}, the face-on [O$_2$]{} column density upper limits are further reduced by a factor of 1.4.) For gas densities ${\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle <}{\sim}}$}}\normalsize}\,$10$^5$ [cm$^{-3}$]{}, which applies to most of the gas in the Bar, this is to be compared with a total predicted face-on [O$_2$]{} column density of $\,{\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle >}{\sim}}$}}\normalsize}\;$7$\,\times\,$10$^{15}$ [cm$^{-2}$]{}, as shown in Fig. 
\[Go\], with most of this column occurring inside a layer of peak [O$_2$]{} abundance with a width corresponding to approximately 2 magnitudes (see Fig. \[Kaufman\]), or a linear size of $\sim\,$1.9[$\,\times\,$]{}10$^{16}$/$n_5$ cm, where $n_5\,=\;$[$n$[(H$_2$)]{}]{}/\[10$^5$ [cm$^{-3}$]{}\]. Viewed from a distance of 420 pc, this zone of peak [O$_2$]{} emission would subtend 3\[(1/$n_5$) + 162.4$\,\ell\,$sin$\,\alpha$\][$^{\prime\prime}$]{}, where $\ell$ is the physical length of the Bar in parsecs. For $\ell\,\simeq\;$0.6 pc [cf. @Jansen95] and $n_5 \simeq\:$1, $\alpha\,{\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle >}{\sim}}$}}\normalsize}\:$6[$^{\rm o}$]{} would result in [O$_2$]{} emission that fills the [*Herschel*]{}/HIFI beam at 774 GHz, though a minimum geometric enhancement factor of 4, derived from other observations, suggests that $\alpha$ does not exceed 15[$^{\rm o}$]{}. However, these tilt angles imply an upper limit to the face-on [O$_2$]{} column density between 1.6[$\,\times\,$]{}10$^{15}$ [cm$^{-2}$]{} and 3.9[$\,\times\,$]{}10$^{15}$ [cm$^{-2}$]{}, which is below, and in some cases significantly below, that predicted by theory. For $\ell\,\simeq\;$0.6 pc and $n_5 \simeq\:$1, but $\alpha <\;$6[$^{\rm o}$]{}, the [O$_2$]{} layer no longer fills the 774 GHz beam. Although the peak [O$_2$]{} column density within the beam will continue to increase for angles less than 6[$^{\rm o}$]{}, the beam filling factor will decrease. These two effects offset exactly, and the beam-averaged [O$_2$]{} column density will remain the same for all tilt angles less than about 6[$^{\rm o}$]{}. Since the [O$_2$]{} emission is optically thin, the line emission will likewise remain constant within the under-filled beam. In this case, the geometrical enhancement factor would be $\sim\:$10, and the upper limit to the face-on [O$_2$]{} column density remains below that predicted. 
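The geometric bookkeeping in this paragraph can be checked directly. The following is a short sketch using only quantities quoted in the text (the 420 pc distance, the 1.9$\,\times\,$10$^{16}$/$n_5$ cm layer width, and the 1.5$\,\times\,$10$^{16}$ [cm$^{-2}$]{} line-of-sight limit):

```python
import math

D_CM = 420.0 * 3.086e18          # adopted distance to the Orion Bar, in cm
PC_CM = 3.086e18
ARCSEC_PER_RAD = 206265.0

def angular_size_arcsec(length_cm):
    """Angle subtended by a physical length at the distance of the Bar."""
    return length_cm / D_CM * ARCSEC_PER_RAD

def bar_width_arcsec(n5, l_pc, alpha_deg):
    """Projected width of the peak-O2 zone: the layer depth plus the
    tilt-projected Bar length, i.e. the 3[(1/n5) + 162.4 l sin(alpha)]"
    expression from the text."""
    sin_a = math.sin(math.radians(alpha_deg))
    return (angular_size_arcsec(1.9e16 / n5)
            + angular_size_arcsec(l_pc * PC_CM * sin_a))

# alpha ~ 6 deg fills the 28.2" 774 GHz beam for l = 0.6 pc, n5 = 1:
print(bar_width_arcsec(1.0, 0.6, 6.0))       # exceeds 28.2

# Face-on 3-sigma column limits implied by tilt angles of 6 and 15 deg:
for alpha in (6.0, 15.0):
    print(1.5e16 * math.sin(math.radians(alpha)))   # ~1.6e15 and ~3.9e15 cm^-2
```

The two `sin` evaluations reproduce the 1.6[$\,\times\,$]{}10$^{15}$[$\,$–$\,$]{}3.9[$\,\times\,$]{}10$^{15}$ [cm$^{-2}$]{} face-on range quoted above.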
Therefore, we conclude that Bar geometry cannot account for the discrepancy between theory and observations. What, then, can account for the discrepancy? The amount of [O$_2$]{} produced in externally FUV-illuminated dense gas depends on several factors, which we examine below: [*Thermal evaporation:*]{} As noted earlier, the dwell time of an O atom on a grain surface can have a considerable effect on the [O$_2$]{} abundance, particularly when this time becomes less than the time to combine with an H atom on the surface. The timescale for thermal evaporation of an O atom is approximately 9$\,\times\,$10$^{-13}$ exp$\,$\[800$\:$K / $T_{\rm gr}$\] seconds, where 800$\:$K is the adsorption energy of O to water-ice [@Hasegawa93] that applies to van der Waals binding to a chemically saturated surface. It is possible that the binding energy is greater than 800$\:$K, which would increase the grain temperature, and thus the [$G_{\rm o}$]{}, required to thermally desorb O atoms on short timescales and produce the jump in the total [O$_2$]{} column density for [$G_{\rm o}$]{}$\,{\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle >}{\sim}}$}}\normalsize}\:$500 seen in Fig. \[Go\]. If, for example, the O adsorption energy was 1600$\:$K, grains as warm as $\sim\:$42$\:$K – the expected dust temperature at high [[*A*]{}$_{\rm V}$]{} in a [$G_{\rm o}$]{}$\,\simeq\,$10$^4$ field – would, on average, retain their O atoms long enough to form [H$_2$O$_{\rm ice}$]{}, thus delaying the [$G_{\rm o}$]{}$\:>\:$500 rise in [O$_2$]{} column density seen in Fig. \[Go\] until [$G_{\rm o}$]{}$\:>\:$10$^4$. [*Photodesorption yield of H$_2$O from a grain surface, Y$_{H_2O}$:*]{} The abundance (and column density) of [O$_2$]{}depends on the gas-phase abundance of O and OH, the latter being produced primarily through the photodissociation of [H$_2$O]{}, much of which is either photodesorbed from grains or produced via the dissociative recombination of gas-phase H$_3$O$^+$. 
At high [$G_{\rm o}$]{} (and $T_{\rm gr} >\,$20$\:$K), short O-atom dwell times on grains suppress the formation of [OH$_{\rm ice}$]{} and [H$_2$O$_{\rm ice}$]{}. However, even though it is not formed on the grain surface in a high-[$G_{\rm o}$]{} environment, [H$_2$O]{} formed in the gas phase via H$_3$O$^+$ dissociative recombination will be depleted through freezeout onto grains and will remain locked in [H$_2$O$_{\rm ice}$]{} for as long as $T_{\rm gr} {\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle <}{\sim}}$}}\normalsize}\,$100$\:$K. Since the quantity of OH and [H$_2$O]{} returned to the gas phase as a consequence of [H$_2$O$_{\rm ice}$]{} photodesorption scales with [$Y_{H_2O}$]{}, the total [O$_2$]{} column density likewise scales with [$Y_{H_2O}$]{}, as is seen in Fig. \[Go\]. A value for [$Y_{H_2O}$]{} less than 10$^{-3}$ would help to reconcile theory and observation. However, fits to the [*SWAS*]{} and [*Odin*]{} [H$_2$O]{} data [@Hollenbach09] as well as theoretical simulations and laboratory measurements [@Andersson08; @Arasa11; @Westley95a; @Westley95b; @Oberg09] suggest, if anything, that the appropriate value of [$Y_{H_2O}$]{} is greater than 10$^{-3}$. [*Grain cross-sectional area (per H):*]{} The equilibrium [O$_2$]{} abundance in the [[*A*]{}$_{\rm V}$]{} range of maximum [O$_2$]{} abundance scales as ([$Y_{H_2O}$]{})$^{2\,}$[$\sigma_H$]{}, where [$\sigma_H$]{} is the grain cross-sectional area per H nucleus. Therefore, lowering [$\sigma_H$]{} will decrease the [O$_2$]{} column density, bringing model and observation into closer agreement. For an “MRN" [@Mathis77] grain size distribution $n_{\rm gr}(a)\propto a^{-3.5}$, where $a$ is the grain radius, [$\sigma_H$]{} $\sim\:$2$\,\times\,$10$^{-21}$ cm$^2$ for an assumed gas-to-dust mass ratio of 100 with grains ranging in radii between a minimum, [$a_{\rm min}$]{}, of 20$\:$Å and a maximum, [$a_{\rm max}$]{}, of 2500$\:$Å (the standard value in @Hollenbach09). 
Grains with [$a_{\rm min}$]{}$\,<\;$20$\,$Å will be cleared of ice mantles by single photon heating or cosmic rays and, thus, are not significant ice reservoirs. Because [$\sigma_H$]{}$\:\propto\:$($a_{\rm min} \cdot a_{\rm max})^{\;-0.5}$, in order to lower the value of [$\sigma_H$]{} while preserving the total mass in grains, either or both [$a_{\rm min}$]{} and [$a_{\rm max}$]{} must [*increase*]{}, such as through coagulation. For example, a reduction in [$\sigma_H$]{}, and thus the face-on [O$_2$]{} column density, by at least a factor of 2 could be achieved if the minimum grain radius were to increase to ${\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle >}{\sim}}$}}\normalsize}\,$80$\:$Å. Alternatively, the buildup of an ice mantle, which can increase the radius of grains by as much as $\sim\,$50$\:$Å, will increase the value of [$\sigma_H$]{}. For values of [$G_{\rm o}$]{} of $\sim\,$10$^4$ applicable to the Orion Bar, grain temperatures are expected to be $\sim\,$40$\:$K, which is high enough to inhibit ice formation via surface reactions (absent a higher O adsorption energy); however, water formed in the gas phase via the reaction H$_3$O$^+ +\,e^-\!\rightarrow\;$[H$_2$O]{}$\,+\,$H can still freeze out and form an ice mantle. Toward Orion, there is evidence for a departure from the assumed gas-to-dust mass ratio of 100, which is consistent with the buildup of ice mantles [see, for example, @Goldsmith97]. In addition, there is evidence for a deficiency in small grains and for grain growth, possibly due to radiation pressure, the preferential evaporation of small grains, and coagulation [e.g., @Cesarsky00; @Pellegrini09; @Shaw09]. The net effect of lowering [$\sigma_H$]{} through these processes, and increasing [$\sigma_H$]{} through the accumulation of an ice mantle, is unclear in a high-[$G_{\rm o}$]{} environment like the Orion Bar.
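The factor-of-2 reduction quoted above can be verified with a short numerical sketch (ours, with illustrative names): for an MRN distribution $n_{\rm gr}(a)\propto a^{-3.5}$, the cross-sectional area per unit dust mass is the ratio of $\int a^2\,n_{\rm gr}(a)\,da$ to $\int a^3\,n_{\rm gr}(a)\,da$, both of which integrate in closed form.

```python
def sigma_per_mass(a_min, a_max):
    """Relative grain cross-section per unit dust mass for an MRN
    size distribution n(a) ~ a^-3.5 (radii in the same units, e.g. Angstrom):
    integral of a^2 * a^-3.5 da over integral of a^3 * a^-3.5 da."""
    area = 2.0 * (a_min**-0.5 - a_max**-0.5)   # antiderivative of a^-1.5
    mass = 2.0 * (a_max**0.5 - a_min**0.5)     # antiderivative of a^-0.5
    return area / mass

base = sigma_per_mass(20.0, 2500.0)   # standard a_min, a_max of the text
coag = sigma_per_mass(80.0, 2500.0)   # a_min increased by coagulation
print(f"sigma_H reduction factor: {base / coag:.2f}")  # ~2, since (a_min*a_max)^-0.5 halves
```

At fixed gas-to-dust mass ratio, raising [$a_{\rm min}$]{} from 20$\:$Å to 80$\:$Å halves [$\sigma_H$]{}, consistent with the approximate scaling [$\sigma_H$]{}$\:\propto\:$($a_{\rm min} \cdot a_{\rm max})^{\;-0.5}$ for $a_{\rm max} \gg a_{\rm min}$.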
[*Beam position:*]{} For an interclump [H$_2$]{} density between 5$\,\times\,$10$^4$ [cm$^{-3}$]{} and 5$\,\times\,$10$^5$ [cm$^{-3}$]{} and [$G_{\rm o}$]{}$\:=\:$10$^4$, the peak [O$_2$]{} abundance is predicted to occur at a face-on depth into the cloud corresponding to an [[*A*]{}$_{\rm V}$]{}$\;\sim\;$8 (see Fig. \[Kaufman\]). Thus, the linear distance from the [[*A*]{}$_{\rm V}$]{}$\:=\:$0 surface, which we assume is the prominent ionization front, to the depth of peak [O$_2$]{} abundance is $\sim\:$7.6$\,\times\,$10$^{21}$/[$n$[(H$_2$)]{}]{} cm. For an assumed distance of 420 pc, the angular separation between the ionization front and the position of peak [O$_2$]{} abundance (and column density) is then $\simeq\:$1.5$\,$[[*A*]{}$_{\rm V}$]{}/\[[$n$[(H$_2$)]{}]{}/10$^5$\] arcseconds, where [[*A*]{}$_{\rm V}$]{} is the face-on depth of the [O$_2$]{} peak abundance in magnitudes. Thus, an interclump [H$_2$]{} density of 10$^5$ [cm$^{-3}$]{} should produce [O$_2$]{} emission that peaks $\sim\:$12[$^{\prime\prime}$]{} from the ionization front and close to the center of the observed sky positions (see Fig. \[finderchart\]). However, if the interclump density is more than a factor of 2 different from 10$^5$ [cm$^{-3}$]{} – values that remain within the range of density estimates for the interclump medium – then the peak [O$_2$]{} abundance is predicted to fall to either side of the observed beam center position. Finally, we note that the inferred [*peak*]{} line-of-sight [H$_2$]{} column density, $N$([H$_2$]{}), applicable to the interclump medium toward the Orion Bar is estimated to be 6.5$\,\times\,$10$^{22}$ [cm$^{-2}$]{} [@Hogerheijde95].
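The angular-offset estimate above, and the face-on extinction discussed in the following paragraph, can be cross-checked numerically. This sketch is ours (function names are illustrative) and assumes a 420 pc distance, the standard conversion $N_{\rm H}/$[[*A*]{}$_{\rm V}$]{}$\:\simeq\:$1.9$\,\times\,$10$^{21}$ [cm$^{-2}$]{} mag$^{-1}$, and hydrogen that is fully molecular:

```python
PC_CM = 3.086e18               # cm per parsec
ARCSEC_RAD = 1.0 / 206265.0    # radians per arcsecond
D_BAR_CM = 420.0 * PC_CM       # assumed distance to the Orion Bar
N_H_PER_AV = 1.9e21            # standard N_H / A_V (cm^-2 mag^-1); assumption
N_H2_PER_AV = N_H_PER_AV / 2.0 # per H2, if hydrogen is fully molecular

def offset_arcsec(A_V, n_H2):
    """Angular offset between the ionization front and the face-on
    depth A_V (mag), for an interclump H2 density n_H2 (cm^-3)."""
    depth_cm = N_H2_PER_AV * A_V / n_H2   # = 7.6e21 / n_H2 for A_V = 8
    return depth_cm / (D_BAR_CM * ARCSEC_RAD)

def faceon_Av(los_N_H2, enhancement):
    """Face-on A_V implied by a line-of-sight N(H2) divided by the
    geometric (near-edge-on) enhancement factor."""
    return (los_N_H2 / enhancement) * 2.0 / N_H_PER_AV

for n in (5e4, 1e5, 5e5):
    print(f"n(H2) = {n:.0e} cm^-3 -> O2 peak offset ~ {offset_arcsec(8.0, n):4.1f} arcsec")
for f in (4, 10, 20):
    print(f"enhancement {f:2d} -> face-on A_V ~ {faceon_Av(6.5e22, f):4.1f}")
```

For $n$([H$_2$]{})$\,=\,$10$^5$ [cm$^{-3}$]{} the offset comes out near 12[$^{\prime\prime}$]{}, and an enhancement factor of 10 applied to $N$([H$_2$]{})$\,=\,$6.5$\,\times\,$10$^{22}$ [cm$^{-2}$]{} gives a face-on [[*A*]{}$_{\rm V}$]{} of about 7, matching the values quoted in the text.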
If the geometrical enhancement factor is ${\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle >}{\sim}}$}}\normalsize}\,$10, as would be the case for a tilt angle ${\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle <}{\sim}}$}}\normalsize}\:$5.5[$^{\rm o}$]{}, this would imply a face-on [H$_2$]{} column density of ${\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle <}{\sim}}$}}\normalsize}\;$6.5$\,\times\,$10$^{21}$ [cm$^{-2}$]{}, corresponding to a total [[*A*]{}$_{\rm V}$]{} through the Bar of about 7. If the face-on extinction through the Orion Bar is indeed this low, then the attenuation of the [$G_{\rm o}$]{}$\,\sim\,$10$^4$ field is not sufficient to allow [O$_2$]{} to reach its peak abundance and the total [O$_2$]{} column density will be less than predicted by [@Hollenbach09], whose total column densities are based upon cloud depths corresponding to [[*A*]{}$_{\rm V}$]{}$\,\geq\,$10. This is illustrated in Fig. \[Kaufman\], which shows both the profile of [O$_2$]{} abundance versus [[*A*]{}$_{\rm V}$]{} and the cumulative [O$_2$]{} column density to a given [[*A*]{}$_{\rm V}$]{}, computed using the model described in [@Hollenbach09] for the conditions appropriate to the Bar interclump medium. At a depth corresponding to an [[*A*]{}$_{\rm V}$]{} of 7, the predicted face-on [O$_2$]{} column density remains $<\:$3$\,\times\,$10$^{14}$ [cm$^{-2}$]{}, well below the limits set here. The clumps known to exist within the Bar do possess higher [H$_2$]{} densities (i.e., 10$^6$[$\,$–$\,$]{}10$^7$ [cm$^{-3}$]{}) and column densities [i.e., $>\,$10$^{23}$ [cm$^{-2}$]{}; @Lis03] and would provide the necessary FUV shielding to allow [O$_2$]{} to reach its full predicted abundance. Such conditions help to reconcile observation and theory in two ways. First, as shown in Fig. \[Go\], the predicted total [O$_2$]{} column densities [*decrease*]{} with higher [H$_2$]{} densities.
Thus, the total [O$_2$]{} column density is predicted to be lower if the [O$_2$]{} emission arises primarily from within the dense clumps rather than the surrounding lower-density interclump medium. Second, interferometric observations indicate that the dense clumps within the Bar typically subtend angles of between 4[$^{\prime\prime}$]{} and 8[$^{\prime\prime}$]{} [see, for example, @Lis03], and thus provide a natural explanation for why the beam filling factor of [O$_2$]{} emission could be less than unity. However, whether the correct explanation for what we observe is that [O$_2$]{} emission originates preferentially within the dense clumps and is suppressed within the [[*A*]{}$_{\rm V}$]{}$\,{\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle <}{\sim}}$}}\normalsize}\,$7 interclump medium, with both gas components governed by the processes described in [@Hollenbach09], will depend on how well this model reproduces the wealth of new lines being detected toward the Orion Bar by [*Herschel*]{}. SUMMARY ======= 1.$\,$ We have conducted a search for [O$_2$]{} toward the Orion Bar, carrying out deep integrations around the frequencies of the $\rm N_J =\;$[3$_3\,$–$\,$1$_2$]{} and [5$_4\,$–$\,$3$_4$]{} transitions at 487 GHz and 774 GHz, respectively. Neither line was detected, but sufficiently sensitive limits on their integrated intensities were obtained to test current models of molecular gas exposed to high fluxes of FUV radiation – i.e., [$G_{\rm o}$]{}$\,\sim\,$10$^4$. In particular, we infer a total face-on [O$_2$]{} column density of ${\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle <}{\sim}}$}}\normalsize}\:$4$\,\times\,$10$^{15}$ [cm$^{-2}$]{}, assuming a Bar geometry in which the line-of-sight depth is more than 4 times greater than its face-on dimension. This column density is at least 2 times less than that predicted by the model of [@Hollenbach09] for the densities, temperatures, and [$G_{\rm o}$]{} appropriate to the Orion Bar.
2.$\,$ The discrepancy between the model predictions and our observations would be reduced, if not eliminated, if the adsorption energy of atomic oxygen to water-ice were greater than 800$\,$K, and possibly as high as 1600$\:$K. A lower value for the photodesorption yield for [H$_2$O]{} would help, but is not supported by fits to other astronomical data or recent theoretical calculations and laboratory measurements. A lower grain cross-sectional area per H, such as might occur through grain coagulation, radiation pressure, or the preferential destruction of small grains, would lower the [O$_2$]{} column density, but it is unclear whether these grain properties apply within the Orion Bar. 3.$\,$ If the total face-on depth of the interclump medium within the Orion Bar corresponds to an [[*A*]{}$_{\rm V}$]{}$\,{\scriptsize{\raisebox{-2pt}{${\stackrel{\textstyle <}{\sim}}$}}\normalsize}\,$7, then photodissociation will reduce the [O$_2$]{} column density to values below our detection limit. Clumps embedded within the Bar would offer sufficient shielding to enable the buildup of higher [O$_2$]{} abundances and column densities in accord with model predictions, while the small filling factor of these clumps would reduce the [O$_2$]{} line flux to levels consistent with our upper limits. 4.$\,$ If the total face-on depth of the interclump medium within the Orion Bar corresponds to an [[*A*]{}$_{\rm V}$]{}$\,>\,$8, it remains possible that most of the [O$_2$]{} emission may have been missed. In particular, since the gas density affects the angular separation between the ionization front and the face-on depth into the Bar at which the [O$_2$]{} abundance is predicted to peak, interclump [H$_2$]{} densities much different from the assumed value of 10$^5$ [cm$^{-3}$]{} could result in the position of peak [O$_2$]{} abundance and column density occurring to either the northwest or southeast of the position we selected.
Only further modeling, including predictions for other species, can establish which, if any, of the above possibilities is most likely to resolve the present puzzle. HIFI has been designed and built by a consortium of institutes and university departments from across Europe, Canada and the United States under the leadership of SRON Netherlands Institute for Space Research, Groningen, The Netherlands, and with major contributions from Germany, France and the US. Consortium members are: Canada: CSA, U. Waterloo; France: CESR, LAB, LERMA, IRAM; Germany: KOSMA, MPIfR, MPS; Ireland: NUI Maynooth; Italy: ASI, IFSI-INAF, Osservatorio Astrofisico di Arcetri-INAF; Netherlands: SRON, TUD; Poland: CAMK, CBK; Spain: Observatorio Astronómico Nacional (IGN), Centro de Astrobiología (CSIC-INTA); Sweden: Chalmers University of Technology - MC2, RSS & GARD; Onsala Space Observatory; Swedish National Space Board, Stockholm University - Stockholm Observatory; Switzerland: ETH Zurich, FHNW; USA: Caltech, JPL, NHSC. We also acknowledge the effort that went into making critical spectroscopic data available through the Jet Propulsion Laboratory Molecular Spectroscopy Data Base (http://spec.jpl.nasa.gov/), the Cologne Database for Molecular Spectroscopy (http://www.astro.uni-koeln.de/cdms/ and @Muller05) and the Leiden Atomic and Molecular Database (http://www.strw.leidenuniv.nl/$\sim$moldata/ and @Schoier05). Finally, it is a pleasure to acknowledge useful discussions with Dr. Edwin Bergin. Support for this work was provided by NASA through an award issued by JPL/Caltech. Allers, K.N., Jaffe, D.T., Lacy, J.H., Draine, B.T., Richter, M.J. 2005, [[*Ap. J.*]{}]{}, 630, 368 Andersson, S., & van Dishoeck, E.F. 2008, [[*A&A*]{}]{}, 491, 907 Arasa, C., Andersson, S., Cuppen, H.M., van Dishoeck, E.F., & Kroes, G.J. 2011, [*J. Chem. Phys.*]{}, 134, 164503 Batrla, W., & Wilson, T. L. 2003, [[*A&A*]{}]{}, 408, 231 Cesarsky, D., Jones, A. P., Lequeux, J., & Verstraete, L.
2000, [[*A&A*]{}]{}, 358, 708 de Graauw, T., [et$\,$al.]{} 2010, [[*A&A*]{}]{}, 518, L6 Drouin, B. J., Yu, S., Miller, C. E. [et$\,$al.]{} 2010, [*JQSRT*]{}, 111, 1167 Goicoechea, J.R., Joblin, C., Contursi, A., Berné, O., Cernicharo, J., Gerin, M., Le Bourlot, J., Bergin, E.A., Bell, T.A., Röllig, M. 2011, [[*A&A*]{}]{}, 530, L16 Goldsmith, P. F., Bergin, E. A., & Lis, D. C. 1997, [[*Ap. J.*]{}]{}, 491, 615 Goldsmith, P. F., & Langer, W.D. 1978, [[*Ap. J.*]{}]{}, 222, 881 Goldsmith, P. F., Liseau, R., Bell, T. A., [et$\,$al.]{} 2011, [[*Ap. J.*]{}]{}, 737, 96 Gorti, U., & Hollenbach, D.J. 2002, [[*Ap. J.*]{}]{}, 573, 215 Habing, H. J. 1968, [*Bull. Astron. Inst. Netherlands*]{}, 19, 421 Hasegawa, T. I., & Herbst, E. 1993, [[*M.N.R.A.S.*]{}]{}, 261, 83 Herrmann, F., Madden, S. D., Nikola, T., Poglitsch, A., Timmermann, R., Geis, N., Townes, C. H., & Stacey, G. J. 1997, [[*Ap. J.*]{}]{}, 481, 343 Hirota, T., [et$\,$al.]{} 2007, [*Publ. Astron. Soc. Japan*]{}, 59, 897 Hogerheijde, M. R., Jansen, D. J., & Van Dishoeck, E. F. 1995, [[*A&A*]{}]{}, 294, 792 Hollenbach, D.J., Kaufman, M. J., Bergin, E.A., & Melnick, G.J. 2009, [[*Ap. J.*]{}]{}, 690, 1497 Jansen, D.J., Spaans, M., Hogerheijde, M.R., Van Dishoeck, E.F. 1995, [[*A&A*]{}]{}, 303, 541 Kim, M. K., [et$\,$al.]{} 2008, [*Publ. Astron. Soc. Japan*]{}, 60, 991 Lique, F. 2010, [*J. Chem. Phys.*]{}, 132, 044311 Lis, D. C., & Schilke, P. 2003, [[*Ap. J.*]{}]{}, 597, L145 Lis, D. C., Pearson, J. C., Neufeld, D. A., [et$\,$al.]{} 2010, [[*A&A*]{}]{}, 521, L9 Liseau, R., Goldsmith, P. F., Larsson, B., [et$\,$al.]{} 2012, submitted Mathis, J. S., Rumpl, W., & Nordsieck, K. H. 1977, [[*Ap. J.*]{}]{}, 217, 425 Marconi, A., Testi, L., Natta, A., & Walmsley, C. M. 1998, [[*A&A*]{}]{}, 330, 696 Menten, K. M., Reid, M. J., Forbrich, J., & Brunthaler, A. 2007, [[*A&A*]{}]{}, 474, 515 Müller, H. S. P., Schöder, F., Stutzki, J., & Winnewisser, G. 2005, [*J. Mol. Struct.*]{}, 742, 215 Neufeld, D. A., Lepp, S., & Melnick, G. J. 
1995, [[*Ap. J. Suppl.*]{}]{}, 100, 132 Neufeld, D. A., Schilke, P., Menten, K. M., Wolfire, M. G., Black, J. H., Schuller, F., Müller, H. S. P., Thorwirth, S., Güsten, R., & Philipp, S. 2006, [[*A&A*]{}]{}, 454, L37 Neufeld, D. A., Roueff, E., Snell, R. L. [et$\,$al.]{} 2011, submitted to [[*Ap. J.*]{}]{} Öberg, K. I., Linnartz, H., Visser, R., & van Dishoeck, E. F. 2009, [[*Ap. J.*]{}]{}, 693, 1209 O’Dell, C. R., & Wong, S. K. 1996, [[*A. J.*]{}]{}, 111, 846 Ott, S. 2010, [*ASP Conf. Ser. 434, Astronomical Data Analysis Software and Systems XIX*]{}, ed. Y. Mizuno, K. I. Morita, & M. Ohishi (San Francisco, CA: ASP), 139 Pellegrini, E. W., Baldwin, J. A., Ferland, G. J., Shaw, G., & Heathcote, S. 2009, [[*Ap. J.*]{}]{}, 693, 285 Pickett, H. M., Poynter, R. L., Cohen, E. A., Delitsky, M. L., Pearson, J. C., & Müller, H. S. P. 1998, “Submillimeter, Millimeter, and Microwave Spectral Line Catalog,” [*J. Quant. Spectrosc. & Rad. Transfer*]{}, 60, 883 Roelfsema, P. R., Helmich, F. P., Teyssier, D., [et$\,$al.]{} 2012, [[*A&A*]{}]{}, 537, A17 Schöier, F. L., van der Tak, F. F. S., van Dishoeck, E. F., & Black, J. H. 2005, [[*A&A*]{}]{}, 432, 369 Shaw, G., Ferland, G. J., Henney, W. J., Stancil, P. C., Abel, N. P., Pellegrini, E. W., Baldwin, J. A., & van Hoof, P. A. M. 2009, [[*Ap. J.*]{}]{}, 701, 677 Sternberg, A. & Dalgarno, A. 1995, [[*Ap. J. Suppl.*]{}]{}, 99, 565 Simon, R., Stutzki, J., Sternberg, A., & Winnewisser, G. 1997, [[*A&A*]{}]{}, 327, L9 Walmsley, C. M., Natta, A., Oliva, E., & Testi, L. 2000, [[*A&A*]{}]{}, 364, 301 Westley, M. S., Baragiola, R. A., Johnson, R. E., & Baratta, G. A. 1995a, [*Nature*]{}, 373, 405 Westley, M. S., Baragiola, R. A., Johnson, R. E., & Baratta, G. A. 1995b, [*Planet. Space Sci.*]{}, 43, 1311 Wyrowski, F., Schilke, P., Hofner, P., & Walmsley, C. M. 1997, [[*Ap. J.*]{}]{}, 487, L171 Young Owl, R. C., Meixner, M. M., Wolfire, M., Tielens, A. G. G. M., & Tauber, J. 2000, [[*Ap. J.*]{}]{}, 540, 886 TABLE 1. 
Summary of Observations

| Species | Transition | Rest Frequency$^{1}$ (GHz) | Observing Mode$^{\,2}$ | Integration Time (hrs) | T$_{\rm A}^{\;*}$ Amplitude (K) | LSR Line Center (km s$^{-1}$) | FWHM (km s$^{-1}$) | Integrated Intensity (K-km s$^{-1}$) |
|---|---|---|---|---|---|---|---|---|
| H$_2$Cl$^+$ | $J =\,$1$_{11}$[$\,$–$\,$]{}0$_{00}$, F$ =\,$3/2[$\,$–$\,$]{}3/2 | 485.413 | sc | 1.16 | 0.055 | 10.56 | 2.47 | 0.15 |
| | F$ =\,$5/2[$\,$–$\,$]{}3/2 | 485.418 | sc | 1.16 | 0.076 | 10.56 | 2.47 | 0.20 |
| | F$ =\,$1/2[$\,$–$\,$]{}3/2 | 485.421 | sc | 1.16 | 0.030 | 10.57 | 2.47 | 0.08 |
| SO$^+$ | $J =\,$21/2[$\,$–$\,$]{}19/2, $\Omega =\,$1/2, $\ell =\,$e | 486.837 | sc | 1.85 | 0.029 | 10.77 | 2.28 | 0.07 |
| SO$^+$ | $J =\,$21/2[$\,$–$\,$]{}19/2, $\Omega =\,$1/2, $\ell =\,$f | 487.212 | sc | 1.85 | 0.027 | 10.99 | 1.86 | 0.05 |
| O$_2$ | 3$_3$[$\,$–$\,$]{}1$_2$ | 487.249 | sc | 1.85 | $\leq\,$0.008$^{\, 3}$ | [$\,$–$\,$]{} | [$\,$–$\,$]{} | [$\,$–$\,$]{} |
| CS | $J =\,$10[$\,$–$\,$]{}9 | 489.751 | sc | 0.46 | 0.46 | 10.58 | 1.78 | 0.87 |
| [$^{13}$CO]{} | $J =\,$7[$\,$–$\,$]{}6 | 771.184 | sp | 1.15 | 27.04 | 10.67 | 2.24 | 64.48 |
| O$_2$ | 5$_4$[$\,$–$\,$]{}3$_4$ | 773.840 | sc, sp | 10.91 | $\leq\,$0.007$^{\, 3}$ | [$\,$–$\,$]{} | [$\,$–$\,$]{} | [$\,$–$\,$]{} |
| C$_2$H | N$\,=\,$9[$\,$–$\,$]{}8, $J =\,$19/2[$\,$–$\,$]{}17/2, F$ =\,$9[$\,$–$\,$]{}8 | 785.802 | sc, sp | 10.91 | 0.34 | 10.76 | 2.35 | 0.84 |
| C$_2$H | N$\,=\,$9[$\,$–$\,$]{}8, $J =\,$17/2[$\,$–$\,$]{}15/2, F$ =\,$9[$\,$–$\,$]{}8 | 785.865 | sc, sp | 10.91 | 0.30 | 10.77 | 2.35 | 0.75 |
| C$^{17}$O | $J =\,$7[$\,$–$\,$]{}6 | 786.281 | sc, sp | 10.91 | 1.19 | 10.62 | 1.76 | 2.23 |

$^{1}$ NRAO-recommended rest frequency.  $^{2}$ sc: spectral scan observation;  sp: single point observation.
$^{3}$ 3$\sigma$ upper limit. \[tobs\] $\!$![Position of the HIFI 44.7[$^{\prime\prime}$]{} and 28.2[$^{\prime\prime}$]{} beams at 487 GHz and 774 GHz, respectively, superposed on an HST image of the Orion nebula [@ODell96]. Also shown are contours of [$^{13}$CO]{} $J =\,$3[$\,$–$\,$]{}2 integrated intensity for a portion of a larger map obtained by [@Lis03], with intensities in [K$\:$km s$^{-1}$]{} noted. The HIFI beams are centered at $\alpha\,=\,$05$^{\rm h}$35$^{\rm m}$20$^{\rm s}\!\!.$6, $\delta\:=\,-$05[$^{\rm o}$]{}25[$^{\prime}$]{}14[$^{\prime\prime}$]{} (J2000), toward the surface layers of the FUV-illuminated Orion Bar where the [O$_2$]{} emission is predicted to peak.](fig1.pdf "fig:") \[finderchart\] $\!$![Averaged H and V polarization spectra obtained in HIFI Band 1a toward the Orion Bar, ordered by rest frequency, with the Gaussian fits given in Table 1 superposed in red. Also indicated is the energy of the upper level for each transition, in Kelvins. The H$_2$Cl$^+$ line is a blend of three, partially resolved, hyperfine components (see Table 1). An LSR velocity of 10.7[ km s$^{{-1}}$]{} is denoted with a vertical dashed line.](fig2.eps "fig:") \[Band1aspectra\] $\!$![Averaged H and V polarization spectra obtained in HIFI Band 2b toward the Orion Bar, ordered by rest frequency, with the Gaussian fits given in Table 1 superposed in red. Also indicated is the energy of the upper level for each transition, in Kelvins. An LSR velocity of 10.7[ km s$^{{-1}}$]{} is denoted with a vertical dashed line. The frequency of the [$^{13}$CO]{} $J=\,$7[$\,$–$\,$]{}6 transition was near the edge of the band and, thus, the spectrum does not cover the full LSR velocity range of the other lines.
The [O$_2$]{} spectrum has been truncated to remove features in the other sideband.](fig3.eps "fig:") \[Band2bspectra\] $\!$![Contours of integrated antenna temperature, in K km s$^{-1}$, under a Gaussian line profile versus line-of-sight [O$_2$]{} column density and gas temperature assuming an aperture efficiency of 0.7. The shaded area bounds the range of upper limits to the [O$_2$]{} 487 GHz integrated intensity assuming the intrinsic [O$_2$]{} line FWHM is 1.8[ km s$^{{-1}}$]{} (0.0150 [K$\:$km s$^{-1}$]{}) or 2.5[ km s$^{{-1}}$]{} (0.0209 [K$\:$km s$^{-1}$]{}).](fig4.eps "fig:") \[contours487\] $\!$![Contours of integrated antenna temperature, in K km s$^{-1}$, under a Gaussian line profile versus line-of-sight [O$_2$]{} column density and gas temperature assuming an aperture efficiency of 0.7. The shaded area bounds the range of upper limits to the [O$_2$]{} 774 GHz integrated intensity assuming the intrinsic [O$_2$]{} line FWHM is 1.8[ km s$^{{-1}}$]{} (0.0126 [K$\:$km s$^{-1}$]{}) or 2.5[ km s$^{{-1}}$]{} (0.0175 [K$\:$km s$^{-1}$]{}).](fig5.eps "fig:") \[contours774\] $\!$![Predicted total [O$_2$]{} column density perpendicular to the ionization front as a function of [$G_{\rm o}$]{} and $n$(H$+$2[H$_2$]{}) for [H$_2$O]{} photodesorption yields of 10$^{-3}$ (solid lines) and 10$^{-4}$ (dashed line). The results shown assume a cloud thickness sufficient to encompass the zone of peak abundance [after @Hollenbach09]. The range of [$G_{\rm o}$]{} that applies to the Orion Bar is shown in the shaded region. 
The horizontal dotted line denotes the upper limit to the [O$_2$]{} column density established here, i.e., 1.5$\,\times\,$10$^{16}$ [cm$^{-2}$]{}, divided by a geometrical enhancement factor of 4.](fig6.eps "fig:") \[Go\] $\!$![[*Top panel:*]{} Abundance of [O$_2$]{} as a function of face-on depth into a cloud, measured in [[*A*]{}$_{\rm V}$]{}, for a cloud with $n_{\rm H} =\:n$(H$+$2H$_2$)$\:=\:$10$^5$ [cm$^{-3}$]{} and 10$^6$ [cm$^{-3}$]{} exposed to a FUV field of [$G_{\rm o}$]{}$\,=\:$10$^4$. This result was computed using the model described in [@Hollenbach09] assuming their “standard" model parameters, except for those noted here. An [H$_2$O]{} photodesorption yield of 10$^{-3}$ is assumed. The gas and dust temperatures throughout the cloud are calculated self-consistently in the Hollenbach et al. code, which predicts a gas temperature of 33$\:$K, and a dust temperature of 42$\:$K, at the depth of the peak [O$_2$]{} abundance above. [*Bottom panel:*]{} Cumulative face-on column density of [O$_2$]{} integrated from the cloud surface to a given depth, in [[*A*]{}$_{\rm V}$]{}, for the abundance profile shown in the top panel.](fig7.eps "fig:") \[Kaufman\]
--- abstract: 'We obtain a compact Sobolev embedding for $H$-invariant functions in compact metric-measure spaces, where $H$ is a subgroup of the measure preserving bijections. In Riemannian manifolds, $H$ is a subgroup of the volume preserving diffeomorphisms: a compact embedding for the critical exponents follows. The results can be viewed as an extension of Sobolev embeddings of functions invariant under isometries in compact manifolds.' author: - | Micha[ł]{} Gaczkowski$^1$, Przemys[ł]{}aw Górka$^1$[^1] and Daniel J. Pons$^2$[^2]\ \ \ \ \ \ title: '**Symmetry and compact embeddings for critical exponents in metric-measure spaces**' --- [**Keywords**]{}: Sobolev spaces, metric-measure spaces, compact embedding *Mathematics Subject Classification (2010):* 46E35; 30L99. Introduction ============ Arising in the Calculus of Variations and PDE’s, the study of Sobolev spaces in Euclidean domains, and the embeddings between them, has been an active area of research for more than a century (see [@Adams] for the classical results, and [@naumann] for an overview on history). In the last fifty years, motivated by problems in Geometric Analysis, Physics and Topology, those studies have been generalized to functions on manifolds, with the extension to sections of vector bundles over those spaces, see [@aubin; @Hawking; @hebey; @lawson; @palais]. More recently, the study of metric-measure spaces[^3] demands, whenever it is possible, similar studies in this context, see the books [@Ambrosio; @Heinonen; @H].\ A fundamental ingredient in Sobolev spaces is the concept of weak or distributional gradient. In metric-measure spaces there are at least two notions that provide a valid generalization of the usual gradient in $\mathbb{R}^n$: - An upper gradient, see [@Cheeger; @H]. - The one used in this work, nowadays called a Hajłasz gradient, see [@Ambrosio; @Hajlasz] and Section \[Preliminaries\] below.
Both notions of gradient have advantages and disadvantages with respect to each other, see [@Heinonen; @H; @JSYY].\ If $X$ is a set, $\mu$ is a measure on $X$, and $1 \leq p < \infty$, denote by $L^p(X,\mu)$ the vector space of $\mu$-measurable functions such that $$\| f \|_{L^p(X,\mu)} := (\ \int_X | f(x) |^p d \mu(x)\ )^{1/p}$$ is finite. In particular, if $X$ is either a bounded domain in $\mathbb{R}^n$ or a compact Riemannian $n$-manifold, with $\mu = V_g$ the volume measure associated to the Euclidean or Riemannian line element $g$, respectively, $L^{p}_1(X,\mu)$ refers to the subspace of $L^p(X,\mu)$ made up of those functions such that the norm of their distributional gradient (with respect to $g$) is in $L^p(X,\mu)$. In those cases one has the embedding $$L^{p}_1(X,\mu) \hookrightarrow L^q(X,\mu)$$ whenever $p \leq q \leq p^{\ast} := n p / (n-p)$, where $1 \leq p < n$. If $q < p^{\ast}$, the embedding is compact, and one writes $$\label{comp-emb-1} L^{p}_1(X,\mu) \hookrightarrow \hookrightarrow L^q(X,\mu) ,$$ see [@Adams; @aubin; @hebey]. The non-compactness of the embedding in the limit case $q = p^{\ast}$ is a phenomenon that arises from sequences of transformations that induce substantial changes in the functions, transformations that nonetheless leave the norm of functions unchanged. With such information, it is tempting to look for subspaces of $L^{p}_1(X,\mu)$ whose elements are invariant under an appropriate subgroup of $\text{Diff}(X)$, and see if the compact embedding (\[comp-emb-1\]), when restricted to these subspaces, can be extended to higher values of $q$: let $H$ be a subgroup of $\text{Diff}(X)$, and denote by $L^{p}_{1,H}(X,\mu)$ the subspace of $L^{p}_1(X,\mu)$ made up of $H$-invariant functions.\ The best result in this context is due to E. Hebey and M.
Vaugon, who consider $H$ as being a subgroup of $\text{Isom}(X,g)$, the group of isometries of $(X,g)$: (Hebey-Vaugon [@hebey-vaugon], also Theorem 9.1 in [@hebey]) \[heb-vaug\] Suppose $(X,g)$ is a compact Riemannian $n$-manifold, and $H$ is a compact subgroup of $\text{Isom}(X,g)$. If $H(x)$ denotes the orbit of the point $x$ under the action of $H$, require that $H(x)$ is uncountable for every $x$ in $X$. If $k := \text{min}\ \{\ \text{dim}\ H(x) : x\in X\ \}$, then $$\label{comp-emb-2} L^{p}_{1,H}(X,V_g) \hookrightarrow \hookrightarrow L^q(X,V_g)$$ whenever $1 \leq p < n-k$ and $1 \leq q < \frac{(n-k) p}{n-k-p}$. In a metric-measure space $(X,d,\mu)$ conditions for the metric $d$ and the measure $\mu$ are sometimes required, leading to *synthetic* extensions of *analytic* Riemannian concepts, like curvature, volume, and dimension (see [@villani] for a friendly introduction to these ideas). We will use the doubling condition for the metric space $(X,d)$ and the lower Ahlfors $s$-regularity of the metric-measure space $(X, d, \mu)$.[^4] For instance, if $(X,g)$ is a compact $n$-manifold with induced distance $d_g$, then $(X,d_g,V_g)$ is a lower $n$-regular metric-measure space, and $(X,d_g)$ is doubling.\ As aforementioned, we use Hajłasz gradients: denote by $M^p_1(X,\mu)$ the vector space of functions in $L^p(X,\mu)$ such that their Hajłasz gradient is also in $L^p(X,\mu)$. In Section \[examples\], Theorem \[riem-mm\], we see that when $(X,d_g, V_g)$ is the natural metric-measure space induced from a compact $n$-manifold $(X,g)$, then $M^p_1(X, V_g)$ coincides with $L^p_1(X, V_g)$ for $1 < p < \infty$.\ In the *analytic* context of Riemannian geometry, symmetry groups are subgroups of $\text{Diff}(X)$. 
Instead, in the *synthetic* context of metric-measure spaces, symmetry groups are subgroups of $\text{Aut}(X)$, the group of automorphisms or bijections of $X$: let $H$ be a subgroup of $\text{Aut}_{\mu}(X)$, the group of $\mu$-preserving automorphisms of $X$; denote by $M^{p}_{1,H}(X,\mu)$ the subspace of $M^{p}_1(X,\mu)$ made up of $H$-invariant functions. The main result in this work is: \[main2\] Assume that $(X, d, \mu)$ is a metric-measure space that is compact, Ahlfors lower $s$-regular, with $(X,d)$ doubling, and such that $M^p_1(X,\mu)$ is reflexive. If $H$ is a subgroup of $\text{Aut}_{\mu}(X)$ such that for every $x$ in $X$ the set $H(x)$ is uncountable, then $$\label{comp-emb-3} M^p_{1,H} (X,\mu) \hookrightarrow \hookrightarrow L^{q} (X,\mu)$$ whenever $1 < p < s$ and $1 \leq q \leq p^{*} = \frac{s p}{s - p}$. In contrast with classical Sobolev spaces, there are situations where $M^p_1(X,\mu)$ is not reflexive for $1<p<\infty$: some examples of this unexpected phenomenon are self-similar Cantor sets, see [@rissanen]. On the other hand, a discussion about sufficient conditions on $(X,d,\mu)$ for $M^p_1(X,\mu)$ to be reflexive can be found in [@gorka1; @Hajlasz1].[^5] To highlight the contributions of this work, we make some remarks: 1. Concerning the groups appearing in Theorems \[heb-vaug\] and \[main2\]: In the context of metric-measure spaces arising from Riemannian manifolds, the group $H$ in Theorem \[main2\] is a subgroup of $\text{Diff}_{V_g}(X)$, the group of volume preserving diffeomorphisms of $(X,g)$; in Theorem \[heb-vaug\] the group $H$ is a compact subgroup of the smaller group $\text{Isom}(X,g)$. A classical result of S. Myers and N. Steenrod, see [@kobayashi], provides $\text{Isom}(X,g)$ with the structure of a finite dimensional Lie group, that is compact if $X$ is compact. In contrast, if $X$ is compact, H.
Omori provided the larger group $\text{Diff}_{V_g}(X)$, and $\text{Diff}(X)$ as well, with the structure of an Inverse Limit of Hilbert manifolds, see [@ebin; @kobayashi]. The Lie algebras of both groups are *represented* by vector fields: those in the *formal* Lie algebra of $\text{Diff}_{V_g}(X)$ are free of divergence; those in the Lie algebra of $\text{Isom}(X,g)$ are Killing, a stronger condition. In every Riemannian manifold there are non-trivial vector fields free of divergence; on the other hand, the sign of the Ricci curvature imposes restrictions for Killing vector fields: if the Ricci tensor is non-positive and negative definite at some point, there are no non-trivial Killing vector fields, and the group $\text{Isom}(X,g)$ is finite [@Berard; @kobayashi]. 2. Concerning the proofs of Theorems \[heb-vaug\] and \[main2\]: Roughly speaking, Theorem \[heb-vaug\] uses local charts compatible with the dimension reduction under the Riemannian submersion induced by the isometries, reducing the compact embedding of functions to a Sobolev inequality in the space orthogonal to the $H$-orbits, providing a convenient setup for specific results obtained by H. Berestycki, E. Lieb, P. L. Lions and others, see [@lions1]. The proof of Theorem \[main2\] is different: the dimension reduction compatible with isometries used in Theorem \[heb-vaug\] is not always compatible with volume preserving diffeomorphisms.[^6] Some ingredients in the proof are a Sobolev-Hajłasz inequality [@Hajlasz], and variations of the Concentration-Compactness principle, see [@lions2].\ In Section \[Preliminaries\] we provide definitions and results that will be used in Section \[main\], where a detailed proof of Theorem \[main2\] is given. 
In Section \[examples\] we see that Theorem \[main2\] can be applied in the Riemannian context, and discuss necessary and sufficient conditions for its applicability when the dimension of the $H$-orbits is 1.\ For Sobolev embeddings in non-compact spaces using symmetry, see [@Gaczkowski; @gorka1], and the references there. Preliminaries {#Preliminaries} ============= In this work $(X, d, \mu)$ is a metric-measure space equipped with a metric $d$ and a Radon measure $\mu$. We assume that the measure of every open non-empty set is positive, and that the measure of every bounded set is finite. In most parts of our paper we will assume that the metric-measure space $(X, d, \mu)$ is *lower Ahlfors $s$-regular*: this means that there exists a constant $b$ such that $$b R^s \leq \mu \left(B_R(x)\right)$$ for all balls $B_R(x)$ in $X$ with $R < \hbox{diam} X$. A metric space is said to be *doubling* if there exists a constant $C$ such that for every ball of radius $R$, there exist $C$ balls of radius $R/2$ that cover the original ball. It is not difficult to see that if $(X, d, \mu)$ is a doubling metric-measure space,[^7] then $(X, d)$ is a doubling metric space (see [@gromov], Appendix $B_+$). Conversely, J. Luukkainen and E. Saksman in [@Luukkainen] prove that every complete doubling metric space carries a doubling measure.\ If $(X, d, \mu)$ is a metric-measure space, say that a function $f$ in $L^p(X,\mu)$ belongs to the *Haj[ł]{}asz-Sobolev* space $M^{p}_{1}(X,\mu)$ if there exists some $g \in L^{p}(X,\mu)$, called a *Hajłasz gradient*, such that $$\begin{aligned} \label{global-local} |f(x) - f(y)| \leq d(x,y) \left( g(x) + g(y) \right) \end{aligned}$$ for $\mu$ almost every $x$ and $y$ in $X$.
In this context, given $f$ in $M^{p}_1(X,\mu)$, we denote by $g_f$ any Hajłasz gradient for $f$, to endow the space $M^p_1(X,\mu)$ with the norm $$\begin{aligned} \label{global-norm} \|f\|_{M^p_1(X,\mu)}:= \|f\|_{L^{p}(X,\mu)}+\inf_{g_f} \|g_f\|_{L^{p}(X,\mu)},\end{aligned}$$ and then $M^p_1(X,\mu)$ is a Banach space. In the same context, say that $f \in L^p(X,\mu)$ belongs to $m^p_1(X,\mu)$ if there exists some $g \in L^{p}(X,\mu)$, called a *local Hajłasz gradient*, such that for every $z$ in $X$ there exists an open set $U_z$ and some $E_z \subset U_z$ with $\mu(E_z) = 0$, such that for every pair of points $\{x,y\}$ in $U_z \thicksim E_z$ the inequality (\[global-local\]) holds. As in (\[global-norm\]) one defines $$\begin{aligned} \label{local-norm} \|f\|_{m^p_1(X,\mu)}:= \|f\|_{L^{p}(X,\mu)}+\inf_{g_f} \|g_f\|_{L^{p}(X,\mu)},\end{aligned}$$ where now the infimum is over all those $g_f$ that are local Hajłasz gradients for $f$. Then $m^p_1(X,\mu)$ is also a Banach space. It is obvious that Hajłasz gradients for a function $f$ are local Hajłasz gradients; the converse is not true in general, see [@JSYY] for an example. For a detailed exposition of some basic properties of these spaces, we refer to [@Ambrosio; @Hajlasz; @Hajlasz1; @HajlaszKoskela; @Heinonen; @H; @JSYY; @Kinnunen].\ The next result will be useful: (See [@Hajlasz]) \[wlozenie\] Suppose $(X, d, \mu)$ is an Ahlfors lower $s$-regular metric-measure space of finite diameter. If $1 < p < s$, then $$\begin{aligned} M^p_1(X,\mu) \hookrightarrow L^{p^*}(X,\mu), \end{aligned}$$ where $p^* =\frac{sp}{s-p}$. Moreover, there exists a constant $C=C(s,p,b)$, depending on $s, p, b$, such that for each $f$ in $M^p_1(X,\mu)$ $$\begin{aligned} \|f\|_{L^{p^*}(X,\mu)} \leq C\left(\|f\|_{L^{p}(X,\mu)}+\| g_f\|_{L^{p}(X,\mu)}\right) \end{aligned}$$ whenever $g_f$ is a Hajłasz gradient for $f$. 
We use Proposition \[wlozenie\] to infer: \[noncritic\] Assume that $(X, d, \mu)$ is an Ahlfors lower $s$-regular compact metric-measure space, with $(X, d)$ doubling. If $1 < p < s$, then for every $q <p^*$ $$\begin{aligned} M^p_1(X,\mu) \hookrightarrow \hookrightarrow L^q(X,\mu), \end{aligned}$$ where $p^* =\frac{sp}{s-p}$. By Proposition \[wlozenie\] we have that $M^p_1(X,\mu) \hookrightarrow L^q(X,\mu)$ for every $q \in [1, p^*]$. Moreover, since $(X, d)$ is doubling, by Theorem 2 in [@Kalamajska] we have the compact embedding $$\begin{aligned} \label{comp} M^p_1(X,\mu) \hookrightarrow \hookrightarrow L^{p}(X,\mu).\end{aligned}$$ Hence if $q\in [1,p]$, then $$M^p_1(X,\mu) \hookrightarrow \hookrightarrow L^p(X,\mu) \hookrightarrow L^q(X,\mu).$$ Next, consider the case when $p < q < p^*$. We will prove that the ball $\mathfrak{B}=\{f: \|f\|_{M^p_1(X,\mu)}\leq 1\}$ is precompact in $L^q(X,\mu)$. Fix $\theta$ in $(0,1)$ so that $$\frac{1}{q}=\frac{\theta}{p}+\frac{1-\theta}{p^*},$$ and use (\[comp\]) to note that $\mathfrak{B}$ is precompact in $L^p(X,\mu)$. Hence for every $\epsilon >0$ there exists an $\tilde{\varepsilon}=\epsilon^{\frac{1}{\theta}} / (2C)^{\frac{1-\theta}{\theta}}$ net[^8] of $\mathfrak{B}$ in $L^p(X,\mu)$, say $\{f_k\}_{k \in \{1,...,N\}}$, where $C$ is the constant from Proposition \[wlozenie\]. Now it is enough to prove that $\{f_k\}_{k \in \{1,...,N\}}$ is an $\epsilon$ net of $\mathfrak{B}$ in $L^q(X,\mu)$; indeed, by the interpolation inequality we have $$\begin{aligned} \|f-f_k\|_{L^q(X,\mu)}&\leq&\|f-f_k\|^{\theta}_{L^p(X,\mu)}\|f-f_k\|^{1-\theta}_{L^{p^*}(X,\mu)}\\ &\leq&C^{1-\theta}\|f-f_k\|^{\theta}_{L^p(X,\mu)}\|f-f_k\|^{1-\theta}_{M^p_1(X,\mu)}\\ &\leq&2^{1-\theta}C^{1-\theta}\|f-f_k\|^{\theta}_{L^p(X,\mu)}\leq \epsilon\end{aligned}$$ for some $k$ in $\{1,...,N\}$. Proposition \[noncritic\] highlights the fact that in general one cannot expect that $M^p_1(X,\mu) \hookrightarrow \hookrightarrow L^{p^*}(X,\mu)$.
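The exponent bookkeeping in the proof above can be verified mechanically. The following sketch, an illustration with arbitrary values $s=3$, $p=2$, $q=4$ (none of which come from the paper), computes $p^*$ and the interpolation parameter $\theta$.

```python
# Exponents in the proof of Proposition [noncritic]: p* = sp/(s-p); for
# p < q < p*, theta in (0,1) solves 1/q = theta/p + (1-theta)/p*.
# The numerical values below are arbitrary choices for illustration.
s, p = 3.0, 2.0
p_star = s * p / (s - p)          # critical exponent, here 6.0
q = 4.0                           # any exponent with p < q < p_star

theta = (1.0 / q - 1.0 / p_star) / (1.0 / p - 1.0 / p_star)
assert 0.0 < theta < 1.0
assert abs(theta / p + (1.0 - theta) / p_star - 1.0 / q) < 1e-12
print("p* =", p_star, " theta =", theta)
```

For these values $\theta = 1/4$; as $q \uparrow p^*$ one has $\theta \downarrow 0$, which is why the argument degenerates at the critical exponent.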
Theorem \[main2\] ensures that some proper subset of $M^p_1(X,\mu)$ is relatively compact in $L^{p^*}(X,\mu)$. Auxiliary Lemmata ----------------- The next lemma seems to be well known; however, we give a detailed proof due to its role in Section \[main\]. \[lemma:delty\] Here $(\Omega ,d , \mu)$ is a separable metric-measure space with a finite Borel measure $\mu$. Suppose that there exists some $\delta >0$ such that for every measurable set $A$, either $\mu(A) = 0$ or $\mu(A) \geq \delta$. Then there exists a finite set $\{x_i \}_{i \in I}$ of points in $\Omega$ and a finite set of numbers $\{ \mu_i \}_{i \in I}$ not smaller than $\delta$ such that $$\mu = \sum_{i \in I} \mu_i \delta_{x_i}.$$ Consider the set $A_{\delta} := \left\{ x \in \Omega \, : \, \mu(\{x\}) \geq \delta \right\}$. Since $\mu(\Omega) < \infty$, the set $A_{\delta}$ must be finite. We will show that $\mu(\Omega \thicksim A_{\delta}) = 0$. Since $\Omega \thicksim A_{\delta}$ is open, we have $$\Omega \thicksim A_{\delta} = \bigcup_{x \in \Omega \thicksim A_{\delta}} B_{R_x}(x) .$$ Moreover, if $x \in \Omega \thicksim A_{\delta}$ then $\mu(\{x\}) < \delta$, hence $\mu(\{x\}) = 0$ by hypothesis; since $\mu(B_{R}(x)) \rightarrow \mu(\{x\})$ as $R \rightarrow 0$, we can choose $R_x$ small enough that $\mu(B_{R_x}(x)) < \delta$, and then $$\mu(B_{R_x}(x)) = 0.$$ Furthermore, since $\Omega$ is separable, Lindelöf’s lemma yields $$\Omega \thicksim A_{\delta} = \bigcup_{x \in A} B_{R_x}(x),$$ where $A$ is a countable subset of $\Omega \thicksim A_{\delta} $, and $\mu( \Omega \thicksim A_{\delta} ) = 0$ follows. To state the next lemma, given some space $F(\Omega)$ of functions on some set $\Omega$, denote by $$F_c (\Omega) :=\{\phi \in F(\Omega): \hbox{spt} \phi \subset \subset \Omega \}$$ the subset of $F(\Omega)$ consisting of those functions whose support is a compact subset of $\Omega$. \[aproksymacja\] Here $(X,d)$ is a locally compact metric space with two Radon measures $\mu$ and $\nu$, and $ \Omega \subset X$ is a precompact open set.
Then for every $p$ and $r$ in $[1, \infty)$ the set $\hbox{Lip}_c (\Omega)$ is equidense both in $L^{r}(\Omega,\mu)$ and in $L^{p}(\Omega,\nu)$. This means that for every $\epsilon >0$ and every $f \in L^{r}(\Omega,\mu) \cap L^{p}(\Omega,\nu)$ there exists some $\phi$ in $\hbox{Lip}_c (\Omega)$ such that $$\|f -\phi\|_{L^r(\Omega,\mu)} \leq \epsilon \quad \text{and} \quad \|f-\phi\|_{L^p(\Omega,\nu)} \leq \epsilon.$$ By Urysohn’s lemma $C_c (\Omega)$ is dense in $L^r(\Omega,\mu)$ and in $L^p(\Omega,\nu)$. To prove the lemma it is sufficient to check that for every measurable set $A$ the characteristic function $\mathbf{1}_A$ can be approximated both in $L^r(\Omega,\mu)$ and in $L^p(\Omega,\nu)$ by functions in $\hbox{Lip}_c (\Omega)$. The regularity of the measures ensures that there exists a sequence $\{K_n\}_n$ of compact sets and a sequence $\{U_n\}_n$ of open sets such that $ K_n \subset A \subset U_n $, with $$\mu(U_n \thicksim K_n) \leq \frac{1}{n}\ \text{and} \ \nu(U_n \thicksim K_n) \leq \frac{1}{n}.$$ Without loss of generality we can assume that $U_n \subset \Omega$. Moreover, since the space is locally compact, for every $n$ there exists an open precompact set $V_n$ such that $$K_n \subset V_n \subset \overline{V}_n \subset U_n.$$ Introduce the sequence of functions $\{\psi_n\}_n$ given for each $n$ by $$\psi_n := \mathbf{1}_{K_n} : K_n \cup (\Omega \thicksim V_n)\rightarrow [0,1] ,$$ and denote by $\tilde{\psi}_n$ the extension of $\psi_n$ to all $\Omega$ defined as $$\tilde{\psi}_n(x) := \sup_{y \in K_n \cup (\Omega \thicksim V_n) } \left( \psi_n(y) - L_n\ d(x,y) \right) ,$$ where $L_n = 1 / \text{ dist } ( K_n , \Omega \thicksim V_n)$. Such functions are Lipschitz on $\Omega$, with $\tilde{\psi}_n = \psi_n$ on $ K_n \cup (\Omega \thicksim V_n)$, and with $\tilde{\psi}_n \leq 1$. Finally, define $$\phi_n = \max \{0, \tilde{\psi}_n \},$$ and note that $\phi_n \in \hbox{Lip}_c (\Omega)$. 
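The extension $\tilde{\psi}_n$ above is an instance of the classical McShane formula. As a purely illustrative check, not part of the proof, the following Python sketch verifies on $([0,1],|x-y|)$ with a finite set $S$ of hypothetical data points that the formula produces an $L$-Lipschitz function agreeing with the original values on $S$.

```python
import random

# McShane extension (illustration only): psi is defined on a finite set
# S in [0,1] with Lipschitz constant L; the formula
#   tilde_psi(x) = sup_{y in S} ( psi(y) - L*|x - y| )
# is L-Lipschitz on all of [0,1] and agrees with psi on S.
S = [0.0, 0.3, 0.7, 1.0]
psi = {0.0: 0.0, 0.3: 1.0, 0.7: 1.0, 1.0: 0.0}
L = 10.0 / 3.0                     # a Lipschitz constant for psi on S

def tilde_psi(x):
    return max(psi[y] - L * abs(x - y) for y in S)

for y in S:                        # agreement on S
    assert abs(tilde_psi(y) - psi[y]) < 1e-12

random.seed(2)
for _ in range(5000):              # L-Lipschitz on the whole interval
    x, z = random.random(), random.random()
    assert abs(tilde_psi(x) - tilde_psi(z)) <= L * abs(x - z) + 1e-12
print("McShane extension verified")
```

The two checked properties are exactly what the proof needs: $\tilde{\psi}_n$ extends $\psi_n$ without increasing the Lipschitz constant, so truncating by $\max\{0,\cdot\}$ yields a function in $\hbox{Lip}_c(\Omega)$.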
Then it is easy to see that $$\int_{\Omega} \left| \phi_n(x) - \mathbf{1}_A(x) \right|^{r} d \mu(x) \leq 2^{r} \mu(U_n \thicksim K_n) \leq \frac{2^{r}}{n} ,$$ and similarly $$\int_{\Omega} \left| \phi_n(x) - \mathbf{1}_A(x) \right|^{p} d \nu(x) \leq \frac{2^{p}}{n}.$$ Proof of Theorem \[main2\] {#main} ========================== In this section we prove Theorem \[main2\], our main result. To prove such a theorem, we will need Theorem \[theorem:Lions\], which in turn requires Lemma \[lemma:rhol\], Lemma \[prod\] and Lemma \[BL\]. We start with Lemma \[lemma:rhol\]: \[lemma:rhol\] (Reverse Hölder) Let $\Omega \subset X$ be an open precompact subset of the metric space $(X, d)$, and let $\mu$ and $\nu$ be Radon measures on $\Omega$. Assume that $1 \leq p < r$. If there exists a positive real number $C$ such that for every Lipschitz $\phi$ with compact support $$\label{hypothesis} \| \phi \|_{L^r(\Omega,\mu)} \leq C \| \phi \|_{L^{p}(\Omega,\nu)},$$ then there exists a countable set of points $\{x_i \}_{i \in I}$ in $\Omega$ and a countable set $\{\mu_i\}_{i \in I}$ of positive real numbers such that $$\mu = \sum_{i \in I} \mu_i \delta_{x_i}.$$ We divide the proof into two steps.\ [**Step 1.**]{} Assume that $\mu = \nu$, choose any measurable set $A$, and assume that (\[hypothesis\]) holds; by Lemma \[aproksymacja\] $$\| \mathbf{1}_{A} \|_{L^{r}(\Omega,\mu)} \leq C \| \mathbf{1}_{A} \|_{L^{p}(\Omega,\mu)},$$ and then $$\mu(A)^{\frac{1}{r}} = \| \mathbf{1}_{A} \|_{L^r(\Omega,\mu)} \leq C \| \mathbf{1}_{A} \|_{L^p(\Omega,\mu)} = C \mu(A)^{\frac{1}{p}}.$$ Hence either $\mu(A) = 0$, or $\mu(A) \geq 1 / {C}^{\frac{pr}{r- p}}$.
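The dichotomy just obtained is the elementary fact that $t^{1/r} \leq C\, t^{1/p}$ with $t > 0$ and $p < r$ forces $t \geq C^{-pr/(r-p)}$. A quick numerical sanity check of this implication, with illustrative values not taken from the paper:

```python
# Elementary dichotomy from Step 1: if t = mu(A) > 0 and
# t**(1/r) <= C * t**(1/p) with p < r, then t >= C**(-p*r/(r-p)).
# Numerical sanity check with illustrative values.
C, p, r = 2.0, 2.0, 3.0
threshold = C ** (-p * r / (r - p))     # here 2**(-6) = 1/64

for k in range(1, 2000):
    t = k / 1000.0
    if t ** (1.0 / r) <= C * t ** (1.0 / p):
        assert t >= threshold - 1e-12
print("threshold =", threshold)
```

This quantitative gap is what feeds Lemma \[lemma:delty\]: sets of positive measure cannot be arbitrarily small, so the measure must be purely atomic with finitely many atoms.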
Then by Lemma \[lemma:delty\] there exists a finite set $\{x_i \}_{i \in I}$ of points in $\Omega$ and a finite set $\{ \mu_i \}_{i \in I}$ of real numbers with $\mu_i \geq 1/C^{\frac{pr}{r- p}}$ such that $$\mu = \sum_{i \in I} \mu_i \delta_{x_i}.$$ [**Step 2.**]{} Now assume that $\mu$ and $\nu$ are arbitrary; the Lebesgue Decomposition theorem ensures that $$\label{decomposition} \nu = \mu \llcorner \theta + \sigma$$ for some non-negative $\theta$ in $L^1(\Omega,\mu)$, where $\mu \llcorner \theta (A) := \int_{A} \theta d \mu$, and $\sigma$ is a positive measure singular with respect to $\mu$. For every positive integer $n$ consider the function $$\phi_n := \theta^{\frac{1}{r - p}} \mathbf{1}_{\theta \leq n} \psi ,$$ where $ \psi $ is Lipschitz with compact support, and also the measure $$\mu_n := \mu \llcorner ( \theta^{\frac{r}{r - p}} \mathbf{1}_{\theta \leq n} ).$$ Assuming (\[hypothesis\]) and recalling Lemma \[aproksymacja\], use the decomposition (\[decomposition\]) to obtain $$\label{ineq-rp} \| \phi_n \|_{L^r(\Omega,\mu)} \leq C \| \phi_n \|_{L^p(\Omega,\nu)} = C \| \phi_n \|_{L^p(\Omega, \mu \llcorner \theta + \sigma)} = C \| \phi_n \|_{L^p(\Omega, \mu \llcorner \theta )} .$$ However $$\label{eq-p} \| \phi_n \|^p_{L^p(\Omega, \mu \llcorner \theta )} = \int_{\Omega} \left| \psi \right|^p \theta^{\frac{p}{r-p}} \mathbf{1}_{\theta \leq n} \theta d \mu = \int_{\Omega} \left| \psi \right|^p \theta^{\frac{r}{r-p}} \mathbf{1}_{\theta \leq n} d \mu = \| \psi \|^p_{L^p(\Omega , \mu_n)} ,$$ and similarly $$\label{eq-r} \| \psi \|_{L^r(\Omega , \mu_n)} = \| \phi_n \|_{L^r(\Omega, \mu)} .$$ Then use (\[eq-p\]) and (\[eq-r\]) in (\[ineq-rp\]) to infer that $$\| \psi \|_{L^r(\Omega , \mu_n)} \leq C \| \psi \|_{L^p(\Omega , \mu_n)}$$ for every $n$.\ Hence by Step 1 $$\mu_n = \sum_{i \in I_n} \mu_{n,i} \delta_{x_{n,i}}$$ for every $n$.
Recall the definition of the measures $\mu_n$, and note that $\hbox{spt} \mu_n \subset \hbox{spt} \mu_{n+1}$, in particular $I_n \subset I_{n+1}$. Let $I=\bigcup_{n=1}^{\infty}I_n$ and, for $i$ in $I_n$, define $x_i := x_{n,i}$; one can write $$\mu_n = \sum_{i \in I_n} \mu_{n,i} \delta_{x_i}.$$ Since $\mu_{n,i} =\mu_n (\{x_i\})\leq \mu_{n+1} (\{x_i\})=\mu_{n+1,i} $, it follows that for each $i$ the number $\mu_{n,i}$ is non-decreasing with respect to $n$.\ Denote by $\mathcal{M}(\Omega)$ the set of measures on $\Omega$ endowed with the weak-$\ast$ topology. Let $\tilde{\mu}_n= \mu_n \llcorner (\theta^{-\frac{r}{r-p}}\mathbf{1}_{\{\theta>0\}})$, and observe that $\tilde{\mu}_n \rightarrow \mu \llcorner \mathbf{1}_{\{\theta>0\}}$ in $\mathcal{M}(\Omega)$. We claim that $$\tilde{\mu}_n \rightarrow \mu .$$ To prove that, it suffices to show that $\mu(\{\theta=0\})=0$. Since $\mu$ is singular with respect to $\sigma$, there exist subsets $A$ and $B$ with $A \cap B =\emptyset$, such that for every measurable set $E$ we have $\mu(E)= \mu (A\cap E)$ and $\sigma(E)= \sigma (B\cap E)$. Therefore $$\int_{\Omega}\mathbf{1}_A \mathbf{1}_{\{\theta=0\}} d \nu =\int_{\Omega}\mathbf{1}_A \mathbf{1}_{\{\theta=0\}} \theta d \mu + \int_{\Omega}\mathbf{1}_A \mathbf{1}_{\{\theta=0\}} d \sigma=0,$$ hence $\nu (A\cap \{\theta=0\})=0$, and using (\[hypothesis\]) $$\| \mathbf{1}_{A\cap \{\theta=0\}} \|_{L^r(\Omega,\mu)} \leq C \| \mathbf{1}_{A\cap \{\theta=0\}} \|_{L^{p}(\Omega,\nu)}=0.$$ Thus $\mu ( \{\theta=0\})=\mu (A\cap \{\theta=0\})=0$, as required. Now we continue with Lemma \[prod\]: (Hajłasz-Leibniz) \[prod\] If $v \in M^p_1(X,\mu)$ and $\phi \in \hbox{Lip}(X)$, then $f = v \phi \in M^p_1(X,\mu)$. Moreover, $$g_f = g_v |\phi| + |v|\|\phi\|_{\hbox{Lip}}$$ is a Hajłasz gradient for $v \phi$.
The result follows from the string of inequalities $$\begin{aligned} |v(x)\phi(x) - v(y)\phi(y)| &\leq& |v(x)-v(y)|\ \min\{|\phi(x)|,|\phi(y)|\} + \max \{|v(x)|,|v(y)| \}\ |\phi(x)-\phi(y)|\\ &\leq& \big( g_v(x)+g_v(y) \big)\ \min \{ |\phi(x)|,|\phi(y)|\}\ d(x,y) + \max \{ |v(x)|,|v(y)| \}\ \|\phi\|_{\hbox{Lip}}\ d(x,y)\\ &\leq& \big( g_v(x)|\phi(x)|+g_v(y)|\phi(y)| \big)\ d(x,y) + (|v(x)|+|v(y)|)\ \|\phi\|_{\hbox{Lip}}\ d(x,y) \\ & = & d(x,y)\ ( g_{v \phi}(x) + g_{v \phi}(y) ),\end{aligned}$$ with $g_{v \phi} := g_v |\phi| + |v|\|\phi\|_{\hbox{Lip}} .$ Finally, before stating Theorem \[theorem:Lions\], we recall the following Lemma attributed to H. Brézis and E. Lieb: \[BL\] Let $p \in [1, \infty)$. If $f_n \rightarrow f$ weakly in $L^{p}(X,\mu)$ and $f_n \rightarrow f$ $\mu$-almost everywhere, then $$\lim_{n \rightarrow \infty} \left( \int_{X} |f_n|^{p} d\mu - \int_{X} |f_n - f|^{p} d \mu \right) = \int_{X} |f|^{p} d \mu.$$ With those results at hand, we have: \[theorem:Lions\] If $(X, d, \mu)$ is an Ahlfors lower $s$-regular compact metric-measure space, and $1 < p < s$, then for every sequence $\left\{ u_n \right\}$ in $M^p_1(X,\mu)$ such that $u_n \rightarrow u$ weakly in $M^p_1(X,\mu)$ and $u_n \rightarrow u$ strongly in $L^p(X,\mu)$, there exists a subsequence $\left\{ u_n \right\}$ and a countable set $I$ such that $$\label{eq:form} \mu \llcorner |u_n|^{p^*} \to \mu \llcorner |u|^{p^*} + \sum_{i \in I} \mu_i \delta_{x_i}$$ in $\mathcal{M} (X)$, where $x_i \in X$ for every $i \in I$. We begin with two observations: 1. Let $v_n:= u_n - u$, and fix some $\phi$ in $\hbox{Lip}_c(X)$. The hypotheses, Proposition \[wlozenie\] and Lemma \[prod\] when applied to $f_n := v_n \phi$ give $$\label{istot} \|v_n \phi\|_{L^{p^*}(X,\mu)} \leq C\left(\|v_n \phi\|_{L^{p}(X,\mu)}+\| g_{v_n} \phi\|_{L^{p}(X,\mu)} + \|\phi\|_{\hbox{Lip}} \| v_n \|_{L^{p}(X,\mu)}\right) .$$ 2. 
The hypotheses also imply that - $\|v_n \phi\|_{L^{p}(X,\mu)} \to 0$ and $\| v_n \|_{L^{p}(X,\mu)} \to 0$, - $\mu \llcorner |v_n|^{p^{\ast}} \to \bar{\mu}$ and $\mu \llcorner |g_{v_n}|^{p} \to \nu$ for some $\bar{\mu}$ and $\nu$ in $\mathcal{M}(X)$. Those observations yield the reverse Hölder inequality $$\| \phi \|_{L^{p^*}(X,\bar{\mu})} \leq C \| \phi \|_{L^{p}(X,\nu)} ,$$ and Lemma \[lemma:rhol\] ensures that the measure $\bar{\mu}$ has the form $$\label{eq:suma} \bar{\mu} = \sum_{i \in I} \mu_i \delta_{x_i}.$$ Now use Lemma \[BL\] with $f_n=u_n \phi^{\frac{1}{p^*}}$, where $\phi$ is a non-negative function in $C_c(X)$, and $$\lim_{n \rightarrow \infty} \left( \int_{X} \phi |u_n|^{p^{*}} d \mu - \int_{X} \phi |v_n|^{p^{*}} d \mu \right) = \int_{X} \phi |u|^{p^{*}} d \mu$$ follows. Then recall that $\mu \llcorner |v_n|^{p^{*}} \to \bar{\mu}$ in $\mathcal{M}(X)$, to infer $$\label{eq:pos} \lim_{n \rightarrow \infty} \int_X \phi |u_n|^{p^{*}} d \mu = \int_X \phi\ d \bar{\mu} + \int_X \phi |u|^{p^{*}} d \mu.$$ Since every continuous function of compact support, say $\phi$, can be written as $\phi=\phi_{+}-\phi_{-}$, where $\phi_{+}$ and $\phi_{-}$ are non-negative with compact support, one concludes that (\[eq:pos\]) holds for every $\phi$ in $C(X)$. Now use (\[eq:suma\]) to obtain (\[eq:form\]). Now we can prove Theorem \[main2\], the main result in this work. By the hypotheses, whenever $h \in H$ one has $h_{\#} \mu = \mu$. Let $\{ u_n \}$ be a bounded sequence in $M^p_{1,H}(X,\mu)$, namely a bounded sequence in $M^p_1(X,\mu)$ such that $h^{\#} u_n = u_n$ for each $n$ and each $h$ in $H$. Then the sequence of measures $\{ \mu_n \}$ defined by $$\mu_n := \mu \llcorner |u_n|^{p^{\ast}}$$ is also $H$-invariant.
On the other hand, if the sequence $\{ u_n \}$ converges weakly to some $u$ in $M_{1,H}^p(X,\mu)$, then[^9] by Theorem \[theorem:Lions\] there exists a subsequence[^10] of $\{ \mu_n \}$ such that $$\label{measure-inv} \mu_n \to \mu \llcorner |u|^{p^{\ast}} + \sum_{i \in I} \mu_i \delta_{x_i}$$ in $\mathcal{M}(X)$, where $I$ is at most countable. In addition, it is not difficult to see that if $\{ \mu_n \}$ is a sequence of $H$-invariant measures converging to some $\nu$ in $\mathcal{M}(X)$, then $\nu$ is also $H$-invariant; therefore from (\[measure-inv\]) the measure $$\mu \llcorner |u|^{p^{\ast}} + \sum_{i \in I} \mu_i \delta_{x_i}$$ is $H$-invariant. Moreover, since $\mu \llcorner |u|^{p^{\ast}}$ is $H$-invariant, $\sigma := \sum_{i \in I} \mu_i \delta_{x_i}$ is $H$-invariant as well.\ Choose some $k$ in $I$, and let $y = h (x_k)$ be some element in $H(x_k)$. Then $$\mu_k = \sigma(\{x_k\}) = \sigma(h^{-1}(\{y\})) = h_{\#} \sigma(\{y\}) = \sigma(\{y\}) = \sum_{i \in I} \mu_i \delta_{x_i}(\{y\}) ,$$ hence $x_i = y$ for some $i \in I$. This gives a contradiction, since $I$ is at most countable, while the orbit of each point in $X$ is uncountable by hypothesis. It follows that $$\mu \llcorner |u_n|^{p^{\ast}} \to \mu \llcorner |u|^{p^{\ast}}$$ in $\mathcal{M}(X)$; but this is equivalent to saying that $$\label{M(X)} \| \phi u_n \|_{L^{p^{\ast}}(X,\mu)} \to \| \phi u \|_{L^{p^{\ast}}(X,\mu)}$$ whenever $\phi \in C_c(X)$.\ Since $X$ is compact, we can use $\phi = 1$ in (\[M(X)\]), to conclude that if $\{u_n \}$ is a bounded sequence in $M^p_{1,H}(X,\mu)$ converging weakly to some $u$ in $M^p_{1,H}(X,\mu)$, then $$\| u_n \|_{L^{p^{\ast}}(X,\mu)} \to \| u \|_{L^{p^{\ast}}(X,\mu)}$$ for some subsequence. But $L^{p^{\ast}}(X,\mu)$ is uniformly convex, hence $u_n \to u$ in $L^{p^{\ast}}(X,\mu)$.
A useful consequence of Theorem \[main2\] is: \[corollary\] Using the same hypotheses as in Theorem \[main2\], define the constant $C$ by $$C:= \inf \{\ A > 0\ :\ \| u \|_{L^{p^{\ast}}(X,\mu)} \leq A \| u \|_{M^p_1(X,\mu)}\ \text{whenever}\ u \in M^p_{1,H}(X,\mu)\ \}.$$ Then there exists some $u_0$ in $M^p_{1,H}(X,\mu)$ such that $$C = \| u_0 \|_{L^{p^{\ast}}(X,\mu)}\ /\ \| u_0 \|_{M^p_{1,H}(X,\mu)} .$$ Define the functional $\mathcal{I} : M^p_{1,H}(X,\mu) \thicksim \{ 0\} \to \mathbb{R}$ by $$\mathcal{I}[u] := \| u \|_{M^p_1(X,\mu)} ,$$ and set $$D := \inf \{\ \mathcal{I}[u]\ :\ u \in M^p_{1,H}(X,\mu)\ ,\ \| u \|_{L^{p^{\ast}}(X,\mu)} = 1 \ \} .$$ Let $\{ u_n \}$ be a minimizing sequence, i.e. such that $u_n \in M^p_{1,H}(X,\mu)$ and $ \| u_n \|_{L^{p^{\ast}}(X,\mu)} = 1$ for every $n$, with $\mathcal{I}[u_n] \to D$. Since $\{ u_n \}$ is bounded in $M^p_{1,H}(X,\mu)$, by Theorem \[main2\] there is a subsequence[^11] of $\{ u_n \}$ and some $w$ in $M^p_{1,H}(X,\mu)$ such that $$u_n \to w\ \text{weakly in}\ M^p_{1,H}(X,\mu),$$ $$\text{and}\ u_n \to w\ \text{strongly in}\ L^{p^{\ast}}(X,\mu).$$ But $\| w \|_{L^{p^{\ast}}(X,\mu)} = 1$ by strong convergence in $L^{p^{\ast}}(X,\mu)$, hence $$D = \lim_{n \to \infty} \mathcal{I}[u_n] = \liminf_{n \to \infty} \| u_n \|_{M^p_1(X,\mu)} \geq \| w \|_{M^p_1(X,\mu)} = \mathcal{I}[w] \geq D.$$ Therefore $\mathcal{I}[w] = D$, and it follows that $C = 1/D$, with $u_0 = w$. Riemannian applications {#examples} ======================= The next result is not surprising and probably not new; however, we could not find it in the literature. For the convenience of the interested reader, and to justify the discussion in Section \[final\] below, we give a proof of it with some details. \[riem-mm\] Suppose $(X, g)$ is a compact Riemannian $n$-manifold. Then for every $p$ such that $1 < p < \infty$ the spaces $L^p_1(X, V_g)$ and $M^p_1(X, V_g)$ coincide with equivalent norms.
By Proposition 10.1 from [@HajlaszKoskela] $$M^p_1(X,V_g) \hookrightarrow L^p_1(X,V_g),$$ hence we need the opposite inclusion. Since $X$ is compact, there exists a finite number of charts $$\{(U_{\alpha},\phi_{\alpha})\ :\ \alpha \in A \},$$ such that for every $\alpha$ the components $g_{i j}^\alpha$ of $g$ in $(U_{\alpha},\phi_{\alpha})$ satisfy $$\begin{aligned} \frac{1}{2} \delta_{ij} \leq g^{\alpha}_{ij} \leq 2 \delta_{ij}\end{aligned}$$ as bilinear forms. Let $\{ \eta_{\alpha} \}$ be a smooth partition of unity subordinate to the covering $\{ U_{\alpha} \}$. We proceed in two steps.\ [**Step 1.**]{} Let $\mathcal{L}^n$ be the $n$-dimensional Lebesgue measure, and fix $u$ in $C^{\infty}(X)$. Since the spaces $M^p_1(\mathbb{R}^n, \mathcal{L}^n)$ and $L^p_1(\mathbb{R}^n, \mathcal{L}^n)$ coincide with equivalent norms, see [@Hajlasz] for example, there exists a constant $C > 1$ such that for every $\alpha$ in $A$ $$\begin{aligned} \label{ru} \frac{1}{C} \| (\eta_{\alpha} u ) \circ \phi_{\alpha}^{-1} \|_{L^p_1(\mathbb{R}^n, \mathcal{L}^n)} \leq \| (\eta_{\alpha} u ) \circ \phi_{\alpha}^{-1} \|_{M^p_1(\mathbb{R}^n, \mathcal{L}^n)} \leq C \| (\eta_{\alpha} u ) \circ \phi_{\alpha}^{-1}\|_{L^p_1(\mathbb{R}^n,\mathcal{L}^n)}.\end{aligned}$$ Furthermore $$\int_X |\eta_{\alpha} u|^p d V_g = \int_{U_{\alpha}} |\eta_{\alpha} u|^p d V_g = \int_{\phi_{\alpha}(U_{\alpha})} \sqrt{ \det g^{\alpha}_{ij}} \left| \eta_{\alpha} u \right|^{p} \circ \phi^{-1}_{\alpha}(x)\ d \mathcal{L}^n(x) ,$$ hence $$\begin{aligned} \label{niermodularfunkc} 2^{-\frac{n}{2p}}\|\eta_{\alpha} u\|_{L^p (X,V_g)} \leq \| (\eta_{\alpha} u) \circ \phi^{-1}_{\alpha}\|_{L^p (\mathbb{R}^n,\mathcal{L}^n)} \leq 2^{\frac{n}{2p}}\|\eta_{\alpha} u\|_{L^p (X,V_g)} .
\end{aligned}$$ On the other hand, we estimate the gradient locally by $$\begin{aligned} \int_{X} \left| \nabla (\eta_{\alpha} u) \right|^{p} d V_{g} &=& \int_{\phi_{\alpha} (U_{\alpha})} \sqrt{ \det g^{\alpha}_{ij}}\ \left( \sum_{k,j=1}^{n} g_{\alpha}^{kj} D_{k} ( (\eta_{\alpha} u ) \circ \phi^{-1}_{\alpha}) D_{j} ( (\eta_{\alpha} u ) \circ \phi^{-1}_{\alpha}) \right)^{\frac{p}{2}} d\mathcal{L}^n \nonumber \\ &\geq& 2^{- \frac{n+p}{2}} \int_{\phi_{\alpha} (U_{\alpha})} \left| \nabla ( (\eta_{\alpha} u ) \circ \phi^{-1}_{\alpha}) \right|^{ p } d\mathcal{L}^n,\end{aligned}$$ therefore $$\| \nabla ( \eta_{\alpha} u) \circ \phi_{\alpha}^{-1} \|_{L^p(\mathbb{R}^n, \mathcal{L}^n)} \leq 2^{\frac{n+p}{2p}} \| \nabla( \eta_{\alpha} u) \|_{L^p(X,V_g)}$$ for each $\alpha$ in $A$.\ Set $\displaystyle C_0 := \max_{\alpha \in A} \| \nabla \eta_{\alpha} \|_{\infty}+1 $. Then $$\label{eq:1} \| \nabla ( \eta_{\alpha} u) \circ \phi_{\alpha}^{-1} \|_{L^p(\mathbb{R}^n, \mathcal{L}^n)} \leq 2^{\frac{n+p}{2p}} \left( \| \nabla u \|_{L^p(X, V_g)} + C_0 \| u \|_{L^p(X,V_g)} \right) \leq 2^{\frac{n+p}{2p}}C_0 \| u \|_{L^p_1(X,V_g)} \ .$$ Fix some $\epsilon >0$; then there exists a Hajłasz gradient $h_{\alpha}$ for $(\eta_{\alpha} u) \circ \phi_{\alpha}^{-1} $ in $\phi_{\alpha}(U_{\alpha})$, so that $$\label{eq:2} \| (\eta_{\alpha} u) \circ \phi_{\alpha}^{-1} \|_{L^p(\mathbb{R}^n,\mathcal{L}^n)} + \| h_{\alpha} \|_{L^p(\mathbb{R}^n,\mathcal{L}^n)} - \epsilon \leq \| (\eta_{\alpha} u) \circ \phi_{\alpha}^{-1}\|_{M^p_1(\mathbb{R}^n, \mathcal{L}^n)}.$$ Gather inequalities (\[ru\]), (\[niermodularfunkc\]), (\[eq:1\]) and (\[eq:2\]) to get $$\begin{aligned} \| (\eta_{\alpha} u) \circ \phi_{\alpha}^{-1} \|_{L^p(\mathbb{R}^n,\mathcal{L}^n)} + \| h_{\alpha} \|_{L^p(\mathbb{R}^n,\mathcal{L}^n)} - \epsilon &\leq& C \| (\eta_{\alpha} u) \circ \phi_{\alpha}^{-1} \|_{L^p_1(\mathbb{R}^n,\mathcal{L}^n)} \\ & \leq& C \left( 2^{\frac{n+p}{2p}}C_0 + 2^{\frac{n}{2p}} \right) \| u \|_{L^p_1(X,V_g)}.\end{aligned}$$
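The constants $2^{-(n+p)/2}$ and $2^{\pm\frac{n}{2p}}$ in Step 1 all come from the two-sided bound $\frac{1}{2}\delta_{ij} \leq g^{\alpha}_{ij} \leq 2\delta_{ij}$. For a diagonal metric this bookkeeping can be spot-checked numerically; the following sketch uses hypothetical random values and is an illustration only, not part of the proof.

```python
import math
import random

# Spot check (illustration only) of the constants in Step 1: if the
# metric is diagonal with eigenvalues a_i in [1/2, 2], then
#   sqrt(det g) * (sum (1/a_i) v_i^2)**(p/2) >= 2**(-(n+p)/2) * |v|^p,
# which is the lower bound used for the integrand of |grad(eta u)|^p dV_g.
random.seed(1)
n, p = 3, 2.0
for _ in range(1000):
    a = [random.uniform(0.5, 2.0) for _ in range(n)]    # eigenvalues of g
    v = [random.uniform(-1.0, 1.0) for _ in range(n)]   # a gradient vector
    det_root = math.sqrt(math.prod(a))                  # sqrt(det g) >= 2**(-n/2)
    quad = sum(vi * vi / ai for vi, ai in zip(v, a))    # g^{kj} diagonal: g^{ii} = 1/a_i
    euc_p = sum(vi * vi for vi in v) ** (p / 2.0)       # Euclidean |v|^p
    assert det_root * quad ** (p / 2.0) >= 2.0 ** (-(n + p) / 2.0) * euc_p - 1e-12
print("lower bound verified on 1000 random diagonal metrics")
```

The bound splits into $\sqrt{\det g} \geq 2^{-n/2}$ and $(\sum g^{ii} v_i^2)^{p/2} \geq 2^{-p/2}|v|^p$, whose product is the constant appearing in the displayed estimate.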
Observe that for each $\alpha$ the function $\sqrt{2} h_{\alpha} \circ \phi_{\alpha} =: \tilde{h}_{\alpha}: U_{\alpha} \rightarrow \mathbb{R}$ is a Hajłasz gradient for $ (\eta_{\alpha} u) |_{U_{\alpha}}$. Indeed, since $h_{\alpha}$ is a Hajłasz gradient for $(\eta_{\alpha} u) \circ \phi_{\alpha} ^{-1}\ \mathbf{1}_{\phi_{\alpha} (U_{\alpha})}$, there exists a subset $E_{\alpha} \subset \mathbb{R}^n$ such that $\mathcal{L}^n(E_{\alpha})=0$, and such that for every pair $x,y \in \phi_{\alpha} (U_{\alpha}) \thicksim E_{\alpha}$ $$|\eta_{\alpha} u(\phi_{\alpha}^{-1}(x)) - \eta_{\alpha} u(\phi_{\alpha}^{-1}(y))| \leq \left( h_{\alpha}( x ) + h_{\alpha}(y) \right) |x - y|.$$ Therefore, for each pair $x,y$ in $U_{\alpha} \thicksim \phi_{\alpha} ^{-1}(E_{\alpha})$ $$\begin{aligned} \label{gradient} |\eta_{\alpha}(x) u(x) - \eta_{\alpha}(y) u(y)| &=& |\eta_{\alpha} u(\phi_{\alpha}^{-1}( \phi_{\alpha}(x) )) - \eta_{\alpha} u(\phi_{\alpha}^{-1}( \phi_{\alpha}(y))| \nonumber \\ &\leq &\left( h_{\alpha}( \phi_{\alpha}(x) ) + h_{\alpha}(\phi_{\alpha}(y)) \right) |\phi_{\alpha}(x) - \phi_{\alpha} (y)| \nonumber \\ & \leq &\left( \tilde{h}_{\alpha}(x) + \tilde{h}_{\alpha}(y) \right) d_g(x,y).\end{aligned}$$ Our next goal is to prove that $$h:= \sum_{\alpha \in A} \tilde{h}_{\alpha} \mathbf{1}_{U_{\alpha}}$$ is a local Hajłasz gradient for $u$.\ Fix $z\in X$ and define - $I_z := \{ \alpha \in A: z \in U_{\alpha}\},$ - $ J_z :=\{ \alpha \in A : z \in \partial U_{\alpha}\},$ and - $K_z :=\{ \alpha \in A: z \in X \thicksim \bar{U}_{\alpha}\}.$ Then $I_z, J_z, K_z$ are pairwise disjoint, with $I_z \cup J_z\cup K_z= A$ for each $z$ in $X$.\ Choose $R > 0$ such that - For all $\alpha$ in $I_z$, the ball $B_R(z) \subset U_{\alpha}$, - For all $\alpha$ in $J_z, \eta_{\alpha} (B_R(z))=\{0\}$, and - For all $\alpha$ in $K_z, B_R(z) \cap U_{\alpha} = \emptyset.$ Note that if $x,y \in B_R(z)$ and $\eta_{\alpha}(x) \neq 0$, then $y \in U_{\alpha}$; indeed, $\alpha$ cannot belong to $K_z \cup
J_z$, therefore $\alpha \in I_z$, and then $y \in B_R(z) \subset U_{\alpha}$. Hence for $x, y \in B_R(z)\thicksim \bigcup_{\alpha \in A} \phi^{-1}_{\alpha}(E_{\alpha})$, taking (\[gradient\]) into account, the inequality $$\left| u(x) - u(y) \right| \leq \sum_{\alpha \in A} \left| \eta_{\alpha}(x) u (x) - \eta_{\alpha}(y) u (y) \right| \leq \sum_{\alpha \in A} \left(\tilde{h}_{\alpha}(x) + \tilde{h}_{\alpha}(y) \right) d_g(x,y),$$ follows, and this proves that $h$ is a local Hajłasz gradient for $u$.\ Recalling (\[local-norm\]), collect previous estimates to infer $$\begin{aligned} \| u \|_{m^p_1(X, V_g)} & \leq& \sum_{\alpha \in A} \| \eta_{\alpha} u \|_{L^p(X,V_g)} + \| h \|_{L^p(X,V_g)} \\ &\leq& 2^{\frac{n}{2p}} \sum_{\alpha \in A} \| (\eta_{\alpha} u) \circ \phi_{\alpha}^{-1} \|_{L^p(\mathbb{R}^n,\mathcal{L}^n)} + 2^{\frac{n}{2p}} \sum_{\alpha \in A} \|h_{\alpha}\|_{L^p(\mathbb{R}^n,\mathcal{L}^n)} \\ & \leq& 2^{\frac{n}{2p}}|A| C \left( 2^{\frac{n+p}{2p}}C_0 + 2^{\frac{n}{2p}} \right) \| u \|_{L^p_1(X,V_g)} +2^{\frac{n}{2p}}|A|\epsilon ,\end{aligned}$$ where $|A|$ denotes the cardinality of the set $A$. Hence letting $ \epsilon \rightarrow 0 $ $$\label{ineq-m-M} \| u \|_{m^p_1(X,V_g)} \leq C_1 \| u \|_{L^p_1(X,V_g)},$$ where $C_1 := 2^{\frac{n}{2p}} |A| C \left( 2^{\frac{n+p}{2p}}C_0 + 2^{\frac{n}{2p}} \right)$.\ [**Step 2.**]{} Choose $u$ in $L^p_1(X, V_g)$. By the density of $C^{\infty}(X)$ in $L^p_1(X, V_g)$ there exists a sequence of smooth functions $\{u_n\}$ converging to $u$ in $L^p_1(X,V_g)$. Therefore using (\[ineq-m-M\]) for every $\epsilon >0$ there exists some $N$ such that for $m,n \geq N$ $$\| u_m - u_n \|_{m^p_1(X, V_g)} \leq C_1\| u_m - u_n \|_{L^p_1(X,V_g)} \leq \epsilon.$$ On the other hand, by the completeness of $m^p_1(X,V_g)$ the sequence $\{u_n\}$ converges to some $v$ in $m^p_1(X,V_g)$.
By the definitions of $L^p_1(X,V_g)$ and $m^p_1(X,V_g)$ the sequence $\{u_n\}$ converges to both $u$ and $v$ in $L^p(X,V_g)$; thus $u = v$, and using (\[ineq-m-M\]) $$\| u \|_{m^p_1(X,V_g)} \leq C_1 \| u \|_{L^p_1(X,V_g)},$$ therefore $L^p_1(X,V_g) \hookrightarrow m^p_1(X,V_g)$.\ Finally, by Corollary 3.5 from [@JSYY] the spaces $m^p_1(X,V_g)$ and $M^p_1(X,V_g)$ are equivalent, hence $$L^p_1(X,V_g) \hookrightarrow M^p_1(X,V_g),$$ as required. Theorem \[main2\] for flows {#final} --------------------------- We use Theorem \[riem-mm\] to apply Theorem \[main2\] in closed Riemannian manifolds when the $H$-orbits have dimension one. In this setup we see that Theorem \[main2\] can be applied if and only if the Euler characteristic of the manifold is equal to zero; this condition is restrictive only in even-dimensional manifolds.\ Consider a closed orientable Riemannian $n$-manifold $(X,g)$ whose Euler characteristic $\chi(X)$ is equal to $0$. A result attributed to H. Hopf, see [@milnor], ensures that there exists a non-vanishing[^12] vector field $\tau$ on $X$, or equivalently a non-vanishing $(n-1)$-form $\omega_{\tau}$, related to $\tau$ through the bijection $TX \longleftrightarrow \wedge^{n-1} T^{*}X$ given by $$\tau \leftrightarrow \omega_{\tau} = \tau \lrcorner\ \Omega_g ,$$ where $\Omega_g$ is the volume $n$-form induced from $g$ giving the orientation of $X$. The form $\omega_{\tau}$ is closed if and only if the vector field $\tau$ is free of divergence; indeed: $$( \operatorname{div} \tau )\ \Omega_g = L_{\tau} \Omega_g = d (\tau \lrcorner\ \Omega_g),$$ where $L_{\tau}$ is the Lie derivative along $\tau$. Denote by $H= \{ h_t : t \in \mathbb{R}\}$ the subgroup of $\text{Diff}(X)$ associated to the flow of $\tau$: if $\omega_{\tau}$ is a non-vanishing closed $(n-1)$-form on $X$, then $H$ is a subgroup of $\text{Diff}_{V_g}(X)$, and the orbit of every point in $X$ under $H$ is uncountable.\ In this spirit, D.
Asimov proved in [@Asimov] that if $n$ is at least $4$, and if the first Betti number of $X$ is different from zero, then every non-vanishing vector field is homotopic through a family of non-vanishing vector fields to a non-vanishing vector field that preserves $\Omega_g$, see also [@sullivan]. Shortly afterwards, M. Gromov, using Convex Integration,[^13] proved that if $n$ is at least $3$, then every non-vanishing $(n-1)$-form can be homotoped through non-vanishing forms to a non-vanishing exact form, with no restrictions on the first Betti number of $X$. Note that when $n=2$ the only possible manifold is the $2$-torus, and then the required vector fields are constant slope fields [@Asimov].\ With those facts, Theorem \[riem-mm\] and Corollary \[corollary\] provide simple and concrete applications: Suppose $(X,g)$ is an orientable closed Riemannian manifold with $\chi(X)=0$. If $\tau$ is a non-vanishing solenoidal vector field, the problem $$\text{Min}\ \{\ \int_X\ \Big( | \nabla u |_g^p + |u|^p \Big)\ d V_g\ :\ u \in L^p_{1,H}(X,V_g)\ \text{and}\ \int_{X}\ |u|^{p^{\ast}}\ dV_g = 1\ \}$$ has a solution whenever $1 < p < n$, where $H$ is the group associated to the flow of $\tau$. [99999]{} R. A. Adams, J. Fournier, Sobolev spaces, Second Edition, Elsevier, 2005. L. Ambrosio, P. Tilli, Topics on analysis in metric spaces. Oxford Lecture Series in Mathematics and its Applications, 25. Oxford University Press, Oxford, 2004. D. Asimov, Homotopy to divergence-free vector fields, Topology [**15**]{} (1976), 349-352. T. Aubin, Non-linear Analysis on Manifolds, Monge-Ampère Equations, Springer Verlag, 1982. P. Bérard, From vanishing theorems to estimating theorems: the Bochner technique revisited, Bull. of the AMS [**19**]{} (1988), 371-406. J. Cheeger, Differentiability of Lipschitz functions on metric-measure spaces, Geom. Funct. Anal. [**9**]{} (1999), 428-517. D. Ebin, J. Marsden, Groups of diffeomorphisms and the motion of an incompressible fluid, Ann. of Math.
[**92**]{} (1970), 102-163. Y. Eliashberg, N. Mishachev, Introduction to the h-principle, Graduate Studies in Mathematics, Vol. 48, AMS, 2002. M. Gaczkowski, P. Górka, D. J. Pons, Sobolev spaces with variable exponents on complete manifolds, J. Funct. Anal., [**270**]{} (2016), 1379-1415. P. Górka, Looking for compactness in Sobolev spaces on noncompact metric spaces, Ann. Acad. Sci. Fenn. Math. [**43**]{} (2018), 531-540. M. Gromov, Metric Structures for Riemannian and Non-Riemannian Spaces, Birkhäuser, 1999. P. Haj[ł]{}asz, Sobolev spaces on an arbitrary metric space, Potential Anal. [**5**]{} (1996), 403-415. P. Haj[ł]{}asz, Sobolev spaces on metric-measure spaces. Heat kernels and analysis on manifolds, graphs, and metric spaces (Paris, 2002), Contemp. Math. [**338**]{} (2003), 173-218. P. Haj[ł]{}asz, P. Koskela, Sobolev met Poincaré, Memoirs Amer. Math. Soc. [**688**]{} (2000), 1-101. S. W. Hawking, G. F. R. Ellis, The large scale structure of space-time, Cambridge Monographs on Mathematical Physics, Cambridge University Press, 1973. E. Hebey, Non-linear Analysis on Manifolds: Sobolev Spaces and Inequalities, Courant Lecture Notes, Vol 5, AMS, 2000. E. Hebey, M. Vaugon, Sobolev spaces in the presence of symmetries, J. Math. Pures Appl. [**76**]{} (1997), 859-881. J. Heinonen, Lectures on Analysis on Metric Spaces, Universitext, 2001. J. Heinonen, P. Koskela, N. Shanmugalingam, J. T. Tyson, Sobolev spaces on metric-measure spaces. An approach based on upper gradients, New Mathematical Monographs, 27, Cambridge University Press, 2015. R. Jiang, N. Shanmugalingam, D. Yang, W. Yuan, Hajłasz gradients are upper gradients, J. Math. Anal. Appl. [ **422**]{} (2015), no. 1, 397-407. A. Ka[ł]{}amajska, On compactness of embedding for Sobolev spaces defined on metric spaces, Ann. Acad. Sci. Fenn. Math. [**24**]{} (1999), 123-132. J. Kinnunen, O. Martio, The Sobolev capacity on metric spaces, Ann. Acad. Sci. Fenn. Math. [**21**]{} (1996), 367-382. S.
Kobayashi, Transformation Groups in Differential Geometry, Springer Verlag, 1972. H. B. Lawson, The Theory of Gauge Fields in Four Dimensions, Regional Conference Series in Mathematics, Number 58, AMS, 1985. P. L. Lions, Symétrie et compacité dans les espaces de Sobolev, Journal of Functional Analysis [**49**]{} (1982), 315-334. P. L. Lions, The Concentration-Compactness Principle in the Calculus of Variations. The limit case, Part 1, Revista Matemática Iberoamericana [**1**]{} (1985), 145-201. J. Luukkainen, E. Saksman, Every complete doubling metric space carries a doubling measure, Proc. AMS [**126**]{} (1998), 531-534. J. Milnor, Topology from the Differentiable Viewpoint, Revised Edition. Princeton Landmarks in Mathematics, Princeton University Press, 1997. J. Naumann, Remarks on the prehistory of Sobolev spaces, Preprint, Humboldt-Universität zu Berlin. DOI: 10.18452/2615 (2002). R. S. Palais, Foundations of Global Non-linear Analysis, W. A. Benjamin, Inc., 1968. J. Rissanen, Wavelets on self-similar sets and the structure of the spaces $M^{1,p}(E,\mu)$, Ann. Acad. Sci. Fenn. Math. Diss. [**125**]{} (2002), 46 pp. N. Shanmugalingam, Newtonian spaces: an extension of Sobolev spaces to metric-measure spaces, Revista Matemática Iberoamericana [**16**]{} (2000), 243-279. D. Sullivan, Cycles for the Dynamical Study of Foliated Manifolds and Complex Manifolds, Inventiones Math. [**36**]{} (1976), 225-255. C. Villani, Optimal transport, old and new, Springer Verlag, 2009. [^1]: Email: p.gorka@mini.pw.edu.pl [^2]: Email: dpons@unab.cl ; pons.dan@gmail.com [^3]: See [@gromov] for an interesting perspective. [^4]: See Section \[Preliminaries\] for definitions. [^5]: For instance, $M^p_1(X,V_g)$ is reflexive for every compact $(X,g)$. [^6]: The quotient space might not be Hausdorff.
[^7]: A metric-measure space $(X,d,\mu)$ is said to be *doubling* if the measure $\mu$ is *doubling*, namely if there exists a constant $C_{\mu}>1$ such that for every ball $B_R(x)$ one has $ \mu \left(B_{2R}(x)\right) \leq C_{\mu}\ \mu \left(B_R(x)\right).$ [^8]: This means that for each $f$ in $\mathfrak{B}$ there exists some $k$ in $\{1,...,N\}$ such that $\|f-f_k\|_{L^p(X,\mu)} <\tilde{\varepsilon}$. [^9]: $M_1^p(X,\mu)$ is reflexive in the hypotheses of the theorem. [^10]: We use the same subindex for sequences and the pertinent subsequences. [^11]: As in Theorem \[main2\], we use the same subindex for the sequence and the pertinent subsequence. [^12]: Non-vanishing, or vanishing nowhere. [^13]: See [@eliashberg] for a detailed exposition.
--- abstract: | Single-cell RNA sequencing data have complex features such as dropout events, over-dispersion, and high-magnitude outliers, resulting in complicated probability distributions of mRNA abundances that are statistically characterized in terms of a zero-inflated negative binomial (ZINB) model. Here we provide a mesoscopic kinetic foundation of the widely used ZINB model based on the biochemical reaction kinetics underlying transcription. Using multiscale modeling and simplification techniques, we show that the ZINB distribution of mRNA abundance and the phenomenon of transcriptional bursting naturally emerge from a three-state stochastic transcription model. We further reveal a nontrivial quantitative relation between dropout events and transcriptional bursting, which provides novel insights into how and to what extent the burst size and burst frequency could reduce the dropout rate. Three different biophysical origins of over-dispersion are also clarified at the single-cell level.\ **Keywords**: dropout, over-dispersion, transcriptional bursting, stochastic gene expression, chemical master equation, multiscale modeling, model simplification author: - | Chen Jia$^{1,2}$\ $^1$ Division of Applied and Computational Mathematics, Beijing Computational Science Research Center, Beijing 100193, China.\ $^2$ Department of Mathematics, Wayne State University, Detroit, MI 48202, U.S.A.\ Email: chenjia@wayne.edu title: '**Kinetic foundation of the zero-inflated negative binomial model for single-cell RNA sequencing data**' --- Introduction ============ Gene expression in living cells is a complex stochastic process, resulting in spontaneous random fluctuations in mRNA and protein abundances [@paulsson2005models].
Recent technological advances in single-cell RNA sequencing (scRNA-seq) have made it possible to measure mRNA expression and provide transcriptome profiles at the single-cell level [@sandberg2014entering; @eberwine2014promise; @kolodziejczyk2015technology; @bacher2016design]. Compared with traditional bulk RNA sequencing which measures the average mRNA expression levels across millions of cells, scRNA-seq enables the dissection of gene expression heterogeneity in different cell populations and tissues, and thus allows the investigation of many fundamental biological questions such as the identification of novel cell types, the classification of cell subtypes, and the reconstruction of cellular developmental trajectories [@pollen2014low]. Stochasticity in gene expression measurements has two fundamental origins: (i) the intrinsic noise due to small copy numbers of biochemical molecules and random collisions between them, giving rise to various probabilistic chemical reactions [@paulsson2005models], and (ii) the extrinsic noise due to limitations of current experimental techniques. Although scRNA-seq provides a new level of data resolution, it also produces a much higher noise level than bulk-level measurements. A remarkable characteristic of scRNA-seq data is the high frequency of zero read counts [@landau2013dispersion; @li2018accurate]. Given the excessive amount of zero observations in scRNA-seq data, it is important to distinguish between (i) the structural (true) zeros where the genes are truly unexpressed and (ii) the dropout (false) zeros where the genes are actually expressed but fail to be detected [@liu2016single; @hicks2017missing; @jia2017accounting; @zhu2018unified; @gong2018drimpute]. While the former is due to intrinsic biological variability, the latter, which is referred to as dropout events, is due to extrinsic technical reasons. 
Due to the tiny amount of mRNA in an individual cell, the input material needs to be captured with low efficiency and go through many rounds of amplification before being sequenced. This results in low mRNA capture rate and strong amplification bias, as well as dropout events [@wang2015advances]. To be more specific, the workflow of scRNA-seq experiments includes the following steps: isolation of single cells, cell lysis while preserving mRNA, mRNA capture, reverse transcription of primed RNA into cDNA, cDNA amplification, library preparation, and sequencing [@haque2017practical]. During these steps, possible technical reasons leading to dropouts include mRNA degradation after cell lysis, low efficiency of mRNA capture, reverse transcription, and cDNA amplification, library size differences, and sequencing depth [@hicks2017missing]. Recent studies [@haque2017practical] have shown that the efficiency for poly-adenylated mRNA species to be captured, converted into cDNA, and amplified can range between 10% and 40%, depending on the study. This means that if the starting transcripts in an individual cell are in low amount, there is a certain probability that they will not be detected by current scRNA-seq methods. Besides the dropout effect, other characteristics of scRNA-seq data include over-dispersion [@mccarthy2012differential] and high-magnitude outliers [@kharchenko2014bayesian] due to the stochastic nature of gene expression at the single-cell level and the related phenomenon of transcriptional bursting [@haque2017practical]. Given these complex features of scRNA-seq data, recent studies have highlighted the need to develop novel statistical and computational methods in data analysis, especially differential expression analysis [@bacher2016design]. 
When handling dropout events, a popular perspective held by the bioinformatic field is that the complicated probability distributions of mRNA abundances in a cell population need to be explicitly characterized by a global zero-inflation parameter, resulting in various zero-inflated models [@mcdavid2012data; @paulson2013differential]. Among these statistical models, the zero-inflated negative binomial (ZINB) model is the most widely used [@pierson2015zifa; @wagner2016revealing; @fang2016zero; @vallejos2017normalizing; @gao2017nanogrid; @wallrapp2017neuropeptide; @risso2018general; @chen2018umi; @lopez2018deep; @van2018observation; @miao2018desingle; @eraslan2019single], where the zero-inflated part describes dropouts and the negative binomial part accounts for over-dispersion. Some other commonly used models are listed in Sec. \[discussion\]. Modern sciences emphasize quantitative characterization of experimental observations, which is widely known as mathematical modeling. Along this line, two types of modeling methods should be distinguished: data-driven and mechanism-based modeling [@qian2013stochastic]. The former explains experimental phenomena in terms of data analysis based on various mathematical formulas and statistical models, while the latter understands the world in terms of mathematical deductions based on various mechanisms and scientific laws. The ZINB model of scRNA-seq data proposed in previous studies belongs to the former category. In the present work, we provide a mesoscopic kinetic foundation of the widely used ZINB model based on the stochastic biochemical reaction kinetics underlying transcription. In fact, many stochastic models of transcription dynamics have been proposed [@peccoud1995markovian; @raj2006stochastic; @iyer2009stochasticity; @mugler2009spectral; @chong2014mechanism; @kumar2015transcriptional; @jia2017emergent; @klindziuk2018theoretical]. 
Although some previous models could provide a clear explanation of over-dispersion, very few of them have incorporated the dropout effect into their model assumptions. So far, there is still a lack of a kinetic basis for the ZINB distribution of mRNA abundance. In addition, it is widely believed that the complex features of scRNA-seq data are closely related to the phenomenon of transcriptional bursting. However, the quantitative relationship among dropout events, over-dispersion, and transcriptional bursting still remains unclear. The present paper addresses these issues. A novel three-state model of transcription ========================================== Based on the central dogma of molecular biology, the transcription of a gene in an individual cell has a standard two-stage representation involving the switching of the gene between an active and an inactive epigenetic state and the synthesis of the mRNA from the gene [@paulsson2005models]. In the active state, the gene produces the mRNA. When the gene is inactive, the process of mRNA synthesis is terminated. Due to various technical factors in scRNA-seq experiments such as low mRNA capture rate, amplification bias, and sequencing depth, at a particular time, the mRNA expression in a single cell can be either detectable or undetectable [@eraslan2019single]. As a result, it is reasonable to assume that the gene of interest can exist in a third state, referred to as the dropout state, where the mRNA expression of this gene cannot be detected due to technical reasons. Here the dropout state should not be regarded as an epigenetic conformation of the gene. Instead, it characterizes an undetectable state where the transcriptional signal of the gene is missing. These considerations lead to the three-state transcription model illustrated in Fig. 
\[model\](a), where a transcript can be synthesized with rate $s$ or be degraded with rate $v$, and the gene can switch among the active, inactive, and dropout states with certain switching rates $a_i$ and $b_i$, $i = 1,2,3$. Compared with the classical two-state transcriptional model without the dropout state [@paulsson2005models], the cyclic structure of gene state switching will remarkably increase the theoretical complexity, as we shall see. ![**Stochastic transcription kinetics in individual cells with dropout events.** (a) A three-state transcription model involving gene switching among the active, inactive, and dropout states. Here the dropout state characterizes the detection state where the mRNA expression of this gene is undetectable. (b) Transition diagram of the Markovian model whose dynamics is governed by the chemical master equation.[]{data-label="model"}](model.pdf){width="60.00000%"} From the chemical perspective, the microstate of the gene of interest can be described by an ordered pair $(i,m)$: the state $i$ of the gene and the copy number $m$ of detectable transcripts, where $i = 1,2,3$ correspond to the active, inactive, and dropout states, respectively. Then the stochastic dynamics of our three-state transcription model can be described by the Markov jump process (continuous-time Markov chain) with transition diagram illustrated in Fig. \[model\](b). Since the transcriptional signal is missing when a dropout occurs, it is reasonable to assume that the dropout state can only exist with zero detectable transcript, described by the microstate $(3,0)$. Experimentally, it was widely observed that the dropout rate for a given cell strongly depends on its expression level, with dropouts being more frequent for cells with low mRNA expression levels [@kharchenko2014bayesian]. 
In general, the total content of mRNA in a single cell is low (0.01-2.5 pg per cell) [@livesey2003strategies] and most genes only transcribe a small copy number of mRNA [@taniguchi2010quantifying]. Due to the tiny amount of mRNA in an individual cell, the input material needs to be captured with low efficiency and go through many rounds of amplification before being sequenced. This results in low mRNA capture rate and strong amplification bias, as well as dropout events [@risso2018general]. As a result, microstates $(1,m)$ and $(2,m)$ with tiny mRNA abundance $m$ are more likely to convert to the dropout microstate $(3,0)$. In our Markovian model, for simplicity, we assume that $(3,0)$ can only be reached from $(1,m)$ and $(2,m)$ with $m = 0$ (Fig. \[model\](b)). In Sec. \[discussion\], a removal of this assumption will be discussed and a more realistic model will be given. There is another reason leading us to consider the three-state transcription model. Recent single-cell experiments have provided evidence that for many genes, more than two states may participate in the transcription process [@suter2011mammalian; @harper2011dynamic; @rieckh2014noise; @corrigan2016continuum; @bintu2016dynamics]. In fact, if a gene can only switch between the active and inactive states, then the sojourn times in the active and inactive states should be exponentially distributed. However, recent single-cell time-lapse measurements in eukaryotic cells [@suter2011mammalian; @harper2011dynamic] have indicated that the sojourn time in the inactive state may have a non-exponential peaked distribution. This indicates that the gene dynamics in the inactive state may contain at least two exponential steps, so that in sum the gene would undergo a three-state switching process.
In particular, in two recent studies, the authors monitored gene expression dynamics in mouse fibroblasts [@suter2011mammalian] and Chinese hamster ovary cells [@bintu2016dynamics] using single-cell time-lapse microscopy and found that both data sets were well described by a three-state gene expression model involving gene switching among an active, an inactive (reversibly silent), and a refractory (irreversibly silent) state. The difference between the inactive and refractory states is that the former has a good chance to switch back to the active state, while the possibility for the latter to switch back is much lower. In the inactive or refractory state, RNA polymerases could either be absent from the promoter or present in a paused state. Therefore, the dropout state in our three-state transcription model may have two different interpretations: It may either correspond to an undetectable state due to purely technical factors or correspond to a refractory state due to real biological factors. Let $p_{i,m}(t)$ denote the probability of having $m$ detectable transcripts at time $t$ when the gene is in state $i$. Then the evolution of the Markovian model is governed by the chemical master equation $$\left\{ \begin{split} \dot p_{1,0} =&\; a_1p_{2,0}+a_3p_{3,0}+vp_{1,1}-(b_1+b_3+s)p_{1,0}, \\ \dot p_{2,0} =&\; b_1p_{1,0}+a_2p_{3,0}+vp_{2,1}-(a_1+b_2)p_{2,0}, \\ \dot p_{3,0} =&\; b_3p_{1,0}+b_2p_{2,0}-(a_2+a_3)p_{3,0}, \\ \dot p_{1,m} =&\; a_1p_{2,m}+sp_{1,m-1}+(m+1)vp_{1,m+1}-(b_1+s+mv)p_{1,m},\;\;\;m\geq 1, \\ \dot p_{2,m} =&\; b_1p_{1,m}+(m+1)vp_{2,m+1}-(a_1+mv)p_{2,m},\;\;\;m\geq 1. \end{split}\right.$$ Here $s$ is the transcription rate; $v$ is the degradation rate of the mRNA; $a_i$ and $b_i$, $i = 1,2,3$ are the switching rates of the gene among the three states. 
Since $(i,m)$ represents the microstate of having $m$ transcripts in a single cell when the gene is in state $i$ and each transcript can be degraded with rate $v$, the transition rate from microstate $(i,m)$ to microstate $(i,m-1)$, which represents the total degradation rate of the $m$ transcripts, should be $mv$ (Fig. \[model\](b)) [@paulsson2005models]. In addition, since the dropout state could describe a refractory state, which has a lower chance to switch back to the active state than the inactive state, it is natural to assume $a_1 > a_3$ in our model. Model simplification via decimation =================================== One of the most important reasons for over-dispersion of bulk and single-cell RNA-seq data is transcriptional bursting, also known as transcriptional pulsing [@haque2017practical], which describes the phenomenon of relatively short transcriptionally active and high expression periods followed by longer transcriptionally silent and low expression periods [@cai2008frequency; @suter2011mammalian], resulting in spontaneous temporal fluctuations of transcript levels (Fig. \[trajectory\](a)). In general, transcriptional bursting results from multiple time scales underlying the transcription process [@moran2012sizing]. In fact, the mechanism of transcriptional bursting has been described by Paulsson in his review paper [@paulsson2005models]: “If genes are mostly inactive but transcribe a large number of mRNAs while in the active state, transcription could occur in bursts.” Intuitively, if we require the gene to be mostly inactive, the switching rate $b_1$ of the gene from the active to the inactive state should be much larger than the reverse switching rate $a_1$ from the inactive to the active state. On the other hand, if we require the gene to transcribe a large number of transcripts while in the active state, the transcription rate $s$ should be very large, at least at the same order of magnitude as the switching rate $b_1$.
These considerations lead to the following biochemical conditions for transcriptional bursting: $b_1\gg a_1$ and $s/b_1$ is finite. Here, by saying that $s/b_1$ is finite, we mean that $s$ and $b_1$ are roughly at the same order of magnitude. When the gene is active, the large transcription rate $s$ will give rise to fast accumulation of mRNA. Once the gene becomes inactive, the transcription process is terminated and transcripts will be degraded until the gene becomes active again. We stress here that the above biochemical conditions imposed on the rate constants are consistent with a recent single-cell experiment on transcriptional bursting [@suter2011mammalian], where the authors monitored the transcription kinetics in mouse fibroblasts using time-lapse bioluminescence imaging and found that the three rate constants $a_1$, $b_1$, and $s$ across different genes are typically at the magnitude of 0.01/min, 0.1/min, and 1/min, respectively. ![**Numerical simulations of the stochastic trajectories of the gene state and mRNA copy number based on the original Markovian model under two sets of biologically relevant parameters.** (a) Two typical trajectories when the mean burst size $h$ and maximum burst frequency $\lambda$ are moderate. The model parameters are chosen as $h = 3, \lambda = 1.5, s = hb_1, v = 1, a_1 = \lambda v, b_1 = 25, a_2 = 1, b_2 = 4, a_3 = 0, b_3 = 1$. (b) Two typical trajectories in the limiting case of $h\rightarrow 0$ and $\lambda\rightarrow\infty$, while $\lambda h = \gamma$ is kept as a constant. The model parameters are chosen as $h = 0.2, \lambda = 10, s = hb_1, v = 1, a_1 = \lambda v, b_1 = 100, a_2 = 1, b_2 = 4, a_3 = 0, b_3 = 1$.[]{data-label="trajectory"}](trajectory.pdf){width="70.00000%"} Due to the timescale separation of the underlying biochemical reaction kinetics, our Markovian model can be simplified to a much simpler one. 
To see this, let $\beta = b_1/a_1\gg 1$ denote the ratio of the switching rates between the active and inactive states. Moreover, let $q_{(i,m),(i',m')}$ denote the transition rate of the Markovian model from microstate $(i,m)$ to microstate $(i',m')$ and let $$q_{(i,m)} = \sum_{(i',m')\neq(i,m)}q_{(i,m),(i',m')}$$ denote the rate at which the system leaves microstate $(i,m)$, which is defined as the sum of transition rates from $(i,m)$ to other microstates. Since $\beta\gg 1$, we say that $(i,m)$ is a fast state if $$\lim_{\beta\rightarrow\infty}q_{(i,m)} = \infty$$ and we say that $(i,m)$ is a slow state if $$\lim_{\beta\rightarrow\infty}q_{(i,m)} < \infty.$$ If $(i,m)$ is a fast state, then the time that the system stays in this state will be very short. Since $b_1\gg a_1$ and $s/b_1$ is finite, we write $b_1 = \beta a_1$ and $s = \beta a_1(s/b_1)$, where $\beta\gg 1$ and we treat $a_1$ and $s/b_1$ as constants. Here $b_1$ and $s$ are the only two model parameters depending on $\beta$ and all other model parameters are independent of $\beta$. It is easy to check that the leaving rates of all microstates are given by $$\left\{ \begin{split} q_{(1,0)} &= b_1+b_3+s = \beta a_1(1+s/b_1)+b_3, \\ q_{(2,0)} &= a_1+b_2, \\ q_{(3,0)} &= a_2+a_3, \\ q_{(1,m)} &= b_1+s+mv = \beta a_1(1+s/b_1)+mv,\;\;\;m\geq 1, \\ q_{(2,m)} &= a_1+mv,\;\;\;m\geq 1, \end{split}\right.$$ which shows that $$\lim_{\beta\rightarrow\infty}q_{(1,m)} = \infty,\;\;\; \lim_{\beta\rightarrow\infty}q_{(2,m)} < \infty,\;\;\; \lim_{\beta\rightarrow\infty}q_{(3,m)} < \infty.$$ Therefore, all active microstates $(1,m)$ are fast states and all other microstates $(2,m)$ and $(3,m)$ are slow states (Fig. \[decimation\](a)). 
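This classification is easy to check numerically: writing $b_1 = \beta a_1$ and $s = (s/b_1)\,\beta a_1$, the leaving rates $q_{(1,m)}$ grow linearly in $\beta$ while all others stay bounded. A small sketch of the formulas above (the helper `leaving_rates` and its parameter values are our own illustration):

```python
def leaving_rates(beta, a1=1.0, s_over_b1=3.0, v=1.0,
                  a2=1.0, b2=4.0, a3=0.5, b3=1.0, m=2):
    """Leaving rates q_(i,m) with b1 = beta*a1 and s/b1 held finite."""
    b1 = beta * a1
    s = s_over_b1 * b1
    return {(1, 0): b1 + b3 + s,
            (2, 0): a1 + b2,
            (3, 0): a2 + a3,
            (1, m): b1 + s + m * v,
            (2, m): a1 + m * v}

q_small, q_large = leaving_rates(1e2), leaving_rates(1e6)
# q_(1,m) diverges as beta grows (fast states), while
# q_(2,m) and q_(3,0) are unchanged (slow states)
```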
By using a classical simplification method of two-time-scale Markov jump processes called decimation [@pigolotti2008coarse; @cappelletti2016elimination; @jia2016reduction; @jia2016simplification; @bo2016multiple; @jia2017simplification], the original Markovian model can be simplified to a reduced one by removal of all fast states. ![**Multiscale model simplification of the Markovian model.** (a) Fast (green) and slow (blue) states of the original Markovian model. (b) Schematic diagram of the decimation method of model simplification. The effective transition rate from microstate $(i,m)$ to microstate $(i',m')$ is the superposition of the direct transition rate and the contribution of indirect transitions via all fast transition paths. (c) Transition diagram of the reduced Markovian model when $b_1\gg a_1$ and $s/b_1$ is finite. The red arrows in (a)-(c) point in the directions of typical fast transition paths, which correspond to random transcriptional bursts.[]{data-label="decimation"}](decimation.pdf){width="90.00000%"} The remaining question is to determine the transition diagram and effective transition rates of the reduced model. This process is described as follows. Suppose that the original model jumps from microstate $(i,m)$ to another microstate at a particular time. When $\beta\gg 1$, the transition probability from microstate $(i,m)$ to another microstate $(i',m')$ is given by $$w_{(i,m),(i',m')} = \lim_{\beta\rightarrow\infty}\frac{q_{(i,m),(i',m')}}{q_{(i,m)}}.$$ Let $(i_1,m_1),\cdots,(i_n,m_n)$ be a sequence of microstates. We say that $$c: (i,m)\rightarrow(i_1,m_1)\rightarrow\cdots\rightarrow(i_n,m_n)\rightarrow(i',m')$$ is a fast transition path from $(i,m)$ to $(i',m')$ if the intermediate states $(i_1,m_1),\cdots,(i_n,m_n)$ are all fast states.
Moreover, the probability weight $w_c$ of the fast transition path $c$ is defined as $$w_c = q_{(i,m),(i_1,m_1)}w_{(i_1,m_1),(i_2,m_2)}\cdots w_{(i_n,m_n),(i',m')}.$$ According to the decimation theory [@pigolotti2008coarse; @cappelletti2016elimination; @jia2016reduction; @jia2016simplification; @bo2016multiple; @jia2017simplification], the effective transition rate from $(i,m)$ to $(i',m')$ is given by $$\tilde{q}_{(i,m),(i',m')} = q_{(i,m),(i',m')}+\sum_cw_c,$$ where $c$ ranges over all fast transition paths from $(i,m)$ to $(i',m')$. This formula shows that the effective transition rate from $(i,m)$ to $(i',m')$ is the sum of two parts: the direct transition rate $q_{(i,m),(i',m')}$ and the contribution of indirect transitions via all fast transition paths, as illustrated in Fig. \[decimation\](b). Since the intermediate states of a fast transition path $c$ are all fast states, in order for the path to have a positive probability weight, all the intermediate transitions along this path must satisfy $$\lim_{\beta\rightarrow\infty}q_{(i_1,m_1),(i_2,m_2)} = \cdots = \lim_{\beta\rightarrow\infty}q_{(i_n,m_n),(i',m')} = \infty.$$ By using this criterion, it is easy to see that the original model only has two types of fast transition paths with positive probability weights, which are given by $$\label{path1} (2,m)\rightarrow(1,m)\rightarrow(1,m+1)\rightarrow\cdots\rightarrow(1,m')\rightarrow(2,m'),\;\;\;m'>m,$$ and $$\label{path2} (3,0)\rightarrow(1,0)\rightarrow(1,1)\rightarrow\cdots\rightarrow(1,m)\rightarrow(2,m),\;\;\;m\geq 0,$$ as illustrated by the red arrows in Fig. \[decimation\](a). 
To proceed, we define two constants $p$ and $q$ as $$p = \frac{s}{s+b_1},\;\;\;q = \frac{b_1}{s+b_1}.$$ When $\beta\gg 1$, the transition probabilities along the above two fast transition paths are given by $$\begin{gathered} w_{(1,m),(1,m+1)} = \lim_{\beta\rightarrow\infty}\frac{s}{q_{(1,m)}} = p, \\ w_{(1,m),(2,m)} = \lim_{\beta\rightarrow\infty}\frac{b_1}{q_{(1,m)}} = q.\end{gathered}$$ Therefore, the probability weight of the path \[path1\] is given by $a_1p^{m'-m}q$ and the probability weight of the path \[path2\] is given by $a_3p^mq$. Since there is no direct transition, the effective transition rate from $(2,m)$ to $(2,m')$ is the indirect transition rate via the fast transition path \[path1\]: $$\tilde{q}_{(2,m),(2,m')} = a_1p^{m'-m}q.$$ Moreover, the effective transition rate from $(3,0)$ to $(2,m)$ is the sum of the direct transition rate and the indirect transition rate via the fast transition path \[path2\]: $$\begin{split} \tilde{q}_{(3,0),(2,m)} &= q_{(3,0),(2,m)}+a_3p^mq = \begin{cases} a_2+a_3q, &m = 0,\\ a_3p^mq, &m\geq 1. \end{cases}\end{split}$$ The above two formulas indicate that the reduced model may produce large jumps of mRNA abundance within a very short period, which correspond to transcriptional bursts. Each random burst corresponds to a fast transition path of the original model (see the red arrows in Fig. \[decimation\](a)). So far, we have obtained all effective transition rates of the reduced model, whose transition diagram is depicted in Fig. \[decimation\](c). The above calculations can be understood intuitively as follows. Since $b_1\gg a_1$ and $s/b_1$ is finite, the process of mRNA synthesis followed by gene silencing is essentially instantaneous. Once the gene becomes active, it can either produce a transcript with probability $p = s/(s+b_1)$ or switch to the inactive state with probability $q = 1-p = b_1/(s+b_1)$.
Therefore, the probability that the gene produces $k$ transcripts in a single burst before it is finally silenced will be $p^kq$, which follows a geometric distribution. This consideration again leads to the reduced model illustrated in Fig. \[decimation\](c). The evolution of the reduced model is governed by the chemical master equation $$\label{reducedcme}\left\{ \begin{split} \dot p_{2,0} =&\; vp_{2,1}+(a_2+a_3q)p_{3,0}-(a_1p+b_2)p_{2,0}, \\ \dot p_{3,0} =&\; b_2p_{2,0}-(a_2+a_3)p_{3,0}, \\ \dot p_{2,m} =&\; \sum_{k=0}^{m-1}a_1p^{m-k}qp_{2,k}+(m+1)vp_{2,m+1}\\ &\; +a_3p^mqp_{3,0}-(a_1p+mv)p_{2,m},\;\;m\geq 1. \end{split}\right.$$ Since the burst size of the mRNA, which is defined as the number of transcripts produced in a single burst (Fig. \[trajectory\](a)), is geometrically distributed, its expected value is given by $$h = \sum_{k=0}^\infty kp^kq = \frac{p}{q} = \frac{s}{b_1}.$$ Theoretical foundation for the ZINB model ========================================= Although the topological structure of the reduced model is complicated, its steady-state probability distribution can be solved analytically. To see this, let $p^{ss}_{(i,m)}$ denote the steady-state probability of microstate $(i,m)$. At the steady state, the probabilities of all microstates are time-independent and thus the left side of \[reducedcme\] must equal zero, giving rise to a set of linear equations. Interestingly, this set of linear equations can be solved explicitly with its solution being given by (see Appendix) $$\label{distribution}\left\{ \begin{split} p^{ss}_{2,0} &= A\cdot\frac{a_1}{\tilde a_1}, \\ p^{ss}_{3,0} &= A\cdot\frac{a_1b_2}{\tilde a_1(a_2+a_3)}, \\ p^{ss}_{2,m} &= A\cdot\frac{p^m(a_1/v)_m}{m!},\;\;\;m\geq 1, \end{split}\right.$$ where $A$ is a normalization constant, $\tilde a_1$ is a constant given by $$\label{a1} \tilde a_1 = a_1+\frac{b_2a_3}{a_2+a_3},$$ and $(x)_m = x(x+1)\cdots(x+m-1)$ is the Pochhammer symbol.
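The claimed solution can be verified by substituting it back into \[reducedcme\]: since the stationary equations are linear and homogeneous, the normalization constant $A$ drops out and an unnormalized vector suffices. A verification sketch with illustrative rates of our own choosing (any choice with $a_1 > a_3$ works; the truncation level $M$ is an implementation detail):

```python
# unnormalized steady state of the reduced model (set A = 1)
a1, v, a2, b2, a3, h = 1.5, 1.0, 1.0, 4.0, 0.5, 3.0   # illustrative, a1 > a3
p, q = h / (1 + h), 1 / (1 + h)                        # p = s/(s+b1), q = 1-p
lam = a1 / v
a1t = a1 + b2 * a3 / (a2 + a3)                         # \tilde a_1
M = 60                                                 # truncation level

P = [a1 / a1t]                       # P[m] = p_{2,m};  P[0] = a1 / \tilde a_1
cm = 1.0
for m in range(1, M + 1):
    cm *= p * (lam + m - 1) / m      # c_m = p^m (a1/v)_m / m!
    P.append(cm)
p30 = a1 * b2 / (a1t * (a2 + a3))    # p_{3,0}

# residuals of the stationary chemical master equation (should all vanish)
r30 = b2 * P[0] - (a2 + a3) * p30
r20 = v * P[1] + (a2 + a3 * q) * p30 - (a1 * p + b2) * P[0]
rm = [sum(a1 * p ** (m - k) * q * P[k] for k in range(m))
      + (m + 1) * v * P[m + 1] + a3 * p ** m * q * p30
      - (a1 * p + m * v) * P[m] for m in range(1, M)]
```

All residuals vanish to floating-point precision, confirming that \[distribution\] solves the stationary equations for these rates.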
Since all steady-state probabilities add up to one, the normalization constant $A$ can be determined as $$A = \left[\frac{a_1}{\tilde a_1}\left(1+\frac{b_2}{a_2+a_3}\right)+q^{-a_1/v}-1\right]^{-1}.$$ Let $p^{ss}_m$ denote the steady-state probability of having $m$ copies of detectable transcripts. Then we obtain $$\left\{ \begin{split} p^{ss}_0 &= p^{ss}_{2,0}+p^{ss}_{3,0} = A\cdot\frac{a_1}{\tilde a_1}\left(1+\frac{b_2}{a_2+a_3}\right), \\ p^{ss}_m &= p^{ss}_{2,m} = A\cdot\frac{p^m(a_1/v)_m}{m!},\;\;\;m\geq 1. \end{split}\right.$$ Since the probabilities $p$ and $q$ can be represented by the mean burst size $h$ as $$p = \frac{h}{1+h},\;\;\;q = \frac{1}{1+h},$$ the steady-state distribution of mRNA abundance can be written in a unified way as $$\begin{split} p^{ss}_m &= w\delta_0(m)+(1-w)\frac{(\lambda)_m}{m!} \left(\frac{h}{1+h}\right)^m\left(\frac{1}{1+h}\right)^{\lambda} \\ &= wp^{\textrm{zero-inflated}}_m+(1-w)p^{\textrm{NB}}_m, \end{split}$$ where $\delta_0(m)$ is Kronecker’s delta function which takes the value of 1 when $m = 0$ and takes the value of 0 otherwise, and $\lambda>0$ and $0<w<1$ are two constants given by $$\begin{gathered} \lambda = \frac{a_1}{v},\\ w = \frac{\frac{a_1}{\tilde a_1}\left(1+\frac{b_2}{a_2+a_3}\right)-1} {\frac{a_1}{\tilde a_1}\left(1+\frac{b_2}{a_2+a_3}\right)+(1+h)^{\lambda}-1}.\end{gathered}$$ Here $0<w<1$ is a result of our model assumption $a_1> a_3$. This is exactly the ZINB distribution of mRNA abundance widely used in scRNA-seq data analysis [@pierson2015zifa; @wagner2016revealing; @fang2016zero; @vallejos2017normalizing; @gao2017nanogrid; @wallrapp2017neuropeptide; @risso2018general; @chen2018umi; @lopez2018deep; @van2018observation; @miao2018desingle; @eraslan2019single]. 
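To make the parameter mapping concrete, the following sketch (the helper names `zinb_params` and `zinb_pmf` and the rate values are our own illustration) converts the kinetic rates into the triple $(w, h, \lambda)$ defined above and evaluates the resulting ZINB probabilities:

```python
def zinb_params(a1, b1, s, v, a2, b2, a3):
    """Map the kinetic rates onto the three ZINB parameters (w, h, lam)."""
    h = s / b1                                    # mean burst size
    lam = a1 / v                                  # maximum burst frequency
    a1t = a1 + b2 * a3 / (a2 + a3)                # \tilde a_1
    r = (a1 / a1t) * (1 + b2 / (a2 + a3))
    w = (r - 1) / (r - 1 + (1 + h) ** lam)        # dropout rate
    return w, h, lam

def zinb_pmf(w, h, lam, M=200):
    """ZINB probabilities p_0, ..., p_M: w*delta_0(m) + (1-w)*NB part."""
    p = h / (1 + h)
    nb = [(1 - p) ** lam]                         # NB part via the recurrence
    for m in range(1, M + 1):                     #   (lam)_m p^m / m!
        nb.append(nb[-1] * p * (lam + m - 1) / m)
    pmf = [(1 - w) * x for x in nb]
    pmf[0] += w
    return pmf

w, h, lam = zinb_params(a1=1.5, b1=25.0, s=75.0, v=1.0, a2=1.0, b2=4.0, a3=0.5)
pmf = zinb_pmf(w, h, lam)
```

With $a_1 > a_3$ the computed dropout rate indeed satisfies $0 < w < 1$, and the probabilities sum to one up to truncation of the negative binomial tail.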
Specifically, the ZINB distribution is the mixture of two distributions: the zero-inflated part $$p^{\textrm{zero-inflated}}_m = \delta_0(m)$$ is a single-point distribution concentrated at zero and the negative binomial part $$p^{\textrm{NB}}_m = \frac{(\lambda)_m}{m!}\left(\frac{h}{1+h}\right)^m\left(\frac{1}{1+h}\right)^{\lambda}$$ is a negative binomial distribution. The ZINB distribution is determined by three parameters with clear biological implications: the dropout rate $w$ which characterizes the proportion of the zero-inflated part due to both technical and biological effects, the mean burst size $h$ which describes the average number of transcripts synthesized in a single burst, and the maximum burst frequency $\lambda$ which represents the maximum number of occurrences of random bursts per mRNA lifetime. A more detailed discussion on the burst frequency will be given in the next section. The ZINB distribution can exhibit three different types of shapes, as illustrated in Fig. \[distribution\]. To clarify the conditions for the three types of shapes, we notice that the mode (maximum point) of the negative binomial part $p^{\textrm{NB}}_m$ is given by $$\mu_{\textrm{mode}} = \begin{cases} 0 & \textrm{when\;} \lambda<1, \\ [(\lambda-1)h] & \textrm{when\;} \lambda\geq 1, \end{cases}$$ where $[x]$ denotes the integer part of $x$. In fact, the first type of shape occurs when $p^{ss}_0\leq p^{ss}_1$, that is, $$(\lambda-1)h \geq 1+\frac{w}{1-w}(1+h)^{\lambda+1}.$$ In this case, the dropout rate is small and the mode of the negative binomial part is large. Then the ZINB distribution peaks at the non-zero mode $[(\lambda-1)h]$ with no zero-inflation (Fig. \[distribution\](a)). The second type of shape occurs when $p^{ss}_0 > p^{ss}_1$ and $\mu_{\textrm{mode}}\leq 1$, that is, $$(\lambda-1)h < \min\{1+\frac{w}{1-w}(1+h)^{\lambda+1},2\}.$$ In this case, the dropout rate is large and the mode of the negative binomial part is small.
Then the ZINB distribution peaks at zero with apparent or inapparent zero-inflation (Fig. \[distribution\](b)). The third type of shape occurs when $p^{ss}_0 > p^{ss}_1$ and $\mu_{\textrm{mode}}\geq 2$, that is, $$2 \leq (\lambda-1)h < 1+\frac{w}{1-w}(1+h)^{\lambda+1}.$$ In this case, both the dropout rate and the mode of the negative binomial part are large. Then the ZINB distribution becomes bimodal and peaks at both zero and the non-zero mode $[(\lambda-1)h]$ with apparent or inapparent zero-inflation (Fig. \[distribution\](c)). ![**Three different types of shapes for the ZINB distribution of mRNA abundance.** (a) The distribution peaks at a non-zero mode with no zero-inflation. (b) The distribution peaks at zero with apparent or inapparent zero-inflation. (c) The distribution exhibits bistability and peaks at both zero and a non-zero mode with apparent or inapparent zero-inflation.[]{data-label="distribution"}](distribution.pdf){width="80.00000%"} Three special cases deserve particular attention. The first occurs when the mean burst size $h\rightarrow 0$ and the maximum burst frequency $\lambda\rightarrow\infty$, while $\lambda h = \gamma$ is held constant. In this case, we have $$\begin{gathered} (\lambda)_m\left(\frac{h}{1+h}\right)^m \rightarrow \gamma^m, \\ \left(\frac{1}{1+h}\right)^\lambda = \left(1-\frac{h}{1+h}\right)^\lambda \rightarrow e^{-\gamma}.\end{gathered}$$ Then the ZINB distribution of mRNA abundance reduces to the zero-inflated Poisson (ZIP) distribution $$p^{ss}_m = w\delta_0(m)+(1-w)\frac{\gamma^m}{m!}e^{-\gamma}.$$ In fact, the ZIP distribution is also extensively applied in scRNA-seq data analysis [@chen2018umi] and its kinetic mechanism has been clarified in previous studies [@chong2014mechanism; @klindziuk2018theoretical]. Our analytic theory shows that the ZIP model also naturally emerges from our three-state transcription model.
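This first special case can be checked numerically: for small $h$ and large $\lambda$ with $\lambda h = \gamma$ fixed, the ZINB probabilities approach the ZIP probabilities term by term. The sketch below (helper names and parameter values are ours) compares the two distributions directly.

```python
import math

def zinb_probs(w, h, lam, mmax):
    # ZINB probabilities; negative binomial part built recursively
    p, nb = h / (1 + h), (1 + h) ** (-lam)
    probs = [w + (1 - w) * nb]
    for m in range(mmax):
        nb *= p * (lam + m) / (m + 1)
        probs.append((1 - w) * nb)
    return probs

def zip_probs(w, gamma, mmax):
    # zero-inflated Poisson: p_m = w*delta_0(m) + (1-w)*gamma^m/m! * exp(-gamma)
    po = math.exp(-gamma)
    probs = [w + (1 - w) * po]
    for m in range(mmax):
        po *= gamma / (m + 1)
        probs.append((1 - w) * po)
    return probs

# h -> 0 and lam -> infinity with lam*h = gamma held fixed
w, gamma, h = 0.3, 2.0, 1e-4
zinb = zinb_probs(w, h, gamma / h, mmax=30)
zip_ = zip_probs(w, gamma, mmax=30)
gap = max(abs(a - b) for a, b in zip(zinb, zip_))   # -> 0 as h -> 0
```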
The second special case occurs when $b_2 = b_3 = 0$, which means that switching from the active or inactive state to the dropout state is forbidden. In this case, the three-state model reduces to the classical two-state model without the dropout state [@paulsson2005models]. It is easy to verify that $\tilde a_1 = a_1$ and $w = 0$ in this regime. This shows that the dropout rate vanishes in the absence of the dropout state. The last special case occurs when $a_3 = 0$, which means that switching from the dropout state to the active state is forbidden. This is especially biologically relevant when the dropout state is understood to be the refractory (irreversibly silent) state found in recent single-cell experiments [@suter2011mammalian; @bintu2016dynamics]. In this case, we also have $\tilde a_1 = a_1$ and thus the dropout rate is given by $$w = \frac{K_2}{K_2+(1+h)^{\lambda}},$$ where $K_2 = b_2/a_2$ is the equilibrium constant of gene switching between the inactive and dropout states. An increased equilibrium constant $K_2$ results in a larger fraction of cells being in the dropout state and is thus expected to enhance the dropout rate $w$. Interestingly, our theory reveals a nontrivial quantitative relation between dropout events and transcriptional bursting: an increased mean burst size $h$ or maximum burst frequency $\lambda$ gives rise to a decline in the dropout rate $w$. This relation provides novel insights into how and to what extent the burst size and burst frequency of the mRNA could reduce the dropout rate. The basic reason for this dependence is that an increase in the burst size or burst frequency promotes rapid accumulation of mRNA from a low to a higher level, which is unfavorable to the occurrence of dropouts. Mean burst duration and burst frequency ======================================= It has been shown that the mean burst size of the mRNA is given by $h = s/b_1$.
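The relation $w = K_2/(K_2+(1+h)^{\lambda})$ derived in the preceding section for the $a_3 = 0$ case makes this dependence easy to probe numerically; in the sketch below (our naming and parameter values) increasing either the burst size $h$ or the burst frequency $\lambda$ lowers the dropout rate.

```python
def dropout_rate(K2, h, lam):
    """Dropout rate when switching from the dropout state to the
    active state is forbidden (a3 = 0): w = K2 / (K2 + (1+h)^lam)."""
    return K2 / (K2 + (1 + h) ** lam)

w_base       = dropout_rate(K2=5.0, h=1.0, lam=2.0)   # (1+h)^lam = 4, w = 5/9
w_bigger_h   = dropout_rate(K2=5.0, h=3.0, lam=2.0)   # larger burst size
w_bigger_lam = dropout_rate(K2=5.0, h=1.0, lam=4.0)   # larger burst frequency
```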
Here we present a more detailed discussion of the burst frequency. In this section, we assume that the time-dependent mRNA abundance in an individual cell can be measured at a series of successive time points, and that, due to various technical factors, the mRNA expression is undetectable during some periods. Recall that each transcriptional burst is characterized by a short transcriptionally active period followed by a long transcriptionally silent period. Mathematically, the mean burst duration, which is defined as the average time needed to complete a single burst (Fig. \[trajectory\](a)), can be computed as the inverse of the total probability flux between the active microstates and other (inactive and dropout) microstates [@jia2009general; @jia2014allosteric]. From , the total flux $J$ between the active microstates and other microstates is given by $$J = \left[\sum_{m=0}^\infty p^{ss}_{2,m}\right]a_1+p^{ss}_{3,0}a_3 = \alpha a_1,$$ where $0<\alpha\leq 1$ is a constant given by $$\alpha = \frac{\frac{a_1}{\tilde a_1}+\frac{a_3b_2}{\tilde a_1(a_2+a_3)}+(1+h)^{\lambda}-1} {\frac{a_1}{\tilde a_1}+\frac{a_1b_2}{\tilde a_1(a_2+a_3)}+(1+h)^{\lambda}-1},$$ and thus the mean burst duration is given by $$\tau_{\textrm{burst}} = \frac{1}{J} = \frac{1}{\alpha a_1}.$$ Since the mRNA lifetime is the inverse of the mRNA degradation rate $v$, the mean burst frequency $\lambda_0$ of the mRNA, defined as the average number of occurrences of random bursts per mRNA lifetime, is given by the ratio of the mRNA lifetime $1/v$ to the mean burst duration $\tau_{\textrm{burst}}$: $$\lambda_0 = \frac{1}{v\tau_{\textrm{burst}}} = \frac{\alpha a_1}{v} = \alpha\lambda,$$ where $\lambda = a_1/v$ is the maximum burst frequency defined previously. Since $0<\alpha\leq 1$, the true mean burst frequency $\lambda_0$ never exceeds the maximum burst frequency $\lambda$. We next focus on three special cases.
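The expressions above can be packaged into a small helper (names and parameter values are ours; $\tilde a_1$, the effective activation rate defined earlier in the paper, is passed in as `a1_tilde`):

```python
def burst_frequency(a1, a1_tilde, a2, a3, b2, v, h):
    """Evaluate alpha and the mean burst frequency lambda_0 = alpha * a1 / v
    from the flux formula in the text."""
    lam = a1 / v
    tail = (1 + h) ** lam - 1
    num = a1 / a1_tilde + a3 * b2 / (a1_tilde * (a2 + a3)) + tail
    den = a1 / a1_tilde + a1 * b2 / (a1_tilde * (a2 + a3)) + tail
    return num / den, (num / den) * lam

# with b2 = 0 the dropout state is never entered, so alpha = 1
alpha2, lam2 = burst_frequency(2.0, 2.0, 1.0, 0.5, 0.0, 1.0, 1.0)
# with b2 > 0 and a3 < a1, bursts become less frequent (alpha < 1)
alpha3, lam3 = burst_frequency(2.0, 2.0, 1.0, 0.5, 3.0, 1.0, 1.0)
```

Setting $b_2 = 0$ recovers $\lambda_0 = \lambda$, while $b_2 > 0$ with $a_3 < a_1$ yields $\alpha < 1$ and hence a reduced burst frequency.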
In the limiting case of $h\rightarrow 0$ and $\lambda\rightarrow\infty$, while $\lambda h = \gamma$ is held constant, we have $\lambda_0\rightarrow\infty$. In this regime, random bursts occur very frequently but each burst contributes only a very small burst size. Due to the large burst frequency, the gene switches very rapidly between the active and inactive states, giving rise to a large number of “futile” switches (Fig. \[trajectory\](b)). In the special case of $b_2 = b_3 = 0$, the three-state model reduces to the classical two-state model without the dropout state [@paulsson2005models]. In this regime, we have $\alpha = 1$ and the mean burst frequency attains its maximum $\lambda_0 = \lambda$. In the presence of the dropout state, we have $b_2>0$ and $\alpha<1$. This shows that dropout events lead to a reduction of the burst frequency by prolonging the transcriptionally silent periods. The last special case occurs when $a_3 = 0$, which means that switching from the dropout state to the active state is forbidden. In this case, we have $\tilde a_1 = a_1$ and $$\alpha = \frac{(1+h)^{\lambda}}{K_2+(1+h)^{\lambda}} = 1-w$$ is the proportion of the negative binomial part. Then the mean burst frequency is given by $$\lambda_0 = (1-w)\lambda.$$ This quantitative relation reveals how the dropout rate affects the burst frequency. Over-dispersion of scRNA-seq data ================================= The simplest kinetic model of transcription is the classical birth-death process, which describes the synthesis and degradation of the mRNA. The steady-state distribution of the birth-death process turns out to be a Poisson distribution, whose mean and variance are equal. In bulk or single-cell RNA-seq experiments, however, read counts are always over-dispersed relative to Poisson: the variance is higher than the mean [@mccarthy2012differential; @anders2010differential].
In the literature, the dispersion, sometimes referred to as noise, in mRNA abundance within a cell population is often characterized by the Fano factor $\eta = \sigma^2/\langle m\rangle$, which is defined as the ratio of the variance $\sigma^2$ to the mean $\langle m\rangle$. A dispersion greater than one reveals a deviation from the Poisson distribution and thus serves as a characteristic signal of over-dispersion. Strictly speaking, the dispersion captures all sources of variation between samples, including contributions from technical factors leading to dropouts as well as real biological variation. To calculate the mean and variance of mRNA abundance, we consider the generating function of the ZINB distribution: $$F(z) = \sum_{m=0}^\infty p^{ss}_mz^m = w+(1-w)\frac{q^\lambda}{(1-pz)^\lambda}.$$ Then the mean and variance can be recovered by taking the derivatives of the generating function: $$\label{meanvar} \begin{split} \langle m\rangle &= F'(1) = (1-w)\lambda h,\\ \sigma^2 &= F''(1)+F'(1)-F'(1)^2 = (1-w)[w\lambda^2h^2+\lambda h^2+\lambda h]. \end{split}$$ Therefore, the dispersion in mRNA abundance is given by $$\eta = \frac{\sigma^2}{\langle m\rangle} = 1+h+w\lambda h,$$ where the constant term 1 is the dispersion of a Poisson distribution arising from individual births and deaths of the mRNA, the middle term $h$ describes the dispersion due to transcriptional burst sizes, and the last term $w\lambda h$ characterizes the dispersion due to the interaction between dropout events and transcriptional bursting. When there are no dropouts, the dispersion reduces to $\eta = 1+h$, which does not depend on the burst frequency [@paulsson2005models]. Interestingly, in the presence of dropout events, the dispersion depends positively on all three parameters: the dropout rate $w$, mean burst size $h$, and maximum burst frequency $\lambda$. This clearly reveals three different biophysical origins of over-dispersion.
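The decomposition $\eta = 1+h+w\lambda h$ can be verified against the distribution itself; in the sketch below (parameter values are arbitrary, helper name is ours) the mean and Fano factor obtained by direct summation of the ZINB probabilities match the closed forms.

```python
def zinb_probs(w, h, lam, mmax):
    # ZINB probabilities, built recursively to avoid overflow
    p, nb = h / (1 + h), (1 + h) ** (-lam)
    probs = [w + (1 - w) * nb]
    for m in range(mmax):
        nb *= p * (lam + m) / (m + 1)
        probs.append((1 - w) * nb)
    return probs

w, h, lam = 0.2, 1.5, 2.0
ps = zinb_probs(w, h, lam, mmax=400)
mean = sum(m * q for m, q in enumerate(ps))
var = sum(m * m * q for m, q in enumerate(ps)) - mean ** 2
fano = var / mean
# closed forms: <m> = (1-w)*lam*h = 2.4 and eta = 1 + h + w*lam*h = 3.1
```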
Statistically, the three parameters involved in the ZINB distribution can be estimated in several different ways. The maximum likelihood estimation has been discussed in [@miao2018desingle]. Here we provide two additional approaches. In fact, the first three moments of the ZINB distribution can be recovered from the generating function as $$\begin{split} \langle m\rangle\; &= F'(1) = (1-w)\lambda h, \\ \langle m^2\rangle &= F''(1)+F'(1) = (1-w)[\lambda(\lambda+1)h^2+\lambda h], \\ \langle m^3\rangle &= F'''(1)+3F''(1)+F'(1) \\ &= (1-w)[\lambda(\lambda+1)(\lambda+2)h^3+3\lambda(\lambda+1)h^2+\lambda h]. \end{split}$$ By analyzing scRNA-seq data, the first three moments of mRNA abundance can be estimated. Solving the above set of polynomial equations gives the moment estimates of $w$, $h$, and $\lambda$. When the mRNA levels across cells are relatively high, there is still another method to estimate the three parameters. From , when $m\gg 1$, we have $$\frac{p^{ss}_{m+1}}{p^{ss}_m} = \frac{h}{h+1}\cdot\frac{m+\lambda}{m+1} \approx \frac{h}{h+1}.$$ This suggests that for any $k\geq 1$, $$p^{ss}_{m+k} \approx \left(\frac{h}{h+1}\right)^kp^{ss}_m.$$ Taking the logarithm of both sides gives $$\log p^{ss}_{m+k} \approx k\log\left(\frac{h}{h+1}\right)+\log p^{ss}_m,$$ which is a linear relation with respect to $k$. Therefore, we only need to calculate the logarithm of the steady-state probabilities at large mRNA copy numbers and then carry out a linear regression analysis with respect to the copy number difference $k$. The slope $s = \log[h/(h+1)]$ of the linear regression then provides an estimate of the mean burst size via $h = e^{s}/(1-e^{s})$. Once $h$ is known, we can solve to obtain the estimates of the dropout rate $w$ and maximum burst frequency $\lambda$: $$\begin{gathered} w = \frac{\eta-h-1}{\langle m\rangle+\eta-h-1}, \\ \lambda = \frac{\langle m\rangle+\eta-h-1}{h},\end{gathered}$$ where $\eta = \sigma^2/\langle m\rangle$ is the dispersion.
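As a sanity check on this inversion, the relations $\langle m\rangle = (1-w)\lambda h$ and $\eta = 1+h+w\lambda h$ can be solved for $w$ and $\lambda$ and round-tripped on known parameters (the helper name is ours):

```python
def estimate_w_lambda(mean, fano, h):
    """Invert <m> = (1-w)*lam*h and eta = 1 + h + w*lam*h for the
    dropout rate w and maximum burst frequency lam, given the mean
    burst size h (obtained, e.g., from the tail regression)."""
    D = fano - h - 1          # equals w * lam * h
    w = D / (mean + D)
    lam = (mean + D) / h
    return w, lam

# round trip on known parameters
w0, h0, lam0 = 0.2, 1.5, 2.0
mean0 = (1 - w0) * lam0 * h0
fano0 = 1 + h0 + w0 * lam0 * h0
w_est, lam_est = estimate_w_lambda(mean0, fano0, h0)
```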
Discussion ========== In this work, we present a comprehensive analysis of a three-state transcription model with dropout events and over-dispersion, based on the biochemical reaction kinetics underlying transcription. Using the multiscale simplification technique of decimation, we simplify the original Markovian model to a reduced one by removing all fast states. It turns out that transcriptional bursts correspond exactly to the fast transition paths of the original model. Although the reduced model has a complicated topology, we obtain its steady-state analytic solution. The widely used ZINB or ZIP model of scRNA-seq data naturally emerges as the steady-state distribution of the reduced model. This provides a mesoscopic kinetic foundation for these statistical models. We further clarify the biological implications of the three parameters involved in the ZINB distribution: the dropout rate $w$, mean burst size $h$, and maximum burst frequency $\lambda$. In addition, we discover a nontrivial relation between dropout events and transcriptional bursting, which quantitatively reveals how and to what extent the burst size and burst frequency could reduce the dropout rate. Another relation reveals how dropout events could lower the burst frequency by prolonging the transcriptionally silent periods. The dispersion of scRNA-seq data is also investigated at the single-cell level and three different biophysical origins of over-dispersion are identified. Finally, two statistical methods are given to estimate the three parameters involved in the ZINB distribution. Our three-state transcription model is a minimal kinetic model that can account for the ZINB distribution of mRNA abundance. Recently, there has been some discussion of the role of various technical and biological effects in the apparent zero-inflation in scRNA-seq data [@chen2018umi; @townes2019feature; @svensson2019droplet].
In our minimal three-state model, zero-inflation is realized by the introduction of a dropout state, which may be interpreted either as an undetectable state arising from technical factors or as a refractory state arising from biological factors. In other words, our three-state model cannot distinguish whether zero-inflation is a consequence of technical or biological effects. To decide empirically between the two interpretations, a more realistic model that takes into account more complex features of stochastic transcription dynamics must be developed. If the dropout state is interpreted as a refractory state due to biological factors, then a more realistic model would be the Markovian model illustrated in Fig. \[threestate\](a), where microstates $(3,m)$, $m\geq 1$ are incorporated and transitions between microstates $(1,m)$, $(2,m)$, and $(3,m)$ are allowed. Here $(3,m)$ represents the microstate of having $m$ transcripts in an individual cell when the gene is in the refractory state. In fact, the minimal kinetic model depicted in Fig. \[model\](b) can be viewed as an approximation of the more realistic model when $a_2,a_3\ll v$. This can be understood intuitively as follows. Since $a_2,a_3\ll v$, the degradation of the mRNA is fast and the switching of the gene from the refractory state to the active or inactive state is slow. Once the gene is in the refractory state, before it can switch to the active or inactive state, the microstates $(3,m)$, $m\geq 0$ are already in rapid pre-equilibrium due to fast mRNA degradation and thus most of the probability is concentrated on microstate $(3,0)$. ![**More realistic models of transcription.** (a) A Markovian model of stochastic transcription involving gene switching among an active, an inactive, and a refractory state. The microstate of the gene of interest is described by an ordered pair $(i,m)$: the activity $i$ of the gene and the copy number $m$ of the mRNA.
Here $i = 1,2,3$ correspond to the active, inactive, and refractory states, respectively. (b) A Markovian model of stochastic transcription with dropout events. The microstate of the gene of interest is described by an ordered triple $(i,m,k)$: the activity $i$ of the gene, the copy number $m$ of the mRNA, and the detection state $k$ of the transcriptional signal. Here $i = 1,2$ correspond to the active and inactive states, respectively, and $k = 1,0$ correspond to the detectable and undetectable states, respectively. []{data-label="threestate"}](threestate.pdf){width="65.00000%"} If the dropout state is interpreted as an undetectable state due to technical factors, then a more realistic model would be the Markovian model illustrated in Fig. \[threestate\](b), where the microstate of the gene of interest is described by an ordered triple $(i,m,k)$: the activity $i$ of the gene with $i = 1,2$ corresponding to the active and inactive states, respectively, the copy number $m$ of the mRNA, and the detection state $k$ of the transcriptional signal with $k = 1,0$ corresponding to the detectable and undetectable states, respectively. In scRNA-seq experiments, the variable of interest is the copy number of detectable transcripts, which is given by $$N(i,m,k) = \begin{cases} m, &\textrm{if}\;k = 1,\\ 0, &\textrm{if}\;k = 0. \end{cases}$$ This Markovian model allows transitions between detectable microstates $(i,m,1)$ and undetectable microstates $(i,m,0)$. Since dropouts are more frequent for cells with low mRNA expression levels [@kharchenko2014bayesian], the transition rate from $(i,m,1)$ to $(i,m,0)$ should be a decreasing function of $m$ and the transition rate from $(i,m,0)$ to $(i,m,1)$ should be an increasing function of $m$. Within this framework, the minimal kinetic model depicted in Fig. \[model\](b) can be roughly viewed as an approximation of the more realistic model with all undetectable microstates $(i,m,0)$ combined as a single microstate $(3,0)$. 
Besides the ZINB and ZIP models discussed in the present work, many other statistical models have been proposed to analyze scRNA-seq data. Commonly used models include, among others, the Gaussian mixture model [@satija2015spatial], Poisson-negative binomial mixture model [@kharchenko2014bayesian; @fan2016characterizing], Poisson-gamma mixture model [@huang2018saver], Hurdle model [@finak2015mast], zero-inflated log-normal model [@mcdavid2012data], zero-inflated Gaussian mixture model [@paulson2013differential], and Bayesian mixture model [@prabhakaran2016dirichlet; @sun2019bayesian]. We anticipate that the mesoscopic kinetic mechanisms for these models could likewise be clarified, leading to a deeper understanding of the connection between the kinetic and statistical approaches. Acknowledgements {#acknowledgements .unnumbered} ================ The author acknowledges Michael Q. Zhang, Min Chen, and Cong Zhang at the University of Texas at Dallas, Yuxuan Liu at the University of Texas Southwestern Medical Center, Bochao Liu at Rutgers Cancer Institute of New Jersey, and Xuegong Zhang and Jiaqi Li at Tsinghua University for stimulating discussions. The author is also grateful to the anonymous reviewers for their valuable comments and suggestions, which helped the author greatly in improving the quality of this paper. Appendix {#appendix .unnumbered} ======== Here we provide the detailed derivation of the steady-state probability distribution of the reduced model depicted in Fig. \[decimation\](c). At the steady state, the steady-state probabilities of all microstates satisfy the following set of linear equations: $$\label{set}\left\{ \begin{split} 0 =&\; vp^{ss}_{2,1}+(a_2+a_3q)p^{ss}_{3,0}-(a_1p+b_2)p^{ss}_{2,0}, \\ 0 =&\; b_2p^{ss}_{2,0}-(a_2+a_3)p^{ss}_{3,0}, \\ 0 =&\; \sum_{k=0}^{m-1}a_1p^{m-k}qp^{ss}_{2,k}+(m+1)vp^{ss}_{2,m+1} \\ &\; +a_3p^mqp^{ss}_{3,0}-(a_1p+mv)p^{ss}_{2,m},\;\;\;m\geq 1.
\\ \end{split}\right.$$ By the second equation in , we have $$(a_2+a_3)p^{ss}_{3,0} = b_2p^{ss}_{2,0}.$$ Inserting this equation into the first and third equations in eliminates $p^{ss}_{3,0}$ and yields $$\label{setmod}\left\{ \begin{split} 0 &= vp^{ss}_{2,1}-\tilde a_1pp^{ss}_{2,0}, \\ 0 &= \tilde a_1p^mqp^{ss}_{2,0}+\sum_{k=1}^{m-1}a_1p^{m-k}qp^{ss}_{2,k}+(m+1)vp^{ss}_{2,m+1} -(a_1p+mv)p^{ss}_{2,m},\;\;\;m\geq 1, \end{split}\right.$$ where $\tilde a_1$ is the constant defined in . For convenience, set $$w_0 = \frac{\tilde a_1}{a_1}p^{ss}_{2,0},\;\;\;w_m = p^{ss}_{2,m},\;\;\;m\geq 1.$$ Then the two equations in can be rewritten in a unified way as $$\label{master} \sum_{k=0}^{m-1}a_1p^{m-k}qw_k+(m+1)vw_{m+1}-(a_1p+mv)w_m = 0,\;\;\;m\geq 0.$$ To proceed, we introduce the generating function $$F(z) = \sum_{m=0}^\infty w_mz^m.$$ Then the algebraic equation can be converted into the ordinary differential equation $$vF'(z) = \frac{a_1p}{1-pz}F(z),$$ whose solution is given by $$F(z) = A(1-pz)^{-a_1/v},$$ where $A$ is a constant. Therefore, $w_m$ can be recovered from the generating function $F$ as $$w_m = \frac{F^{(m)}(0)}{m!} = A\cdot\frac{p^m(a_1/v)_m}{m!}.$$ This shows that $$p^{ss}_{2,0} = A\cdot\frac{a_1}{\tilde a_1},\;\;\;p^{ss}_{2,m} = A\cdot\frac{p^m(a_1/v)_m}{m!},\;\;\;m\geq 1.$$ Paulsson, J. Models of stochastic gene expression. *Phys. Life Rev.* **2**, 157–175 (2005). Sandberg, R. Entering the era of single-cell transcriptomics in biology and medicine. *Nat. Methods* **11**, 22 (2014). Eberwine, J., Sul, J.-Y., Bartfai, T. & Kim, J. The promise of single-cell sequencing. *Nat. Methods* **11**, 25 (2014). Kolodziejczyk, A. A., Kim, J. K., Svensson, V., Marioni, J. C. & Teichmann, S. A. The technology and biology of single-cell RNA sequencing. *Mol. Cell* **58**, 610–620 (2015). Bacher, R. & Kendziorski, C. Design and computational analysis of single-cell RNA-sequencing experiments.
*Genome Biol.* **17**, 63 (2016). Pollen, A. A. *et al.* Low-coverage single-cell mRNA sequencing reveals cellular heterogeneity and activated signaling pathways in developing cerebral cortex. *Nat. Biotechnol.* **32**, 1053 (2014). Landau, W. M. & Liu, P. Dispersion estimation and its effect on test performance in RNA-seq data analysis: a simulation-based comparison of methods. *PLoS ONE* **8**, e81415 (2013). Li, W. V. & Li, J. J. An accurate and robust imputation method scImpute for single-cell RNA-seq data. *Nat. Commun.* **9**, 997 (2018). Liu, S. & Trapnell, C. Single-cell transcriptome sequencing: recent advances and remaining challenges. *F1000Research* **5** (2016). Hicks, S. C., Townes, F. W., Teng, M. & Irizarry, R. A. Missing data and technical variability in single-cell RNA-sequencing experiments. *Biostatistics* **19**, 562–578 (2017). Jia, C. *et al.* Accounting for technical noise in differential expression analysis of single-cell RNA sequencing data. *Nucleic Acids Res.* **45**, 10978–10988 (2017). Zhu, L., Lei, J., Devlin, B., Roeder, K. *et al.* A unified statistical framework for single cell and bulk RNA sequencing data. *Ann. Appl. Stat.* **12**, 609–632 (2018). Gong, W., Kwak, I.-Y., Pota, P., Koyano-Nakagawa, N. & Garry, D. J. DrImpute: imputing dropout events in single cell RNA sequencing data. *BMC Bioinformatics* **19**, 220 (2018). Wang, Y. & Navin, N. E. Advances and applications of single-cell sequencing technologies. *Mol. Cell* **58**, 598–609 (2015). Haque, A., Engel, J., Teichmann, S. A. & L[ö]{}nnberg, T. A practical guide to single-cell RNA-sequencing for biomedical research and clinical applications. *Genome Med.* **9**, 75 (2017). McCarthy, D. J., Chen, Y. & Smyth, G. K. Differential expression analysis of multifactor RNA-Seq experiments with respect to biological variation. *Nucleic Acids Res.* **40**, 4288–4297 (2012). Kharchenko, P. V., Silberstein, L. & Scadden, D. T.
Bayesian approach to single-cell differential expression analysis. *Nat. Methods* **11**, 740 (2014). McDavid, A. *et al.* Data exploration, quality control and testing in single-cell qPCR-based gene expression experiments. *Bioinformatics* **29**, 461–467 (2012). Paulson, J. N., Stine, O. C., Bravo, H. C. & Pop, M. Differential abundance analysis for microbial marker-gene surveys. *Nat. Methods* **10**, 1200 (2013). Pierson, E. & Yau, C. ZIFA: Dimensionality reduction for zero-inflated single-cell gene expression analysis. *Genome Biol.* **16**, 241 (2015). Wagner, A., Regev, A. & Yosef, N. Revealing the vectors of cellular identity with single-cell genomics. *Nat. Biotechnol.* **34**, 1145 (2016). Fang, R., Wagner, B., Harris, J. & Fillon, S. Zero-inflated negative binomial mixed model: an application to two microbial organisms important in oesophagitis. *Epidemiology & Infection* **144**, 2447–2455 (2016). Vallejos, C. A., Risso, D., Scialdone, A., Dudoit, S. & Marioni, J. C. Normalizing single-cell RNA sequencing data: challenges and opportunities. *Nat. Methods* **14**, 565 (2017). Gao, R. *et al.* Nanogrid single-nucleus RNA sequencing reveals phenotypic diversity in breast cancer. *Nat. Commun.* **8**, 228 (2017). Wallrapp, A. *et al.* The neuropeptide NMU amplifies ILC2-driven allergic lung inflammation. *Nature* **549**, 351 (2017). Risso, D., Perraudeau, F., Gribkova, S., Dudoit, S. & Vert, J.-P. A general and flexible method for signal extraction from single-cell RNA-seq data. *Nat. Commun.* **9**, 284 (2018). Chen, W. *et al.* UMI-count modeling and differential expression analysis for single-cell RNA sequencing. *Genome Biol.* **19**, 70 (2018). Lopez, R., Regier, J., Cole, M. B., Jordan, M. I. & Yosef, N. Deep generative modeling for single-cell transcriptomics. *Nat. Methods* **15**, 1053 (2018). Van den Berge, K. *et al.* Observation weights unlock bulk RNA-seq tools for zero inflation and single-cell applications. *Genome Biol.* **19**, 24 (2018). 
Miao, Z., Deng, K., Wang, X. & Zhang, X. DEsingle for detecting three types of differential expression in single-cell RNA-seq data. *Bioinformatics* **34**, 3223–3224 (2018). Eraslan, G., Simon, L. M., Mircea, M., Mueller, N. S. & Theis, F. J. Single-cell RNA-seq denoising using a deep count autoencoder. *Nat. Commun.* **10**, 390 (2019). Qian, H. Stochastic physics, complex systems and biology. *Quant. Biol.* **1**, 50–53 (2013). Peccoud, J. & Ycart, B. Markovian modeling of gene-product synthesis. *Theor. Popul. Biol.* **48**, 222–234 (1995). Raj, A., Peskin, C. S., Tranchina, D., Vargas, D. Y. & Tyagi, S. Stochastic mRNA synthesis in mammalian cells. *PLoS Biol.* **4**, e309 (2006). Iyer-Biswas, S., Hayot, F. & Jayaprakash, C. Stochasticity of gene products from transcriptional pulsing. *Phys. Rev. E* **79**, 031911 (2009). Mugler, A., Walczak, A. M. & Wiggins, C. H. Spectral solutions to stochastic models of gene expression with bursts and regulation. *Phys. Rev. E* **80**, 041921 (2009). Chong, S., Chen, C., Ge, H. & Xie, X. S. Mechanism of transcriptional bursting in bacteria. *Cell* **158**, 314–326 (2014). Kumar, N., Singh, A. & Kulkarni, R. V. Transcriptional bursting in gene expression: analytical results for general stochastic models. *PLoS Comput. Biol.* **11**, e1004292 (2015). Jia, C., Zhang, M. Q. & Qian, H. Emergent L[é]{}vy behavior in single-cell stochastic gene expression. *Phys. Rev. E* **96**, 040402 (2017). Klindziuk, A. & Kolomeisky, A. B. Theoretical Investigation of Transcriptional Bursting: A Multistate Approach. *J. Phys. Chem. B* **122**, 11969–11977 (2018). Livesey, F. Strategies for microarray analysis of limiting amounts of RNA. *Briefings in Functional Genomics* **2**, 31–36 (2003). Taniguchi, Y. *et al.* Quantifying E. coli proteome and transcriptome with single-molecule sensitivity in single cells. *Science* **329**, 533–538 (2010). Suter, D. M. *et al.* Mammalian genes are transcribed with widely different bursting kinetics. 
*Science* **332**, 472–474 (2011). Harper, C. V. *et al.* Dynamic analysis of stochastic transcription cycles. *PLoS Biol.* **9**, e1000607 (2011). Rieckh, G. & Tka[č]{}ik, G. Noise and information transmission in promoters with multiple internal states. *Biophys. J.* **106**, 1194–1204 (2014). Corrigan, A. M., Tunnacliffe, E., Cannon, D. & Chubb, J. R. A continuum model of transcriptional bursting. *Elife* **5**, e13051 (2016). Bintu, L. *et al.* Dynamics of epigenetic regulation at the single-cell level. *Science* **351**, 720–724 (2016). Cai, L., Dalal, C. K. & Elowitz, M. B. Frequency-modulated nuclear localization bursts coordinate gene regulation. *Nature* **455**, 485 (2008). Moran, M. A. *et al.* Sizing up metatranscriptomics. *The ISME journal* **7**, 237 (2012). Pigolotti, S. & Vulpiani, A. Coarse graining of master equations with fast and slow states. *J. Chem. Phys.* **128**, 154114 (2008). Cappelletti, D., Wiuf, C. *et al.* Elimination of intermediate species in multiscale stochastic reaction networks. *Ann. Appl. Probab.* **26**, 2915–2958 (2016). Jia, C. Reduction of Markov chains with two-time-scale state transitions. *Stochastics* **88**, 73–105 (2016). Jia, C. Simplification of irreversible Markov chains by removal of states with fast leaving rates. *J. Theor. Biol.* **400**, 129–137 (2016). Bo, S. & Celani, A. Multiple-scale stochastic processes: decimation, averaging and beyond. *Phys. Rep.* **670**, 1–59 (2016). Jia, C. Simplification of Markov chains with infinite state space and the mathematical theory of random gene expression bursts. *Phys. Rev. E* **96**, 032402 (2017). Jia, C., Li, Y. & Qian, M. A general analysis of single IP$_3$ receptors modulated by cytosolic Ca$^{2+}$ and IP$_3$. In *The Third International Symposium on Optimization and Systems Biology*, 89–101 (Zhangjiajie, China, 2009). Jia, C., Jiang, D. & Qian, M. An allosteric model of the inositol trisphosphate receptor with nonequilibrium binding. *Phys. 
Biol.* **11**, 056001 (2014). Anders, S. & Huber, W. Differential expression analysis for sequence count data. *Genome Biol.* **11**, R106 (2010). Townes, F. W., Hicks, S. C., Aryee, M. J. & Irizarry, R. A. Feature Selection and Dimension Reduction for Single Cell RNA-Seq based on a Multinomial Model. *bioRxiv* 574574 (2019). Svensson, V. Droplet scRNA-seq is not zero-inflated. *bioRxiv* 582064 (2019). Satija, R., Farrell, J. A., Gennert, D., Schier, A. F. & Regev, A. Spatial reconstruction of single-cell gene expression data. *Nat. Biotechnol.* **33**, 495 (2015). Fan, J. *et al.* Characterizing transcriptional heterogeneity through pathway and gene set overdispersion analysis. *Nat. Methods* **13**, 241 (2016). Huang, M. *et al.* SAVER: gene expression recovery for single-cell RNA sequencing. *Nat. Methods* **15**, 539 (2018). Finak, G. *et al.* MAST: a flexible statistical framework for assessing transcriptional changes and characterizing heterogeneity in single-cell RNA sequencing data. *Genome Biol.* **16**, 278 (2015). Prabhakaran, S., Azizi, E., Carr, A. & Pe’er, D. Dirichlet process mixture model for correcting technical variation in single-cell gene expression data. In *International Conference on Machine Learning*, 1070–1079 (2016). Sun, Z. *et al.* A Bayesian mixture model for clustering droplet-based single-cell transcriptomic data from population studies. *Nat. Commun.* **10**, 1649 (2019).
--- author: - 'Chi-Ting Chiang' - and Ane Slosar bibliography: - 'main.bib' title: 'Power spectrum in the presence of large-scale overdensity and tidal fields: breaking azimuthal symmetry' --- Introduction {#sec:introduction} ============ The universe has no globally special direction and no globally special place. Statistical isotropy and homogeneity on large scales are two of the most fundamental assumptions of cosmology. They imply that any statistically non-zero quantity must respect these constraints. Translational invariance calls for a Fourier-space description of the two-point correlation, i.e. the power spectrum, and statistical isotropy additionally requires that the power spectrum be the same in all directions. The observed galaxy power spectrum, however, is not isotropic. There is a special direction, the direction along the line of sight, in which galaxies are displaced from their nominal, cosmological redshift-given coordinates by the component of their peculiar velocities along the line of sight. This results in redshift-space distortions, which in Fourier space introduce an additional dependence of the power spectrum on $\mu$, the cosine of the polar angle with respect to the line of sight [@Kaiser:1987qv]. Conventionally, the full anisotropic power spectrum is expanded in $\mu$ using Legendre polynomials, and one finds that the redshift-space distortions, on top of the monopole, generate the quadrupole and hexadecapole moments of the Kaiser power spectrum (the odd $\ell$ moments cannot be generated in an auto-correlation due to symmetry about the $\mu=0$ line). However, full azimuthal symmetry with respect to the line of sight still remains, meaning that the power spectrum is independent of the azimuthal angle $\phi$. In fact, unless there is an additional preferred direction in the primordial physics (see the discussions of various models in Ref.
[@Shiraishi:2016wec] and the references therein) which is not aligned with the line-of-sight, the symmetry of the system does not allow the statistical properties of the observables to depend on $\phi$. Of course, any real measurement in a finite volume will produce scatter, which will however be consistent with zero azimuthal power. While it is nontrivial to break the azimuthal symmetry for the entire universe, for a finite volume the azimuthal symmetry is generally broken, even without an anisotropic survey window function and in the simplest cosmological model, i.e. single-field inflation, Einstein gravity, and a $\Lambda$CDM background. Specifically, a finite volume rests in the background of long-wavelength modes, whose expected variances are given by the convolution of the power spectrum and the window function (see [eqs. (\[eq:sigmad\])–(\[eq:sigmaT\])]{}). These super-volume modes are not directly observable for a local observer sitting in the volume, because the underlying mean density of the entire universe is unknown, but they do generate both a mean overdensity and a mean tide, which act as a constant scalar and a constant tensor over the volume. Gravitational evolution couples Fourier modes of different wavenumbers, so the long-wavelength perturbations will affect the evolution of the small-scale structure formation (see e.g. Ref. [@Bernardeau:2001qr] for a review). This immediately calls for a natural extension of the Kaiser formalism: instead of limiting ourselves to the $\ell=0,2,4$ multipoles of the power spectrum, we will additionally allow for non-zero $m$ moments of the power spectrum due to the presence of the super-volume modes. We stress that we are expanding the tracer power spectrum and *not* the tracer overdensity field.
To maximally exploit the symmetries of the problem, we will do the same for the source fields; in particular, we will treat the tidal tensor as a general quadrupole [@Schmidt:2013gwa; @Osato:2017ess] and the overdensity of the volume and the linear power spectrum as trivial monopoles. This will simplify the treatment considerably. The effect of the long-wavelength modes on the power spectrum has been studied extensively for the overdensity [@Li:2014sga; @Li:2014jra; @Chiang:2014oga; @Wagner:2015gva; @Chiang:2017jnm; @Dai:2015jaa; @Barreira:2017sqa] as well as for the tidal fields [@Schmidt:2013gwa; @Nishimichi:2015kla; @Akitsu:2016leq; @Akitsu:2017syq; @Li:2017qgh; @Ip:2016jji; @Schmidt:2018hbj; @Pen:2012ft; @Zhu:2015zlh; @Zhu:2016esh]. Many of the studies focus on the effect of the super-sample mode on the covariance, i.e. the super-sample covariance, and on the resulting parameter constraints [@Li:2014sga; @Akitsu:2016leq; @Akitsu:2017syq; @Li:2017qgh]. Ref. [@Li:2014jra] treated the super-sample overdensity as a signal and discussed the possible constraints. Ref. [@Li:2017qgh] studied this effect as a contamination of baryonic acoustic oscillation (BAO) and redshift-space distortion measurements and included all bias parameters; they used the full bias parameterization to second order and tested it on simulations. However, because they focused on the effect as a contamination, only the usual, azimuthally averaged power spectrum was considered. Ref. [@Akitsu:2017syq] also treated the large-scale tidal field both as a contamination and as a signal, but only for the azimuthally averaged component, and did not include the higher-order biases. Ref. [@Shiraishi:2016wec] did go beyond the azimuthally averaged power spectrum, but treated the power asymmetry as a statistical field using the spherical bipolar formalism. In Ref. [@Schmidt:2018hbj] the effect has been studied for dark matter in simulations using the response function.
This work confirmed that the tidal effect, at least for dark matter, never significantly exceeds the large-scale perturbation-theory prediction. We will later use the same intuition to argue that the non-azimuthally symmetric components of the anisotropic power spectrum are unlikely to be contaminated by nonlinearities. The goal of this paper is to generalize the discussion and explore the observability of all components of the tidal fields from the redshift-space galaxy power spectrum of a finite survey. To that end, we will follow the standard derivation of the power spectrum response along the lines of Ref. [@Akitsu:2017syq], but including all the bias terms as in Ref. [@Li:2017qgh]. However, rather than azimuthally averaging the resulting expression, we will expand it in the spherical harmonic basis. The rest of the paper is organized as follows. In [Sec. \[sec:theory\]]{} we derive the power spectrum in the presence of long-wavelength overdensity and tidal fields, and decompose the power spectrum using spherical harmonics to highlight the internal symmetry of the problem. In [Sec. \[sec:fisher\]]{} we use a Fisher analysis to forecast the constraints on the long modes for a finite survey, and study the degeneracies of the linear bias as well as the growth rate with the long modes. We conclude in [Sec. \[sec:conclusion\]]{}. In [App. \[app:Amn\]]{} we show explicitly the terms with different angular dependencies of the redshift-space power spectrum in the presence of long-wavelength overdensity and tidal fields.

Theory {#sec:theory}
======

Power spectrum in a volume with a long-wavelength density perturbation {#sec:longmode}
----------------------------------------------------------------------

Consider measuring the galaxy power spectrum in a volume $V$, which is large enough to encompass linear modes over some range of scales.
Within this volume the mean matter density perturbation $\delta^W$ and tide $\tau^W_{ij}$ are given by
$$\delta^W=\int\frac{d^3k}{(2\pi)^3}W(-{{\mathbf{k}}})\delta({{\mathbf{k}}})\,,\quad
\tau^W_{ij}=\int\frac{d^3k}{(2\pi)^3}W(-{{\mathbf{k}}})\left[{\hat{k}}_i{\hat{k}}_j-\frac{1}{3}\delta^K_{ij}\right]\delta({{\mathbf{k}}})\,,$$ \[eq:longmodes\]
where $W({{\mathbf{k}}})$ is the top-hat smoothing kernel, $\delta^K_{ij}$ is the Kronecker delta, and a hat denotes a unit vector. For simplicity we shall assume the smoothing kernel is isotropic, so $W({{\mathbf{k}}})=W(k)$. Even though for a single mode $\tau_{ij}({{\mathbf{k}}})$ is determined deterministically by $\delta({{\mathbf{k}}})$, after averaging over all the modes inside the volume $V$, knowing $\delta^W$, which is a scalar containing just one number, is not sufficient to predict $\tau^W_{ij}$, which is a symmetric and traceless tensor containing five numbers, since the two have different ${{\mathbf{k}}}$ weightings according to [eq. (\[eq:longmodes\])]{}. This is particularly crucial when considering the super-sample modes, as the underlying $\delta({{\mathbf{k}}})$ is unknown given that we have only a finite survey. Thus, in this paper we shall treat $\delta^W$ and $\tau^W_{ij}$ as separate variables, meaning that when studying the constraints on the long modes we have to constrain $\delta^W$ and $\tau^W_{ij}$ separately. Due to the presence of the long modes, the power spectrum in this volume $V$ will be modified as [@Li:2014sga; @Li:2014jra; @Chiang:2014oga; @Wagner:2015gva; @Chiang:2017jnm; @Nishimichi:2015kla; @Akitsu:2016leq; @Akitsu:2017syq; @Li:2017qgh; @Dai:2015jaa; @Ip:2016jji; @Barreira:2017sqa; @Schmidt:2018hbj]
$$P_{gg}({{\mathbf{k}}}|\delta^W,\tau^W_{ij})=P_{gg}({{\mathbf{k}}})
+\frac{\partial P_{gg}({{\mathbf{k}}})}{\partial\delta^W}\delta^W
+\sum_{ij}\frac{\partial P_{gg}({{\mathbf{k}}})}{\partial\tau^W_{ij}}\tau^W_{ij}
+{\mathcal{O}}\!\left[{\left(}\delta^W{\right)}^2,{\left(}\tau^W_{ij}{\right)}^2,\delta^W\tau^W_{ij}\right]\,,$$
hence the power spectrum contains six additional degrees of freedom: one from $\delta^W$ and five from $\tau^W_{ij}$, due to the symmetric and traceless conditions.
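As a concrete illustration of [eq. (\[eq:longmodes\])]{}, the long modes of a gridded overdensity field can be read off from its discrete Fourier transform. The sketch below is a toy version with an all-pass window, $W({{\mathbf{k}}})=1$ for every retained mode (a real survey window would weight each mode by $W(-{{\mathbf{k}}})$); the helper name `long_modes` is illustrative, not from the text:

```python
import numpy as np

def long_modes(delta_x):
    """Volume-mean overdensity delta^W and tide tau^W_ij of a periodic box;
    a minimal sketch of eq. (longmodes) with an all-pass window W(k)=1."""
    n = delta_x.shape[0]
    delta_k = np.fft.fftn(delta_x) / delta_x.size   # delta_k[0,0,0] = mean overdensity
    freq = np.fft.fftfreq(n) * n                    # integer wavenumbers
    KX, KY, KZ = np.meshgrid(freq, freq, freq, indexing='ij')
    k2 = KX**2 + KY**2 + KZ**2
    k2[0, 0, 0] = 1.0                               # avoid 0/0; k=0 handled below
    khat = np.array([KX, KY, KZ]) / np.sqrt(k2)
    deltaW = delta_k[0, 0, 0].real
    tauW = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            w = khat[i] * khat[j] - (i == j) / 3.0  # khat_i khat_j - delta^K_ij / 3
            w[0, 0, 0] = 0.0                        # the k=0 mode carries no tide
            tauW[i, j] = np.sum(w * delta_k).real
    return deltaW, tauW
```

For a unit-amplitude plane wave along $\hat{z}$, the recovered tide is diagonal and traceless, with $\tau^W_{zz}$ twice the magnitude of $\tau^W_{xx}$, as expected from the $({\hat{k}}_i{\hat{k}}_j-\delta^K_{ij}/3)$ weighting.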
On average $\langle P_{gg}({{\mathbf{k}}}|\delta^W,\tau^W_{ij})\rangle=P_{gg}({{\mathbf{k}}})$ since $\langle\delta^W\rangle=\langle\tau^W_{ij}\rangle=0$, but if we correlate the power spectrum with the long modes in the same volume, as for measuring the position-dependent power spectrum [@Chiang:2014oga; @Chiang:2015eza], then one picks up the response signal. The response of the power spectrum to the long mode is equivalent to the bispectrum in the squeezed limit [@Chiang:2014oga; @Wagner:2015gva; @Barreira:2017sqa]. Intuitively, the squeezed bispectrum measures the correlation between one long and two short modes, and one can regard the two short modes as the small-scale power spectrum and the long mode as the large-scale perturbation. Since we consider the response of the redshift-space galaxy power spectrum to the underlying long mode, we shall adopt the bispectrum formed by two small-scale redshift-space galaxy perturbations and one large-scale real-space matter perturbation. Specifically,
$$\begin{aligned}
\left\langle\delta^W P_{gg}({{\mathbf{k}}}|\delta^W,\tau^W_{ij})\right\rangle
&=\frac{\partial P_{gg}({{\mathbf{k}}})}{\partial\delta^W}\left\langle{\left(}\delta^W{\right)}^2\right\rangle
+\sum_{ij}\frac{\partial P_{gg}({{\mathbf{k}}})}{\partial\tau^W_{ij}}\left\langle\delta^W\tau^W_{ij}\right\rangle\\
&=\lim_{q/k\ll1}B_{mgg}({{\mathbf{q}}},{{\mathbf{k}}},-{{\mathbf{q}}}-{{\mathbf{k}}})
\equiv B^{\rm sq}_{mgg}({{\mathbf{q}}},{{\mathbf{k}}},-{{\mathbf{q}}}-{{\mathbf{k}}})\,,\end{aligned}$$ \[eq:Bmgg\_0\]
where for simplicity we consider $\delta^W$ and $\tau^W_{ij}$ to contain a single mode with wavevector ${{\mathbf{q}}}$. Therefore, by the squeezed-bispectrum prescription, we can read off the galaxy power spectrum response to $\delta^W$ and $\tau^W_{ij}$. The galaxy redshift-space bispectrum predicted by standard perturbation theory at tree level is [@Bernardeau:2001qr]
$$B_{ggg}({{\mathbf{k}}}_1,{{\mathbf{k}}}_2,{{\mathbf{k}}}_3)=2\left[Z_1({{\mathbf{k}}}_1)Z_1({{\mathbf{k}}}_2)Z_2({{\mathbf{k}}}_1,{{\mathbf{k}}}_2)P_l(k_1)P_l(k_2)+(2~{\rm cyclic})\right]\,,$$
where $P_l$ is the linear power spectrum and $Z_1$ and $Z_2$ are the redshift-space kernels given by
$$\begin{aligned}
Z_1({{\mathbf{k}}}_i)=&\,b_1+f\mu_{k_i}^2\,,\\
Z_2({{\mathbf{k}}}_1,{{\mathbf{k}}}_2)=&\,b_1F_2({{\mathbf{k}}}_1,{{\mathbf{k}}}_2)+\frac{b_2}{2}+b_{s^2}S_2({{\mathbf{k}}}_1,{{\mathbf{k}}}_2)+f\mu_{k_3}^2G_2({{\mathbf{k}}}_1,{{\mathbf{k}}}_2)\\
&-\frac{f\mu_{k_3}k_3}{2}\left[\frac{\mu_{k_1}}{k_1}(b_1+f\mu_{k_1}^2)+\frac{\mu_{k_2}}{k_2}(b_1+f\mu_{k_2}^2)\right]\,,\end{aligned}$$
with ${{\mathbf{k}}}_3=-{{\mathbf{k}}}_1-{{\mathbf{k}}}_2$.
Here, $b_1$, $b_2$, and $b_{s^2}$ are the linear, nonlinear, and tidal biases, $f$ is the growth rate, $\mu_{k_i}$ is the cosine between ${{\mathbf{k}}}_i$ and the line-of-sight, and
$$\begin{aligned}
F_2({{\mathbf{k}}}_1,{{\mathbf{k}}}_2)&=\frac{5}{7}+\frac{\mu_{k_1k_2}}{2}\left(\frac{k_1}{k_2}+\frac{k_2}{k_1}\right)+\frac{2}{7}\mu_{k_1k_2}^2\,,\\
G_2({{\mathbf{k}}}_1,{{\mathbf{k}}}_2)&=\frac{3}{7}+\frac{\mu_{k_1k_2}}{2}\left(\frac{k_1}{k_2}+\frac{k_2}{k_1}\right)+\frac{4}{7}\mu_{k_1k_2}^2\,,\\
S_2({{\mathbf{k}}}_1,{{\mathbf{k}}}_2)&=\mu_{k_1k_2}^2-\frac{1}{3}\,,\end{aligned}$$
with $\mu_{k_1k_2}$ being the cosine between ${{\mathbf{k}}}_1$ and ${{\mathbf{k}}}_2$. Using the redshift-space kernels, the squeezed bispectrum formed by two small-scale redshift-space galaxy perturbations and one large-scale real-space matter perturbation is &B\^[sq]{}\_[mgg]{}(,,--) [\ ]{} =&2$$Z_1({{\mathbf{k}}})Z_2({{\mathbf{q}}},{{\mathbf{k}}})P_l(q)P_l(k)+Z_1(-{{\mathbf{q}}}-{{\mathbf{k}}})Z_2({{\mathbf{q}}},-{{\mathbf{q}}}-{{\mathbf{k}}})P_l(q)P_l(|{{\mathbf{q}}}+{{\mathbf{k}}}|)$$ [\ ]{} =&P\_l(q)P\_l(k) [\ ]{} +&\_[kq]{}\^2P\_l(q)P\_l(k) [\ ]{} +&\_[kq]{}\_qP\_l(q)P\_l(k) [\ ]{} +&(b\_1\^2f-f\^3\_k\^4)\_q\^2P\_l(q)P\_l(k)+[O]{}$q/k$ , \[eq:Bmgg\] where a prime denotes the logarithmic derivative with respect to $k$, ${{\mathbf{q}}}$ and ${{\mathbf{k}}}$ are the long and short modes, and we take $q/k\ll1$. Note that [eq. (\[eq:Bmgg\])]{} has been derived in Ref. [@Li:2017qgh], with slightly different notation. For point tracers, there is an additional term associated with the Poissonian process. Namely, in the presence of the large-scale mode ${{\mathbf{q}}}$, the local number density is modulated as ${\bar{n}}(1+b_1\delta^W)$, with ${\bar{n}}$ the global mean number density, leading to a local modulation of the shot-noise term, which adds $-b_1{\bar{n}}^{-1} P_l(q)$ to [eq. (\[eq:Bmgg\])]{}. This is a real term in the bispectrum, but since we are eventually interested in the locally measured power spectrum, for which the shot-noise term is locally subtracted, we disregard it in this paper.
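The kernels above are simple enough to code directly. A minimal sketch follows, using the standard perturbation-theory forms of $F_2$, $G_2$, $S_2$, $Z_1$, and $Z_2$; in `Z2` the last term is written with ${{\mathbf{k}}}_3={{\mathbf{k}}}_1+{{\mathbf{k}}}_2$ and a plus sign, which is equivalent to the text's convention (${{\mathbf{k}}}_3=-{{\mathbf{k}}}_1-{{\mathbf{k}}}_2$ with a minus sign) since $\mu_{k_3}$ flips sign with ${{\mathbf{k}}}_3$:

```python
import numpy as np

def F2(k1, k2, mu12):
    # second-order density kernel
    return 5/7 + 0.5*mu12*(k1/k2 + k2/k1) + (2/7)*mu12**2

def G2(k1, k2, mu12):
    # second-order velocity-divergence kernel
    return 3/7 + 0.5*mu12*(k1/k2 + k2/k1) + (4/7)*mu12**2

def S2(mu12):
    # tidal kernel
    return mu12**2 - 1/3

def Z1(kv, nhat, b1, f):
    # linear redshift-space kernel: b1 + f mu^2
    k = np.linalg.norm(kv)
    return b1 + f*(kv @ nhat / k)**2

def Z2(k1v, k2v, nhat, b1, b2, bs2, f):
    # second-order redshift-space kernel with quadratic and tidal biases
    k1, k2 = np.linalg.norm(k1v), np.linalg.norm(k2v)
    k3v = k1v + k2v
    k3 = np.linalg.norm(k3v)
    mu12 = (k1v @ k2v)/(k1*k2)
    mu1, mu2, mu3 = (k1v @ nhat)/k1, (k2v @ nhat)/k2, (k3v @ nhat)/k3
    return (b1*F2(k1, k2, mu12) + b2/2 + bs2*S2(mu12) + f*mu3**2*G2(k1, k2, mu12)
            + f*mu3*k3/2*(mu1/k1*(b1 + f*mu1**2) + mu2/k2*(b1 + f*mu2**2)))
```

Useful sanity checks: for aligned equal-length modes $F_2=G_2=2$, and with $f=b_2=b_{s^2}=0$ the kernel $Z_2$ reduces to $b_1F_2$.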
In addition, the presence of the large-scale tide would also modulate the local galaxy number density, though the leading-order effect is second order. The impact of long modes on stochasticity is discussed in detail in Sec. 2.8 of Ref. [@Desjacques:2016bnm]. To extract the power spectrum response from [eq. (\[eq:Bmgg\])]{}, we first note that the large-scale tide is related to the large-scale overdensity as
$$\tau^W_{ij}=\left[{\hat{q}}_i{\hat{q}}_j-\frac{1}{3}\delta^K_{ij}\right]\delta^W\,.$$ \[eq:tauij\]
This allows us to write
$$\begin{aligned}
&\mu_{kq}^2=\sum_{ij}{\hat{k}}_i{\hat{k}}_j{\hat{q}}_i{\hat{q}}_j=\frac{1}{3}+\sum_{ij}{\hat{k}}_i{\hat{k}}_j\frac{\tau^W_{ij}}{\delta^W}\,,\\
&\mu_{kq}\mu_q=\sum_{ij}{\hat{k}}_i{\hat{n}}_j{\hat{q}}_i{\hat{q}}_j=\frac{\mu_k}{3}+\sum_{ij}h_{ij}\frac{\tau^W_{ij}}{\delta^W}\,,\\
&\mu_q^2=\sum_{ij}{\hat{n}}_i{\hat{n}}_j{\hat{q}}_i{\hat{q}}_j=\frac{1}{3}+\sum_{ij}{\hat{n}}_i{\hat{n}}_j\frac{\tau^W_{ij}}{\delta^W}\,,\end{aligned}$$ \[eq:angles\_tauij\]
where ${\hat{n}}$ is the line-of-sight unit vector and $h_{ij}=({\hat{k}}_i{\hat{n}}_j+{\hat{k}}_j{\hat{n}}_i)/2$. Plugging [eq. (\[eq:angles\_tauij\])]{} into [eq. (\[eq:Bmgg\])]{} and using the fact that the power spectrum of the long mode can be regarded as $P_l(q)\sim(\delta^W)^2$, the power spectrum responses can be read off by comparing terms with [eq. (\[eq:Bmgg\_0\])]{}. Specifically, we have the galaxy power spectrum response to $\delta^W$ as & [\ ]{} =&b\_1\^2+2b\_1b\_2-b\_1\^2[P\_l’(k)]{}+b\_1\^2f+b\_1f\_k\^2 +2b\_1\^2f\_k\^2+2b\_2f\_k\^2 [\ ]{} -&b\_1f\_k\^2[P\_l’(k)]{}-b\_1\^2f\_k\^2[P\_l’(k)]{}+f\^2\_k\^4 +b\_1f\^2\_k\^4-f\^2\_k\^4[P\_l’(k)]{}[\ ]{} -&b\_1f\^2\_k\^4[P\_l’(k)]{}-f\^3\_k\^4+f\^3\_k\^6-f\^3\_k\^6[P\_l’(k)]{}, \[eq:delta\_resp\] and to $\tau_{ij}^W$ as & [\ ]{} =&\_i\_j [\ ]{} +&$$-b_1^2f\mu_k{\ln P_l'(k)}+4b_1f^2\mu_k^3-2b_1f^2\mu_k^3{\ln P_l'(k)}+4f^3\mu_k^5-f^3\mu_k^5{\ln P_l'(k)}$$h\_[ij]{} [\ ]{} +&$$b_1^2f-f^3\mu_k^4$$\_i\_j . \[eq:tauij\_resp\] [Eq. (\[eq:delta\_resp\])]{} and [eq. (\[eq:tauij\_resp\])]{} are essentially the same as in Ref. [@Nishimichi:2015kla] and Ref. [@Akitsu:2017syq], respectively, apart from the additional bias parameters.
It is useful to decompose the redshift-space galaxy power spectrum and the responses into different angular dependencies as
$$\begin{aligned}
&P_{gg}({{\mathbf{k}}})=\sum_{n=0}^2A_{0,n}(k)\,\mu_k^{2n}\,,\quad
\frac{\partial P_{gg}({{\mathbf{k}}})}{\partial\delta^W}=\sum_{n=0}^3A_{1,n}(k)\,\mu_k^{2n}\,,\\
&\frac{\partial P_{gg}({{\mathbf{k}}})}{\partial\tau^W_{ij}}=\sum_{n=0}^2A_{2,n}(k)\,\mu_k^{2n}\,{\hat{k}}_i{\hat{k}}_j
+\sum_{n=0}^2A_{3,n}(k)\,\mu_k^{2n+1}\,h_{ij}
+\sum_{n=0}^1A_{4,n}(k)\,\mu_k^{4n}\,{\hat{n}}_i{\hat{n}}_j\,,\end{aligned}$$
where the $A_{m,n}$ are given explicitly in [App. \[app:Amn\]]{}. The above calculation assumes that the underlying mean galaxy number density is known for the power spectrum calculation. This is true for subvolumes inside a survey, because the underlying mean number density can be computed from the survey, assuming that the super-volume modes (larger than the survey) have negligible impact. On the other hand, to extract the super-volume modes, the underlying mean number density requires the observation of an even larger volume (in principle the whole universe) and so is generally unknown, and one can only use the “local” mean density in the survey to characterize the power spectrum. We refer to this as the “local” power spectrum, which is related to the “global” power spectrum, computed using the true underlying mean density, as
$$P_{gg}^G({{\mathbf{k}}})=(1+\delta_g^W)^2P_{gg}^L({{\mathbf{k}}})\,,$$
where the superscripts $G$ and $L$ denote the global and local power spectrum, respectively, and $\delta_g^W$ is the mean galaxy overdensity of the volume. In real space $\delta_g^W=b_1\delta^W$, and in redshift space
$$\delta_g^W=(b_1+f\mu_q^2)\delta^W
={\left(}b_1+\frac{1}{3}f{\right)}\delta^W+f\sum_{ij}{\hat{n}}_i{\hat{n}}_j\tau^W_{ij}
={\left(}b_1+\frac{1}{3}f{\right)}\delta^W+f\tau^W_{22}\,,$$
where we conventionally set ${\hat{n}}=\hat{z}$. Thus, to leading order the local and global power spectra are related in real and redshift space respectively as
$$P_{gg}^L({{\mathbf{k}}})\approx(1-2b_1\delta^W)P_{gg}^G({{\mathbf{k}}})\,,\quad
P_{gg}^L({{\mathbf{k}}})\approx\left[1-2{\left(}b_1+\frac{1}{3}f{\right)}\delta^W-2f\tau^W_{22}\right]P_{gg}^G({{\mathbf{k}}})\,.$$ \[eq:local\]
We can use [eq.
(\[eq:local\])]{} to mimic the power spectrum measured by a local observer living in the volume who cannot access $\delta^W$ and $\tau^W_{ij}$. The same effect has been discussed in Ref. [@Chiang:2017qoh] for probing the correlation between the Cosmic Microwave Background (CMB) lensing convergence and the Lyman-$\alpha$ forest power spectrum measured using the local mean flux.

Spherical expansion {#sec:expansion}
-------------------

To highlight the internal symmetry of the problem, we expand the large-scale tidal field in the $\ell=2$ spherical harmonics as
$${\mathcal{T}}^W_m=\int d^2{\hat{n}}\sum_{ij}{\hat{n}}_i{\hat{n}}_j\tau^W_{ij}Y_{2m}^*({\hat{n}})\,.$$
The existing components are
$${\mathcal{T}}^W_0=-\sqrt{\frac{4\pi}{5}}\left(\tau^W_{00}+\tau^W_{11}\right)\,,\quad
{\mathcal{T}}^W_1=-\sqrt{\frac{8\pi}{15}}\left(\tau^W_{02}-i\tau^W_{12}\right)\,,\quad
{\mathcal{T}}^W_2=\sqrt{\frac{2\pi}{15}}\left(\tau^W_{00}-\tau^W_{11}-2i\tau^W_{01}\right)\,,$$
hence the tidal tensor can be written as
$$\tau^W_{ij}=\frac{15}{8\pi}\int d^2{\hat{n}}\left[{\hat{n}}_i{\hat{n}}_j-\frac{1}{3}\delta^K_{ij}\right]\sum_{m=-2}^{2}{\mathcal{T}}^W_mY_{2m}({\hat{n}})\,.$$ \[eq:tauijmat\]
Note that ${\mathcal{T}}^W_m$ has five degrees of freedom (one real number for $m=0$ and two complex numbers for $m=1,2$), matching the number of degrees of freedom of $\tau^W_{ij}$, which is symmetric and traceless. We also have the usual reality condition ${\mathcal{T}}^W_m=(-1)^m({\mathcal{T}}^W_{-m})^*$. In the same spirit we expand the redshift-space galaxy power spectrum and the responses in spherical harmonics. Redshift-space distortions introduce a special direction. For a large survey the line-of-sight direction varies with angular position; for simplicity, in this paper we apply the plane-parallel approximation and conventionally set ${\hat{n}}=\hat{z}$. The spherical multipole expansion of the power spectrum is therefore given by
$$P_{gg,\ell m}(k)=\int d^2{\hat{k}}\,P_{gg}({{\mathbf{k}}})Y^*_{\ell m}({\hat{k}})\,.$$
A similar decomposition has been proposed by Refs. [@Shiraishi:2016wec; @Sugiyama:2017ggb]. Note that $P_{gg,00}(k)$, $P_{gg,20}(k)$, and $P_{gg,40}(k)$ are the usual azimuthally averaged redshift-space monopole, quadrupole, and hexadecapole of the power spectrum, up to different prefactors.
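The multipole projection $P_{gg,\ell m}(k)=\int d^2{\hat{k}}\,P_{gg}({{\mathbf{k}}})Y^*_{\ell m}({\hat{k}})$ is easy to check numerically for the linear Kaiser spectrum $(b_1+f\mu^2)^2P_l$: all $m\neq0$ multipoles and all $\ell>4$ multipoles must vanish, and the monopole must equal $\sqrt{4\pi}\,(b_1^2+2b_1f/3+f^2/5)P_l$. A sketch (the spherical harmonic is built from `scipy.special.lpmv`, which carries the Condon-Shortley phase; $b_1$, $f$ values in the checks are just examples):

```python
import numpy as np
from scipy.special import lpmv, factorial

def Ylm(l, m, theta, phi):
    # spherical harmonic via associated Legendre functions
    am = abs(m)
    norm = np.sqrt((2*l + 1)/(4*np.pi)*factorial(l - am)/factorial(l + am))
    Y = norm*lpmv(am, l, np.cos(theta))*np.exp(1j*am*phi)
    return (-1)**am*np.conj(Y) if m < 0 else Y

def kaiser_multipole(ell, m, b1, f, Pl=1.0, nmu=64, nphi=64):
    """P_{gg,lm}(k) = int d^2khat P_gg(k) Y*_lm(khat) at fixed k,
    for the linear Kaiser spectrum (b1 + f mu^2)^2 Pl; numerical sketch."""
    mu, wmu = np.polynomial.legendre.leggauss(nmu)   # Gauss-Legendre in mu
    phi = 2*np.pi*(np.arange(nphi) + 0.5)/nphi       # uniform midpoints in azimuth
    TH, PH = np.meshgrid(np.arccos(mu), phi, indexing='ij')
    P = (b1 + f*mu[:, None]**2)**2 * Pl              # no phi dependence
    Ystar = np.conj(Ylm(ell, m, TH, PH))
    return np.sum(wmu[:, None]*P*Ystar)*(2*np.pi/nphi)
```

The uniform azimuthal grid kills every $e^{-im\phi}$ with $m\neq0$ exactly, mirroring the statement that only the long-wavelength tide can populate the $m\neq0$ multipoles.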
Note also that for $m\neq0$, $P_{gg,\ell m}$ contains complex components, as does ${\mathcal{T}}^W_m$. For $m=0$, the linear Kaiser power spectrum (which contributes only to $\ell\le4$), $\delta^W$, and ${\mathcal{T}}^W_0$ contribute: P\_[gg,00]{}(k)=&, [\ ]{} P\_[gg,20]{}(k)=&, [\ ]{} P\_[gg,40]{}(k)=& $$22A_{0,2}+\delta^W(22A_{1,2}+30A_{1,3})+\sqrt{\frac{5}{4\pi}}{\mathcal{T}}^W_0(33A_{2,1}+34A_{2,2}+22A_{4,1})$$ , [\ ]{} P\_[gg,60]{}(k)=& $$\delta^W(2A_{1,3})+\sqrt{\frac{5}{4\pi}}{\mathcal{T}}^W_0(3A_{2,2})$$ . \[eq:Pggm0\] For $m=1$ we find: P\_[gg,21]{}(k)=& \^W\_1(42A\_[2,0]{}+18A\_[2,1]{}+10A\_[2,2]{}+21A\_[3,0]{}+9A\_[3,1]{}+5A\_[3,2]{}) , [\ ]{} P\_[gg,41]{}(k)=& \^W\_1(22A\_[2,1]{}+20A\_[2,2]{}+11A\_[3,1]{}+10A\_[3,2]{}) , [\ ]{} P\_[gg,61]{}(k)=& \^W\_1(2A\_[2,2]{}+A\_[3,2]{}) . \[eq:Pggm1\] For $m=2$ we find: P\_[gg,22]{}(k)=&\^W\_2(21A\_[2,0]{}+3A\_[2,1]{}+A\_[2,2]{}) , [\ ]{} P\_[gg,42]{}(k)=&\^W\_2(11A\_[2,1]{}+6A\_[2,2]{}) , [\ ]{} P\_[gg,62]{}(k)=&\^W\_2A\_[2,2]{} . \[eq:Pggm2\] [Eqs. (\[eq:Pggm0\])–(\[eq:Pggm2\])]{} are the main results of this paper. We find that with ${\mathcal{T}}^W_1$ and ${\mathcal{T}}^W_2$, the power spectrum has components with $m=1,2$, meaning that the azimuthal symmetry of the power spectrum is broken. This opens a new window for measuring the super-volume tide from the small-scale power spectrum in a volume: while the linear Kaiser power spectrum dominates the $m=0$ components, so that $\delta^W$ and ${\mathcal{T}}^W_0$ may be difficult to extract, the $m=1,2$ components can only be generated by the tidal fields, hence any measurement of them is pure signal. In reality, however, the anisotropic window function will also contaminate the signal at $m=1,2$ [@Sugiyama:2017ggb], and so has to be carefully accounted for. In principle, gravitational lensing is also likely to generate $m\neq0$ modes, but these will be small for survey-size volumes.
![(Left) Ratio of the long mode contribution (terms associated with $\delta^W$) to the fiducial ($\delta^W=0$) $P^r_{gg,00}$, which is $b_1^2P_l$ and independent of volume, in real space at $z=0.5$. (Right) Ratio of $P^r_{gg,20}$ to $b_1^2P_l$ in real space at $z=0.5$. The red solid, green dashed, and blue dot-dashed lines show volumes of $10^6$, $4\times10^7$, and $10^9~{h^{-3}~{\rm Mpc}^3}$. The values of $\delta^W$ and ${\mathcal{T}}^W_0$ are set to their $1-\sigma$ expected values, i.e. $\sigma_{\delta^W}$ and $\sigma_{{\mathcal{T}}^W_0}$ with a spherical top-hat window function, respectively. Note that the effect of using the local mean density to compute the power spectrum, i.e. [eq. (\[eq:local\])]{}, is not included.[]{data-label="fig:Pggr"}](Pgg00_rspace.pdf "fig:"){width="49.50000%"} ![](Pgg20_rspace.pdf "fig:"){width="49.50000%"}

To obtain real-space results, we can set $f=0$ in $A_{m,n}$, and the only non-vanishing components are
$$\begin{aligned}
P^r_{gg,00}(k)=&\,\sqrt{4\pi}\left\{\left[b_1^2+\delta^W{\left(}\frac{47}{21}b_1^2+2b_1b_2{\right)}\right]P_l(k)+\delta^W{\left(}-\frac{1}{3}b_1^2{\right)}P'_l(k)\right\}\,,\\
P^r_{gg,2m}(k)=&\,{\mathcal{T}}^W_m\left[{\left(}\frac{8}{7}b_1^2+2b_1b_{s^2}{\right)}P_l(k)+{\left(}-b_1^2{\right)}P'_l(k)\right]\,.\end{aligned}$$
\[eq:Pggr\] These equations have the behavior expected from the symmetry properties of the sources: scalar sources give rise to $\ell=0$ modes, and tidal $\ell=2,m$ sources give rise to $\ell=2,m$ components of the power spectrum. The left panel of [figure \[fig:Pggr\]]{} shows the long mode contribution (terms associated with $\delta^W$) relative to the fiducial ($\delta^W=0$, and so volume-independent) $P^r_{gg,00}$, which is $b_1^2P_l$, in real space at $z=0.5$ for various volumes denoted by different colors and styles. Note that the minimum wavenumber and the density of points of each line reflect the corresponding volume. We set $\delta^W$ to its $1-\sigma$ expected value, i.e.
$$\sigma_{\delta^W}=\left\langle{\left(}\delta^W{\right)}^2\right\rangle^{1/2}=\left[\int\frac{d^3k}{(2\pi)^3}|W(k)|^2P_l(k)\right]^{1/2}\,,$$ \[eq:sigmad\]
where we choose the window function to be a spherical top-hat. We find that the long mode contribution is larger for smaller volumes, which is the outcome of the larger $\sigma_{\delta^W}$. We also find that the ratio is fairly scale independent, hence $\delta^W$ will be degenerate with $b_1$ when constraining parameters; we discuss this in detail in [Sec. \[sec:fisher\]]{}. The right panel of [figure \[fig:Pggr\]]{} shows the ratio of $P^r_{gg,20}$ to $b_1^2P_l$ in real space at $z=0.5$, and as for $\delta^W$ we set the value of ${\mathcal{T}}^W_0$ to its $1-\sigma$ expected value, which is
$$\sigma_{{\mathcal{T}}^W_0}=\sigma_{{\mathcal{T}}^W_1}=\sigma_{{\mathcal{T}}^W_2}=\frac{4\sqrt{\pi}}{15}\,\sigma_{\delta^W}\,.$$ \[eq:sigmaT\]
Note that we only show the result for $P^r_{gg,20}$ because it has the same scale dependence as $P^r_{gg,21}$ and $P^r_{gg,22}$. Compared to the long mode contribution to $P^r_{gg,00}$, the signal in $P^r_{gg,20}$ is smaller, ranging from $\sim10^{-4}$ to $\sim10^{-1}$ for $V=10^9$ to $10^6~{h^{-3}~{\rm Mpc}^3}$. However, since the fiducial power spectrum does not contribute to $P^r_{gg,20}$, any detection of $P^r_{gg,20}$ is caused by the presence of ${\mathcal{T}}^W_0$. This opens a promising window for detecting the large-scale tide.
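The variance in [eq. (\[eq:sigmad\])]{} is a one-dimensional quadrature once the window is isotropic. Below is a minimal sketch for a spherical top-hat of radius $R$; the power-law $P_l(k)\propto k^{-2}$ used in the check is a toy assumption (the paper uses the linear $\Lambda$CDM spectrum), chosen because it makes the scaling $\sigma_{\delta^W}\propto R^{-1/2}$ exact:

```python
import numpy as np
from scipy.integrate import quad

def W_th(x):
    # Fourier transform of a spherical top-hat window, x = k*R
    return 3.0*(np.sin(x) - x*np.cos(x))/x**3

def sigma_deltaW(R, Pl, kmin=1e-4, kmax=20.0):
    """sigma_{delta^W} of eq. (sigmad) for a spherical top-hat of radius R;
    Pl is any callable P(k). Toy setup; units are set by Pl."""
    integrand = lambda k: k*k/(2.0*np.pi**2)*W_th(k*R)**2*Pl(k)
    var, _ = quad(integrand, kmin, kmax, limit=800)
    return np.sqrt(var)
```

Larger volumes give smaller expected long modes, which is the trend seen in the figure discussion above.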
We set the redshift to 0.5 to match most current galaxy surveys; the contribution from the long modes is smaller at high redshift, assuming that the biases are unchanged, since both $\delta^W$ and ${\mathcal{T}}^W_m$ are proportional to the linear growth factor.

![(Top left) Ratio of the long mode contribution (terms associated with $\delta^W$ and ${\mathcal{T}}^W_0$) to the fiducial ($\delta^W={\mathcal{T}}^W_0=0$) $P_{gg,\ell0}$ for $\ell\le4$, i.e. the Kaiser power spectrum, in redshift space at $z=0.5$. The solid, dashed, and dot-dashed lines show $\ell=0$, $2$, and $4$, respectively. The other panels show the sizes, relative to $b_1^2P_l$, of terms in redshift space at $z=0.5$ that are not sourced by the Kaiser power spectrum. (Top right) Ratio of $P_{gg,2m}$ to $b_1^2P_l$ for $m=1$ (solid) and 2 (dashed), which are sourced respectively by ${\mathcal{T}}^W_1$ and ${\mathcal{T}}^W_2$. (Bottom left) Same as the top right panel, but for $\ell=4$, i.e. $P_{gg,4m}$. (Bottom right) Ratio of $P_{gg,6m}$ to $b_1^2P_l$ for $m=0$ (solid), 1 (dashed), and 2 (dot-dashed), which are sourced respectively by $\delta^W$ and ${\mathcal{T}}^W_0$, ${\mathcal{T}}^W_1$, and ${\mathcal{T}}^W_2$. The red thin, green medium, and blue thick lines show volumes of $10^6$, $4\times10^7$, and $10^9~{h^{-3}~{\rm Mpc}^3}$. The values of $\delta^W$ and ${\mathcal{T}}^W_m$ are set to their $1-\sigma$ expected values, i.e. $\sigma_{\delta^W}$ and $\sigma_{{\mathcal{T}}^W_m}$ with a spherical top-hat window function, respectively. Note that the effect of using the local mean density to compute the power spectrum, i.e. [eq. (\[eq:local\])]{}, is not included.[]{data-label="fig:Pggz"}](Pggl0_zspace.pdf "fig:"){width="49.50000%"} ![](Pgg2m_zspace.pdf "fig:"){width="49.50000%"} ![](Pgg4m_zspace.pdf "fig:"){width="49.50000%"} ![](Pgg6m_zspace.pdf "fig:"){width="49.50000%"}

In [figure \[fig:Pggz\]]{} we show the long mode contributions in redshift space at $z=0.5$. The top left panel shows the long mode contribution (terms associated with $\delta^W$ and ${\mathcal{T}}^W_0$) relative to the fiducial ($\delta^W={\mathcal{T}}^W_0=0$, i.e. the Kaiser power spectrum, and so volume-independent) $P_{gg,\ell0}$ for $\ell=0$ (solid), 2 (dashed), and 4 (dot-dashed). Interestingly, we find that for $\ell=4$ and $V=10^6~{h^{-3}~{\rm Mpc}^3}$ the long mode contribution exceeds the fiducial Kaiser power spectrum. This indicates that it is necessary to take the long mode contribution into account for the hexadecapole of the galaxy redshift-space power spectrum when the survey volume is less than $\sim10^6~{h^{-3}~{\rm Mpc}^3}$. The top right, bottom left, and bottom right panels show respectively the ratios of $P_{gg,2m}$, $P_{gg,4m}$, and $P_{gg,6m}$ to $b_1^2P_l$. As in real space, while these signals are small compared to the long mode contribution to $P_{gg,\ell0}$, they are caused solely by the long modes ($\delta^W$ and ${\mathcal{T}}^W_0$ for $m=0$, ${\mathcal{T}}^W_1$ for $m=1$, and ${\mathcal{T}}^W_2$ for $m=2$), hence providing great potential to probe the large-scale perturbations. Unlike in real space, the signal of the long modes depends both on the linear growth factor and on the growth rate. Since the growth factor and the growth rate have opposite redshift evolutions, the long mode signal, assuming the biases do not evolve with redshift, does not have a clear redshift evolution. Finally, we find that in general the effect of tides falls with $m$: the relative impact of ${\mathcal{T}}^W_0$ is larger than that of ${\mathcal{T}}^W_1$, which is in turn larger than that of ${\mathcal{T}}^W_2$. This is true in both real and redshift space.
In [figure \[fig:Pggr\]]{} and [figure \[fig:Pggz\]]{}, for clarity we do not include the effect of using the local mean density to compute the power spectrum, i.e. [eq. (\[eq:local\])]{}. Since the corrections have the same angular dependencies as the fiducial power spectrum, in real space the effect of the miscalibration of the mean density only contributes to terms associated with $\delta^W$ in $P^r_{gg,00}$, and in redshift space to terms associated with $\delta^W$ as well as ${\mathcal{T}}^W_0$ in $P_{gg,\ell0}$ for $\ell=0$, 2, and 4.

Estimator and covariance {#sec:variance}
------------------------

To measure $P_{gg,\ell m}(k)$ in a volume $V$, the simplest estimator is
$${\hat{P}}_{gg,\ell m}(k)=\frac{4\pi}{N(k)}\sum_{k-\Delta k/2\le|{{\mathbf{k}}}_i|\le k+\Delta k/2}\delta_g({{\mathbf{k}}}_i)\delta_g^*({{\mathbf{k}}}_i)Y^*_{\ell m}({\hat{k}}_i)\,,$$
where $N(k)$ is the number of independent Fourier modes in the bin. One can straightforwardly show that this estimator is unbiased because in the continuous limit
$$\frac{4\pi}{N(k)}\sum_{k-\Delta k/2\le|{{\mathbf{k}}}_i|\le k+\Delta k/2}\to\int d^2{\hat{k}}\,.$$
The covariance of the estimator can be computed as
$$\begin{aligned}
&{\rm cov}[{\hat{P}}_{gg,\ell m}(k),{\hat{P}}_{gg,\ell'm'}(k')]\\
=&\left\langle{\hat{P}}_{gg,\ell m}(k){\hat{P}}_{gg,\ell'm'}(k')\right\rangle-\left\langle{\hat{P}}_{gg,\ell m}(k)\right\rangle\left\langle{\hat{P}}_{gg,\ell'm'}(k')\right\rangle\\
=&\frac{(4\pi)^2}{N(k)N(k')}\sum_{ij}\left\langle\delta_g({{\mathbf{k}}}_i)\delta_g^*({{\mathbf{k}}}'_j)\right\rangle\left\langle\delta_g^*({{\mathbf{k}}}_i)\delta_g({{\mathbf{k}}}'_j)\right\rangle Y_{\ell m}^*({\hat{k}}_i)Y_{\ell'm'}^*({\hat{k}}'_j)\,,\end{aligned}$$
where we assume that the covariance is dominated by the disconnected Gaussian contribution and omit the full notation in the subscripts of the summation. Note that $\delta^W$ and ${\mathcal{T}}^W_m$ would also contribute to the covariance [@Li:2014sga; @Li:2014jra; @Akitsu:2016leq; @Akitsu:2017syq; @Li:2017qgh], but we consider the survey to be large enough that the super-sample covariance is a next-to-leading-order correction. One can easily see that the covariance is non-zero only if ${{\mathbf{k}}}_i={{\mathbf{k}}}'_j$, hence it simplifies to
$$\begin{aligned}
{\rm cov}[{\hat{P}}_{gg,\ell m}(k),{\hat{P}}_{gg,\ell'm'}(k)]&=\frac{(4\pi)^2}{N^2(k)}\sum_i\left[P_{gg}({{\mathbf{k}}}_i)+P_{\rm shot}\right]^2Y_{\ell m}^*({\hat{k}}_i)Y_{\ell'm'}^*({\hat{k}}_i)\\
&\approx\frac{4\pi}{N(k)}\int d^2{\hat{k}}\left[P_{gg}({{\mathbf{k}}})+P_{\rm shot}\right]^2Y_{\ell m}^*({\hat{k}})Y_{\ell'm'}^*({\hat{k}})\,,\end{aligned}$$
where $P_{\rm shot}$ is the shot noise.
To proceed, we assume the galaxy power spectrum is given by the Kaiser formalism, hence
$${\rm cov}[\hat{P}_{gg,\ell m}(k),\hat{P}_{gg,\ell'm'}(k)]\to\frac{1}{N(k)}\int\frac{d^2\hat{k}}{4\pi}[(b_1+f\mu^2)^2P_l(k)+P_{\rm shot}]^2Y_{\ell m}^*(\hat{k})Y_{\ell'm'}^*(\hat{k}) \,,$$
\[eq:cov\] which is non-zero only if $m=m'$. We shall apply [eq. (\[eq:cov\])]{} for the Fisher analysis in [Sec. \[sec:fisher\]]{}. $P_{gg,\ell m}(k)$ is a complex number, and in practice we measure the real and imaginary parts separately. Thus, the covariances are
$$\begin{aligned}
&{\rm cov}[\hat{P}_{gg,\ell m}^R(k),\hat{P}_{gg,\ell'm}^R(k)]=\frac{1}{N(k)}\int\frac{d^2\hat{k}}{4\pi}[(b_1+f\mu^2)^2P_l(k)+P_{\rm shot}]^2{{\rm Re}}[Y_{\ell m}^*(\hat{k})]\,{{\rm Re}}[Y_{\ell'm}^*(\hat{k})] \,, \\
&{\rm cov}[\hat{P}_{gg,\ell m}^I(k),\hat{P}_{gg,\ell'm}^I(k)]=\frac{1}{N(k)}\int\frac{d^2\hat{k}}{4\pi}[(b_1+f\mu^2)^2P_l(k)+P_{\rm shot}]^2{{\rm Im}}[Y_{\ell m}^*(\hat{k})]\,{{\rm Im}}[Y_{\ell'm}^*(\hat{k})] \,, \\
&{\rm cov}[\hat{P}_{gg,\ell m}^R(k),\hat{P}_{gg,\ell'm}^I(k)]=\frac{1}{N(k)}\int\frac{d^2\hat{k}}{4\pi}[(b_1+f\mu^2)^2P_l(k)+P_{\rm shot}]^2{{\rm Re}}[Y_{\ell m}^*(\hat{k})]\,{{\rm Im}}[Y_{\ell'm}^*(\hat{k})] \,,\end{aligned}$$
and one can easily show that ${\rm cov}[{\hat{P}}_{gg,\ell m}^R(k),{\hat{P}}_{gg,\ell'm}^I(k)]=0$ for all possible $\ell,m$. Therefore, the only non-zero components are
$$\begin{aligned}
&{\rm cov}[\hat{P}_{gg,\ell m}^R(k),\hat{P}_{gg,\ell'm}^R(k)]=\frac{1}{N(k)}\int\frac{d^2\hat{k}}{4\pi}[(b_1+f\mu^2)^2P_l(k)+P_{\rm shot}]^2{{\rm Re}}[Y_{\ell m}^*(\hat{k})]\,{{\rm Re}}[Y_{\ell'm}^*(\hat{k})] \,, \\
&{\rm cov}[\hat{P}_{gg,\ell m}^I(k),\hat{P}_{gg,\ell'm}^I(k)]=\frac{1}{N(k)}\int\frac{d^2\hat{k}}{4\pi}[(b_1+f\mu^2)^2P_l(k)+P_{\rm shot}]^2{{\rm Im}}[Y_{\ell m}^*(\hat{k})]\,{{\rm Im}}[Y_{\ell'm}^*(\hat{k})] \,.\end{aligned}$$
This means that for each $k$ the covariance matrix can be written as a block-diagonal matrix, consisting of the covariances of $({\hat{P}}^R_{gg,00},{\hat{P}}^R_{gg,20},{\hat{P}}^R_{gg,40},{\hat{P}}^R_{gg,60})$, $({\hat{P}}^R_{gg,21},{\hat{P}}^R_{gg,41},{\hat{P}}^R_{gg,61})$, $({\hat{P}}^I_{gg,21},{\hat{P}}^I_{gg,41},{\hat{P}}^I_{gg,61})$, $({\hat{P}}^R_{gg,22},{\hat{P}}^R_{gg,42},{\hat{P}}^R_{gg,62})$, and $({\hat{P}}^I_{gg,22},{\hat{P}}^I_{gg,42},{\hat{P}}^I_{gg,62})$.
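The block structure can be checked numerically. The sketch below (toy numbers, not the paper's survey setup) evaluates the angular integrals above, up to the overall $1/N(k)$ prefactor, for the pair $(\ell,\ell')=(2,4)$, $m=1$: the Re–Re and Im–Im terms are non-zero and equal, while the Re–Im cross term vanishes.

```python
import numpy as np
from scipy.special import sph_harm

# Angular parts of eq. (eq:cov) with the Kaiser weight
# w = [(b1 + f mu^2)^2 P_l + P_shot]^2, omitting the 1/N(k) prefactor.
b1, f, Pl, Pshot = 2.0, 0.75, 1.0, 0.4

n = 400
theta = (np.arange(n) + 0.5) * np.pi / n
phi = (np.arange(n) + 0.5) * 2.0 * np.pi / n
TH, PH = np.meshgrid(theta, phi, indexing="ij")
mu = np.cos(TH)
w = ((b1 + f * mu**2) ** 2 * Pl + Pshot) ** 2 * np.sin(TH)
dA = (np.pi / n) * (2.0 * np.pi / n)

def ang(fa, fb):
    # int d^2khat/(4 pi) w(khat) fa(khat) fb(khat), midpoint quadrature
    return np.sum(w * fa * fb) * dA / (4.0 * np.pi)

Y21 = np.conj(sph_harm(1, 2, PH, TH))   # Y*_{21}
Y41 = np.conj(sph_harm(1, 4, PH, TH))   # Y*_{41}

cov_RR = ang(Y21.real, Y41.real)
cov_II = ang(Y21.imag, Y41.imag)
cov_RI = ang(Y21.real, Y41.imag)
```

Note that with a uniform weight the $(2,1)$–$(4,1)$ term would vanish by orthogonality; it is the anisotropic Kaiser weight that couples different $\ell$ within a block.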
Fisher forecast {#sec:fisher}
===============

In the previous section we derived how the galaxy power spectrum in a finite volume responds to overdensity and tidal fields with wavelengths larger than the volume. One specific example is the power spectrum in a galaxy redshift survey, which is affected by the super-survey modes. This also means that by measuring $P_{gg,\ell m}$ of this survey, it is possible to put constraints on the super-survey overdensity and tidal fields, which are usually not directly observable unless a larger survey containing the current one is performed. To explore the ability to measure the long mode for a given survey, we apply the Fisher matrix
$$F_{\alpha\beta}=\sum_{k=k_{\rm min}}^{k_{\rm max}}\sum_{\ell\ell'}\sum_m \frac{\partial P_{gg,\ell m}(k)}{\partial\theta_\alpha}\left\{{\rm cov}[P_{gg,\ell m}(k),P_{gg,\ell'm}(k)]\right\}^{-1}\frac{\partial P_{gg,\ell'm}(k)}{\partial\theta_\beta} \,,$$
where $\theta_\alpha$ is the parameter of interest. The constraint on $\theta_\alpha$ as well as the correlation between $\theta_\alpha$ and $\theta_\beta$ are then
$${\rm err}[\theta_\alpha]=\sqrt{(F^{-1})_{\alpha\alpha}} \,, \qquad {\rm corr}[\theta_\alpha,\theta_\beta]=\frac{(F^{-1})_{\alpha\beta}}{\sqrt{(F^{-1})_{\alpha\alpha}(F^{-1})_{\beta\beta}}} \,.$$
For the fitting range we set $k_{\rm min}=k_F$ to be the fundamental frequency of the survey and explore the constraint for different $k_{\rm max}$. In this paper we shall adopt the Planck cosmology [@Ade:2015xua], i.e. $h=0.6803$, $\Omega_bh^2=0.0226$, $\Omega_ch^2=0.1186$, $A_s=2.137\times10^{-9}$, and $n_s=0.9667$, hence the shape of the power spectrum is fixed. We fix the redshift to be 0.5 because it is the redshift at which most galaxy surveys are performed, but the results can be straightforwardly generalized to other redshifts. The parameters of interest are $\theta_\alpha\in(b_1,b_2,b_{s^2},f,\delta^W,{\mathcal{T}}^W_0,{\mathcal{T}}^{W,R}_1,{\mathcal{T}}^{W,I}_1,{\mathcal{T}}^{W,R}_2,{\mathcal{T}}^{W,I}_2)$, where ${\mathcal{T}}^W_1$ and ${\mathcal{T}}^W_2$ are complex, so together they contribute four real parameters that one can measure.
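The Fisher recipe above can be sketched on a hypothetical two-parameter model. The toy below (our own illustration, not the paper's ten-parameter setup) takes $P_i(k)=(b_1+f\mu_i^2)^2A(k)$ observed in two $\mu$ bins with a diagonal Gaussian covariance per bin, builds $F_{\alpha\beta}$, and extracts ${\rm err}[\theta_\alpha]$ and ${\rm corr}[\theta_\alpha,\theta_\beta]$; the spectrum shape $A(k)$ and the mode counts are arbitrary stand-ins.

```python
import numpy as np

# Toy Fisher forecast for theta = (b1, f) with P_i(k) = (b1 + f mu_i^2)^2 A(k).
k = np.linspace(0.01, 0.3, 30)
A = 1.0e4 * (k / 0.05) ** -1.5        # stand-in linear spectrum shape
mu2 = np.array([0.25**2, 0.75**2])    # two mu bins
b1, f = 2.0, 0.75
nmodes = 1.0e3 * (k / k[0]) ** 2      # mock number of modes per k bin

P = (b1 + f * mu2[:, None]) ** 2 * A[None, :]
cov = 2.0 * P**2 / nmodes[None, :]    # Gaussian, diagonal in (mu, k)

dP = np.array([2.0 * (b1 + f * mu2[:, None]) * A[None, :],               # d/db1
               2.0 * (b1 + f * mu2[:, None]) * mu2[:, None] * A[None, :]])  # d/df

F = np.einsum('aik,ik,bik->ab', dP, 1.0 / cov, dP)
Finv = np.linalg.inv(F)
err = np.sqrt(np.diag(Finv))                                  # err[theta_a]
corr = Finv[0, 1] / np.sqrt(Finv[0, 0] * Finv[1, 1])          # corr[b1, f]
```

Since both derivatives are positive, $b_1$ and $f$ come out anti-correlated, mirroring the degeneracy structure discussed below.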
We set the fiducial values of the biases and growth rate to be $b_1=2$, $b_2=0.3$, $b_{s^2}=-\frac{4}{7}(b_1-1)=-0.57$, and $f(z=0.5)=0.75$, and for the long mode we set the fiducial value to be the $1-\sigma$ expected value for the corresponding volume, assuming a spherical top-hat window function. In the following we shall separately discuss the results in real and redshift space; in real space we set $f=0$, so the number of parameters is nine.

Real space {#sec:rspace}
----------

Let us begin with the Fisher analysis in real space, in which the power spectra are given in [eq. (\[eq:Pggr\])]{}. Moreover, since the global mean density is unknown if only a finite survey is performed, only the local mean density can be used to measure the power spectrum, hence there is an additional contribution to the response from the miscalibration of the mean density. We thus use [eq. (\[eq:local\])]{} to mimic this effect, and only $P^r_{gg,00}$ contains the additional contribution. We first notice that the Fisher matrix is not positive definite. This happens because there are nine parameters to be determined, but from [eq. (\[eq:Pggr\])]{} one can only measure eight scale dependencies: two from $P^r_{gg,00}$, two from $P^r_{gg,20}$, and two each from $P^r_{gg,21}$ and $P^r_{gg,22}$, which are complex numbers with the same scale dependence as $P^r_{gg,20}$, so they contribute only four independent amplitudes. Since the main focus of this paper is to probe the long mode as well as to study its impact on $b_1$ and $f$, we shall include priors of $\pm1$ on $b_2$ and $b_{s^2}$. These priors are sufficiently strong that they break the perfect degeneracy, to the extent that a more constraining prior has a negligible effect on the results. ![$1-\sigma$ constraint on $b_1$ from the real-space galaxy power spectrum as a function of survey volume for two choices of $P_{\rm shot}$ and $k_{\rm max}$.
The left panel shows a cosmic variance limited survey with high $k_{\rm max}$, whereas the right panel shows a BOSS-like survey number density with a realistic $k_{\rm max}$. Lines with different colors and styles show various priors on $\delta^W$. Note that a larger number of $\sigma$ corresponds to a weaker prior.[]{data-label="fig:sigmab1_rspace"}](sigmab1_rspace_1.pdf "fig:"){width="49.50000%"} ![$1-\sigma$ constraint on $b_1$ from the real-space galaxy power spectrum as a function of survey volume for two choices of $P_{\rm shot}$ and $k_{\rm max}$. The left panel shows a cosmic variance limited survey with high $k_{\rm max}$, whereas the right panel shows a BOSS-like survey number density with a realistic $k_{\rm max}$. Lines with different colors and styles show various priors on $\delta^W$. Note that a larger number of $\sigma$ corresponds to a weaker prior.[]{data-label="fig:sigmab1_rspace"}](sigmab1_rspace_2.pdf "fig:"){width="49.50000%"} Adding priors to $b_2$ and $b_{s^2}$, the Fisher matrix becomes invertible, and we find that $b_1$ and $\delta^W$ are highly correlated. Specifically, for $10^6\le V/({h^{-3}~{\rm Mpc}^3})\le4\times10^{10}$, $0\le P_{\rm shot}/({h^{-3}~{\rm Mpc}^3})\le8000$ (cosmic variance limited to the BOSS-like survey number density [@Alam:2016hwk]), and $0.1\le k_{\rm max}/({h~{\rm Mpc}^{-1}})\le0.5$, the correlation between $b_1$ and $\delta^W$ is greater than 0.95. The large correlation is not surprising, as the ratio shown in the left panel of [figure \[fig:Pggr\]]{} is quite scale independent, and the correction from the miscalibration of the mean density has the same scale dependence as the fiducial power spectrum. This means that the constraint on $b_1$ is largely determined by the knowledge of $\delta^W$. [Figure \[fig:sigmab1\_rspace\]]{} shows the $1-\sigma$ constraint on $b_1$ as a function of survey volume for two choices of $P_{\rm shot}$ and $k_{\rm max}$: the left panel shows no shot noise whereas the right panel shows a BOSS-like survey number density.
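The $1-\sigma$ fiducial long-mode values above are set by the top-hat variance $\sigma^2(V)=\int dk\,k^2P(k)W^2(kR)/(2\pi^2)$ with $W(x)=3(\sin x-x\cos x)/x^3$ and $R=(3V/4\pi)^{1/3}$. The sketch below illustrates the scaling with volume; the spectrum `P_toy` is an arbitrary toy shape, not the Planck $P_l(k)$ used in the text.

```python
import numpy as np

def P_toy(k):
    # arbitrary toy linear spectrum with a turnover near k ~ 0.02
    return 2.0e4 * (k / 0.02) / (1.0 + (k / 0.02) ** 2) ** 1.8

def sigma_tophat(V):
    """RMS overdensity in a spherical top-hat of volume V (toy spectrum)."""
    R = (3.0 * V / (4.0 * np.pi)) ** (1.0 / 3.0)
    k = np.logspace(-4, 1, 4000)
    x = k * R
    W = 3.0 * (np.sin(x) - x * np.cos(x)) / x**3
    y = k**2 * P_toy(k) * W**2 / (2.0 * np.pi**2)
    return np.sqrt(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(k)))  # trapezoid

sigmas = [sigma_tophat(V) for V in (1e6, 1e8, 1e10)]
```

As expected, the fiducial long-mode amplitude decreases with survey volume, which is one of the two competing effects behind the sweet spot discussed below.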
It is evident that the constraint on $b_1$ is largely dominated by the prior on $\delta^W$. Specifically, adding a $1-\sigma$ prior on $\delta^W$ can improve the constraint on $b_1$ by an order of magnitude compared to no prior. The main caveat in [figure \[fig:sigmab1\_rspace\]]{} is that we only consider the leading-order galaxy power spectrum, and in reality the small-scale nonlinearities in the matter power spectrum and galaxy bias reduce the information on the linear bias. Nevertheless, [figure \[fig:sigmab1\_rspace\]]{} clearly demonstrates the impact of the knowledge of $\delta^W$ on the $b_1$ constraint. ![Ratio of the $1-\sigma$ constraint on ${\mathcal{T}}^W_0$ to its expected value, i.e. ${\rm err}[{\mathcal{T}}^W_0]/\sigma_{{\mathcal{T}}^W_0}$, as a function of survey volume for $P_{\rm shot}=0$ (left) and $8000~{h^{-3}~{\rm Mpc}^3}$ (right). The red solid, green dashed, and blue dot-dashed lines show respectively $k_{\rm max}=0.2$, 0.4, and $0.6~{h~{\rm Mpc}^{-1}}$, whereas the black dotted line is for $\sigma_{{\mathcal{T}}^W_0}$. We do not include any prior on $\delta^W$, as it has negligible impact on the constraint on ${\mathcal{T}}^W_0$. Furthermore, we only present the constraint on ${\mathcal{T}}^W_0$ because the results are identical for ${\mathcal{T}}^{W,R}_1$, ${\mathcal{T}}^{W,I}_1$, ${\mathcal{T}}^{W,R}_2$, and ${\mathcal{T}}^{W,I}_2$.[]{data-label="fig:sigmaT0_rspace"}](sigmaT0_rspace_1.pdf "fig:"){width="49.50000%"} ![Ratio of the $1-\sigma$ constraint on ${\mathcal{T}}^W_0$ to its expected value, i.e. ${\rm err}[{\mathcal{T}}^W_0]/\sigma_{{\mathcal{T}}^W_0}$, as a function of survey volume for $P_{\rm shot}=0$ (left) and $8000~{h^{-3}~{\rm Mpc}^3}$ (right). The red solid, green dashed, and blue dot-dashed lines show respectively $k_{\rm max}=0.2$, 0.4, and $0.6~{h~{\rm Mpc}^{-1}}$, whereas the black dotted line is for $\sigma_{{\mathcal{T}}^W_0}$.
We do not include any prior on $\delta^W$, as it has negligible impact on the constraint on ${\mathcal{T}}^W_0$. Furthermore, we only present the constraint on ${\mathcal{T}}^W_0$ because the results are identical for ${\mathcal{T}}^{W,R}_1$, ${\mathcal{T}}^{W,I}_1$, ${\mathcal{T}}^{W,R}_2$, and ${\mathcal{T}}^{W,I}_2$.[]{data-label="fig:sigmaT0_rspace"}](sigmaT0_rspace_2.pdf "fig:"){width="49.50000%"} For the large-scale tidal fields, we find that the absolute value of the correlation between $b_1$ and ${\mathcal{T}}^W_m$ is less than 0.1 if the survey volume is greater than $10^8~{h^{-3}~{\rm Mpc}^3}$. The reason is that the constraint on $b_1$ comes mainly from $P^r_{gg,00}$, to which ${\mathcal{T}}^W_m$ do not contribute. As a result, the prior on $\delta^W$ has a negligible effect on the constraints on ${\mathcal{T}}^W_m$, meaning that ${\mathcal{T}}^W_m$ can be measured robustly in real space. [Figure \[fig:sigmaT0\_rspace\]]{} shows the ratio of the $1-\sigma$ constraint on ${\mathcal{T}}^W_0$ to its expected value, i.e. ${\rm err}[{\mathcal{T}}^W_0]/\sigma_{{\mathcal{T}}^W_0}$, as a function of survey volume for $P_{\rm shot}=0$ (left) and $8000~{h^{-3}~{\rm Mpc}^3}$ (right). We only present the constraint on ${\mathcal{T}}^W_0$ since the results are identical for ${\mathcal{T}}^{W,R}_1$, ${\mathcal{T}}^{W,I}_1$, ${\mathcal{T}}^{W,R}_2$, and ${\mathcal{T}}^{W,I}_2$. The red solid, green dashed, and blue dot-dashed lines show respectively $k_{\rm max}=0.2$, 0.4, and $0.6~{h~{\rm Mpc}^{-1}}$. We find that the constraint depends significantly on the shot noise. Specifically, for the cosmic variance limited survey ($P_{\rm shot}=0$) a constraint at the level of $\sigma_{{\mathcal{T}}^W_0}$ can be achieved if $k_{\rm max}=0.6~{h~{\rm Mpc}^{-1}}$, while for the BOSS-like survey number density the constraint on ${\mathcal{T}}^W_0$ worsens by more than a factor of two.
Note that while it may seem unrealistic to adopt $k_{\rm max}=0.6~{h~{\rm Mpc}^{-1}}$ for a galaxy redshift survey, $P^r_{gg,2m}$ cannot be produced by nonlinear evolution and is a distinct signature of large-scale tidal fields. Therefore, in the spirit of putting an upper limit on ${\mathcal{T}}^W_m$, it is justified to use a much higher wavenumber. Moreover, on small scales the survey window function tends to be isotropized [@Sato:2013hea], hence the contamination of $P^r_{gg,2m}$ due to the survey window function becomes less important [@Sugiyama:2017ggb]. In [figure \[fig:sigmaT0\_rspace\]]{} we also notice a minimum ratio at $\sim3\times10^6~{h^{-3}~{\rm Mpc}^3}$. This is because for small volumes the signal due to ${\mathcal{T}}^W_m$ is larger, while for large volumes there are more modes one can access to constrain ${\mathcal{T}}^W_m$. Hence there is a sweet spot in the survey volume for constraining the large-scale tidal fields, and the exact value depends on $P_{\rm shot}$ and $k_{\rm max}$.

Redshift space {#sec:zspace}
--------------

Let us now turn to the Fisher analysis in redshift space, in which the power spectra are given in [eqs. (\[eq:Pggm0\])–(\[eq:Pggm2\])]{} with ten parameters. As in real space, we adopt [eq. (\[eq:local\])]{} to account for the miscalibration of the mean density when measuring the power spectrum in a finite survey, so $P_{gg,00}$, $P_{gg,20}$, and $P_{gg,40}$ receive an additional contribution. To make the inversion of the Fisher matrix stable, we also include a prior of $\pm1$ on $b_2$ and $b_{s^2}$. Since the conclusions are insensitive to the choice of the survey parameters, in this section we shall fix $P_{\rm shot}=8000~{h^{-3}~{\rm Mpc}^3}$ and $k_{\rm max}=0.3~{h~{\rm Mpc}^{-1}}$ for a more realistic forecast.
![(Left) $1-\sigma$ constraint on $b_1$ from the redshift-space galaxy power spectrum as a function of survey volume for $P_{\rm shot}=8000~{h^{-3}~{\rm Mpc}^3}$, $k_{\rm max}=0.3~{h~{\rm Mpc}^{-1}}$, and a $3-\sigma$ prior on ${\mathcal{T}}^W_0$. Lines with different colors and styles show various priors on $\delta^W$. (Right) Same as the left panel, but with a $3-\sigma$ prior on $\delta^W$. Lines with different colors and styles show various priors on ${\mathcal{T}}^W_0$.[]{data-label="fig:sigmab1_zspace"}](sigmab1_zspace_T0p3.pdf "fig:"){width="49.50000%"} ![(Left) $1-\sigma$ constraint on $b_1$ from the redshift-space galaxy power spectrum as a function of survey volume for $P_{\rm shot}=8000~{h^{-3}~{\rm Mpc}^3}$, $k_{\rm max}=0.3~{h~{\rm Mpc}^{-1}}$, and a $3-\sigma$ prior on ${\mathcal{T}}^W_0$. Lines with different colors and styles show various priors on $\delta^W$. (Right) Same as the left panel, but with a $3-\sigma$ prior on $\delta^W$. Lines with different colors and styles show various priors on ${\mathcal{T}}^W_0$.[]{data-label="fig:sigmab1_zspace"}](sigmab1_zspace_deltap3.pdf "fig:"){width="49.50000%"} We first examine the constraint on $b_1$, which is dominated by $P_{gg,00}$, $P_{gg,20}$, and $P_{gg,40}$ due to their large signal-to-noise ratios compared to the rest of $P_{gg,\ell m}$. Since only $\delta^W$ and ${\mathcal{T}}^W_0$ contribute to $m=0$ components, we expect that $b_1$ is mostly degenerate with them. As in real space, we find that $b_1$ and $\delta^W$ are highly correlated regardless of the survey parameters and the priors on ${\mathcal{T}}^W_m$, with correlation coefficients greater than 0.8, hence the constraint on $b_1$ will depend strongly on the prior on $\delta^W$. The left panel of [figure \[fig:sigmab1\_zspace\]]{} shows the $1-\sigma$ constraint on $b_1$ from the redshift-space galaxy power spectrum with a $3-\sigma$ prior on ${\mathcal{T}}^W_0$ as a function of survey volume. 
We find that different priors on $\delta^W$ can affect the constraint on $b_1$ by more than an order of magnitude, and this finding is consistent with that in real space. For the correlation coefficient between $b_1$ and ${\mathcal{T}}^W_0$, we find it to be less than $-0.5$ when no prior on $\delta^W$ is included. The anti-correlation increases when we include a prior on $\delta^W$, hence we expect the constraint on $b_1$ to show some dependence on the prior on ${\mathcal{T}}^W_0$. The right panel of [figure \[fig:sigmab1\_zspace\]]{} shows the $1-\sigma$ constraint on $b_1$ with a $3-\sigma$ prior on $\delta^W$. We find that as long as there is some prior on ${\mathcal{T}}^W_0$, the constraint on $b_1$ converges well, indicating that a reliable constraint on $b_1$ can be obtained even with a conservative ($3-\sigma$) prior on ${\mathcal{T}}^W_0$. Interestingly, we notice that if there is no prior on $\delta^W$, then the prior on ${\mathcal{T}}^W_0$ has a negligible effect on the constraint on $b_1$. This further reinforces the strong correlation between $b_1$ and $\delta^W$. ![(Left) 1-$\sigma$ constraint on $f$ from the redshift-space galaxy power spectrum as a function of survey volume for $P_{\rm shot}=8000~{h^{-3}~{\rm Mpc}^3}$, $k_{\rm max}=0.3~{h~{\rm Mpc}^{-1}}$, and a $3-\sigma$ prior on ${\mathcal{T}}^W_0$. Lines with different colors and styles show various priors on $\delta^W$. (Right) Same as the left panel, but with a $3-\sigma$ prior on $\delta^W$. Lines with different colors and styles show various priors on ${\mathcal{T}}^W_0$.[]{data-label="fig:sigmaf_zspace"}](sigmaf_zspace_T0p3.pdf "fig:"){width="49.50000%"} ![(Left) 1-$\sigma$ constraint on $f$ from the redshift-space galaxy power spectrum as a function of survey volume for $P_{\rm shot}=8000~{h^{-3}~{\rm Mpc}^3}$, $k_{\rm max}=0.3~{h~{\rm Mpc}^{-1}}$, and a $3-\sigma$ prior on ${\mathcal{T}}^W_0$. Lines with different colors and styles show various priors on $\delta^W$.
(Right) Same as the left panel, but with a $3-\sigma$ prior on $\delta^W$. Lines with different colors and styles show various priors on ${\mathcal{T}}^W_0$.[]{data-label="fig:sigmaf_zspace"}](sigmaf_zspace_deltap3.pdf "fig:"){width="49.50000%"} We next examine the constraint on $f$, which, like $b_1$, is dominated by $P_{gg,00}$, $P_{gg,20}$, and $P_{gg,40}$, so we again focus on the degeneracy with $\delta^W$ and ${\mathcal{T}}^W_0$. We find that the absolute value of the correlation coefficient between $f$ and $\delta^W$ is less than 0.35 (with weaker correlation for larger volumes) and has almost no dependence on the prior on ${\mathcal{T}}^W_0$, whereas the correlation coefficient between $f$ and ${\mathcal{T}}^W_0$ changes from $\sim-0.85$ for no prior on $\delta^W$ to less than $-0.9$ for a conservative $3-\sigma$ prior on $\delta^W$. This strong correlation suggests that the constraint on $f$ will depend on the priors on both $\delta^W$ and ${\mathcal{T}}^W_0$, and indeed we find that adding a prior on only one of $\delta^W$ or ${\mathcal{T}}^W_0$ does not improve the constraint on $f$. [Figure \[fig:sigmaf\_zspace\]]{} shows the constraint on $f$ from the redshift-space galaxy power spectrum with various priors on $\delta^W$ and ${\mathcal{T}}^W_0$ as a function of the survey volume. Including conservative $3-\sigma$ priors on both $\delta^W$ and ${\mathcal{T}}^W_0$ tightens the constraint on $f$ significantly compared to the case with a prior on only one of them, and the result converges well with that obtained by fixing both $\delta^W$ and ${\mathcal{T}}^W_0$. This implies that in future large-scale structure analyses a robust constraint on $f$ can be obtained as long as $3-\sigma$ priors on both $\delta^W$ and ${\mathcal{T}}^W_0$ are added.
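Mechanically, adding a Gaussian prior of width $s$ on parameter $i$ amounts to adding $1/s^2$ to $F_{ii}$ before inverting. The toy below (a hypothetical $2\times2$ Fisher matrix standing in for a strongly correlated pair such as $b_1$ and $\delta^W$, with made-up numbers) shows how a prior on the second parameter tightens the marginalized constraint on the first.

```python
import numpy as np

# Toy 2x2 Fisher matrix with correlation coefficient -0.95 (made-up numbers).
F = np.array([[400.0, 190.0],
              [190.0, 100.0]])

def marg_err(F, i):
    # marginalized error: err[theta_i] = sqrt((F^-1)_ii)
    return np.sqrt(np.linalg.inv(F)[i, i])

Finv = np.linalg.inv(F)
corr0 = Finv[0, 1] / np.sqrt(Finv[0, 0] * Finv[1, 1])

err_noprior = marg_err(F, 0)
Fp = F.copy()
Fp[1, 1] += 1.0 / 0.05**2   # hypothetical Gaussian prior of width 0.05
err_prior = marg_err(Fp, 0)
```

With these numbers the prior on the correlated partner shrinks the marginalized error on the first parameter by roughly a factor of three, the same mechanism by which the $\delta^W$ and ${\mathcal{T}}^W_0$ priors improve $b_1$ and $f$ above.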
Since $b_1$ and $f$ are mostly degenerate with $\delta^W$ and ${\mathcal{T}}^W_0$, it is natural to ask whether the inclusion of $P_{gg,60}$, which can only be produced by $\delta^W$ and ${\mathcal{T}}^W_0$, improves the constraint on $b_1$ and $f$ or not. To address this question, we perform the Fisher analysis using the observables of $(P_{gg,00},P_{gg,20},P_{gg,40})$ and $(P_{gg,00},P_{gg,20},P_{gg,40},P_{gg,60})$. However, we find that the constraint on $b_1$ and $f$ is insensitive to the presence of $P_{gg,60}$. This is likely due to the low signal-to-noise ratio of $P_{gg,60}$, because even if $\delta^W$ (${\mathcal{T}}^W_0$) is fixed, the constraint on ${\mathcal{T}}^W_0$ ($\delta^W$) reduces only by a few percent regardless of the existence of $P_{gg,60}$. Therefore, it is better to include priors on $\delta^W$ and ${\mathcal{T}}^W_0$ for acquiring reliable constraints on both $b_1$ and $f$. ![(Left) Ratio of the 1-$\sigma$ constraint from the galaxy redshift-space power spectrum on ${\mathcal{T}}^{W,R}_1$ to its expected value, i.e. ${\rm err}[{\mathcal{T}}^{W,R}_1]/\sigma_{{\mathcal{T}}^{W,R}_1}$, as a function of survey volume for $P_{\rm shot}=8000~{h^{-3}~{\rm Mpc}^3}$, and $3-\sigma$ priors on $\delta^W$ and ${\mathcal{T}}^W_0$. The constraint on ${\mathcal{T}}^{W,I}_1$ is identical to ${\mathcal{T}}^{W,R}_1$. Lines with different colors and styles show various $k_{\rm max}$. (Right) Same as the left panel, but for the $1-\sigma$ constraint on ${\mathcal{T}}^{W,R}_2$.[]{data-label="fig:sigmaT1T2_zspace"}](sigmaT1_zspace_deltap3_T0p3_Pshot8000.pdf "fig:"){width="49.50000%"} ![(Left) Ratio of the 1-$\sigma$ constraint from the galaxy redshift-space power spectrum on ${\mathcal{T}}^{W,R}_1$ to its expected value, i.e. ${\rm err}[{\mathcal{T}}^{W,R}_1]/\sigma_{{\mathcal{T}}^{W,R}_1}$, as a function of survey volume for $P_{\rm shot}=8000~{h^{-3}~{\rm Mpc}^3}$, and $3-\sigma$ priors on $\delta^W$ and ${\mathcal{T}}^W_0$. 
The constraint on ${\mathcal{T}}^{W,I}_1$ is identical to ${\mathcal{T}}^{W,R}_1$. Lines with different colors and styles show various $k_{\rm max}$. (Right) Same as the left panel, but for the $1-\sigma$ constraint on ${\mathcal{T}}^{W,R}_2$.[]{data-label="fig:sigmaT1T2_zspace"}](sigmaT2_zspace_deltap3_T0p3_Pshot8000.pdf "fig:"){width="49.50000%"} ![Same as [figure \[fig:sigmaT1T2\_zspace\]]{}, but with $P_{\rm shot}=1000~{h^{-3}~{\rm Mpc}^3}$.[]{data-label="fig:sigmaT1T2_zspace_lowPshot"}](sigmaT1_zspace_deltap3_T0p3_Pshot1000.pdf "fig:"){width="49.50000%"} ![Same as [figure \[fig:sigmaT1T2\_zspace\]]{}, but with $P_{\rm shot}=1000~{h^{-3}~{\rm Mpc}^3}$.[]{data-label="fig:sigmaT1T2_zspace_lowPshot"}](sigmaT2_zspace_deltap3_T0p3_Pshot1000.pdf "fig:"){width="49.50000%"} While it is difficult to measure $\delta^W$ and ${\mathcal{T}}^W_0$ due to their low signal-to-noise, ${\mathcal{T}}^W_1$ and ${\mathcal{T}}^W_2$ can be probed because they are the only sources that can contribute respectively to $P_{gg,\ell1}$ and $P_{gg,\ell2}$. Moreover, we find that for a survey volume greater than $10^8~{h^{-3}~{\rm Mpc}^3}$, the absolute values of the correlation coefficients between $b_1$ and ${\mathcal{T}}^W_m$ as well as $f$ and ${\mathcal{T}}^W_m$ are less than 0.1 for $m\ge1$. [Figure \[fig:sigmaT1T2\_zspace\]]{} shows the ratios of the $1-\sigma$ constraints from the redshift-space galaxy power spectrum on ${\mathcal{T}}^{W,R}_1$ (left) and ${\mathcal{T}}^{W,R}_2$ (right) to their expected values, i.e. ${\rm err}[{\mathcal{T}}^{W,R}_1]/\sigma_{{\mathcal{T}}^{W,R}_1}$ and ${\rm err}[{\mathcal{T}}^{W,R}_2]/\sigma_{{\mathcal{T}}^{W,R}_2}$, as a function of survey volume for $P_{\rm shot}=8000~{h^{-3}~{\rm Mpc}^3}$, and $3-\sigma$ priors on $\delta^W$ and ${\mathcal{T}}^W_0$. The constraints on the imaginary part are identical to the real part. 
We find that although it is possible to put upper bounds on the super-survey tidal fields, the constraints are quite weak, ranging from $\sim3$ to 4 times the $\Lambda$CDM expected values for a survey of $10^9~{h^{-3}~{\rm Mpc}^3}$, even if a large $k_{\rm max}$ of $0.6~{h~{\rm Mpc}^{-1}}$ is used. The weak constraints are mainly driven by the large shot noise; for a high number density survey with $P_{\rm shot}=1000~{h^{-3}~{\rm Mpc}^3}$, as shown in [figure \[fig:sigmaT1T2\_zspace\_lowPshot\]]{}, one can in general put $\sim1-\sigma$ constraints on the super-survey tidal fields as long as a high $k_{\rm max}$ is adopted. This is consistent with the finding in Ref. [@Akitsu:2017syq] when all cosmological parameters are fixed, though they only forecast the constraint for ${\mathcal{T}}^W_0$.

Conclusion {#sec:conclusion}
==========

We have generalized the Kaiser formula to a finite volume with large-scale overdensity and tidal fields. We find that $\ell=0,2,4,6$ and $m=0,1,2$ modes are present in the spherical harmonic expansion. The linear power spectrum and the mean overdensity generate the azimuthally symmetric $\ell=0,2,4,6$, $m=0$ modes of the power spectrum. The tidal fields, written as a quadrupole, generate $\ell=2,4,6$ modes of the power spectrum, separately for each $m$ component. Hence, the azimuthal symmetry of the Kaiser power spectrum in a finite volume is broken by the presence of the large-scale tidal fields. The first non-trivial result of writing the equations in this way is that the tidal contribution to the azimuthally symmetric power spectrum is sourced by just one of the five degrees of freedom present in the tidal fields. This allows a more natural way of marginalizing over this uncertainty. Our numerical calculation shows that the relative size of the effects associated with the tidal fields decreases with increasing $m$. The additional small-scale physics cannot break the basic symmetries of the problem.
Hence, beyond-second-order physics that is still linear in ${\mathcal{T}}^W_m$ will only affect the same $m$ modes, although it can in principle affect arbitrarily large $\ell$ (for example, fingers of God). However, terms that are quadratic in ${\mathcal{T}}^W_m$ will in general couple to the sum and difference of two $m$ components. As a concrete example, we made Fisher forecasts for a galaxy redshift survey in determining the galaxy bias parameters, the growth rate $f$, and the super-survey overdensity $\delta^W$ and tidal fields ${\mathcal{T}}^W_m$. While fitting and marginalizing over ${\mathcal{T}}^W_0$ and $\delta^W$ is an efficient way of dealing with the super-sample covariance [@Li:2014sga; @Li:2014jra], using the power spectrum components with $m>0$ one can directly probe the super-sample tidal fields, which are usually not directly observable unless a larger volume containing the current survey is observed. Our numerical work also shows that $\delta^W$ and the linear bias are highly degenerate, as expected. For measuring tidal fields, we find a shallow optimum survey size that is set by two competing effects: increasing the volume increases the precision with which we can measure the power spectrum but at the same time decreases the expected signal of the super-sample modes. The optimum is at around $V\sim 3\times10^{7}~{h^{-3}~{\rm Mpc}^3}$ and depends only weakly on the number density involved. However, in general we find that the constraint on the tidal fields depends strongly on the galaxy number density, and for a realistic survey the signal-to-noise ratio is generally below unity, indicating that it is challenging to probe the super-sample tidal fields by measuring the anisotropic galaxy power spectrum.
On the other hand, for a high number density galaxy survey ($P_{\rm shot}=1000~{h^{-3}~{\rm Mpc}^3}$) it is possible to put $\sim1-\sigma$ constraints on the super-survey tidal fields ${\mathcal{T}}^W_1$ and ${\mathcal{T}}^W_2$ at their $\Lambda$CDM expected values. Finally, it is logically possible that, when measured, the tidal field would turn out to be considerably larger than expected, perhaps due to new physics at the horizon scale. Our result indicates that if the actual tidal fields are no more than an order of magnitude larger than the expected value, they are likely to be measurable with high significance. When this technique is applied to a real survey, the curvature of the sky and the shape of the actual survey window would need to be taken into account carefully. For a fixed large-scale tide, the tidal tensor will be rotated with respect to the line of sight across a survey covering a large fraction of the sky. A correct methodology for dealing with this exceeds the scope of this paper. Another way to probe the super-volume tidal fields is to divide the entire survey into smaller subvolumes and measure the fully anisotropic power spectrum in each subvolume, as in the position-dependent power spectrum approach [@Chiang:2014oga; @Chiang:2015eza]. In this way, one measures the tidal fields with scales larger than the subvolume size but smaller than the entire survey, hence the signal-to-noise is expected to be much higher compared to the super-survey modes. Since the effect of the long mode on the large-scale overdensity and tidal fields is equivalent to the squeezed bispectrum, the same information can also be extracted from full bispectrum measurements. However, there are now numerically highly efficient methods for bispectrum measurements [@Schmittfull:2014tca; @Scoccimarro:2015bla; @Sugiyama:2018yzo] that make the measurements based on subvolume power spectrum variations likely obsolete.
The authors thank Kazuyuki Akitsu, Donghui Jeong, Eiichiro Komatsu, Naonori Sugiyama, and Masahiro Takada for helpful discussions and comments on the draft. We also thank Fabian Schmidt for pointing out the existence of the Poisson modulation term in the galaxy bispectrum and other useful comments. CC is supported by grant NSF PHY-1620628. AS acknowledges hospitality of the Cosmoparticle Hub at University College London where parts of this work have been performed.

Angular decomposition of redshift-space galaxy power spectrum in the presence of long-wavelength overdensity and tide {#app:Amn}
=====================================================================================================================

From [eqs. (\[eq:delta\_resp\])–(\[eq:tauij\_resp\])]{}, it is straightforward to find
$$\begin{aligned}
&A_{0,0}=b_1^2P_l(k) \,, \quad A_{0,1}=2b_1fP_l(k) \,, \quad A_{0,2}=f^2P_l(k) \,, \\
&A_{1,0}=\left(\tfrac{47}{21}b_1^2+2b_1b_2+\tfrac{1}{3}b_1^2f\right)P_l(k)-\tfrac{1}{3}b_1^2P'_l(k) \,, \\
&A_{1,1}=\left(\tfrac{26}{7}b_1f+2b_1^2f+2b_2f\right)P_l(k)+\left(-\tfrac{2}{3}b_1f-\tfrac{1}{3}b_1^2f\right)P'_l(k) \,, \\
&A_{1,2}=\left(\tfrac{31}{21}f^2+\tfrac{10}{3}b_1f^2-\tfrac{1}{3}f^3\right)P_l(k)+\left(-\tfrac{1}{3}f^2-\tfrac{2}{3}b_1f^2\right)P'_l(k) \,, \\
&A_{1,3}=\tfrac{4}{3}f^3P_l(k)-\tfrac{1}{3}f^3P'_l(k) \,, \quad A_{2,0}=\left(\tfrac{8}{7}b_1^2+2b_1b_{s^2}\right)P_l(k)-b_1^2P'_l(k) \,, \\
&A_{2,1}=\left(\tfrac{24}{7}b_1f+2b_{s^2}f\right)P_l(k)-2b_1fP'_l(k) \,, \quad A_{2,2}=\tfrac{16}{7}f^2P_l(k)-f^2P'_l(k) \,, \\
&A_{3,0}=-b_1^2fP'_l(k) \,, \quad A_{3,1}=4b_1f^2P_l(k)-2b_1f^2P'_l(k) \,, \\
&A_{3,2}=4f^3P_l(k)-f^3P'_l(k) \,, \quad A_{4,0}=b_1^2fP_l(k) \,, \quad A_{4,1}=-f^3P_l(k) \,.\end{aligned}$$
--- abstract: 'In this article we give new proofs for the existence and basic properties of the circumcenter of mass defined by V. E. Adler in [@adler1993recuttings] and S. Tabachnikov and E. Tsukerman in [@tabachnikov2014circumcenter].' author: - 'Arseniy V. Akopyan' date: 'Received: date / Accepted: date' title: Some Remarks on the Circumcenter of Mass --- [^1] We start with definitions. \[def:power of point\] *The power of a point* $\mathbf{x}$ with respect to a sphere $\omega(\mathbf{o}, R)$ in $\mathbb{R}^d$ is defined as $\operatorname{Pow}(\omega, \mathbf{x})=\|\mathbf{ox}\|^2-R^2$. Here $\mathbf{o}$ is the center and $R$ the radius of the sphere $\omega(\mathbf{o}, R)$. \[def:power of simplex\] Given a simplex $\Delta$ in $\mathbb{R}^d$, define $$\operatorname{Pow}(\Delta)=\int\limits_\Delta \operatorname{Pow}(\omega_\Delta, \mathbf{x})d\mathbf{x},$$ where $\omega_\Delta$ is the circumsphere of $\Delta$. If $\omega$ is a sphere of higher dimension passing through all vertices of $\omega_\Delta$, then the power of any point of $\Delta$ with respect to $\omega$ is the same as the power with respect to $\omega_{\Delta}$. Therefore, in the definition of $\operatorname{Pow}(\Delta)$, the circumscribed sphere can be replaced by any sphere passing through the vertices of $\Delta$. Note also that the value of $\operatorname{Pow}(\Delta)$ is always negative. Denote the vertices of $\Delta$ by $\mathbf{v}_0$, $\mathbf{v}_1$, $\dots$, $\mathbf{v}_d$, let $\mathbf{o}_\Delta$ and $R_\Delta$ be the center and the radius of the circumsphere, and let $\mathbf{m}_\Delta$ be the centroid of $\Delta$. Then one has the following formulas for $\operatorname{Pow}(\Delta)$ (see
[@Rajan1994Optimality]), $$-\operatorname{Pow}(\Delta)=\frac{\operatorname{Vol}(\Delta)}{(d+1)(d+2)} \left(\sum_{i=0}^{d} \sum_{j=0}^{i-1} \|\mathbf{v}_i\mathbf{v}_j\|^2 \right)=\frac{d+1}{d+2}\operatorname{Vol}(\Delta)\left(R_\Delta^2-\|\mathbf{o}_\Delta \mathbf{m}_\Delta\|^2 \right),$$ where $\operatorname{Vol}(\Delta)$ is the volume of $\Delta$. \[lem:base equality\] Given a simplex $\Delta$ in $\mathbb{R}^d$, denote by $\vec{\mathbf{n}}_i$ the unit normal to the hyperface $\Delta_i$ directed toward the exterior of $\Delta$. Then $$\sum\limits_{i=0}^d \operatorname{Pow}(\Delta_i)\vec{\mathbf{n}}_i= 2\operatorname{Vol}(\Delta) \overrightarrow{\mathbf{o}_\Delta \mathbf{m}_\Delta}.$$ Without loss of generality, assume that $\mathbf{o}_\Delta$ is the origin. Let us use the following variant of the Gauss–Ostrogradsky theorem, also known as the gradient theorem: $$\int\limits_{\Delta} \operatorname{grad}f(\mathbf{x})dv= \int \limits_{\partial \Delta} f(\mathbf{x}) \vec{\mathbf{n}}(\mathbf{x}) ds,$$ where $dv$ and $ds$ are the volume elements of the ambient space and of the surface of the simplex, and $\vec{\mathbf{n}}(\mathbf{x})$ is the unit normal to the surface at a point $\mathbf{x}$. Apply this equation to the power of a point with respect to the circumsphere, $f(\mathbf{x})=\|\mathbf{x}\|^2-R_\Delta^2$. Then $\operatorname{grad}f(\mathbf{x})=2\mathbf{x}$. Note also that $\displaystyle \int_{\Delta_i} f(\mathbf{x})ds=\operatorname{Pow}(\Delta_i)$ since the sphere $\omega(\mathbf{o}_\Delta, R_\Delta)$ passes through the vertices of $\Delta_i$. We obtain $$2\operatorname{Vol}(\Delta) \overrightarrow{\mathbf{o}_\Delta \mathbf{m}_\Delta}=\int\limits_{\Delta} 2\mathbf{x}dv= \int \limits_{\partial \Delta} f(\mathbf{x})\vec{\mathbf{n}}(\mathbf{x}) ds=\sum\limits_{i=0}^d \operatorname{Pow}(\Delta_i)\vec{\mathbf{n}}_i.$$ \[cor:main theorem\] Let $\mathcal{C}$ be a $d$-dimensional piece-wise linear simplicial cycle in $\mathbb{R}^d$.
Let $\mathbf{o}_i$ and $\mathbf{m}_i$ be the circumcenters and centroids of the $d$-dimensional simplices $\Delta_i\in \mathcal{C}$. Then $$\sum\limits_{\Delta_i\in \mathcal{C}} \overrightarrow{\mathbf{o}_i \mathbf{m}_i}\operatorname{Vol}(\Delta_i)=\mathbf{0}.$$ For the centroid, one has $\sum\limits_{\Delta_i\in \mathcal{C}} \vec{\mathbf{m}}_i\operatorname{Vol}(\Delta_i)=\mathbf{0}$, because each point is counted the same number of times with positive and negative sign. So, we obtain the following corollary. \[cor:theorem for circumcenters\] Let $\mathcal{C}$ be a $d$-dimensional piece-wise linear simplicial cycle in $\mathbb{R}^d$. Let $\mathbf{o}_i$ be the circumcenters of the $d$-dimensional simplices $\Delta_i\in \mathcal{C}$. Then $$\sum\limits_{\Delta_i\in \mathcal{C}} \vec{\mathbf{o}}_i\operatorname{Vol}(\Delta_i)=\mathbf{0}.$$ Following [@tabachnikov2014circumcenter], we give the following definition. Let $\mathcal{K}$ be a $d$-dimensional piece-wise linear simplicial chain. Let $(\mathbf{o}_i, \operatorname{Vol}(\Delta_i))$ be the weighted point located at the circumcenter of $\Delta_i\in \mathcal{K}$ with the weight $\operatorname{Vol}(\Delta_i)$. The center of mass of the points $(\mathbf{o}_i, \operatorname{Vol}(\Delta_i))$ over all simplices of $\mathcal{K}$ is called the *circumcenter of mass* of $\mathcal{K}$. \[rem:(d-1)-cycle\] We can define the circumcenter of mass of any $(d-1)$-dimensional piece-wise linear simplicial cycle $\mathcal{C}$ in $\mathbb{R}^d$ as the circumcenter of mass of any of its fillings, that is, of any chain $\mathcal{K}$ such that $\partial \mathcal{K}= \mathcal{C}$. Due to Corollary \[cor:theorem for circumcenters\], the choice of filling for $\mathcal{C}$ does not matter. It seems that the first to note the existence of the circumcenter of mass of a planar polygon was Giusto Bellavitis in 1834 (see the book [@laisant1887theorie], pages 150–151). In 1993 it was independently noticed by V. E.
Adler in [@adler1993recuttings] for the case of a triangulation of a planar polygon by diagonals, and in the private correspondence of G. C. Shephard and B. Grünbaum. They also noted that the circumcenter could be replaced by any point on the Euler line, that is, by a fixed affine combination of the centroid and the circumcenter (for example, the orthocenter or the center of the Euler circle). A. G. Myakishev in [@myakishev2006two] proved the existence of the Euler (and also the Nagel) line for a quadrilateral. S. Tabachnikov and E. Tsukerman in [@tabachnikov2014circumcenter] proved the correctness of the definition of the circumcenter of mass for any simplicial polytope and the existence of the Euler line for higher-dimensional polytopes. The case of the central triangulation of a tetrahedron was posed at the student contest IMC 2009 (Problem 5). In the planar case, we can take a polygon as a cycle. Using Lemma \[lem:base equality\] we can give a short proof of the following theorem proved by S. Tabachnikov and E. Tsukerman. \[thm:equileteral polygon\] Let $P=\mathbf{a}_1\mathbf{a}_2\dots \mathbf{a}_n$ be an equilateral polygon. Then its circumcenter of mass coincides with the centroid of the polygonal lamina. Denote by $\mathbf{o}$ and $\mathbf{m}$ the circumcenter of mass and the centroid. Note that, for all $i$, the values $\operatorname{Pow}(\mathbf{a}_i\mathbf{a}_{i+1})$ are equal to each other. Denote this quantity by $p$ and let $l$ be the length of the sides. We have $\operatorname{Vol}(P)\overrightarrow{\mathbf{m} \mathbf{o}}=\frac12\sum\limits_{i=1}^n p\vec{\mathbf{n}}_i$. Note that this sum is equal to zero, because each vector $\vec{\mathbf{n}}_i$ is the vector $\frac{1}{l}\overrightarrow{\mathbf{a}_i\mathbf{a}_{i+1}}$ rotated by $90^\circ$, and $\sum \limits_{i=1}^n \overrightarrow{\mathbf{a}_i\mathbf{a}_{i+1}}=\mathbf{0}$. Note that using the formula for $\operatorname{Pow}(\Delta)$ we can generalize this theorem to higher dimensions.
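Both the well-definedness of the circumcenter of mass (Remark \[rem:(d-1)-cycle\]) and Theorem \[thm:equileteral polygon\] are easy to test numerically in the plane. The sketch below (the helper names are our own choices) triangulates a convex quadrilateral by each of its two diagonals and checks that the weighted average of circumcenters agrees, and then checks on a non-regular equilateral polygon, a unit-sided rhombus, that the circumcenter of mass coincides with the centroid of the lamina.

```python
import math

def circumcenter(A, B, C):
    # standard determinant formula for the circumcenter of a triangle
    ax, ay = A; bx, by = B; cx, cy = C
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return (ux, uy)

def signed_area(A, B, C):
    return ((B[0] - A[0]) * (C[1] - A[1]) - (C[0] - A[0]) * (B[1] - A[1])) / 2.0

def circumcenter_of_mass(triangles):
    # weighted average of circumcenters, weights = signed areas
    W = sum(signed_area(*t) for t in triangles)
    return tuple(sum(signed_area(*t) * circumcenter(*t)[i] for t in triangles) / W
                 for i in (0, 1))

def lamina_centroid(triangles):
    W = sum(signed_area(*t) for t in triangles)
    return tuple(sum(signed_area(*t) * (t[0][i] + t[1][i] + t[2][i]) / 3.0
                     for t in triangles) / W for i in (0, 1))

# 1) independence of the triangulation: the difference of the two fillings is a cycle
Q = [(0.0, 0.0), (3.0, 0.2), (2.5, 2.0), (0.3, 1.7)]
c1 = circumcenter_of_mass([(Q[0], Q[1], Q[2]), (Q[0], Q[2], Q[3])])
c2 = circumcenter_of_mass([(Q[0], Q[1], Q[3]), (Q[1], Q[2], Q[3])])
tri_defect = max(abs(c1[i] - c2[i]) for i in (0, 1))
assert tri_defect < 1e-9

# 2) an equilateral but non-regular polygon: a rhombus with unit sides
t = 1.0
P = [(0.0, 0.0), (1.0, 0.0), (1.0 + math.cos(t), math.sin(t)), (math.cos(t), math.sin(t))]
tris = [(P[0], P[1], P[2]), (P[0], P[2], P[3])]
eq_defect = max(abs(a - b) for a, b in zip(circumcenter_of_mass(tris), lamina_centroid(tris)))
assert eq_defect < 1e-9
```

Both checks hold to machine precision, since the underlying statements are exact identities.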
\[thm:equileteral polytope\] Let $P$ be a simplicial polytope in $\mathbb{R}^d$ such that for each facet of $P$ the sum of the squares of its edges is a constant. Then the circumcenter of mass and the centroid of the solid polytope $P$ coincide. Denote this constant by $c$. Note that for each facet $\Delta_i$ of $P$, which is a $(d-1)$-dimensional simplex, we have: $$\operatorname{Pow}(\Delta_i)=\operatorname{Vol}(\Delta_i) \frac{-c}{d(d+1)}.$$ From Minkowski’s theorem it follows that $$\sum_{\Delta_i \in (\text{facets of }P)}\operatorname{Vol}(\Delta_i) \mathbf{n}_i=0,$$ where $\mathbf{n}_i$ is a unit normal to the facet $\Delta_i$. Therefore $$\operatorname{Vol}(P)\overrightarrow{\mathbf{m} \mathbf{o}}= \frac{1}{2} \sum_{\Delta_i \in (\text{facets of }P)}\operatorname{Pow}(\Delta_i) \mathbf{n}_i =\frac{-c}{2d(d+1)}\sum_{\Delta_i \in (\text{facets of }P)}\operatorname{Vol}(\Delta_i) \mathbf{n}_i=0.$$ Using the other formula for $\operatorname{Pow}(\Delta_i)$ we can reformulate the requirement on the facets of the polytope $P$ in the following way: for each facet $\Delta_i$ the value $R_{\Delta_i}^2-\|\mathbf{o}_{\Delta_i} \mathbf{m}_{\Delta_i}\|^2$ is a constant. As the authors mention in [@tabachnikov2014circumcenter], if the vertices of $\mathcal{C}$ lie on a sphere $\omega$, then the circumcenter of mass coincides with the center of the sphere. Indeed, there is a filling of $\mathcal{C}$ with the same set of vertices as $\mathcal{C}$. The circumcenters of the simplices of this filling coincide with the center of the sphere $\omega$. In the same article S. Tabachnikov and E. Tsukerman give a definition of the circumcenter of mass in spherical geometry. Using the previous observation, we can give another explanation of the existence of this point. Consider the unit sphere $\mathcal{S}^d$ with the center at the origin $\mathbf{o}$ of $\mathbb{R}^{d+1}$.
By a weighted point $(\mathbf{x}, m)$ we mean a pair consisting of a point $\mathbf{x}$ and a number $m$, which it is natural to interpret as the vector $m\vec{\mathbf{x}}$ in $\mathbb{R}^{d+1}$. A set of weighted points $(\mathbf{x}_i, m_i)$ has its centroid at the point $\frac{\sum{m_i\vec{\mathbf{x}}_i}}{\|\sum{m_i\vec{\mathbf{x}}_i}\|}$ and the total mass $\|\sum{m_i\vec{\mathbf{x}}_i}\|$ (see [@galperin1993concept]). For each spherical $d$-simplex $\Delta_i=\mathbf{v}_0\mathbf{v}_1\dots\mathbf{v}_d$ of a spherical simplicial chain $\mathcal{C}$, consider the point $\mathbf{o}'_i$ which is the circumcenter of the simplex $\Delta'_i=\mathbf{o}\mathbf{v}_0\mathbf{v}_1\dots\mathbf{v}_d$ in $\mathbb{R}^{d+1}$. Now we can define the weighted circumcenter as the point $\mathbf{o}_i=\left(\displaystyle \frac{\mathbf{o}'_i}{\|\mathbf{o}'_i\|}, \operatorname{Vol}(\Delta'_i)\|\mathbf{o}'_i\|\right)$[^2]. The $(d+1)$-dimensional complex in $\mathbb{R}^{d+1}$ formed by the simplices $\Delta'_i$ is denoted by $\mathcal{C}'$. Suppose $\mathcal{C}$ is a $d$-dimensional simplicial cycle in $\mathcal{S}^d\subset \mathbb{R}^{d+1}$. Then its spherical circumcenter of mass coincides with $\mathbf{o}$ (has zero weight). By definition, the spherical circumcenter of mass of $\mathcal{C}$ coincides with the Euclidean circumcenter of mass of $\mathcal{C}'$. But its circumcenter of mass coincides with the circumcenter of mass of $\partial \mathcal{C}'$, which is the origin, because $\partial \mathcal{C}'$ is inscribed in $\mathcal{S}^d$. As in Remark \[rem:(d-1)-cycle\], we can define the circumcenter of mass of a $(d-1)$-dimensional spherical simplicial cycle in $\mathcal{S}^d$ as the circumcenter of mass of its filling. The author thanks Sergei Tabachnikov for useful discussions and valuable advice. [1]{} V. E. Adler. Recuttings of polygons. Functional Analysis and Its Applications, 27(2):141–143, 1993. G. A. Galperin. A concept of the mass center of a system of material points in the constant curvature spaces.
Communications in Mathematical Physics, 154(1):63–84, 1993. C. A. Laisant. Théorie et applications des équipollences. Gauthier-Villars, 1887. A. Myakishev. On two remarkable lines related to a quadrilateral. In [*Forum Geometricorum*]{}, volume 6, pages 289–295, 2006. V. T. Rajan. Optimality of the [D]{}elaunay triangulation in [${\bf R}\sp d$]{}. Discrete & Computational Geometry, 12(2):189–202, 1994. S. Tabachnikov and E. Tsukerman. Circumcenter of [M]{}ass and generalized [E]{}uler line. Discrete & Computational Geometry, 51(4):815–836, 2014. [^1]: Institute for Information Transmission Problems RAS\ Bolshoy Karetny per. 19, Moscow, Russia 127994\ B. N. Delone International Laboratory “Discrete and Computational Geometry”, P. G. Demidov Yaroslavl State University, Sovetskaya st. 14, Yaroslavl’, Russia 150000\ [^2]: Using a simple calculation it is easy to show that $\operatorname{Vol}(\Delta'_i)\|\mathbf{o}'_i\|=\frac{\operatorname{Vol}(\Delta_i)}{2(d+1)}$. So the circumcenter of mass from [@tabachnikov2014circumcenter] is the same as here.
--- abstract: | We use the language of squeezed states to give a systematic description of two issues in cosmological particle creation:\ a) Dependence of particle creation on the initial state specified. We consider in particular the number state, the coherent state and the squeezed state. b) The relation of spontaneous and stimulated particle creation and their dependence on the initial state. We also present results for the fluctuations in particle number in anticipation of its relevance to defining noise in quantum fields and the vacuum susceptibility of spacetime. author: - | B. L. Hu, G. Kang, A. Matacz [^1]\ [Department of Physics, University of Maryland, College Park, MD 20742, USA]{}\ [(umdpp 93-163)]{} title: Squeezed Vacua and the Quantum Statistics of Cosmological Particle Creation --- Introduction ============ Cosmological particle creation [@Par69; @SexUrb; @Zel70; @ZelSta; @Hu72] is a physical process of basic theoretical interest in quantum field theory in curved spacetime [@BirDav], and of important applied interest in the quantum dynamics of the early universe [@ZelSta; @HuPar; @HarHu]. In this paper we use the language of squeezed states [@sqst] to give a systematic description of two interrelated issues:\ a) Dependence of particle creation on the initial state.
We consider in particular the number state, the coherent state and the squeezed state.\ b) The relation of spontaneous and stimulated particle creation and their dependence on the initial state.\ We also present the result for the fluctuations in particle number in anticipation of its relevance to defining noise in quantum fields.\ Both of these issues have been explored before, albeit in a restricted context. The use of the Bogolubov transformation relating the canonical operators between the [*in*]{} and [*out*]{} states was introduced by Parker [@Par69] in cosmological particle creation. He also derived the evolutionary operator based on earlier work of Kamefuchi and Umezawa [@KamUme] and briefly discussed induced particle creation. Zel’dovich [@Zel70] first pointed out that cosmological particle creation is the quantum version of parametric amplification of classical waves. The connection of cosmological particle creation with processes in quantum optics was thus noticed more than 20 years ago [@Zel70; @ZelSta; @Hu72; @Hu74; @Gri74; @Ber75].\ Cosmological particle creation in coherent states was discussed by Hu [@Hu72] and Berger [@Ber75]. The former used a close analogy with a model in quantum optics based on the quantum statistics of coupled harmonic oscillators (e.g., [@Mollow; @HuMat2]). Entropy generation associated with thermal particle creation in an exponentially expanding universe was discussed by Parker [@Par76]. The distinction between the number state and the coherent state (in the so-called $N$ and $P$ representations) in the question of entropy generation in particle creation was first noticed by Hu and Pavon [@HuPav] in their search for an intrinsic measure of field entropy.
They found that the variance of particle number is a monotonically increasing function of time in the $P$ representation, and that an increase of particles in the $N$ representation is related to the fact that one has chosen an initial state which is an eigenstate of the number operator, as is often the case in most discussions of cosmological particle creation. The phase-particle number uncertainty relation implies that this choice amounts to assuming an initial state with random phase. Kandrup [@Kan] has further clarified these points. Entropy generation in particle creation with interactions was investigated by Hu and Kandrup [@HuKan], who discussed both spontaneous and stimulated production of particles.\ Since the concept of the squeezed state was introduced to quantum optics in the seventies [@sqst], there has been much progress in its experimental realizations and theoretical implications. The adoption of the language of squeezed states in cosmological particle creation was introduced recently by Grishchuk and Sidorov [@GriSid]. Although the physics is not new (this was also pointed out by Albrecht [*et al*]{} [@Alb93] in the inflationary cosmology context) and the results are largely known, the use of rotation and squeeze operators gives an alternative description which allows one to explore new avenues based on interesting ideas developed in quantum optics. More recent work on entropy generation in cosmological perturbations by Brandenberger and coworkers [@BPM] and by Gasperini and Giovannini [@GasGio] makes use of coarse-graining via a random phase approximation using the language of squeezed states. Matacz [@Mat93] has used the squeezed state formalism as a starting point for the study of decoherence of cosmological inhomogeneities in the coherent-state representation.\ The issues of initial states and entropy generation have been discussed in restricted conditions, and the issue of spontaneous and stimulated production has only been touched upon before.
For the sake of completeness, in this work we will address these issues under a common framework, and present the results for different initial states (the number state, the coherent state and the squeezed state). In Sec. 2 we give a short summary of particle creation described both in the old language of Bogolubov transformations and in the new language of squeezed states, mainly to present the basic concepts and introduce the terminology. Readers familiar with cosmological particle creation and the squeezed state description can go directly to Sec. 3. In Sec. 3 we give the results of spontaneous and stimulated production for different initial states for bosons. In Sec. 4 we work out the change in the fluctuations in particle number. This is in anticipation of its relation to defining noise in quantum fields and vacuum susceptibility in quantum processes in curved spacetimes. We aim in this simple paper only to show how the old and the new language can be used together to describe the quantum statistics of particle creation. Exploration of the implications of these processes will be described in more detail in later works.\ Particle Creation and Squeezed State ==================================== Particle Creation via Bogolubov Transformation ---------------------------------------------- Readers familiar with the process of cosmological particle creation can skip this subsection. For the original work, see [@Par69]. Our summary here is adapted from [@HuKan] with slight modifications in the discussion of spontaneous versus stimulated creation. Consider a massive ($m$) scalar field $\Phi$ coupled arbitrarily ($\xi$) to a background spacetime with metric $g_{\mu \nu}$ and scalar curvature $R$. Its dynamics is described by the Lagrangian density $$\mathcal{L}(x) = -\frac{\sqrt{-g}}{2}\left[ g^{\mu\nu}(x)\, \partial_\mu \Phi\, \partial_\nu \Phi - \left( m^2 + (1-\xi)\frac{R}{6}\right) \Phi^2(x)\right].$$ Here $\xi = 0$ and 1 denotes, respectively, conformal and minimal coupling.
The scalar field satisfies the wave equation $$\left[ \Box + m^2 + (1 - \xi) \frac{R}{6}\right] \Phi (\vec x,t) = 0,$$ where $\Box = g^{\mu \nu} \nabla_\mu \nabla_\nu$ is the Laplace-Beltrami operator defined on the background spacetime. In the canonical quantization approach, one assumes a foliation of spacetime into dynamically evolving, time-ordered, spacelike hypersurfaces $\Sigma$, expands the field on $\Sigma$ in normal modes, imposes canonical commutation relations on the time-dependent expansion functions now regarded as creation and annihilation operators, defines the vacuum state, and then constructs the Fock space. In flat space, Poincaré invariance guarantees the existence of a unique global Killing vector $\partial_t$ orthogonal to all constant-time spacelike hypersurfaces, an unambiguous separation of the positive- and negative-frequency modes, and a unique and well-defined vacuum. In curved spacetime, general covariance precludes any such privileged choice of time and slicing. There is no natural mode decomposition and no unique vacuum. At any constant-time slice, one can expand the field $\Phi$ in terms of a complete set of (spatial) orthonormal modes $u_{\vec k}(\vec x)$ [@Par69] $$\Phi (x) = \sum_{\vec k} [\psi_{\vec k}(t)u_{\vec k}(\vec x) + \psi^\dagger _{\vec k}(t)u^*_{\vec k}(\vec x)].$$ After second quantization, the fields $\Phi$ and their amplitudes $\psi_{\vec k}$ become operator-valued functions. Write $$\psi_{\vec k}(t) = a_{\vec k}(t) \phi_{\vec k}(t),$$ where $a_{\vec k}$ are the annihilation operators and the (c-number) functions $\phi_{\vec k}(t)$ obey the wave equation derived from (2.2).
The canonical commutation rules on $\Phi$ imply these conditions on $a_{\vec k}$ and $a^\dagger_{\vec j}$, i.e., $$[a_{\vec k},a_{\vec j}] = [a^\dagger_{\vec k}, a^\dagger_{\vec j}]=0 \; \; {\rm and} \; \; [a_{\vec k},a^\dagger_{\vec j}]=\delta_{\vec k \vec j}.$$ Assume that initially a vacuum state $\mid 0 \rangle$ at $t_0$ can be defined by $$a_{\vec k} \mid 0 \rangle_{t_0} = 0,$$ and a Fock space can be constructed from the $n$-particle states by the action of the creation operators. At a later time, say $t_1=t_0 + \Delta t$, the vacuum state defined at $t_0$ will no longer be vacuous, since the annihilation operator $b_{\vec k}(t_1)$ at $t_1$ is not equal to $a_{\vec k}(t_0)$. In general, they are related by a set of Bogolubov transformations $$b_{\vec j}(t_1) = \sum_{\vec k} [\alpha_{\vec j \vec k}(t)a_{\vec k} + \beta^*_{\vec j \vec k}(t)a^\dagger_{\vec k}].$$ A new vacuum state $\mid 0)$ at $t_1$ can be defined by $$b_{\vec j} \mid 0)_{t_1}=0$$ and from this a new Fock space can be constructed. One can easily see that $a_{\vec k} \mid 0) \neq 0$. The two vacua differ by the coefficients $\alpha, \beta$, whose time dependence is determined by the amplitude functions $\phi_{\vec k}(t)$. In particular, any $\phi_{\vec k}$ with only a positive-frequency component initially at $t_0$ will acquire a negative-frequency component at $t_1$. The new vacuum at $t_1$ now contains $$s_{\vec j} = (0 \mid N_{\vec j} \mid 0) = \sum_{\vec k} \mid \beta_{\vec j \vec k} \mid^2$$ particles, where $$N_{\vec j} \equiv b^\dagger_{\vec j} b_{\vec j}$$ is the particle number operator. From (2.7) one sees that $\beta_{\vec j \vec k}$ measures the negative-frequency component generated by the dynamics. In curved space the inequivalence of Fock representations due to the lack of a global timelike Killing vector makes the constant separation of positive- and negative-frequency modes in general impossible.
The mixing of positive- and negative-frequency modes in second-quantized form leads to vacuum particle creation. Particle creation may arise from topological, geometrical, or dynamical causes. In cosmological spacetimes the inequivalence of vacua appears at different times of evolution, and thus cosmological particle creation is by nature a dynamically induced effect. Note that we are dealing here with a free field: particles are not produced from interactions, but rather from the excitation (parametric amplification [@Zel70]) of vacuum fluctuations by the changing background gravitational field. ### Spontaneous Production For spacetimes with certain symmetries, some natural mode decomposition may present itself. For example, in the class of conformally static spacetimes (e.g., the Robertson-Walker universe), where the metric is conformally related to a static spacetime (e.g., the Minkowski metric), $$g_{\mu \nu} (x) = a^2 (\eta)\eta_{\mu \nu},$$ where $a$ is the conformal factor, there exists a global conformal Killing vector $\partial_\eta$, where $\eta = \int dt/a(t)$ is the conformal time. Thus the vacuum defined by the mode decomposition with respect to $\partial_\eta$ is a globally well-defined one, known as the conformal vacuum. For conformally-invariant fields \[e.g., a massless scalar field with $\xi = 0$ in (2.1)\] in conformally-static spacetimes, it is easy to see that there is no particle creation [@Par69]. Thus any small deviation from these conditions, e.g., small $m,\xi$, can be treated perturbatively from these states. Consider the spatially-flat Robertson-Walker metric with line element $$ds^2 = a^2 (\eta)(d\eta^2 -d \vec x^2).$$ The scalar field can be separated into modes $$\Phi (\eta , \vec x) = \sum_{\vec k} \phi_{\vec k}(\eta) e^{i \vec k \cdot \vec x},$$ where $\phi_{\vec k}$ are the amplitude functions of the $\vec k$th mode. Define new field variables $a(\eta) \phi_{\vec k}(\eta)=\chi_{\vec k}(\eta)$.
From the wave equation (2.2), the $\vec k$th mode $\chi_{\vec k}(\eta)$ satisfies $$\chi^{\prime \prime}_{\vec k} (\eta)+[k^2+(m^2-\xi R/6)a^2]\chi_{\vec k}(\eta)=0,$$ where $k \equiv |\vec k | $. One sees that, for massless ($m=0$) conformally coupled ($\xi=0$) fields, $\chi_{\vec k}$ admits solutions $$\chi_{\vec k}(\eta)=Ae^{i\Omega_{\vec k} \eta}+Be^{-i\Omega_{\vec k} \eta},$$ which are of the same form as travelling waves in flat space. Since $\Omega_{\vec k} =k=$ const, the positive- and negative-frequency components remain separated and there is no particle production [@Par69]. More generally, the wave equation for each mode has a time-dependent natural frequency given by $$\Omega_{\vec k}^2(\eta)=k^2 + (m^2 - \xi R/6) a^2 \equiv \omega_{\vec k}^2 a^2.$$ The negative-frequency modes can thus be excited by the dynamics of the background through $a(\eta)$ and $R(\eta)= 6 a''/a^3 $ (a prime denotes $d/d \eta$). In analogy with the time-dependent Schrödinger equation, one can view the $(m^2-\xi R/6)a^2$ term in (2.14) as a time-dependent potential $V(\eta)$ which can induce backscattering of waves [@Zel70; @Hu72].
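The parametric excitation described above can be illustrated with a small numerical experiment, not taken from the text: integrate the mode equation through a smooth toy "expansion" $a^2(\eta)=A+B\tanh(\rho\eta)$ (the curvature-coupling term is dropped and all parameter values are arbitrary), starting from the positive-frequency in-mode, and evaluate the created-particle number $s_{\vec k}=\frac{1}{2\Omega_{\vec k}}(|\chi'_{\vec k}|^2+\Omega^2_{\vec k}|\chi_{\vec k}|^2)-\frac12$ discussed below.

```python
import math

# toy "expansion": a^2(eta) = A + B tanh(rho eta); Omega^2 = k^2 + m^2 a^2
# (curvature coupling dropped; all parameter values are arbitrary choices)
A, B, rho, k, m = 2.0, 1.0, 1.0, 1.0, 1.0

def omega2(eta):
    return k * k + m * m * (A + B * math.tanh(rho * eta))

def s_created(eta, chi, dchi):
    # Bogolubov particle number: s = (|chi'|^2 + W^2 |chi|^2)/(2W) - 1/2
    W = math.sqrt(omega2(eta))
    return (abs(dchi)**2 + W * W * abs(chi)**2) / (2 * W) - 0.5

def evolve(eta, chi, dchi, eta_end, deta=0.002):
    # classical RK4 for the complex oscillator chi'' = -Omega^2(eta) chi
    def f(e, c, dc):
        return dc, -omega2(e) * c
    while eta < eta_end - 1e-9:
        k1c, k1d = f(eta, chi, dchi)
        k2c, k2d = f(eta + deta / 2, chi + deta / 2 * k1c, dchi + deta / 2 * k1d)
        k3c, k3d = f(eta + deta / 2, chi + deta / 2 * k2c, dchi + deta / 2 * k2d)
        k4c, k4d = f(eta + deta, chi + deta * k3c, dchi + deta * k3d)
        chi += deta / 6 * (k1c + 2 * k2c + 2 * k3c + k4c)
        dchi += deta / 6 * (k1d + 2 * k2d + 2 * k3d + k4d)
        eta += deta
    return eta, chi, dchi

# positive-frequency in-mode chi = e^{-i W eta}/sqrt(2W) at early times
eta0 = -20.0
W0 = math.sqrt(omega2(eta0))
chi0 = complex(math.cos(W0 * eta0), -math.sin(W0 * eta0)) / math.sqrt(2 * W0)
dchi0 = -1j * W0 * chi0
assert abs(s_created(eta0, chi0, dchi0)) < 1e-12     # vacuum: no particles yet

eta1, chi1, dchi1 = evolve(eta0, chi0, dchi0, 15.0)
s1 = s_created(eta1, chi1, dchi1)
eta2, chi2, dchi2 = evolve(eta1, chi1, dchi1, 20.0)
s2 = s_created(eta2, chi2, dchi2)
assert 0 < s2 < 0.1 and abs(s1 - s2) < 1e-6          # small, positive, frozen
```

The occupation starts at zero, grows while $a(\eta)$ changes, and freezes at a small positive constant once the expansion has ceased — the hallmark of spontaneous creation from an initial vacuum.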
The number of created particles in the $\vec k$th mode is given in terms of $\chi^\prime_{\vec k}$ and $\chi_{\vec k}$ by $$s_{\vec k}=\mid \beta_{\vec k} \mid^2 = \frac{1}{2 \Omega_{\vec k}} ( \mid \chi^\prime_{\vec k} \mid^2 + \Omega^2_{\vec k} \mid \chi_{\vec k} \mid^2) - \frac{1}{2}.$$ The energy density associated with these particles is given by the expectation value of the 00 component of the conformal energy-momentum tensor with respect to the conformal vacuum: $$\begin{aligned} \rho _0 & = & \langle 0 \mid \Lambda^0_0 \mid 0 \rangle \nonumber \\ & = & \frac{1}{a^4} \int \frac{d^3k}{2(2\pi)^3} (\mid \chi^\prime_{\vec k} \mid^2 + \Omega^2_{\vec k} \mid \chi_{\vec k} \mid^2) \nonumber \\ & = & \frac{1}{a^4} \int \frac{d^3k}{(2 \pi)^3} (2s_{\vec k} + 1) \frac{\Omega_{\vec k}}{2}.\end{aligned}$$ In a Hamiltonian description of the dynamics of a finite system of parametric oscillators, the Hamiltonian is simply $$H_0(t)=\frac{1}{2}\sum_{\vec k}(\pi_{\vec k}^2 + \Omega_{\vec k}^2q_{\vec k}^2) =\sum_{\vec k}(N_{\vec k}+\frac{1}{2})\Omega_{\vec k}.$$ Comparing this with (2.18), one can identify $\mid \chi_{\vec k} \mid^2$ and $\mid \chi^\prime_{\vec k} \mid^2$ with the squared canonical coordinates $q_{\vec k}^2$ and momenta $\pi^2_{\vec k}$, the eigenvalue of $H_0$ being the energy $E_{\vec k}=(N_{\vec k} + \frac{1}{2}) \Omega_{\vec k}$. The analogy of particle creation with parametric amplification is formally clear: (2.17) defines the number operator $$N_{\vec k} = \frac{1}{2 \Omega_{\vec k}} (\pi_{\vec k}^2 + \Omega_{\vec k}^2 q_{\vec k}^2) - \frac{1}{2},$$ and (2.18) says that the energy density of vacuum particle creation comes from the amplification of vacuum fluctuations $\hbar \Omega_{\vec k}/2$ by the factor ${\cal A}_{\vec k}=2s_{\vec k}+1$. ### Stimulated Production Equation (2.18) gives the vacuum energy density of particles produced from an initial vacuum, a pure state.
If the initial state at $t_0$ is a statistical mixture of pure states, each of which contains a definite number of particles, then an additional mechanism of particle creation enters. This is categorically known as induced creation. In particular, as already pointed out in the original paper of Parker [@Par69], if the statistical density matrix $\mu$ is diagonal in the representation whose basis consists of the eigenstates of the number operators $a^\dagger_{\vec k}a_{\vec k}$ at time $t_0$, then for bosons this process increases the average number of particles (in mode $\vec k$ in a unit volume) at a later time $t_1$ over the initial amount: $$\begin{aligned} ( N_{\vec k}(t) ) & = & Tr [\mu b^{\dagger}_{\vec k} (t) b_{\vec k}(t)] \nonumber \\ & = & \langle N_{\vec k}(t_0)\rangle + \mid \beta_{\vec k}(t) \mid^2 [1 +2 \langle N_{\vec k}(t_0)\rangle],\end{aligned}$$ where $\langle N_{\vec k}(t_0)\rangle=Tr[\mu a^\dagger_{\vec k}a_{\vec k}]$. For fermions it decreases the initial number. The above result can be understood in the parametric oscillator description as the amplification by a factor ${\cal A} _{\vec k} = 2s_{\vec k} + 1$, of a) the vacuum fluctuation, yielding $|\beta_{\vec k}(t)|^2$, and of b) the particles already present, $N_{\vec k}(t_0)$, i.e., $$( N_{\vec k}(t) )= \mid \beta_{\vec k}(t) \mid^2 + {\cal A}_{\vec k} \langle N_{\vec k}(t_0)\rangle,$$ where $s_{\vec k} = |\beta _{\vec k}(t)|^2$. The second part is called stimulated production.
It yields an energy density $\rho_n$ given by $$\begin{aligned} \rho_n & = & \langle n \mid \Lambda^0_0 \mid n \rangle \nonumber \\ & = & \frac{1}{a^4} \int \frac{d^3k}{(2\pi)^3} ( \mid \chi^\prime_{\vec k} \mid^2 + \Omega_{\vec k}^2 \mid \chi_{\vec k} \mid^2)\langle a^\dagger_{\vec k} a_{\vec k}\rangle \nonumber \\ & = & \frac{1}{a^4} \int \frac{d^3k}{(2 \pi)^3} (2s_{\vec k} + 1) \Omega_{\vec k} \langle N_{\vec k}(t_0)\rangle,\end{aligned}$$ where $\langle a^\dagger_{\vec k} a_{\vec k}\rangle \equiv \langle N_{\vec k}(t_0)\rangle= Tr[ \mu a^\dagger_{\vec k} a_{\vec k}]$. Combining (2.18) and (2.23), for a density matrix diagonal in the number state, the total energy density of particles created from the vacuum and from those already present in the $n$-particle state is given by $$\rho (t) = \rho_0 + \rho_n = \frac{1}{a^4} \int \frac{d^3k}{(2\pi)^3} ( \mid \chi^\prime_{\vec k} \mid^2 + \Omega_{\vec k}^2 \mid \chi_{\vec k} \mid^2) \left(\frac{1}{2} + \langle a^\dagger_{\vec k} a_{\vec k}\rangle\right) = \frac{1}{a^4} \int \frac{d^3k}{(2\pi)^3} {\cal A}_{\vec k} \Omega_{\vec k} \left(\frac{1}{2} + \langle N_{\vec k}(t_0)\rangle\right).$$ This can be understood as the result of parametric amplification by the factor ${\cal A}_{\vec k}$ of the energy density of vacuum fluctuations $\hbar \Omega_{\vec k}/2$ and that of the particles originally present in the $\vec k$th mode at $t_0$, i.e., $\langle N_{\vec k}(t_0)\rangle\hbar \Omega_{\vec k}$. For the special but important case where $\mu$ is thermal at temperature $T = \beta^{-1}$, $\langle N_{\vec k}\rangle$ obeys the Bose-Einstein distribution function for scalar fields.
The magnification of the $n$-particle thermal state gives the finite-temperature contribution of particle creation, with energy density $$\rho_T = \frac{1}{a^4} \int \frac{d^3k}{(2\pi)^3} (2s_{\vec k}+1)\Omega_{\vec k}/(e^{\beta \Omega_{\vec k}}-1).$$ For a massless conformal field, this yields the familiar Stefan-Boltzmann relation $$\rho_T = \frac{\pi^2}{30} T^4.$$ Finite-temperature particle creation and the related entropy generation problem have been discussed in [@Hu82; @VacVis]. For a more general density matrix the induced or stimulated part of particle creation could increase or decrease, depending on the correlation and phase relation of the initial state, even though the spontaneous creation part always gives an increase in particle number. Both are important factors in the consideration of entropy generation processes [@HuKan]. Evolutionary Operator, Squeezing and Rotation --------------------------------------------- An equivalent description of particle creation is by means of the evolutionary operator $U$ defined by $$b_{\vec k}(t) = U(t)\, a_{\vec k}\, U^\dagger(t),$$ where $UU^\dagger =1$. The form of $U$ was deduced by Parker [@Par69] following Kamefuchi and Umezawa [@KamUme]. In the modern language of squeezed states [@sqst], one can write $U=RS$ as a product of two unitary operators, the [**rotation operator**]{} $$R(\theta)=\exp [-i \theta\, (a_+^\dagger a_+ + a_-^\dagger a_-)]$$ and the [**two-mode squeeze operator**]{} $$S_2(r, \phi)=\exp \{ r\, [e^{-2i\phi} a_+ a_- - e^{2i\phi} a_+^\dagger a_-^\dagger ]\},$$ where $r$ is the squeeze parameter with range $0 \le r < \infty$ and $\phi, \theta$ are the rotation parameters with ranges $ -\pi/2 < \phi \le \pi/2, ~~ 0 \le \theta < 2 \pi$. (These parameters and $U, R, S$ should all carry the label $\vec k$. The $\pm$ on $a$ refer to the $\pm {\vec k}$ modes.) Note that $$S^\dagger_2 (r, \phi) = S^{-1}_2 (r, \phi) = S_2 (r, \phi + \pi/2).$$ The three real functions $(\theta_{\vec k}, \phi_{\vec k}, r_{\vec k})$ are related to the two complex functions $(\alpha_{\vec k},\beta_{\vec k})$ by $$\alpha_{\vec k} = e^{i\theta_{\vec k}}\cosh r_{\vec k}, \qquad \beta_{\vec k} = e^{i(\theta_{\vec k}-2\phi_{\vec k})} \sinh r_{\vec k}.$$
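This parametrization automatically satisfies the bosonic Bogolubov condition $|\alpha_{\vec k}|^2-|\beta_{\vec k}|^2=1$, which is what preserves the commutator $[b_{\vec k}, b^\dagger_{\vec k}]=1$; a minimal numerical check (parameter values arbitrary):

```python
import cmath, math

# arbitrary squeeze/rotation parameters
r, theta, phi = 0.8, 0.3, 1.1
alpha = cmath.exp(1j * theta) * math.cosh(r)
beta = cmath.exp(1j * (theta - 2 * phi)) * math.sinh(r)

# bosonic normalization |alpha|^2 - |beta|^2 = 1, i.e. [b, b^dagger] = 1
unit_defect = abs(abs(alpha)**2 - abs(beta)**2 - 1.0)
assert unit_defect < 1e-12

# same condition in matrix form: M acting on (a_+, a_-^dagger) preserves diag(1, -1)
M = [[alpha, beta.conjugate()], [beta, alpha.conjugate()]]
g = [[1, 0], [0, -1]]
MH = [[M[j][i].conjugate() for j in range(2)] for i in range(2)]   # M^dagger
prod = [[sum(MH[i][k] * g[k][l] * M[l][j] for k in range(2) for l in range(2))
         for j in range(2)] for i in range(2)]
defect = max(abs(prod[i][j] - g[i][j]) for i in range(2) for j in range(2))
assert defect < 1e-12
```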
For mode-decompositions in spatially-homogeneous spacetimes leading to no mode-couplings, the Bogolubov transformation connecting the $a_{\vec k}$ and the $b_{\vec k}$ operators is given by (for more general situations, see [@Hu72]): $$b_{\vec k} = \alpha_{\vec k}\, a_{\vec k} + \beta_{\vec k}^*\, a_{-\vec k}^\dagger.$$ We see that because of the linear dependence of $b_{+ \vec k}$ on $a_{+\vec k}$ and $a^{\dagger}_{- \vec k}$ (but not $a^{\dagger}_{+ \vec k}$) a two-mode squeeze operator is needed to describe particle pairs in states $\pm {\vec k}$. The physical meaning of ‘rotation’ and ‘squeezing’ can be seen from the result of applying these operators for a single-mode harmonic oscillator as follows ($\vec k$th mode label is omitted below):\ The Hamiltonian is $$H_0 = \Omega\, (a^\dagger a + \frac{1}{2}).$$ Under rotation, $$R(\theta)\, |0\rangle = |0\rangle, \qquad R\, a\, R^\dagger = e^{i \theta}\, a.$$ Also, $$R(\theta)\, R(\theta') = R(\theta + \theta').$$ This implies that $$\begin{aligned} R\, \hat x\, R^\dagger & = & \hat x \cos\theta - \hat p \sin\theta \\ R\, \hat p\, R^\dagger & = & \hat x \sin\theta + \hat p \cos\theta, \end{aligned}$$ where $$a = \frac{1}{\sqrt 2}\, (\hat x + i \hat p).$$ Thus the name rotation. Let $\Delta a = a - \langle a\rangle$ (where $\langle \, \rangle$ denotes the expectation value with respect to any state); then the second-order ‘noise moments’ of $a$ are defined as [@sqst]: $$\begin{aligned} \langle (\Delta a)^2\rangle & = & \langle a^2\rangle - \langle a\rangle^2 = \langle (\Delta a^\dagger)^2\rangle^* \\ & = & \frac{1}{2}\, \langle (\Delta \hat x)^2 - (\Delta \hat p)^2\rangle + i\, \langle \Delta \hat x\, \Delta \hat p\rangle_{\rm sym} \\ \langle |\Delta a|^2\rangle & = & \frac{1}{2}\, \langle \Delta a\, \Delta a^\dagger + \Delta a^\dagger\, \Delta a\rangle = \frac{1}{2}\, \langle (\Delta \hat x)^2 + (\Delta \hat p)^2\rangle.\end{aligned}$$ The first quantity is the variance of $a$, a complex second moment, while the second is the correlation, a real second moment, which, as seen in the more familiar $x, p$ representation, measures the mean-square uncertainty (called total noise in [@sqst]). Rotation preserves the number operator $$R\, a^\dagger a\, R^\dagger = a^\dagger a.$$ It rotates the moment $$R\, (\Delta a)^2\, R^\dagger = e^{2i\theta}\, (\Delta a)^2,$$ corresponding to a redistribution between $\hat x, \hat p$, but preserves the uncertainty $$R\, |\Delta a|^2\, R^\dagger = |\Delta a|^2.$$ One can define a [**displacement operator**]{} as $$D(\mu) = \exp\, (\mu a^\dagger - \mu^* a).$$ Note that $D^{-1} (\mu) = D^{\dagger}(\mu) = D(-\mu)$. The coherent state can be defined as $$|\mu\rangle = D(\mu)\, |0\rangle.$$ Thus $$a\, |\mu\rangle = \mu\, |\mu\rangle,$$ and $$D\, a^\dagger a\, D^\dagger = a^\dagger a - (\mu a^\dagger + \mu^* a) + |\mu|^2.$$ Under displacement, $$D(\mu)\, a\, D^\dagger(\mu) = a - \mu.$$
The displacement operation also preserves the uncertainty $$D\, |\Delta a|^2\, D^\dagger = |\Delta a|^2.$$ The [**single-mode squeeze operator**]{} is defined as $$S_1 (r, \phi) = \exp \left\{ \frac{r}{2}\, [e^{-2i\phi} a^2 - e^{2i\phi} a^{\dagger 2}]\right\}.$$ A squeezed state is formed by squeezing a coherent state, $$|\mu\rangle_{sq} = S_1 (r, \phi)\, |\mu\rangle \qquad {\rm or,} \qquad |\mu\rangle_{sq} = |r, \phi, \mu\rangle = S_1 (r, \phi)\, D(\mu)\, |0\rangle.$$ Call $b = S_1\, a\, S_1^\dagger$; then $$b\, |\mu\rangle_{sq} = \mu\, |\mu\rangle_{sq}$$ and $$b = S_1\, a\, S_1^\dagger = a \cosh r + e^{2i\phi}\, a^\dagger \sinh r.$$ Thus a squeezed state in the Fock space of $a$ becomes a coherent state in the Fock space of $b$ with the same eigenvalue. From this we see the result of $S_1$ acting on $\hat x$ and $\hat p$: $$\begin{aligned} S_1\, \hat x\, S_1^\dagger & = & \hat x\, (\cosh r + \cos 2\phi \sinh r) + \hat p\, \sin 2\phi \sinh r \\ S_1\, \hat p\, S_1^\dagger & = & \hat p\, (\cosh r - \cos 2\phi \sinh r) + \hat x\, \sin 2\phi \sinh r. \end{aligned}$$ For $\phi = \pi/2$, these give $$S_1\, \hat x\, S_1^\dagger = e^{-r}\, \hat x, \qquad S_1\, \hat p\, S_1^\dagger = e^{r}\, \hat p.$$ Hence the name ‘squeezing’. Two successive squeezes, with the same rotation parameter, result in one squeeze with the squeeze parameter as the sum of the two parameters: $$S_1(r, \phi)\, S_1(r', \phi) = S_1 (r+r', \phi).$$ The expectation value of squeezing the number operator is $$\langle S_1^\dagger\, a^\dagger a\, S_1\rangle = \sinh^2 r + (1+2 \sinh^2 r)\, \langle a^\dagger a\rangle - \sinh 2r\, {\rm Re}\, [ e^{-2i\phi} \langle a^2\rangle]$$ and that of the correlation is $$\langle S_1^\dagger\, |\Delta a|^2\, S_1\rangle = \cosh 2r\, \langle |\Delta a|^2\rangle - \sinh 2r\, {\rm Re}\, [ e^{-2i\phi} \langle (\Delta a)^2\rangle],$$ which for the vacuum and coherent states is always greater than or equal to the original value. The two-mode squeeze operator defined before is more suitable for the description of cosmological particle creation. One can show that the $out$ state is generated from the $in$ state by including contributions from all $\vec k$ modes, $$|out\rangle = RS\, |in\rangle, \qquad {\rm or} \qquad |~) = RS\, |~\rangle,$$ where $$S = \prod^\infty_{\vec k = 0} S_2 (r_{\vec k}, \phi_{\vec k}).$$ In general $$\langle out|\, F(b_{\vec k}, b_{\vec k}^\dagger)\, |out\rangle = \langle in |\, F (a_{\vec k}, a_{\vec k}^\dagger)\, |in\rangle.$$ The $|in\rangle$ state can be a number state, a coherent state or a squeezed state. If the initial state is a vacuum state, $|in\rangle = |0\, in\rangle$, then $$|0\, out\rangle = S (r, \phi - \theta)\, |0\, in\rangle,$$ where $$S(r, \phi - \theta)= \exp \left\{ \sum_{\vec k} r_{\vec k}\, [ e^{-2i(\phi_{\vec k}-\theta_{\vec k})} a_{\vec k} a_{-\vec k} - e^{2i(\phi_{\vec k}-\theta_{\vec k})} a_{\vec k}^\dagger a_{-\vec k}^\dagger ] \right\}.$$ The squeeze parameter $\sinh^2 r_{\vec k} = |\beta_{\vec k}|^2 $ measures the number of particles created.
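Since the signs in squeeze-operator identities depend delicately on conventions, the number-operator expectation value can be verified directly in a truncated Fock basis. The sketch below assumes the generator $K=\frac{r}{2}[e^{-2i\phi}a^2-e^{2i\phi}a^{\dagger 2}]$ with $S_1=e^K$; the dimension, parameter values and helper names are our own choices.

```python
import math, cmath

# truncated Fock-space check of the squeezed-number formula
dim = 60
r, phi, mu = 0.3, 0.7, 0.5 + 0.2j

def apply_a(v):        # (a v)_n = sqrt(n+1) v_{n+1}
    return [math.sqrt(n + 1) * v[n + 1] for n in range(dim - 1)] + [0j]

def apply_adag(v):     # (a^dagger v)_n = sqrt(n) v_{n-1}; top level truncated
    return [0j] + [math.sqrt(n) * v[n - 1] for n in range(1, dim)]

def apply_K(v):        # K = (r/2)(e^{-2i phi} a^2 - e^{2i phi} a^dagger^2)
    a2 = apply_a(apply_a(v))
    ad2 = apply_adag(apply_adag(v))
    c1 = 0.5 * r * cmath.exp(-2j * phi)
    c2 = 0.5 * r * cmath.exp(2j * phi)
    return [c1 * a2[n] - c2 * ad2[n] for n in range(dim)]

# coherent state |mu>, then S_1|mu> via the exponential series e^K
coh = [cmath.exp(-abs(mu)**2 / 2) * mu**n / math.sqrt(math.factorial(n))
       for n in range(dim)]
psi, term = coh[:], coh[:]
for j in range(1, 80):
    term = [t / j for t in apply_K(term)]
    psi = [p + t for p, t in zip(psi, term)]

norm = sum(abs(c)**2 for c in psi)                  # ~1, since S_1 is unitary
num = sum(n * abs(psi[n])**2 for n in range(dim))   # <S_1^dag a^dag a S_1>
sh = math.sinh(r)
pred = sh**2 + (1 + 2 * sh**2) * abs(mu)**2 \
       - math.sinh(2 * r) * (cmath.exp(-2j * phi) * mu**2).real
assert abs(norm - 1.0) < 1e-8
assert abs(num - pred) < 1e-8
```

The numerically computed occupation agrees with the closed-form expression, including the sign of the interference term in this convention.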
Rotation does not play a role. Thus, as observed by Grishchuk and Sidorov [@GriSid], cosmological particle creation amounts to squeezing the vacuum. The same can be said about Hawking radiation [@Haw75]. For a massless scalar field in an eternal black hole, call $e^{iJ}$ the unitary operator which connects the Kruskal vacuum with the Schwarzschild vacuum (see e.g., [@BirDav]), $$|0_S\rangle = e^{iJ}\,|0_K\rangle,$$ where $$iJ = \sum_{\vec k} \tanh^{-1}\big(e^{-\pi\omega/\kappa}\big)\big(b^{(1)}_{-\vec k}\, b^{(2)}_{\vec k} - b^{(1)\dagger}_{-\vec k}\, b^{(2)\dagger}_{\vec k}\big).$$ Then the squeeze and rotation parameters can be identified as $$r_{\vec k} = \tanh^{-1}\big(e^{-\pi\omega/\kappa}\big), \qquad \phi_{\vec k} = \theta_{\vec k},$$ where $\kappa$ is the surface gravity of the black hole. This is the well-known expression for Hawking radiation: indeed, $\sinh^2 r_{\vec k} = 1/(e^{2\pi\omega/\kappa}-1)$, a Planck spectrum at the Hawking temperature $T_H = \kappa/2\pi$. We see that for low-momentum modes in a black hole of high temperature, the squeezing is strong.

Number, Coherence and Initial States
====================================

Number does not always increase
-------------------------------

We will show in this section that the number of particles produced depends very much on the initial state chosen. The common impression of a net number increase associated with cosmological particle creation is premised upon the assumption that the initial state is an eigenstate of the number operator (called ‘number state’ for short here), and an implicitly invoked random-phase approximation. For states other than this, or for fermions, this is not necessarily true. This was already pointed out in [@HuPav; @Kan; @HuKan]. We shall show this explicitly for the coherent state and the squeezed states. The number operator for a particle pair in mode $\vec k$ is given by $$N = a^{\dagger}_+\, a_+ + a^{\dagger}_-\, a_-.$$
The expectation value of the number operator with respect to the $|out\rangle$ vacuum for a general initial state is $$\begin{aligned} (N)= \langle S_2^{\dagger} R^{\dagger} N RS_2\rangle & = & 2|\b|^2 +(1+2|\b|^2)\langle N\rangle \nonumber \\ & - & 2|\a||\b|(e^{2i\phi}\langle a_+^\dagger a_{\_}^\dagger \rangle +e^{-2i\phi}\langle a_+a_{\_}\rangle).\end{aligned}$$ Comparing this expression with (2.22), the factor of two for the first $|\b|^2 $ term comes from the spontaneous creation of particles in the $\pm \vec k$ modes. The net change in the particle number from the initial to the final state is $$\Delta N \equiv (N) - \langle N\rangle = 2|\beta|^2\big[1+\langle N\rangle\big] - 2|\alpha||\beta|\big\{e^{2i\phi}\langle a_+^{\dagger} a_{\_}^{\dagger}\rangle + e^{-2i\phi}\langle a_+ a_{\_}\rangle\big\}.$$ Here, the first two terms in the square brackets are respectively the spontaneous and stimulated emissions and the last term in the curly brackets is the interference term. The difference between spontaneous and stimulated creation of particles in cosmology was explained first by Parker [@Par69] and explored in more detail by Hu and Kandrup [@HuKan]. Note that since there is no $\theta$ dependence, rotation has no effect. If $r_{\vec k} \ne 0$ for some $\vec k$ both spontaneous and stimulated contributions are positive. The interference term can be negative for states which give nonzero $\langle a_+ a_{-}\rangle $. Only when this term is non-zero can $\Delta N$ be negative. We will calculate the change in particle number for some specific initial states. [**a. number state**]{} For an initial number state $ |n \rangle = |n_+,n_{\_}\rangle$, $$\Delta N = 2|\beta|^2\,(1+n_+ + n_{\_}).$$ We see that the number of particles will always increase. [**b. coherent state**]{} For an initial coherent state $$|\mu\rangle = D(\mu_+)\,D(\mu_{\_})\,|0,0\rangle$$ we find that $$\Delta N = 2|\beta|^2\big[1+\langle N_+\rangle + \langle N_{\_}\rangle\big] - 4|\alpha||\beta|\sqrt{\langle N_+\rangle\langle N_{\_}\rangle}\,\cos(2\phi - \zeta_+ - \zeta_{\_}),$$ where $$\mu_+ = \sqrt{\langle N_+\rangle}\, e^{i\zeta_+}, \qquad \mu_{\_} = \sqrt{\langle N_{\_}\rangle}\, e^{i\zeta_{\_}}.$$ Note the existence of the interference term which can give a negative contribution.
It depends not only on the squeeze parameters $|\beta|$ and $\phi$, but also on the particles present and the phase of the initial coherent state. Conditions favorable to a decrease in $\Delta N$ are $\cos(2\phi-\zeta_+-\zeta_{\_}) = 1$ and $\langle N_+\rangle = \langle N_{\_}\rangle = \langle N\rangle/2$. In this case we find $\Delta N$ is negative if $$\langle N\rangle > \frac{|\beta|}{|\alpha|-|\beta|}.$$ [**c. one-mode squeezed state**]{} For an initial one-mode squeezed state $$|\sigma\rangle_1 = S_{1+}(r_+,\phi_+)\,S_{1-}(r_-,\phi_-)\,|0,0\rangle,$$ generated by squeezing the vacuum with $S_{1\pm}$ for the $\pm \vec k$ modes, we get $$\Delta N = 2|\beta|^2\,(1+\langle N_+\rangle+\langle N_{\_}\rangle).$$ Once again particle number will always increase. [**d. two-mode squeezed vacuum state**]{} For an initial two-mode squeezed vacuum $$|\sigma\rangle_2 = S_2(r_0,\phi_0)\,|0,0\rangle,$$ where $S_2$ is defined earlier, $$\Delta N = 2|\beta|^2\big[1+\langle N\rangle\big] + 2|\alpha||\beta|\,\sinh 2r_0\,\cos 2(\phi-\phi_0).$$ The cosine factor shows that particle number can decrease given the right phase relations. It can be shown that for $\cos 2(\phi-\phi_0)=-1$ particle number would decrease ($\Delta N \le 0$) if $r_0 \ge r/2$. If the phase information is randomized, the cosine factor averages to zero and there is a net increase in particle number. Since a squeezed state is the end result of squeezing a vacuum via particle creation, one might naively expect to see a monotonic increase in number. Our result shows that this is true only if the phase information is lost in the squeezed state to begin with. In summary we can make the following observations:

a\) Rotation $R$ in the evolution operator $U=RS$ does not influence particle creation.

b\) For an initial number state or single-mode squeezed vacuum we find a net increase in the number of particles.

c\) For an initial coherent state and two-mode squeezed vacuum, particle number can increase or decrease. A net increase can nevertheless be obtained by suitable choices of $S_2(r, \phi)$ and $S_2(r_0, \phi_0)$.

d\) If random phase is assumed for the initial state, the interference term can be averaged out to zero and there will be a net increase in the number of particles.
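The coherent-state case can be made concrete numerically. From the general expression for $(N)$ given earlier, a two-mode coherent state with $\langle N_+\rangle = \langle N_-\rangle = \langle N\rangle/2$ and the favorable phase has $\Delta N = 2|\beta|^2(1+\langle N\rangle) - 2|\alpha||\beta|\langle N\rangle$, which changes sign at $\langle N\rangle = |\beta|/(|\alpha|-|\beta|)$. A minimal sketch (plain Python; the parameterization $|\alpha| = \cosh r$, $|\beta| = \sinh r$ enforces $|\alpha|^2 - |\beta|^2 = 1$, and the chosen $r$ is arbitrary):

```python
import math

def delta_N(r, N, cos_phase=1.0):
    """Net number change for a two-mode coherent state with <N+> = <N-> = N/2.
    alpha = cosh r and beta = sinh r are the Bogolubov coefficients;
    cos_phase stands for cos(2*phi - zeta+ - zeta-)."""
    alpha, beta = math.cosh(r), math.sinh(r)
    return 2 * beta**2 * (1 + N) - 2 * alpha * beta * N * cos_phase

r = 0.3
alpha, beta = math.cosh(r), math.sinh(r)
threshold = beta / (alpha - beta)   # Delta N changes sign at this occupation

print(delta_N(r, 0.5 * threshold))              # below threshold: net creation
print(delta_N(r, 1.5 * threshold))              # above threshold: net destruction
print(delta_N(r, 1.5 * threshold, cos_phase=0.0))  # randomized phase: creation again
```

Setting `cos_phase = 0` mimics the random-phase average, which removes the interference term and restores a net increase, as stated in the summary.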
Coherence can persist
---------------------

A measure of the coherence of the system is given by the uncertainty (called variance in [@Hu72], [@Mollow] and [@HuPav]) $$|\Delta a|^2 = \frac{1}{2}\big(\Delta a\,\Delta a^{\dagger} + \Delta a^{\dagger}\,\Delta a\big),$$ where $\Delta a = a - \langle a\rangle$. The expectation value of the uncertainty with respect to a state $|\psi\rangle$ is thus $$\langle\psi|\,|\Delta a|^2\,|\psi\rangle = \langle\psi|\,a^{\dagger}a\,|\psi\rangle - \big|\langle\psi|\,a\,|\psi\rangle\big|^2 + \frac{1}{2}.$$ The expectation value of the uncertainty with respect to a transformed state $|\psi) \equiv RS\,|\psi\rangle$ is given by $$(\psi|\,|\Delta a|^2\,|\psi) = \cosh 2r\,\langle\psi|\,|\Delta a|^2\,|\psi\rangle - 2\sinh 2r\;{\rm Re}\big[e^{-2i\phi}\,\langle\psi|\,\Delta a_+\,\Delta a_-\,|\psi\rangle\big],$$ where $|\Delta a|^2 \equiv |\Delta a_+|^2 + |\Delta a_-|^2$. For an initial number state, $|\psi\rangle = |n\rangle$, $$(n|\,|\Delta a|^2\,|n) = 2\Big(\frac{1}{2} + |\beta|^2\Big)\,\langle n|\,|\Delta a|^2\,|n\rangle \;\ge\; \langle n|\,|\Delta a|^2\,|n\rangle.$$ For a coherent state, $|\psi\rangle = |\mu\rangle$, $$(\mu|\,|\Delta a|^2\,|\mu) = 2\Big(\frac{1}{2} + |\beta|^2\Big) \;\ge\; \langle\mu|\,|\Delta a|^2\,|\mu\rangle,$$ where the first term corresponds to the vacuum fluctuation and the second term \[whose sum over all modes is equivalent to $ tr(v ^\dagger _{\vec k} v_{\vec k})$ in [@Hu72; @HuPav]\] measures the mixing of the positive and negative frequency components of different modes. This result was first derived in [@Hu72], and discussed further in [@HuPav]. Notice that it is always greater than the original value $ \langle |\Delta a|^2\rangle_\mu$. For a squeezed state, $|\psi \rangle = |\sigma \rangle = S_2 (r_0, \phi_0) |\mu\rangle $, $$(\sigma|\,|\Delta a|^2\,|\sigma) = \cosh 2r\,\langle\sigma|\,|\Delta a|^2\,|\sigma\rangle - 2\sinh 2r\;{\rm Re}\big[e^{-2i\phi}\,\langle\sigma|\,\Delta a_+\,\Delta a_-\,|\sigma\rangle\big],$$ which can be smaller than the initial value. Notice that of the three states we discussed, only the squeezed state can allow for a decrease in the uncertainty, i.e., an increase in the coherence as the system evolves. In addition, even though the total number and the total uncertainty of the initial state of the two modes change with particle creation, their difference remains a constant.
This is because cosmological particle creation is described by the two-mode squeeze operator, which satisfies the relations: $$\langle \psi |S^\dagger (a^\dagger_+ a_+ -a^\dagger _- a_-) S |\psi \rangle~ =~\langle \psi |a^\dagger_+ a_+ -a^\dagger _- a_- |\psi \rangle~,$$ $$\langle\psi|\,S^{\dagger}\big(|\Delta a_+|^2 - |\Delta a_-|^2\big)\,S\,|\psi\rangle = \langle\psi|\,\big(|\Delta a_+|^2 - |\Delta a_-|^2\big)\,|\psi\rangle.$$

Fluctuations in Number
======================

Spontaneous particle creation can be viewed as the parametric amplification of vacuum fluctuations (or squeezing the vacuum). Particle number is an interesting quantity as it measures the degree to which the vacuum is excited. The fluctuation in particle number is another interesting quantity, as it can be related to the noise of the quantum field and the susceptibility of the vacuum. This is similar in nature to the energy fluctuation (measured by the heat capacity at constant volume) of a system being related to the thermodynamic stability of a canonical system, or the number fluctuation (measured by the compressibility at constant pressure) of a system being related to the thermodynamic stability of a grand canonical system. In gravity, we know that the number fluctuation of a self-gravitating system can be used as a measure of its heat capacity (negative) [@LynBel]; and those associated with particle creation from a black hole can be used in a linear-response theory description as a measure of the susceptibility of spacetime [@CanSci; @Mot]. We expect that this quantity associated with cosmological particle creation may provide some important information about quantum noise and vacuum instability [@nfsg; @HMLA]. Define $\delta_i O \equiv \langle O^2\rangle - \langle O\rangle^2$ as the variance or mean-square fluctuation of the variable $O$ with respect to the initial state $|~\rangle$, and the corresponding quantity $\delta_f O$ as that with respect to the final state $|~)$.
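Before turning to the fluctuations, the first of the pairwise relations above, conservation of $N_+ - N_-$, can be illustrated directly with the mode occupations. Under a two-mode Bogolubov transformation with $|\alpha|^2 - |\beta|^2 = 1$, an initial number state $|n_+, n_-\rangle$ acquires $\langle N_\pm\rangle = |\alpha|^2 n_\pm + |\beta|^2(n_\mp + 1)$, so the difference is untouched. A minimal sketch (plain Python; the specific $r$ values and occupations are arbitrary):

```python
import math

def out_occupations(r, n_plus, n_minus):
    """Mode occupations after two-mode squeezing acts on a number state
    (Bogolubov coefficients alpha = cosh r, beta = sinh r)."""
    a2, b2 = math.cosh(r) ** 2, math.sinh(r) ** 2
    N_plus = a2 * n_plus + b2 * (n_minus + 1)
    N_minus = a2 * n_minus + b2 * (n_plus + 1)
    return N_plus, N_minus

for r in (0.1, 0.7, 1.5):
    for n_p, n_m in ((0, 0), (3, 1), (2, 5)):
        N_p, N_m = out_occupations(r, n_p, n_m)
        # particles are created in +k/-k pairs, so N+ - N- is conserved
        assert abs((N_p - N_m) - (n_p - n_m)) < 1e-9
```

Each mode's occupation grows (spontaneous plus stimulated creation), but the asymmetry between the paired modes is an invariant of the squeeze.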
Consider the difference between the final and the initial number fluctuations of both the $\pm$ kinds, $$\delta N = (\delta_f N_+ + \delta_f N_-) - (\delta_i N_+ + \delta_i N_-).$$ Using the expressions given in Sec. 2, we obtain $$\begin{aligned} \delta N & = & 2|\alpha|^2|\beta|^2\big[\partial N_+^2 + \partial N_-^2 + \partial L^2 + 2\,\partial(N_+ N_-)\big]_i \nonumber \\ & & -\; 2\big(|\alpha|^3|\beta| + |\alpha||\beta|^3\big)\big[\partial(N_+ L) + \partial(N_- L)\big]_i,\end{aligned}$$ where the subscript $i$ refers to the fact that the expectation values are taken with respect to the initial state $|~\rangle$, the symbol $\partial$ denotes the symmetrized covariance $$\partial(PQ) \equiv \frac{1}{2}\langle PQ + QP\rangle - \langle P\rangle\langle Q\rangle,$$ and $$L = e^{2i\phi}\, a^{\dagger}_+\, a^{\dagger}_- + e^{-2i\phi}\, a_-\, a_+.$$ Now for an initial number state $|n \rangle = |n_+, n_-\rangle$, $$\delta N = 2|\alpha|^2|\beta|^2\,(1+ n_+ + n_{\_} + 2n_+ n_{\_});$$ we see that the number fluctuations will always increase. For an initial coherent state $|\mu\rangle = D (\mu_+) D (\mu_{\_}) |0, 0\rangle$, where $ \m_{\pm} = \sqrt{\langle N_{\pm}\rangle} e^{i \zeta_{\pm}} $, $$\begin{aligned} \d N & = & 2|\a|^2|\b|^2[1+ 2(\langle N_+\rangle+\langle N_{\_}\rangle)] \nonumber \\ & - & 4\sqrt{\langle N_+\rangle\langle N_{\_}\rangle} (|\a|^3 |\b| + |\a| |\b|^3)\cos(2\phi-\zeta_+ -\zeta_{\_}).\end{aligned}$$ We find that under the conditions $\cos(2\phi-\zeta_+ -\zeta_{\_}) =1$ and $\langle N_+\rangle=\langle N_-\rangle=\langle N\rangle/2 $, $\delta N$ can be negative, namely for $$\langle N\rangle > \frac{|\alpha||\beta|}{(|\alpha|-|\beta|)^2}.$$ In the weak particle creation limit $|\b|\rightarrow 0, |\a|\rightarrow 1$ we find that (4.7) is equivalent to (3.8). Clearly conditions for a decrease in number fluctuations are not the same as those for a decrease in the number. For a single-mode squeezed state $|\sigma\rangle_1 = S_{1+} (r_+,\f_+) S_{1-} (r_-,\f_- ) |0, 0\rangle$, $$\begin{aligned} \delta N & = & 2|\alpha|^2|\beta|^2\Big[\big(1+\langle N_+\rangle+\langle N_-\rangle\big)^2 + \langle N_+\rangle\big(1+\langle N_+\rangle\big) + \langle N_-\rangle\big(1+\langle N_-\rangle\big) \nonumber \\ & & \quad -\; 2\sqrt{\langle N_+\rangle\big(1+\langle N_+\rangle\big)\langle N_-\rangle\big(1+\langle N_-\rangle\big)}\,\cos 2(2\phi-\phi_+-\phi_-)\Big].\end{aligned}$$ From this it can be shown that, like the change in number, the change in the number fluctuations will always be positive for an initial single-mode squeezed vacuum.
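The coherent-state fluctuation bound follows from the displayed $\delta N$: with $\cos(2\phi-\zeta_+-\zeta_-) = 1$ and $\langle N_+\rangle = \langle N_-\rangle = \langle N\rangle/2$, it reduces to $\delta N = 2|\alpha|^2|\beta|^2(1+2\langle N\rangle) - 2|\alpha||\beta|(|\alpha|^2+|\beta|^2)\langle N\rangle$, which is negative precisely when $\langle N\rangle > |\alpha||\beta|/(|\alpha|-|\beta|)^2$; as $|\beta|\to 0$ this bound tends to $|\beta|$, the same limit as the $\Delta N$ bound, consistent with the stated equivalence of (4.7) and (3.8). A numerical sketch (plain Python, illustrative parameters):

```python
import math

def delta_n_fluct(r, N):
    """Change in number fluctuations for a two-mode coherent state with
    <N+> = <N-> = N/2 and cos(2*phi - zeta+ - zeta-) = 1
    (alpha = cosh r, beta = sinh r)."""
    al, be = math.cosh(r), math.sinh(r)
    return 2 * al**2 * be**2 * (1 + 2 * N) - 2 * al * be * (al**2 + be**2) * N

r = 0.4
al, be = math.cosh(r), math.sinh(r)
bound = al * be / (al - be) ** 2   # fluctuations decrease for <N> above this

print(delta_n_fluct(r, 0.5 * bound))   # positive: fluctuations grow
print(delta_n_fluct(r, 2.0 * bound))   # negative: fluctuations shrink
```

For weak creation the two thresholds coincide, but away from that limit they differ, which is the point made in the text.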
For a two-mode squeezed state $ |\sigma\rangle_2 = S_2 (r_0,\f_0)|0, 0\rangle$, $$\begin{aligned} \delta N & = & |\alpha|^2|\beta|^2\Big\{2\big(1+\langle N\rangle\big)^2 + \langle N\rangle\big(2+\langle N\rangle\big)\big[1+\cos 4(\phi-\phi_0)\big]\Big\} \nonumber \\ & & +\; 2\big(|\alpha|^3|\beta| + |\alpha||\beta|^3\big)\big(1+\langle N\rangle\big)\sqrt{\langle N\rangle\big(2+\langle N\rangle\big)}\,\cos 2(\phi-\phi_0).\end{aligned}$$ Note that there is no definite relation between $\Delta N$ and $\delta N$. For large $\langle N\rangle \gg 1$ or small $|\beta| \ll 1$, $\delta N \le 0$ is possible. The relevance of the number fluctuations in cosmological particle creation in defining the susceptibility of the vacuum and the noise of quantum fields has been hinted at earlier [@VacVis; @GraEnt; @HuPhysica]. The results obtained here will be useful in relating to issues of noise and fluctuation of quantum fields, and dissipation and instability of spacetime in semiclassical gravity and quantum cosmology [@fdrc; @nfsg; @HMLA; @HuMat3]. This work is supported in part by the National Science Foundation under grant PHY91-19726. L. Parker, Ph. D. Thesis, Harvard University, 1966; Phys. Rev. Lett. 21, 562 (1968); [ Phys. Rev. ]{} [**183**]{}, 1057 (1969); [**D3**]{}, 346 (1971) R. U. Sexl and H. K. Urbantke, [ Phys. Rev.]{}, [**179**]{}, 1247 (1969) Ya. B. Zel’dovich, Pis’ma Zh. Eksp. Teor. Fiz, [**12**]{}, 443 (1970) \[JETP Lett. [**12**]{}, 307 (1970)\]; Ya. B. Zel’dovich, in [*Physics of the Expanding Universe*]{}, ed. M. Demiansky (Springer, Berlin, 1979) Ya. B. Zel’dovich and A. A. Starobinsky, Zh. Eksp. Teor. Fiz. [**61**]{}, 2161 (1971) \[Sov. Phys. JETP [**34**]{}, 1159 (1972)\]; Ya. B. Zel’dovich and A. A. Starobinsky, Pis’ma Zh. Eksp. Teor. Fiz. [**26**]{}, 373 (1977) \[Sov. Phys. JETP Lett. [**26**]{}, 252 (1977)\] B. L. Hu, Ph. D. Thesis, Princeton University, 1972. N. D. Birrell and P. C. W. Davies, [*Quantum Fields in Curved Space,*]{} (Cambridge University Press, Cambridge, 1982). B. L. Hu and L. Parker, Phys. Lett. 63A, 217 (1977); Phys. Rev. [**D17**]{}, 933 (1978) M. V. Fischetti, J. B. Hartle and B. L. Hu, Phys. Rev. [**D20**]{}, 1757 (1979) J. B. Hartle and B. L. Hu, Phys. Rev. D20, 1772 (1979); D21, 2756 (1980). C. M. Caves and B. L.
Schumaker, Phys. Rev. A31, 3068, 3093 (1985); B. L. Schumaker, Phys. Rep. 135, 317 (1986). S. Kamefuchi and H. Umezawa, Nuovo Cimento 31, 429 (1964) Appendix A. B. L. Hu, Phys. Rev. D9, 3263 (1974) L. Grishchuk, Zh. Eksp. Teor. Fiz. 67, 825 (1974) \[Sov. Phys. JETP 40, 409 (1975)\]. B. K. Berger, Phys. Rev. [**D11**]{}, 2770 (1975) B. R. Mollow, Phys. Rev. 162, 1256 (1967); L. S. Brown and L. J. Carson, Phys. Rev. A20, 2486 (1979). B. L. Hu and A. Matacz, “Quantum Brownian Motion in a Bath of Parametric Oscillators", Univ. Maryland preprint 93-210 (1993) L. Parker, Nature [**261**]{}, 20 (1976); and in [*Asymptotic Structure of Spacetime*]{} pp. 107-227 eds F. P. Esposito and L. Witten (Plenum Press, N. Y. 1977) B. L. Hu and D. Pavon, Phys. Lett. [**B180**]{}, 329 (1986) H. E. Kandrup, Phys. Rev. [**D37**]{}, 3505 (1988) B. L. Hu and H. E. Kandrup, Phys. Rev. [**D35**]{}, 1776 (1987) L. Grishchuk and Y. V. Sidorov, Phys. Rev. D42, 3414 (1990) A. Albrecht [*et al*]{}, Imperial College preprint TP/92-93/21 (1992) R. H. Brandenberger, V. Mukhanov and T. Prokopec, Phys. Rev. Lett. 69, 3606 (1992); Phys. Rev. D48, 2443 (1993) M. Gasperini and M. Giovannini, Phys. Lett. B301, 334 (1993); M. Gasperini, M. Giovannini and G. Veneziano, Phys. Rev. D48, R439 (1993). A. Matacz, Univ. of Adelaide preprint ADP-92-198/M13 (1993) B. L. Hu, Phys. Lett. 108B, 19 (1982); 123B, 189 (1983) B. L. Hu, Phys. Lett. [**A90**]{}, 375 (1982); B. L. Hu, in [*Cosmology of the Early Universe*]{}, ed. L. Z. Fang and R. Ruffini (World Scientific, Singapore, 1984) S. W. Hawking, Comm. Math. Phys. 43, 199 (1975) D. Lynden-Bell and R. M. Lynden-Bell, MNRAS 181, 405 (1977) P. Candelas and D. W. Sciama, Phys. Rev. Lett. [**38**]{}, 1372 (1977) E. Mottola, Phys. Rev. [**D33**]{}, 2136 (1986) B. L. Hu, Phys. Lett. [**A97**]{}, 368 (1983); B. L. Hu, Vistas in Astronomy 37, 391 (1993) B. L. Hu, Physica 158, 399 (1989) B. L. Hu and S. Sinha, “Fluctuation-Dissipation Relation in Cosmology", Univ.
Maryland preprint 93-164 (1993) E. Calzetta and B. L. Hu, “Noise and Fluctuations in Semiclassical Gravity", Univ. Maryland preprint 93-216 (1993) B. L. Hu and A. Matacz, “Quantum Noise in Gravitation and Cosmology" Invited Talk at the Workshop on [*Fluctuations and Order*]{}, ed. M. Millonas (MIT Press, Cambridge, 1994). Univ. Maryland preprint 94-44 (1994) B. L. Hu and A. Matacz, “Einstein-Langevin Equation for Backreactions in Semiclassical Cosmology”, Univ. Maryland preprint 94-31 (1993) [^1]: Permanent Address: Department of Physics, University of Adelaide, 5005, Australia
--- abstract: 'Supernumerary Robotic Limbs (SRLs) exhibit inherently compliant behavior due to the elasticity present at the interface between human tissue and the robot. This compliance can prominently influence the operation of some SRLs, depending on the application. In order to control the residual vibrations of SRLs, we have used an input-shaping method, which is a computationally inexpensive approach. The effectiveness of this method in controlling the residual vibrations of an SRL has been demonstrated using robustness analysis. User studies show that reducing the vibrations using input shaping directly increases user satisfaction and comfort by at least 9%. It is also observed that 36% of the users preferred unshaped commands. We hypothesize that the shaped commands put a higher cognitive load on the user compared to unshaped commands. This shows that when dealing with human-robot interaction, user satisfaction becomes an equally important parameter as traditional performance criteria and should be taken into account while evaluating the success of any vibration-control method.'
author: - 'Roozbeh Khodambashi$^{1}$, Gil Weinberg$^{2}$, William Singhose$^{3}$, Shima Rishmawi$^{3}$, Varun Murali$^{3}$, Euisun Kim$^{3}$[^1][^2][^3][^4]' title: ' **User Oriented Assessment of Vibration Suppression by Command Shaping in a Wearable Robotic Arm\*** ' --- INTRODUCTION ============ The goal of a supernumerary robotic limb (SRL) [@c23],[@c24] is to restore or augment human abilities in order to perform tasks that are beyond normal human abilities in terms of power, speed, and dexterity. SRLs share a common feature: they all have an elastic interface to the human body. Elasticity in the human tissue causes the interface between the robot and the body to act as an elastic joint. This presents some advantages, such as the possibility of using a multipurpose socket that fits a wider range of body sizes. However, an elastic joint introduces passive compliance into the system. While compliance has been effectively used to increase the safety of robots that operate in close proximity to humans [@c20]-[@c22], it results in oscillations. These oscillations affect the quality of physical human-robot interaction. In a human-robot interaction scenario in which the robot arm is attached to the body, these oscillations exert cyclic loads on the user’s body, which affect the physical comfort of the user. In addition to physical loads, the vibrations put a high cognitive load on the user because robot movements are not predictable, which in turn affects the user’s satisfaction.
Thus the control of vibrations in compliant robots used as SRLs is critical. In this paper, we use input shaping to control the movements of a supernumerary robotic limb. The platform used is the 3rdArm shown in Fig. \[fig:thirdarm\]. This SRL is attached to the drummer’s right shoulder. It moves an additional drum stick to complement the drummer’s own abilities. This platform was designed to study the concept of augmentation and shared control in human-robot interaction. Robustness analysis and quantitative measures have been used to evaluate the performance of the control method. In addition, we performed user studies to assess the effectiveness of this control method with respect to user comfort criteria. We found that although input shaping has some advantages, such as the simplicity of the algorithm, its performance cannot be evaluated solely based on robustness analysis due to the involvement of a human. ![The 3rdArm platform.[]{data-label="fig:thirdarm"}](FinalProject/3rdArm1){width="8cm"} Literature Review ----------------- Vibration of the elastic members in machines and robotic manipulators is a classical problem which has been well studied. An example of a structure which experiences residual vibrations is the crane, in which the position of the end effector (trolley) is not a good estimate of the position of the payload due to the inherent swing of the hoist cable. Different methods of controlling residual vibrations of the payload in cranes have been reviewed [@c1]. Traditionally, robots were designed as stiff structures in order to achieve maximum position control even in the presence of external disturbances [@c25]. This can be dangerous in scenarios which involve physical human-robot interaction (pHRI) because the amount of force that the robot exerts on the environment cannot be controlled. This problem can be addressed by adding compliance to the robots, either mechanically or through control techniques.
However, this added compliance results in residual vibration of the system. A review of the problem of controlling vibrations of robotic manipulators which exhibit compliant behavior due to the presence of flexible joints and links has been presented in [@c2]. The goal in almost all these studies is to reduce residual vibrations, improve tracking of inputs, and decrease sensitivity to modelling errors. However, none of these studies has considered this problem from a human-robot interaction perspective. For example, the authors of [@c3] studied and compared input shaping and model predictive control (MPC) as two approaches for controlling residual vibrations of a humanoid robot and concluded that MPC has superior performance compared to input shaping, without considering the impact of using these methods on the user experience. Comfort is a key factor in the design of wearable robots [@c4]-[@c8]. The pressure exerted on the skin by the robot is the main parameter that has been known to affect user comfort [@c9]. The authors of [@c10] have developed a distributed soft sensor that can measure the pressure distribution in the interaction area without affecting the user comfort. Even the sound generated by the robot may be considered a parameter that affects the user comfort [@c11]. Comfort has different definitions in different wearable technologies and, therefore, various measurement methods have been provided in the literature [@c12; @c13]. The Quebec User Evaluation of Satisfaction with Assistive Technology (QUEST) [@c14] is a widely used tool for measuring the subjective perception of a wearable device [@c4]. Another tool is Locally Experienced Discomfort (LED) [@c15]. Recent advances in robotics and the rising need for augmenting the human body have led to a new category of robots called supernumerary robotic limbs (SRLs). The main difference between SRLs and exoskeletons is that SRLs are kinematically independent from the human.
The limb can move even when the human limb is stationary. SRLs vary in size, shape, material, and control method. The challenges in the design of SRLs vary based on the application for which they are designed. Smaller SRLs, such as [@c16], might experience fewer vibrations because they have smaller inertia and less actuator power and, therefore, designing a controller that minimizes the vibrations in these SRLs is not necessary. The vibrations of larger compliant robotic arms affect user comfort in two ways. First, they exert loads that are cyclic in nature, which have been shown to alter the tissue in the interface area [@c17]. Second, they put a high cognitive load on the user because the user’s concentration is moved from the task at hand to predicting and correcting the position or trajectory of the robot. Human-robot interaction in musical scenarios has been studied previously at the Georgia Tech Center for Music Technology (GTCMT) using different robot platforms such as Shimon [@c18]. In order to study how augmenting the human body can affect human performance, the 3rdArm platform was developed, an SRL attached to the user’s shoulder. It helps a drummer perform complicated rhythms, as well as improvise based on the drummer's performance. The problem of residual vibrations was clearly observable in our experiments with the 3rdArm, in which the user paused frequently while playing, looked at the arm, and tried to understand its behavior and predict its trajectory and final position in order to make appropriate movements. Therefore, the user often failed to follow the rhythm properly. To address this problem, we have studied these vibrations and used input shaping to control them. In contrast to previous studies, which have not considered user experience in evaluating the performance of vibration-control algorithms, we used a modified and simplified version of QUEST to assess the performance of the designed input shaper.
In the next sections, we first describe the 3rdArm platform. Then, we discuss the design and implementation of the input shaper that is used to minimize the residual vibrations of the 3rdArm. Next, we describe the user studies that have been performed to evaluate the performance of the controller with respect to comfort criteria. Conclusions are provided at the end. THE 3RDARM PLATFORM =================== Physical Description -------------------- The 3rdArm platform is a 4 DOF robotic arm with the capability of attaching to the human body. Fig. \[fig:thirdarm\] shows this platform attached to a user’s body. The shoulder attachment socket is made up of a layer of ABS plastic and a layer of soft foam to provide comfort and allow attachment to a wider range of body sizes. Starting from the shoulder mount, the first degree of freedom (DOF), the shoulder joint, rotates the 3rdArm around the body (horizontal abduction/adduction). The first link connects the shoulder joint to the elbow joint. This joint is the second DOF and performs a function similar to the human elbow (flexion/extension). Next is the third DOF, which performs rotation of the wrist (supination/pronation). From there, link 2 connects to a fourth DOF, which moves the drumstick and is solely used for hitting the drum surface. Dynamixel MX-64 servomotors[^5] are used as actuators. The commands are generated based on musical data in Max/MSP[^6] software and sent to the motors using an Arduino Mega2560[^7] board through serial communication. Dynamics Modeling ----------------- The movements of the second and third DOF do not result in considerable residual vibrations because the effective mass of the system is lower in these cases compared to movements involving the first DOF (shoulder joint). Therefore, for simplicity, we only model the movements of the shoulder joint. We also assume that the second motor is positioned in such a way that link 1 and link 2 are collinear, as in Fig. \[fig:thirdarm\].
This is the worst-case scenario in which maximum residual vibration occurs. The arm is moved with maximum actuator effort in order to rotate through an angle $\theta$ in minimum time $t$. This requirement comes from the fact that the robot has to react to the positioning commands as fast as possible; otherwise it cannot play music properly. After the arm is stopped at the end of its travel distance, it continues to vibrate due to the elasticity present in the attachment to the body. When dealing with a complicated system, like the robot arm we are trying to control, it is easier to derive the system model using experimental approaches rather than analytical or numerical methods. This is due to many unknown parameters such as the elastic constants and damping ratios of the human tissue, the shoulder mount material, and the robot material. The geometry of the robot also adds to the complexity. To be able to derive a physical model of the arm, its vibrations should be recorded and model parameters should be extracted based on the actual response. To achieve this, a 9 DOF inertial measurement unit (IMU) was mounted at the end of link 2. The arm was given an initial displacement and then released to vibrate freely. The angular position of the arm was recorded. By looking at the vibration characteristics of the system due to an initial displacement, which is shown in Fig. \[fig:response\] with a black solid line, we can see that it closely matches the response of a simple harmonic oscillator with elastic constant $K_T$ and damping ratio $\zeta$.
![Actual (recorded) and simulated response of the manipulator to an initial displacement.[]{data-label="fig:response"}](FinalProject/Fig4a){width="8cm"} The equation of motion of this system is described by: $$\label{eq:f1} J\ddot{\theta} +B_T \dot{\theta} +K_T \theta= \tau(t)$$ where: - $J$ is the rotational inertia (kg$\,$m$^2$) - $K_T$ is the torsional spring stiffness (Nm/rad) - $B_T$ is the torsional damping constant (Nms/rad) - $\tau(t)$ is the input torque (Nm) - $\theta$ is the robot arm’s rotational displacement (rad) Equation \[eq:f1\] can be normalized into the following equation: $$\label{eq:f2} \ddot{\theta} +2\zeta \omega_n \dot{\theta} +\omega_n^2 \theta=\omega_n^2 u(t)$$ where: - $\zeta$ is the damping ratio - $\omega_n$ is the natural frequency (rad/s) - $u(t)$ is the input signal (rad) Similar behavior can be observed from the response of the system to a ramp position input. This is shown as the black solid line in Fig. \[fig:response-rev\]. ![Actual (recorded) and simulated response of the manipulator to a ramp position input.[]{data-label="fig:response-rev"}](FinalProject/Fig4b){width="8cm"} The damped frequency of oscillation (rad/s) is calculated using: $$\omega_d=\frac{2\pi}{T}$$ where $T$ is the time needed to complete one period of oscillation. Because the damping ratio is relatively small, it can be calculated using the logarithmic decrement: $$\zeta=\frac{\ln \frac{x_0}{x_1}}{2\pi}$$ where $x_0$ and $x_1$ are two successive peaks extracted from the graphs. The natural frequency of oscillation can be calculated by: $$\omega_n=\frac{\omega_d}{\sqrt{1-\zeta^2}}$$ To make sure the calculated parameters are a good approximation of the actual values, both the free and forced responses were recorded five times. The parameters were calculated from all graphs and the average values were found. Results are summarized in Table \[tab:T1\].
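The identification steps above are easy to script. The peak values and period in the sketch below are illustrative numbers chosen so the results land near the free-vibration row of Table \[tab:T1\]; they are not the authors' raw measurements:

```python
import math

# hypothetical successive peaks and period read off a free-decay trace
x0, x1 = 1.00, 0.54      # two successive displacement peaks
T = 0.808                # period of one oscillation (s)

omega_d = 2 * math.pi / T                       # damped frequency (rad/s)
zeta = math.log(x0 / x1) / (2 * math.pi)        # logarithmic decrement
omega_n = omega_d / math.sqrt(1 - zeta ** 2)    # natural frequency (rad/s)

print(omega_d, zeta, omega_n)
```

Because the damping is light, $\omega_n$ exceeds $\omega_d$ by only about half a percent, which is why the two columns of Table \[tab:T1\] are so close.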
  --------------------------- --------------------- --------- ---------------------
                              $\omega_d$ (rad/s)    $\zeta$   $\omega_n$ (rad/s)
  Free vibration response     7.78                  0.098     7.82
  Forced vibration response   9.58                  0.094     9.62
  --------------------------- --------------------- --------- ---------------------

  : <span style="font-variant:small-caps;">Damping Ratio and Frequency of the System.</span> \[tab:T1\]

In both cases, the damped and natural frequencies are close because the system experiences a small amount of damping. Also, the oscillation frequencies in the forced vibration case are greater due to the inner PID feedback controller, which controls the position of the motor and which uses only a proportional term. The appropriateness of the model is demonstrated by the close agreement of the simulated responses with the experimental responses, shown in Fig. \[fig:response\] and Fig. \[fig:response-rev\] with red dashed lines. Input Shaper Design =================== An input shaper is a sequence of impulses which is convolved with any desired command to create a shaped input that is fed to the system. This will limit residual vibrations. A zero vibration derivative (ZVD) input shaper was selected because it is robust to disturbances and modelling errors, and is easy to implement. The ZVD shaper was designed based on the calculated system parameters: $\omega_n=9.62$ rad/s and $\zeta=0.1$. ZVD shaper parameter estimation ------------------------------- Referring to [@c19], a ZVD shaper consists of three impulses, whose amplitudes and application times are: $$\label{eq:E6} \begin{bmatrix} A_i\\ t_i \end{bmatrix} = \begin{bmatrix} \frac{1}{(1+k)^2} & \frac{2k}{(1+k)^2} & \frac{k^2}{(1+k)^2}\\ 0 & \frac{\pi}{\omega_n} & \frac{2\pi}{\omega_n} \end{bmatrix}$$ where $k=e^{\frac{-\zeta\pi}{\sqrt{1-\zeta^2}}}$.
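Equation (\[eq:E6\]) can be evaluated directly. The snippet below (plain Python) reproduces the published amplitudes; note that the printed impulse times correspond to $\pi/\omega_n$ and $2\pi/\omega_n$, which for this lightly damped system is numerically very close to the usual $\pi/\omega_d$ impulse spacing:

```python
import math

zeta, omega_n = 0.1, 9.62   # identified system parameters

k = math.exp(-zeta * math.pi / math.sqrt(1 - zeta ** 2))
denom = (1 + k) ** 2
amps = [1 / denom, 2 * k / denom, k ** 2 / denom]   # amplitudes sum to exactly 1
times = [0.0, math.pi / omega_n, 2 * math.pi / omega_n]

print(amps)
print(times)
```

The unit amplitude sum guarantees that the shaped command reaches the same final position as the unshaped one.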
Thus, substituting numerical values in (\[eq:E6\]), the shaper is expressed by: $$\label{eq:E7} \begin{bmatrix} A_i\\ t_i(s) \end{bmatrix} = \begin{bmatrix} 0.3344 & 0.4877 & 0.1778\\ 0 & 0.3266 & 0.6531 \end{bmatrix}$$ When convolving this shaper with the original input command consisting of a ramp input of $1.45$ rad (solid black line in Fig. \[fig:F5\]), the result is the shaped input command shown in Fig. \[fig:F5\] with a red dashed line. ![Unshaped and shaped commands used as input to the system.[]{data-label="fig:F5"}](FinalProject/Fig5a){width="8cm"} The simulated system response to the unshaped and shaped commands is shown in Fig. \[fig:F7\] with a solid black line and a red dashed line, respectively. Note that the ZVD input shaper is capable of completely cancelling the residual vibrations. However, the price to be paid here is a time delay of 0.65 s. To obtain the actual response of the system to the shaped commands, a micro-controller was used to divide the original ramp motion profile into five segments, each having a starting and an ending position and a specified speed. This is illustrated in Table \[tab:T2\]. In this case, which is illustrated in Fig. \[fig:F8\], the ZVD input shaper was capable of reducing the maximum overshoot to 0.82%, thus significantly minimizing residual vibrations. Here also, there is a time penalty of 0.6 s. This should be accounted for when designing the trajectory of the robot arm while it is playing music.
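Shaping a command amounts to superposing delayed, amplitude-scaled copies of it, i.e., convolving it with the impulse sequence. The sketch below uses a hypothetical 0.4 s ramp to the 1.45 rad target (the actual segment timing in Table \[tab:T2\] differs) to show that the shaped command still settles at the goal, only later by one shaper length:

```python
import math

zeta, omega_n = 0.1, 9.62
k = math.exp(-zeta * math.pi / math.sqrt(1 - zeta ** 2))
denom = (1 + k) ** 2
amps = [1 / denom, 2 * k / denom, k ** 2 / denom]
times = [0.0, math.pi / omega_n, 2 * math.pi / omega_n]

ramp_T, target = 0.4, 1.45   # hypothetical ramp duration (s) and goal (rad)

def ramp(t):
    """Unshaped command: linear rise to the target, then hold."""
    return target * min(max(t, 0.0), ramp_T) / ramp_T

dt = 0.001
t_end = ramp_T + times[-1] + 0.01   # shaped move = ramp duration + shaper length
ts = [i * dt for i in range(int(t_end / dt) + 1)]
shaped = [sum(A * ramp(t - t0) for A, t0 in zip(amps, times)) for t in ts]
```

Because the impulse amplitudes sum to one, the shaped profile ends exactly at the target; the cost is the extra $2\pi/\omega_n \approx 0.65$ s of move time noted in the text.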
![Simulated unshaped and shaped responses of the system to a ramp input.[]{data-label="fig:F7"}](FinalProject/Fig6a){width="8cm"}

  ------- ---------------- ---------------------- -------------------- ---------------
  Seg.    Start time (s)   Start position (rad)   End position (rad)   Speed (rad/s)
  *Unshaped command*
  1       0                0                      1.34                 3.45
  *Shaped command*
  1       0                0                      0.37                 1.154
  2       0.32             0.37                   0.57                 2.84
  3       0.4              0.57                   0.96                 1.68
  4       0.62             0.96                   1.15                 2.3
  5       0.73             1.15                   1.34                 0.62
  ------- ---------------- ---------------------- -------------------- ---------------

  \[tab:T2\]

![Actual (recorded) response of the system to a ramp input.[]{data-label="fig:F8"}](FinalProject/Fig6b){width="8cm"}

Robustness Analysis
-------------------

Robustness of the input shaper is an important criterion to ensure that the shaper works properly over a wide range of conditions. Two special cases were investigated. First, the designed input shaper was tested on different subjects; every subject had a different arm circumference and a different tissue elasticity. The response of the system for these subjects is shown in Fig. \[fig:F9\]. Then, the designed input shaper was used to move the robot arm to a different final desired position (an angular distance of 1 rad) for a single user. The system response is shown in Fig. \[fig:F10\]. The data shown in Fig. \[fig:F9\] and Fig. \[fig:F10\] demonstrate that the designed input shaper is robust and can be reliably used under different conditions. ![System ramp response for 3 different users wearing the arm.[]{data-label="fig:F9"}](FinalProject/Fig7a){width="8cm"} ![System response to a move distance of 1 rad.[]{data-label="fig:F10"}](FinalProject/Fig7b){width="8cm"}

USER SURVEY
============

After achieving a successful controller design, we studied the system performance on a sample of 14 subjects who were selected through email advertisements and snowball sampling. The study was approved by Georgia Institute of Technology’s Institutional Review Board (IRB). Amongst the participants, 57.1% were male, 64.2% were between 23 and 29 years old, 85.7% played at least one musical instrument, and 57.1% were music technology students.
Studies were performed on each participant individually after they completed the consent form and a demographics questionnaire. The circumference and length of the bicep were measured and recorded on the questionnaire; these measurements were used to check for any correlation between the socket size and the perceived comfort of the 3rdArm movements. The participants were then introduced to the robotic arm, which was placed on the participant’s shoulder after they confirmed their understanding of the overall process. Three different scenarios were studied. In the first scenario, the arm was kept stationary and the level of comfort with the socket itself was studied. In the second scenario, the 3rdArm was moved using unshaped commands, and in the third scenario, shaped commands were used. In all scenarios, the user sat stationary in a chair in a comfortable position and observed the movement of the robotic arm. A simplified version of the QUEST test was administered in the form of an interview to obtain the perceived comfort of using the robotic arm. This test has 12 satisfaction items, each rated from 1 to 5. Four of these items relate to the customer service of assistive devices and were not considered in our study. Of the other 8 items, the questions related to weight, dimensions and comfort were relevant to our study and were included in the questionnaire. Fig. \[fig:F11\] shows the average comfort level of the users in the three scenarios based on their arm circumference. The users with an arm circumference of 10-11 inches expressed the highest comfort, because the size of the user’s arm matched the socket size used for the shoulder attachment. It is also observed that, regardless of arm circumference, the comfort achieved with the shaped commands is always higher.
It was found that although using the input shaper reduces the forces perceived by the user, approximately 36% of the users still preferred the unshaped commands. This can be attributed to the fact that different users apply different physical and mental criteria when scoring the level of comfort. Users who applied physical criteria mentioned keywords such as ’less force’, ’less recoil’ and ’smoother’ in their comments and thus rated the movements using shaped commands as more comfortable. On the contrary, users who applied mental criteria used keywords such as ’weird’ in their comments about the shaped movements and preferred the movements using unshaped commands. This shows that, in evaluating the performance of a controller used in scenarios involving human-robot interaction, it is important to consider user satisfaction and comfort in addition to commonly used criteria such as input tracking ability and robustness. ![Average user comfort level for the arm being stationary, as well as moving with shaped and unshaped commands.[]{data-label="fig:F11"}](FinalProject/Fig8a){width="8cm"}

CONCLUSIONS
===========

Experiments with the 3rdArm platform show that different criteria should be taken into account when designing supernumerary robotic limbs compared to exoskeleton designs or other robotic manipulators. Depending on the application, residual vibrations might be a concern in low impedance robotic limbs, including SRLs. We have shown that the conventional methods for suppressing residual vibrations in structures such as cranes and robotic manipulators can be applied to SRLs, effectively suppressing their vibrations. However, these methods might not increase user satisfaction and comfort. Therefore, for a better judgment of the performance of these methods, user studies have to be taken into account.
Future work may include comparing other methods of vibration control, such as model predictive control, with input shaping, in order to determine the best control strategy based on the users’ level of comfort with these methods. [99]{} Singhose, W., Command shaping for flexible systems: A review of the first 50 years. International Journal of Precision Engineering and Manufacturing, 2009. 10(4): p. 153-168. Dwivedy, S.K. and P. Eberhard, Dynamic analysis of flexible manipulators, a literature review. Mechanism and Machine Theory, 2006. 41(7): p. 749-777. Rupert, L., P. Hyatt, and M.D. Killpack. Comparing Model Predictive Control and input shaping for improved response of low-impedance robots. in Humanoid Robots (Humanoids), 2015 IEEE-RAS 15th International Conference on. 2015. Nimawat, D. and P.R.S. Jailiya, Requirement of Wearable Robots in Current Scenario. European Journal of Advances in Engineering and Technology, 2015. 2(2): p. 19-23. Mohammed, S., Y. Amirat, and H. Rifai, Lower-limb movement assistance through wearable robots: state of the art and challenges. Advanced Robotics, 2012. 26(1-2): p. 1-22. Gopura, R., et al., Developments in hardware systems of active upper-limb exoskeleton robots: A review. Robotics and Autonomous Systems, 2016. 75: p. 203-220. Gálvez-Zúñiga, M.A. and A. Aceves-López, A Review on Compliant Joint Mechanism for Lower-Limb Exoskeletons. Gopura, R., K. Kiguchi, and D. Bandara. A brief review on upper extremity robotic exoskeleton systems. in 2011 6th International Conference on Industrial and Information Systems. 2011. Rocon, E., et al., Human-Robot Physical Interaction. Wearable Robots: Biomechatronic Exoskeletons, 2008: p. 127-163. Lenzi, T., et al., Measuring human-robot interaction on wearable robots: A distributed approach. Mechatronics, 2011. 21(6): p. 1123-1131. Veale, A.J. and S.Q. Xie, Towards compliant and wearable robotic orthoses: A review of current and emerging actuator technologies.
Medical engineering & physics, 2016. 38(4): p. 317-325. Knight, J.F., et al. The Comfort Assessment of Wearable Computers. in International Symposium on Wearable Computers (ISWC). 2002. Bodine, K. and F. Gemperle. Effects of functionality on perceived comfort of wearables. in Proceedings of the Seventh IEEE International Symposium on Wearable Computers (ISWC’03). 2003. Demers, L., R. Weiss-Lambrou, and B. Ska, The Quebec User Evaluation of Satisfaction with Assistive Technology (QUEST 2.0): an overview and recent progress. Technology and Disability, 2002. 14(3): p. 101-105. Corlett, E. and R. Bishop, A technique for assessing postural discomfort. Ergonomics, 1976. 19(2): p. 175-182. Ort, T., et al. Supernumerary Robotic Fingers as a Therapeutic Device for Hemiparetic Patients. in ASME 2015 Dynamic Systems and Control Conference. 2015. American Society of Mechanical Engineers. Mak, A.F., M. Zhang, and D.A. Boone, State-of-the-art research in lower-limb prosthetic biomechanics-socket interface: a review. Journal of rehabilitation research and development, 2001. 38(2): p. 161. Hoffman, G. and G. Weinberg. Shimon: an interactive improvisational robotic marimba player. in CHI’10 Extended Abstracts on Human Factors in Computing Systems. 2010. Singer, N.C. and W.P. Seering, Preshaping command inputs to reduce system vibration. Journal of Dynamic Systems, Measurement, and Control, 1990. 112(1): p. 76-82. Bicchi, A., M.A. Peshkin, and J.E. Colgate, Safety for physical human-robot interaction, in Springer handbook of robotics. 2008, Springer. p. 1335-1348. Bicchi, A., S.L. Rizzini, and G. Tonietti. Compliant design for intrinsic safety: General issues and preliminary design. in Intelligent Robots and Systems, 2001. Proceedings. 2001 IEEE/RSJ International Conference on. 2001. Park, J.-J., et al. Safe link mechanism based on passive compliance for safe human-robot collision. in Proceedings 2007 IEEE International Conference on Robotics and Automation. 2007. 
Llorens-Bonilla, B., F. Parietti, and H.H. Asada. Demonstration-based control of supernumerary robotic limbs. in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. 2012. Parietti, F. and H. Asada, Supernumerary Robotic Limbs for Human Body Support. IEEE Transactions on Robotics, 2016. 32(2): p. 301-311. Kim, S., C. Laschi, and B. Trimmer, Soft robotics: a bioinspired evolution in robotics. Trends in Biotechnology, 2013. 31(5): p. 287-294. [^1]: \*This work was supported by the National Science Foundation [^2]: $^{1}$Roozbeh Khodambashi is with the Center for Music Technology, Georgia Institute of Technology, 840 McMillan St NW, Atlanta, GA 30318, USA [khodambashi@gatech.edu]{} [^3]: $^{2}$Gil Weinberg is with the Center for Music Technology, Georgia Institute of Technology, Atlanta, GA 30318, USA [gilw@gatech.edu]{} [^4]: $^{3}$William Singhose, Sima Rishmawi, Varun Murali and Euisun Kim are with the Department of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30313, USA [singhose@gatech.edu]{} [^5]: http://www.robotis.com/xe/dynamixel_en [^6]: https://cycling74.com/ [^7]: https://www.arduino.cc/en/Main/ArduinoBoardMega2560
--- abstract: | We present a strong field theory of matter wave splitting in the presence of various gravitational, inertial and trapping potentials. The effect of these potentials on the resonance condition (between the splitting potential and the considered effective two-level system) and on the atomic Borrmann effect is investigated in detail. The dispersive structuring of an incident atomic wave packet - due to such generalized beam splitters - is studied and modeled, and several important dynamical features of the solutions are detailed (generalized Rabi oscillations, velocity selection, anomalous dispersion, generalized Borrmann effect and anomalous gravitational bending). Finally, we show how to express this triple interaction matter - splitting potential - gravito-inertial and trapping potentials as an equivalent instantaneous interaction, which turns out to be a very efficient tool for the modeling of atom interferometers. PACS number(s): 03.75.-b, 32.80.-t, 33.80.-b, 39.25.+k, 42.50.-p author: - | Charles Antoine$^{1,2}$ and Christian J. Bordé$^{3,4}$\ $^{1}$[*ERGA, LERMA, CNRS-Observatoire de Paris,*]{}\ [*Université P. et M. Curie, 75005 Paris, France*]{}\ $^{2}$[*Group of Gravitation and Experiments with Cold Atoms, TIFR, 400005 Mumbai, India*]{}\ $^{3}$[*BNM-SYRTE, CNRS-Observatoire de Paris, 75014 Paris, France*]{}\ $^{4}$[*LPL, CNRS, Université Paris Nord, 93430 Villetaneuse, France*]{} date: 'July 16, 2005' title: 'Theory of matter wave beam splitters in gravito-inertial and trapping potentials' ---

Introduction
============

Matter wave beam splitters are nowadays the cornerstone of a wide range of experiments, from atomic clocks and gravito-inertial sensors to laser cooling and ultracold atom characterization, quantum computing and cavity QED experiments, atom lithography and chemical reaction dynamics, detection of tiny effects of General Relativity and tests of fundamental theories, and measurement of atom-surface interactions...
In view of the recent progress in non-dissipative atom optics (coherent beam splitters, mirrors, lenses...) as well as in dissipative atom optics (slowing, trapping and cooling of atoms and molecules), we need to deepen our understanding of light-matter interactions in the presence of other external potentials, such as gravito-inertial or trapping potentials. In particular, the precision and stability of atom interferometers are now so outstanding [@gustavson00] that it is necessary to go beyond the former modeling of their main component, namely the beam splitters. In fact, the concept of an atomic beam splitter is not confined to light-matter interactions, and can be extended to any interaction process between matter waves. It is thus possible to write the action of such atom optical elements as an S matrix between the incident and scattered matter waves, where the S matrix depends mainly on the splitting potential, which can be material (slits, periodic microstructures...) or electromagnetic (static magnetic or electric fields, laser fields...). This S matrix description is particularly useful for the modeling of atom interferometers, and more generally for any setup containing a succession of such beam splitters [@borde94; @antoinejopb; @antoineThese]. However, for a long time, the precision of atom optics experiments remained low enough not to require an accurate study of matter wave beam splitters. Thus, in the most common simplified modeling of these elements, only the following effects were considered: 1) the splitting of an incident atomic wave packet into several wave packets, 2) of which one was equal to the incident wave packet, up to a change of amplitude, 3) and where the others could differ from the incident wave packet in their central momentum, internal state, amplitude and phase.
However, this practical modeling - sometimes called infinitely thin because it amounts to neglecting the duration of the interaction - does not take into account several important effects, such as the dispersive structuring of the incident wave packet (velocity selection and sidebands, Borrmann effect, anomalous dispersion...), the time and space dependence of the splitting potential, the effect of relaxation processes, or the effect of other external fields during the splitting (time-dependent gravito-inertial effects, trapping potentials...). During the past two decades, several authors have studied some of these problems, namely: the effect of a non-trivial time dependence of the splitting potential (for running and standing laser beam splitters) [@hioe85; @suominen92; @ishikawa94; @carmel00; @ishkhanyan00]; the atomic Borrmann effect and anomalous dispersion effect without any other external potential [@oberthaler96; @eiermann03] or with a constant and uniform acceleration (WKB solution [@lammerzahl99]); the atomic splitting in a constant and uniform acceleration (exact solution in the temporal case and WKB solution in the spatial case) [@lammerzahl95]; and a common modeling for both spatial and temporal beam splitters to first order in the splitting potential (weak field theory) [@borde04]... In the light of what happened in neutron optics, where beam splitter modeling proved crucial for a proper understanding of the origin of the interferometer phase shifts [@horne86; @rauch00], it appears necessary to go beyond these studies, so as to provide a comprehensive modeling of the true action of a matter wave beam splitter (a strong field theory for all the involved external fields). This paper is organized as follows. First, we give some details on our framework and explain how to formulate the problem of the triple interaction matter - splitting potential - other external fields as an evolution equation.
Then, we detail how to transform the obtained equation into a simpler one by means of unitary transformations (interaction picture) and a passage into the rotating frames. We then explain how to solve this equation (analytically or numerically, with different expansions or relevant approximations), and we go back to the initial representation to explain how to write the effect of such beam splitters as an effective instantaneous interaction (generalized $ttt$ scheme). Finally, we study the atomic Borrmann effect and other anomalous dispersive properties and model them in the general framework detailed in the second part.

Framework and approximations\[part2\]
=====================================

General framework
-----------------

The matter wave beam splitters we consider in this paper consist of multi-level atomic systems subject to an interaction potential which couples the levels together. This interaction usually takes place in the presence of other external potentials and miscellaneous relaxation processes. In fact, these atomic systems can refer to atoms (neutral or not) as well as molecules, and more generally to any quantity of matter which can be coherently manipulated. Furthermore, the atomic levels are not restricted to internal atomic levels, but more generally refer to energy-momentum states (i.e. eigenstates of both the internal and kinetic Hamiltonians). The transitions can occur between internal states only (spectroscopy without Doppler effect, for example), external states only (diffraction in the Kapitza-Dirac and Bragg regimes, optical Stern-Gerlach effect, magnetic atom mirror...), or between entangled states, where the entanglement may be between the internal and external states (stimulated Raman transitions, for example) or between the previous energy-momentum states and the eigenstates of the interaction potential (Fock states of the quantized electromagnetic field, for example).
There are many techniques to coherently split a matter wave, and each of them corresponds to a particular kind of splitting potential. Like the other atom optics elements, matter wave beam splitters use essentially two properties of atoms (or molecules): their wave property, and their interaction with external fields, electromagnetic or material. To date, the demonstrated matter wave beam splitters are based on: 1. atom-matter interaction: wavefront division (material slits) [@carnal91], amplitude splitting (transmission material gratings) [@keith88], reflection at crystalline surfaces [@clauser88], quantum reflection [@shimizu01]… 2. interaction with static magnetic or electric fields (for atoms having a dipolar electric or magnetic moment): longitudinal Stark effect [@sokolov73], transverse and longitudinal Stern-Gerlach effects [@robert91], magnetic mirrors [@opat92], mirror for polar molecules [@wark92], Y-shaped magnetic guides [@cassettari00]… 3. resonant or quasi-resonant interaction with laser fields: reflection and diffraction by standing [@arimondo79], running [@baklanov76; @kasevich91] or evanescent [@cook82] laser waves (operating in space and/or time), optical Stern-Gerlach effect [@kazantsev75], stimulated Raman effect [@kasevich91] and its derivatives (adiabatic transfer [@oreg84], STIRAP [@gaubatz88], CHIRAP [@band93]…), magneto-optical beam splitters [@pfau93], X-shaped dipolar guides [@houde00]… In this paper, we will focus on this third kind of interaction, and more generally on the beam splitters for which the two-beam approximation is valid. As for the relaxation processes, they refer to all the processes which lead to a loss of coherence and/or a loss of atoms (spontaneous emission, inter-atomic collisions, absorption and interaction with the material microstructures…). When they cannot be neglected, the use of a density operator formalism is needed.
Other external potentials may be present during the matter wave splitting: inertial and gravitational fields, trapping potentials, van der Waals and Casimir potentials… In this paper, we consider all the time-dependent potentials which are at most quadratic in position and momentum. The corresponding Hamiltonian is therefore: $$H_{ext}=\frac{1}{2m}\overrightarrow{p}.\overset{\Rightarrow}{\beta}\left( t\right) .\overrightarrow{p}-\frac{m}{2}\overrightarrow{r}.\overset {\Rightarrow}{\gamma}\left( t\right) .\overrightarrow{r}-\overrightarrow {r}.\overset{\Rightarrow}{\alpha}\left( t\right) .\overrightarrow {p}-m\overrightarrow{g}\left( t\right) .\overrightarrow{r}+\overrightarrow {f}\left( t\right) .\overrightarrow{p}$$ This includes the effect of non-uniform accelerations ($\overrightarrow{g}\left( t\right) $ and $\overset{\Rightarrow}{\gamma}\left( t\right) $), rotations (with an angular velocity $\overrightarrow{\Omega}\left( t\right) $ such that $\overset{\Rightarrow}{\alpha}\left( t\right) .\overrightarrow {u}:=-\overrightarrow{\Omega}\left( t\right) \times\overrightarrow{u}$ for any vector $\overrightarrow{u}$), trapping potentials ($-\overset{\Rightarrow}{\gamma}\left( t\right) $), a non-zero curvature tensor ($\overset{\Rightarrow}{\gamma}\left( t\right) $), gravitational waves in Fermi’s gauge ($\overset{\Rightarrow}{\gamma}\left( t\right) $) or Einstein’s gauge ($\overset{\Rightarrow}{\beta}\left( t\right) $), and all the electromagnetic potentials which can be written as an expansion at most quadratic in position and momentum ($\overrightarrow{g}\left( t\right) $, $\overset{\Rightarrow}{\gamma}\left( t\right) $, $\overrightarrow{f}\left( t\right) $). Furthermore, to keep an overall approach, the coefficients of $H_{ext}$ are time-dependent, and $\overset{\Rightarrow}{\alpha}$, $\overset{\Rightarrow}{\beta}$ and $\overset{\Rightarrow}{\gamma}$ are expressed with non-diagonal $3\times3$ matrices.
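As an illustration of this generality (a sketch of one standard case, not a restriction of what follows), specializing to $\overset{\Rightarrow}{\beta}=1$, $\overset{\Rightarrow}{\gamma}=0$, $\overrightarrow{f}=0$, a constant $\overrightarrow{g}$ and a constant rotation $\overrightarrow{\Omega}$ recovers the familiar Hamiltonian of a uniformly rotating frame with uniform gravity: $$H_{ext}=\frac{\overrightarrow{p}^{\,2}}{2m}-m\overrightarrow{g}.\overrightarrow{r}-\overrightarrow{\Omega}.\left( \overrightarrow{r}\times\overrightarrow{p}\right)$$ since $-\overrightarrow{r}.\overset{\Rightarrow}{\alpha}.\overrightarrow{p}=\overrightarrow{r}.\left( \overrightarrow{\Omega}\times\overrightarrow{p}\right) =-\overrightarrow{\Omega}.\left( \overrightarrow{r}\times\overrightarrow{p}\right) $, the usual Coriolis coupling.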
Approximations considered in this paper
---------------------------------------

It is often possible to simplify this general framework and obtain an evolution equation between only two effective states by using some justified approximations. First, the two-beam approximation is valid when only two energy-momentum eigenstates are coupled. The coupling can be direct (for true two-level systems) or indirect (Raman transitions, spatial beam splitters in the Bragg regime). In fact, one can show that any N-photon transition (involving several running or standing laser waves) of a multilevel atom is equivalent to an effective 1-photon transition between two atomic levels if the other levels can be adiabatically eliminated [@shore91; @borde97]. The effective photon may not be real. For example, in the Bragg regime, the wave vector of this effective photon is equal to $N\overrightarrow{k}$ and its frequency is equal to $0$. In the end, the spatial and temporal structure of the true laser beams appears only in the amplitude of the effective running laser beam. Second, the laser fields are considered classical (coherent states of the quantized electromagnetic field), but the calculations which follow are also valid for a transition between two dressed states [@cohen92]. Third, we suppose that the two atomic levels have a long lifetime and we neglect all the relaxation processes listed before. However, the instability of these levels, due to spontaneous emission, can be taken into account in an approximate manner by adding a non-Hermitian part to the atom-laser Hamiltonian $V_{em}$ [@cohen92]. In addition, $V_{em}$ is chosen equal to the usual dipolar electric Hamiltonian (without spin) and we suppose that the other external fields are sufficiently weak to neglect their effect on the atomic levels and laser fields.
Finally, the triple interaction laser - matter - other external fields can be written as a Schrödinger equation for two atomic states coupled by an effective running laser wave: $$i\hbar\frac{d}{dt}\left| \Psi\left( t\right) \right\rangle =\left( H_{0}+H_{ext}\left( \overrightarrow{r_{op}},\overrightarrow{p_{op}},t\right) +V_{em}\left( \overrightarrow{r_{op}},t\right) \right) \left| \Psi\left( t\right) \right\rangle \label{eq1}$$ where $\overrightarrow{r_{op}}$ and $\overrightarrow{p_{op}}$ are the position and momentum operators, and $H_{0}$ the internal Hamiltonian ($E_{b}>E_{a}$): $$H_{0}=\left( \begin{array} [c]{cc}E_{b} & 0\\ 0 & E_{a}\end{array} \right) =\frac{E_{a}+E_{b}}{2}1+\frac{\hbar\omega_{0}}{2}\sigma_{3}$$ where $\omega_{0}=\left( E_{b}-E_{a}\right) /\hbar$ is the atomic transition frequency and $\sigma_{3}$ the usual third Pauli matrix. It is also possible to account for some relativistic effects by introducing two different masses [@borde04; @antoineThese]. For simplicity, however, we will not take these relativistic corrections into account and will use only one atomic mass in what follows.

Interaction picture and rotating frames\[part3\]
================================================

It is generally impossible to solve equation (\[eq1\]) directly (non-trivial time dependence of the right hand side, presence of the two non-commuting operators $\overrightarrow{r_{op}}$ and $\overrightarrow{p_{op}}$), but it is possible to simplify it with the help of well-chosen unitary transformations [@antoineThese]. The main idea of this series of transformations is to eliminate progressively the different sources of evolution (internal and external) of the right hand side of (\[eq1\]).
As each unitary transformation corresponds to a change of frame, we can view this succession of transformations as a succession of frame changes which aims at reaching the proper frame of the atom or, at least, a frame in which the motion of the atom (from external as well as internal points of view) is minimal. In this especially suitable frame, it is easier to solve the evolution equation, and several important pieces of information about the solution can be read off directly. First, let us go to the interaction picture with respect to $H_{0}$ and $H_{ext}$: $$\left| \Psi\left( t\right) \right\rangle =U_{1}\left( t,t_{1}\right) \left| \varphi_{1}\left( t\right) \right\rangle \label{eqU1}$$ with: $$U_{1}\left( t,t_{1}\right) =e^{-\frac{i}{\hbar}H_{0}.\left( t-t_{1}\right) }U_{ext}\left( t,t_{1}\right)$$ and: $$U_{ext}\left( t,t_{1}\right) =\mathcal{T}\left( \exp\left( -\frac{i}{\hbar}\int_{t_{1}}^{t}H_{ext}\left( \overrightarrow{r_{op}},\overrightarrow {p_{op}},t^{\prime}\right) dt^{\prime}\right) \right)$$ where $\mathcal{T}$ is the Dyson time-ordering operator and $t_{1}$ an arbitrary time (different from $t$ by definition).
Equation (\[eq1\]) then becomes: $$i\hbar\frac{d}{dt}\left| \varphi_{1}\left( t\right) \right\rangle =e^{\frac{i}{\hbar}H_{0}.\left( t-t_{1}\right) }V_{em}\left( \overrightarrow{R_{op}}\left( t,t_{1}\right) ,t\right) e^{-\frac{i}{\hbar }H_{0}.\left( t-t_{1}\right) }\left| \varphi_{1}\left( t\right) \right\rangle \label{eq2}$$ where $\overrightarrow{R_{op}}$ is defined as: $$\overrightarrow{R_{op}}\left( t,t_{1}\right) =U_{ext}\left( t,t_{1}\right) ^{-1}\text{ }\overrightarrow{r_{op}}\text{ }U_{ext}\left( t,t_{1}\right)$$ The external Hamiltonian $H_{ext}$ being at most quadratic in position and momentum, $\overrightarrow{R_{op}}$ depends linearly on $\overrightarrow {r_{op}}$ and $\overrightarrow{p_{op}}$: $$\overrightarrow{R_{op}}\left( t,t_{1}\right) =A\left( t,t_{1}\right) \text{ }\overrightarrow{r_{op}}+B\left( t,t_{1}\right) \text{ }\overrightarrow{p_{op}}/m+\overrightarrow{\xi}\left( t,t_{1}\right)$$ and is simply obtained from the classical solution of Hamilton’s equations. One can show that the matrices $A$ and $B$ depend only on the quadratic terms of $H_{ext}$ ($\overset{\Rightarrow}{\alpha}\left( t\right) $, $\overset{\Rightarrow}{\beta}\left( t\right) $ and $\overset{\Rightarrow }{\gamma}\left( t\right) $), whereas $\overrightarrow{\xi}\left( t,t_{1}\right) $ also depends on its linear terms ($\overrightarrow{g}\left( t\right) $ and $\overrightarrow{f}\left( t\right) $). These matrices are in fact the well-known $ABCD$ matrices, usually used in Gaussian optics, and recently introduced in atom optics [@borde91; @antoineThese]. As far as $V_{em}$ is concerned, it may have diagonal terms (AC Stark shifts, slowly varying in space and time). However, in a first approach, we can take these terms constant and eliminate their common part by a unitary transformation.
Finally, $V_{em}$ can be taken as purely anti-diagonal: $$V_{em}\left( \overrightarrow{r_{op}},t\right) =V\left( \overrightarrow {r_{op}},t\right) \left( \begin{array} [c]{cc}0 & 1\\ 1 & 0 \end{array} \right)$$ with: $$V\left( \overrightarrow{r},t\right) =-2\hbar\Omega_{0}F\left( \overrightarrow{r},t\right) \cos\left( \omega t-\overrightarrow {k}.\overrightarrow{r}+\phi\right)$$ where $F\left( \overrightarrow{r},t\right) $ is the amplitude of the effective running laser beam, and $\Omega_{0}$ is the Rabi frequency of the atomic transition. Another important approximation is the rotating wave approximation (RWA), which consists in neglecting the off-resonant terms (i.e. those with frequency $\omega+\omega_{0}$) in (\[eq2\]). In our case, this approximation is assumed to be justified. Indeed, if the two effective levels are indirectly coupled through optical photons (as for Raman transitions, for example), one can show that the Bloch-Siegert effect is negligible [@kasevich92bis]. Finally, the evolution equation is: $$\frac{d}{dt}\left| \varphi_{1}\left( t\right) \right\rangle =i\Omega _{0}F\left( \overrightarrow{R_{op}}\left( t,t_{1}\right) ,t\right) \left( \begin{array} [c]{cc}0 & e^{-i\Phi_{op}\left( t,t_{1}\right) }\\ e^{+i\Phi_{op}\left( t,t_{1}\right) } & 0 \end{array} \right) \left| \varphi_{1}\left( t\right) \right\rangle$$ where $\Phi_{op}$ is defined as: $$\Phi_{op}\left( t,t_{1}\right) =\omega t-\omega_{0}\left( t-t_{1}\right) -\overrightarrow{k}.\overrightarrow{R_{op}}\left( t,t_{1}\right) +\phi$$ The next unitary transformation corresponds to the usual passage into the rotating frame. In fact, there is an infinity of such transformations [@antoineThese].
For example, the following family of transformations (indexed by the real parameter $x$): $$\left| \varphi_{1}\left( t\right) \right\rangle =U_{2}\left( x,t,t_{1}\right) \left| \varphi_{2}\left( x,t\right) \right\rangle \label{eqU2}$$ with: $$U_{2}\left( x,t,t_{1}\right) =\left( \begin{array} [c]{cc}e^{-i\Phi_{op}\left( t,t_{1}\right) .x} & 0\\ 0 & e^{-i\Phi_{op}\left( t,t_{1}\right) .\left( x-1\right) }\end{array} \right)$$ leads to the equation: $$\fbox{$\frac{d}{dt}\left\vert \varphi_{2}\left( x,t\right) \right\rangle =iM_{op}\left( x,t\right) \left\vert \varphi_{2}\left( x,t\right) \right\rangle $} \label{eq3}$$ with $M_{op}$ defined as: $$M_{op}\left( x,t\right) =\left( \begin{array} [c]{cc}x\left( \omega-\omega_{0}-\overrightarrow{k}.\overset{\cdot}{\overrightarrow {R_{op}}}-x\delta\right) & \Omega_{0}F_{op}\\ \Omega_{0}F_{op} & \left( x-1\right) \left( \omega-\omega_{0}-\overrightarrow{k}.\overset{\cdot}{\overrightarrow{R_{op}}}-\left( x-1\right) \delta\right) \end{array} \right)$$ where $\delta\left( t,t_{1}\right) $ is the “generalized recoil”: $$\delta\left( t,t_{1}\right) =\hbar\overrightarrow{k}\beta\overrightarrow {k}/2m$$ and where the dot above a letter denotes the time derivative. Each value of $x$ corresponds to a particular evolution equation. For example, the most “symmetric” choice is $x=1/2$, whereas the most “physical” choice is $x=1$, which leads to: $$M_{op}\left( 1,t\right) =\left( \begin{array} [c]{cc}\Delta_{op1} & \Omega_{0}F_{op}\\ \Omega_{0}F_{op} & 0 \end{array} \right)$$ where $\Delta_{op1}$ is the “generalized detuning” [@antoineThese]: $$\Delta_{op1}\left( t,t_{1}\right) =\omega-\omega_{0}-\overrightarrow {k}.\overset{\cdot}{\overrightarrow{R_{op}}}-\delta$$ i.e., the operator which generalizes the usual “free” detuning in the presence of several gravitational, inertial and trapping potentials.
It can also be written as: $$\Delta_{op1}\left( t,t_{1}\right) =\omega-\left( \omega_{0}+\overrightarrow {k}\left[ \frac{\beta}{m}\left( \overrightarrow{P_{op}}+\frac{\hbar \overrightarrow{k}}{2}\right) +\alpha\overrightarrow{R_{op}}+\overrightarrow {f}\right] \right)$$ where $\overrightarrow{P_{op}}$ is defined in the same way as $\overrightarrow {R_{op}}$ [@antoinejopb; @antoineThese]: $$\overrightarrow{P_{op}}\left( t,t_{1}\right) /m=C\left( t,t_{1}\right) \text{ }\overrightarrow{r_{op}}+D\left( t,t_{1}\right) \text{ }\overrightarrow{p_{op}}/m+\overrightarrow{\phi}\left( t,t_{1}\right)$$ The expression of $\Delta_{op1}\left( t,t_{1}\right) $ can be easily interpreted by considering the energy-momentum conservation for a non-excited atom absorbing a photon ($\omega$, $\overrightarrow{k}$) at the instant $t$ (the considered atom is then at the position $\overrightarrow{R}\left( t,t_{0}\right) $ with the momentum $\overrightarrow{P}\left( t,t_{0}\right) $ when the absorption occurs): $$H_{ext}\left( \overrightarrow{R}\left( t,t_{0}\right) ,\overrightarrow {P}\left( t,t_{0}\right) ,t\right) +E_{b}+\hbar\omega\text{ }=\text{ }H_{ext}\left( \overrightarrow{R}\left( t,t_{0}\right) ,\overrightarrow {P}\left( t,t_{0}\right) +\hbar\overrightarrow{k},t\right) +E_{a}$$ which gives the non-operatorial version of the condition $\Delta_{op1}\left( t,t_{1}\right) =0$ (the exact resonance condition in the presence of the external potentials described by $H_{ext}$). This generalized detuning can also be expressed directly with the coefficients of $H_{ext}$.
For example, if $H_{ext}$ is constant, the first terms of the Taylor expansion of $\Delta_{op1}$ (in $\alpha(t-t_{1})$ and $\gamma(t-t_{1})^{2}$) are found to be [@antoineThese]: $$\begin{aligned} \Delta_{op1}\left( t,t_{1}\right) & =\omega-\omega_{0}-\overrightarrow {k}.\frac{\overrightarrow{p_{op}}}{m}-\delta-\overrightarrow{k}.\overrightarrow{g}\left( t-t_{1}\right) -\overrightarrow{k}.\alpha .\overrightarrow{r_{op}}\label{eqdeltaop}\\ & -2\overrightarrow{k}.\alpha.\frac{\overrightarrow{p_{op}}}{m}\left( t-t_{1}\right) -\overrightarrow{k}.\left( \alpha^{2}+\gamma\right) .\overrightarrow{r_{op}}\left( t-t_{1}\right) -\overrightarrow{k}.\alpha.\overrightarrow{g}\left( t-t_{1}\right) ^{2}-...\nonumber\end{aligned}$$ where only the first five terms of the right-hand side are non-negligible in usual experiments (weak rotations and acceleration gradients on the Earth). We can then use chirped laser pulses to eliminate the gravitationally induced Doppler shift $\overrightarrow{k}.\overrightarrow{g}\left( t-t_{1}\right) $ and thus recover the usual “free” detuning. The other element of $M_{op}\left( x,t\right) $ which may depend on the two canonical operators is the effective amplitude $F\left( \overrightarrow {R_{op}}\left( t,t_{1}\right) ,t\right) $. The two main sources of its spatial dependence are its transverse profile and the speckle due to the various optical elements used to bring the lasers to the atoms. Generally, this speckle cannot be neglected; the best approach is to map it and then use the zones where it is sufficiently weak. Concerning the laser transverse profile, one can show that it appears roughly uniform to each individual atom of the initial atomic cloud (described by a statistical mixture of wave packets). One can therefore replace $\overrightarrow{R_{op}}\left( t,t_{1}\right) $ in $F$ by its semi-classical action on a typical wave packet which evolves inside such beam splitters.
For example, as we will see below, $\overrightarrow{R_{op}}\left( t,t_{1}\right) $ can be approximated by: $$\overrightarrow{R_{op}}\left( t,t_{1}\right) \simeq A\left( t,t_{1}\right) \text{ }\overrightarrow{r_{0}}+B\left( t,t_{1}\right) \left( \text{ }\overrightarrow{p_{0}}+\hbar\overrightarrow{k}/2\right) /m+\overrightarrow {\xi}\left( t,t_{1}\right)$$ where $\overrightarrow{r_{0}}$ and $\overrightarrow{p_{0}}$ are the initial central position and momentum of the considered atomic wave packet. The main result is that $F\left( \overrightarrow{R_{op}}\left( t,t_{1}\right) ,t\right) \simeq\overline{F}\left( t,t_{1}\right) $ is then independent of $\overrightarrow{r_{op}}$ and $\overrightarrow{p_{op}}$ (though still time-dependent). Resolution methods\[part4\] =========================== The main problem in the integration of (\[eq3\]) is that, in the general case considered in this paper, $M_{op}\left( x,t\right) $ is a $2\times2$ matrix which depends both on time and on the two non-commuting canonical operators $\overrightarrow{r_{op}}$ and $\overrightarrow{p_{op}}$. These are the two reasons why $M_{op}\left( x,t\right) $ does not commute with itself at different times, and why one cannot apply the standard rules to integrate (\[eq3\]) directly. However, in some particular cases, (\[eq3\]) may depend on one time-independent operator only, and one can solve it analytically in the representation of this operator. It is thus important to list as many of these exactly solvable cases as possible. This is the aim of the $z(t)$ theory, initiated in [@rosen32], improved in [@demkov69; @hioe85] and generalized recently in [@ishkhanyan00] (for a detailed review, see [@ishkhanyan00] and [@antoineThese]).
Among these exact solutions, let us highlight the Landau-Zener model (solution with parabolic cylinder functions), which accounts for the effect of a time-independent and uniform acceleration during the atomic splitting, and the Rosen-Zener model (solution with hyperbolic secant functions), which is significant in the study of matter wave solitons. Some other analytical methods are particularly useful to deal with equation (\[eq3\]): Floquet theory [@autler55] for periodic time-dependence and its generalizations (multi-periodic Floquet method [@ho84], (t,t’) theory [@peskin93]…); band theory (i.e. the use of Bloch states) when one cannot make the RWA [@letokhov78]; the use of quasi-probabilities (Wigner and Shirley representations) and phase space functions (the Q function of Husimi and Kano, the P distribution of Glauber and Sudarshan) when some QED effects cannot be neglected (for a recent review, see [@schleich01])… Apart from these particular exactly solvable cases, it is always possible to write the general solution of (\[eq3\]) as a formal expansion. This expansion may be linear (Dyson) or not (Magnus, Fer, Cayley…), and may preserve unitarity (for a review of the recent advances concerning the Magnus expansion, see [@iserles00; @moan01; @antoineThese]). However, due to the entanglement of the operators $\overrightarrow{r_{op}}$ and $\overrightarrow {p_{op}}$ in the different terms of these expansions, it is impossible to choose any representation which leads to an analytical expression of the solution. This problem can be solved either by eliminating one of the two canonical operators directly in equation (\[eq3\]) (“operatorial elimination method”), or by approximating the generalized detuning $\Delta_{op1}\left( t,t_{1}\right) $, or finally by solving the equation numerically.
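The Landau-Zener model highlighted above can be checked numerically. The following is a minimal sketch (not from the paper): with $\hbar=1$, a constant coupling $\Omega_{0}$, and a detuning swept linearly in time (as produced by a uniform acceleration), the population left in the initial state after the sweep approaches the Landau-Zener value $\exp(-2\pi\Omega_{0}^{2}/\dot{\Delta})$. The parameters $\Omega_{0}=1$ and sweep rate $\dot{\Delta}=10$ are purely illustrative.

```python
import numpy as np

def lz_survival(omega0, rate, t_max=20.0, dt=2e-4):
    """RK4 integration of the x = 1 form of eq. (3), with a linearly swept
    detuning Delta(t) = rate * t and constant coupling omega0 (hbar = 1),
    starting in the lower state a; returns the final population left in a."""
    def rhs(t, phi):
        b, a = phi
        return np.array([1j * (rate * t * b + omega0 * a),
                         1j * omega0 * b])
    phi = np.array([0j, 1 + 0j])          # (b, a): atoms start in a
    t, n = -t_max, int(round(2 * t_max / dt))
    for _ in range(n):
        k1 = rhs(t, phi)
        k2 = rhs(t + dt / 2, phi + dt / 2 * k1)
        k3 = rhs(t + dt / 2, phi + dt / 2 * k2)
        k4 = rhs(t + dt, phi + dt * k3)
        phi = phi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return abs(phi[1])**2

omega0, rate = 1.0, 10.0                   # illustrative values (assumed)
p_num = lz_survival(omega0, rate)
p_lz = np.exp(-2 * np.pi * omega0**2 / rate)   # Landau-Zener prediction
```

The agreement is only asymptotic: the residual difference oscillates and shrinks as the sweep interval is enlarged, which is the usual finite-time behavior of the Landau-Zener problem.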
The operatorial elimination method, which is detailed in [@antoineThese], leads to a double expansion, easily calculable but rather lengthy, which is why its use would be limited to numerical work. As for the approximations, we have already noted that the effect of rotations and acceleration gradients may often be neglected in the generalized detuning (leading to a trivial integration of (\[eq3\])). If not, several strategies may be used [@antoineThese]: 1. freezing $\Delta_{op1}\left( t,t_{1}\right) $ at a particular time (the mid-time, for example) or taking a temporal average, and choosing the resulting time-independent representation; 2. or replacing $\overrightarrow{r_{op}}$ and $\overrightarrow{p_{op}}$ by their semi-classical values (WKB approximation); 3. or replacing only one of these operators ($\overrightarrow{r_{op}}$, for example) by its action on the initial atomic wave packet (or on a wave packet closer to the final solution). Once equation (\[eq3\]) is made scalar, one can use the previous analytical methods, the previous expansions (truncated when they converge, see [@miao00; @moan01]), or some intermediate approaches which are based on the eigenstates of the matrix $M_{op}\left( x,t\right) $ (the super-adiabatic scheme [@berry90] or the successive adiabatic states method [@antoineThese]). The latter are extremely interesting because they directly provide important information on the solution, such as the true energies of the system “atom – laser – other external fields” and the corresponding group velocities.
Several numerical methods may also be implemented (for a recent review, see [@lubich02]): Magnus expansion, exponential midpoint rule, Runge-Kutta method, Strang-Marchuk-Trotter method, Chebyshev or Lanczos approximations… Finally, we obtain a solution which is more or less close to the exact solution of (\[eq3\]): $$\left| \varphi_{sol}\left( t\right) \right\rangle =S_{op}\left( t,t_{0}\right) \left| \varphi_{2}\left( t_{0}\right) \right\rangle \simeq\left| \varphi_{2}\left( t\right) \right\rangle$$ with the evolution operator: $$S_{op}\left( t,t_{0}\right) \simeq\mathcal{T}\left( \exp\left( i\int_{t_{0}}^{t}M_{op}\left( t^{\prime}\right) dt^{\prime}\right) \right)$$ which can be written explicitly as an $S$ matrix between the initial and final atomic states: $$\left( \begin{array} [c]{c}\left| b_{sol}\left( t\right) \right\rangle \\ \left| a_{sol}\left( t\right) \right\rangle \end{array} \right) =\left( \begin{array} [c]{cc}S_{bb,op}\left( t,t_{0}\right) & S_{ba,op}\left( t,t_{0}\right) \\ S_{ab,op}\left( t,t_{0}\right) & S_{aa,op}\left( t,t_{0}\right) \end{array} \right) \left( \begin{array} [c]{c}\left| b_{2}\left( t_{0}\right) \right\rangle \\ \left| a_{2}\left( t_{0}\right) \right\rangle \end{array} \right)$$ The resulting action of this $S$ matrix on the initial atomic wave packet is a possible change of internal state (generalized Rabi oscillations) and a structuring into several wave packets that can be quite different from the initial one. In particular, the group velocities of the created wave packets may be identical or not (Borrmann effect, see below) depending on the value of the generalized detuning.
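As a minimal illustration of such numerical schemes (here the exponential midpoint rule, with each short-time propagator computed exactly for a $2\times2$ Hermitian generator so that unitarity is preserved by construction), the following sketch builds the $S$ matrix for a constant coupling (recovering Rabi oscillations) and for a hypothetical chirped detuning with a Gaussian envelope; the time-dependence chosen is illustrative only:

```python
import numpy as np

def expi_herm2(H, dt):
    """exp(i H dt) for a 2x2 Hermitian H, via H = c0*I + A with A traceless:
    since A^2 = |c|^2 I, exp(i H dt) = e^{i c0 dt}(cos(|c|dt) I + i sin(|c|dt) A/|c|)."""
    c0 = 0.5 * np.trace(H).real
    A = H - c0 * np.eye(2)
    c = np.sqrt(0.5 * np.trace(A @ A).real)
    if c < 1e-300:
        return np.exp(1j * c0 * dt) * np.eye(2)
    return np.exp(1j * c0 * dt) * (np.cos(c * dt) * np.eye(2)
                                   + 1j * np.sin(c * dt) / c * A)

def S_matrix(M_of_t, t0, t, n=2000):
    """Exponential midpoint rule for S = T exp(i int_{t0}^{t} M(t') dt'):
    a time-ordered product of short, exactly unitary propagators."""
    dt = (t - t0) / n
    S = np.eye(2, dtype=complex)
    for j in range(n):
        S = expi_herm2(M_of_t(t0 + (j + 0.5) * dt), dt) @ S
    return S

# Sanity check: constant M = Omega0 * sigma_x reproduces Rabi oscillations,
# |S_ba|^2 = sin^2(Omega0 (t - t0)).
S_rabi = S_matrix(lambda t: np.array([[0.0, 1.0], [1.0, 0.0]]), 0.0, np.pi / 4)

# Illustrative time-dependent case (hypothetical chirp + Gaussian envelope):
M_chirp = lambda t: np.array([[0.5 * t, np.exp(-t**2)],
                              [np.exp(-t**2), 0.0]])
S_chirp = S_matrix(M_chirp, -4.0, 4.0)
```

Because each factor is exactly unitary, the scheme cannot lose norm, which is one reason exponential integrators are preferred over generic ODE solvers for this kind of equation.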
Return to the initial picture and equivalent instantaneous interaction\[part5\] =============================================================================== Solution in the initial representation -------------------------------------- Once we have obtained the matrix $S_{op}\left( t,t_{0}\right) $, we can apply the inverse unitary transformations of (\[eqU1\]) and (\[eqU2\]) and go back to the initial representation. One obtains (written here for the symmetric choice $x=1/2$): $$\begin{aligned} \left| \Psi_{sol}\left( t\right) \right\rangle & =e^{-i\int_{t_{0}}^{t}\delta\left( t^{\prime},t_{1}\right) dt^{\prime}/4}.U_{1}\left( t,t_{1}\right) U_{2}\left( 1/2,t,t_{1}\right) \label{eq4}\\ & S_{op}\left( t,t_{0}\right) \text{ }U_{2}^{-1}\left( 1/2,t_{0},t_{1}\right) U_{1}^{-1}\left( t_{0},t_{1}\right) \left| \Psi\left( t_{0}\right) \right\rangle \nonumber\end{aligned}$$ which expresses the link between the general solution $\left| \Psi _{sol}\left( t\right) \right\rangle $ and the initial ket $\left| \Psi\left( t_{0}\right) \right\rangle $. It may be noticed that the solution depends on $t_{1}$, the arbitrary time introduced to define the previous unitary transformations. This instant has no physical meaning and can be removed explicitly at each step of our calculations [@antoineThese]. However, it is more interesting to keep it and possibly assign it a particular value (the central time of the interaction, for example), which may be useful both for the calculations and for the interpretation of the obtained solutions. The relevance of this instant for describing the atomic beam splitting as an equivalent instantaneous interaction (generalized $ttt$ scheme) will appear more clearly at the end of this part. It is easy to interpret the expression of $\left| \Psi_{sol}\left( t\right) \right\rangle $ by writing the typical terms $\exp\left( \pm\frac{i}{2}\Phi_{op}\left( t,t_{0}\right) \right) $ of $U_{2}^{\pm1}$ as a product of exponentials.
Indeed, let us consider the following initial ket: $$\left| \Psi\left( t_{0}\right) \right\rangle =\left( \begin{array} [c]{c}0\\ \left| a\left( t_{0}\right) \right\rangle \end{array} \right)$$ which represents the incident atomic wave packet (atoms in the lower state $a$). As a result, the upper component of $\left| \Psi_{sol}\left( t\right) \right\rangle $ is: $$\begin{aligned} & e^{i\theta_{1ba}}\text{ }U_{ext}\left( t,t_{1}\right) \text{ }e^{\frac {i}{2}\overrightarrow{k}A\left( t,t_{1}\right) \overrightarrow{r_{op}}}\text{ }e^{\frac{i}{2}\overrightarrow{k}B\left( t,t_{1}\right) \overrightarrow{p_{op}}/m}\text{ }\label{eq5}\\ & S_{ba,op}\left( t,t_{0}\right) \text{ }e^{\frac{i}{2}\overrightarrow {k}A\left( t_{0},t_{1}\right) \overrightarrow{r_{op}}}\text{ }e^{\frac{i}{2}\overrightarrow{k}B\left( t_{0},t_{1}\right) \overrightarrow{p_{op}}/m}\text{ }U_{ext}\left( t_{1},t_{0}\right) \text{ }\left| a\left( t_{0}\right) \right\rangle \nonumber\end{aligned}$$ which can be interpreted as follows: 1. evolution from $t_{0}$ to $t_{1}$ due to $U_{ext}$, i.e. to the gravito-inertial and trapping potentials described by $H_{ext}$ 2. translation of $-\widetilde{B}\left( t_{0},t_{1}\right) \frac {\hbar\overrightarrow{k}}{2m}$ in position and $+\widetilde{A}\left( t_{0},t_{1}\right) \frac{\hbar\overrightarrow{k}}{2}$ in momentum due to the term $\exp\left( \frac{i}{2}\overrightarrow{k}A\left( t_{0},t_{1}\right) \overrightarrow{r_{op}}\right) .\exp\left( \frac{i}{2}\overrightarrow {k}B\left( t_{0},t_{1}\right) \overrightarrow{p_{op}}/m\right) $ 3. action of the off-diagonal element of the $S$ matrix $S_{ba,op}\left( t,t_{0}\right) $ (change of internal state and dispersive structuring) 4. again two translations: $-\widetilde{B}\left( t,t_{1}\right) \frac{\hbar\overrightarrow{k}}{2m}$ in position and $+\widetilde{A}\left( t,t_{1}\right) \frac{\hbar\overrightarrow{k}}{2}$ in momentum 5. then the external evolution from $t_{1}$ to $t$ due to $H_{ext}$ 6. 
and finally a phase factor $\exp\left( i\theta_{1ba}\right) $ with $$\begin{aligned} \theta_{1ba} & =-\frac{E_{b}\left( t-t_{0}\right) }{\hbar}-\frac{\left( \omega-\omega_{0}\right) \left( t-t_{0}\right) -\overrightarrow{k}\left( \overrightarrow{\xi}\left( t,t_{1}\right) +\overrightarrow{\xi}\left( t_{0},t_{1}\right) \right) }{2}-\omega t_{0}-\phi\\ & +\frac{\hbar\overrightarrow{k}}{8m}\left( A\left( t,t_{1}\right) \widetilde{B}\left( t,t_{1}\right) +A\left( t_{0},t_{1}\right) \widetilde{B}\left( t_{0},t_{1}\right) \right) \overrightarrow{k}-\int_{t_{0}}^{t}\frac{\delta\left( t^{\prime},t_{1}\right) }{4}dt^{\prime}\end{aligned}$$ Similarly, the lower component of $\left| \Psi_{sol}\left( t\right) \right\rangle $ is equal to: $$\begin{aligned} & e^{i\theta_{1aa}}\text{ }U_{ext}\left( t,t_{1}\right) \text{ }e^{-\frac{i}{2}\overrightarrow{k}A\left( t,t_{1}\right) \overrightarrow {r_{op}}}e^{-\frac{i}{2}\overrightarrow{k}B\left( t,t_{1}\right) \overrightarrow{p_{op}}/m}\label{eq6}\\ & \text{ }S_{aa,op}\text{ }e^{\frac{i}{2}\overrightarrow{k}A\left( t_{0},t_{1}\right) \overrightarrow{r_{op}}}e^{\frac{i}{2}\overrightarrow {k}B\left( t_{0},t_{1}\right) \overrightarrow{p_{op}}/m}\text{ }U_{ext}\left( t_{1},t_{0}\right) \left| a\left( t_{0}\right) \right\rangle \nonumber\end{aligned}$$ with the following similar interpretation: 1. evolution from $t_{0}$ to $t_{1}$ due to $H_{ext}$ 2. translations of $-\widetilde{B}\left( t_{0},t_{1}\right) \frac {\hbar\overrightarrow{k}}{2m}$ in position and of $+\widetilde {A}\left( t_{0},t_{1}\right) \frac{\hbar\overrightarrow{k}}{2}$ in momentum 3. action of the diagonal element $S_{aa,op}\left( t,t_{0}\right) $ of the $S$ matrix (dispersive structuring without internal change) 4. translations of $+\widetilde{B}\left( t,t_{1}\right) \frac {\hbar\overrightarrow{k}}{2m}$ in position and of $-\widetilde {A}\left( t,t_{1}\right) \frac{\hbar\overrightarrow{k}}{2}$ in momentum 5. evolution from $t_{1}$ to $t$ due to $H_{ext}$ 6. 
and a phase factor $\exp\left( i\theta_{1aa}\right) $ with $$\begin{aligned} \theta_{1aa} & =-\frac{E_{a}\left( t-t_{0}\right) }{\hbar}+\frac{\left( \omega-\omega_{0}\right) \left( t-t_{0}\right) -\overrightarrow{k}\left( \overrightarrow{\xi}\left( t,t_{1}\right) -\overrightarrow{\xi}\left( t_{0},t_{1}\right) \right) }{2}\\ & +\frac{\hbar\overrightarrow{k}}{8m}\left( A\left( t,t_{1}\right) \widetilde{B}\left( t,t_{1}\right) +A\left( t_{0},t_{1}\right) \widetilde{B}\left( t_{0},t_{1}\right) \right) \overrightarrow{k}-\int_{t_{0}}^{t}\frac{\delta\left( t^{\prime},t_{1}\right) }{4}dt^{\prime}\end{aligned}$$ which differs from $\theta_{1ba}$ by $\omega t+\phi-\overrightarrow {k}.\overrightarrow{\xi}\left( t,t_{1}\right) $. Generalized atomic Borrmann effect\[part52\] -------------------------------------------- Under certain conditions, the transfer matrix $S_{op}$ may have only a weak effect on the external state of the incident atoms. In this case, the centers of the two main wave packets (associated with the upper and lower adiabatic atomic states, see below) evolve along the same trajectory during the splitting: this is the (atomic) Borrmann effect. This effect is well known in the dynamical diffraction of X rays [@borrmann41], of neutron waves [@knowles56; @rauch78], and more recently of atomic waves [@oberthaler96]. Historically, this effect was defined as a more specific phenomenon, valid for any kind of wave diffracting in an absorbing crystal. It can be stated as: “for a certain angle of incidence with respect to the crystal surface, the incident wave packet propagates inside the crystal without attenuation, along a unique trajectory orthogonal to the crystal surface”.
This particular angle of incidence is the well-known Bragg angle, defined by: $$\overrightarrow{P}.\left( \overrightarrow{p_{0}}+\overrightarrow{P}/2\right) =0$$ where $\overrightarrow{p_{0}}$ is the central momentum of the incident wave packet and $\overrightarrow{P}$ is the quantum of momentum which is communicated to the diffracted wave packet (the Bragg condition is written here for first-order diffraction). Conversely, two wave packets with two different trajectories are created inside the crystal if this condition is not fulfilled (defining the well-known Borrmann fan). Furthermore, this anomalous transmission is very sensitive to the Bragg condition, and any deviation from it greatly amplifies the angle between the two trajectories [@rauch00]. In the case of atom-laser interactions, there is no absorption of atoms, and the Bragg condition (which is nothing but energy-momentum conservation, i.e. the resonance condition) is partly relaxed due to the atomic internal structure: $$\omega-\omega_{0}-\overrightarrow{k}.\left( \overrightarrow{p_{0}}+\hbar\overrightarrow{k}/2\right) /m=0$$ To date, this atomic Borrmann effect has only been studied in the free case [@oberthaler96] (for which $H_{ext}$ is limited to the usual kinetic Hamiltonian $\overrightarrow{p}^{2}/2m$) or in the presence of a time-independent and uniform acceleration [@lammerzahl99]. In this paper, it is obtained in the presence of the various gravitational, inertial and trapping potentials described by $H_{ext}$. Moreover, we will show that the common Borrmann trajectory is equal to the average of the two extreme trajectories: $A$ $\overrightarrow{r_{0}}+B$ $\overrightarrow{p_{0}}/m+\overrightarrow{\xi}$ (atom absorbing a photon at the final time $t$) and $A$ $\overrightarrow{r_{0}}+B\left( \text{ }\overrightarrow{p_{0}}+\hbar\overrightarrow{k}\right) /m+\overrightarrow{\xi}$ (atom absorbing a photon at the initial time $t_{0}$).
Indeed, let us consider the previous example (atoms initially in state $a$) and suppose that $S_{op}$ has a negligible effect on the central position and momentum of the corresponding incident wave packet. Then, according to the expression (\[eq4\]) and the Ehrenfest theorem, we can show that the central position $\overrightarrow{r_{0}}$ of the initial wave packet is changed into: $$\overrightarrow{r_{c}}\left( t,t_{0}\right) =A\left( \overrightarrow{r_{0}}-\widetilde{B}\frac{\hbar\overrightarrow{k}}{2m}\right) +B\left( \overrightarrow{p_{0}}+\left( 1+\widetilde{A}\right) \frac{\hbar \overrightarrow{k}}{2}\right) /m+\overrightarrow{\xi}$$ where $\overrightarrow{p_{0}}$ is its initial central momentum. Finally, thanks to some simple properties of $ABCD$ matrices, we obtain the previously stated result: $$\begin{aligned} \overrightarrow{r_{c}}\left( t,t_{0}\right) & =A\overrightarrow{r_{0}}+B\left( \overrightarrow{p_{0}}+\frac{\hbar\overrightarrow{k}}{2}\right) /m+\overrightarrow{\xi}\\ & =\frac{1}{2}\left[ \left( A\overrightarrow{r_{0}}+B\overrightarrow{p_{0}}/m+\overrightarrow{\xi}\right) +\left( A\overrightarrow{r_{0}}+B\left( \overrightarrow{p_{0}}+\hbar\overrightarrow{k}\right) /m+\overrightarrow{\xi }\right) \right]\end{aligned}$$ This unique central trajectory differs by $B\left( t,t_{0}\right) \hbar\overrightarrow{k}/2m$ from the one obtained without any splitting potential. This means that, even for the atoms which are finally in the same internal state as the initial one (no effective internal change), there is a non-trivial change of their central trajectory, which results in a measurable spatial shift at the end of the interaction (see Figure \[figborrmann\]).
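This result can be checked in the simplest concrete setting. The sketch below (illustrative, not from the paper) takes 1D free fall, for which the $ABCD$ coefficients reduce to $A=1$, $B=t-t_{0}$, $\xi=-g(t-t_{0})^{2}/2$ (axis pointing up, an assumed sign convention), with $\widetilde{A}=A$ and $\widetilde{B}=B$ in one dimension; the $^{87}$Rb-like mass and effective wavevector are assumed values:

```python
hbar = 1.054571817e-34
m = 1.44e-25        # atomic mass [kg], 87Rb-like (assumed)
k = 1.6e7           # effective wavevector [1/m] (assumed)
g = 9.81            # gravity, axis pointing up (assumed sign convention)
r0, p0 = 0.0, 0.0   # initial central position and momentum
t = 1e-2            # time elapsed since the pulse [s] (assumed)

# 1D uniform gravity: A = 1, B = t, xi = -g t^2 / 2, Atilde = A, Btilde = B.
A, B, xi = 1.0, t, -0.5 * g * t**2

# Borrmann central trajectory, as stated in the text:
r_c = A * (r0 - B * hbar * k / (2 * m)) \
    + B * (p0 + (1 + A) * hbar * k / 2) / m + xi

# ... equals the average of the two extreme trajectories:
r_late = A * r0 + B * p0 / m + xi                 # photon absorbed at t
r_early = A * r0 + B * (p0 + hbar * k) / m + xi   # photon absorbed at t0

# Shift relative to the unsplit trajectory: B * hbar k / (2m), here ~60 um
shift = r_c - (A * r0 + B * p0 / m + xi)
```

With these numbers the shift is of order tens of micrometers after 10 ms, which illustrates why the effect is measurable at the output of the interferometer.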
[Figure \[figborrmann\]: FigurePRAindia1.ps] As for the central momentum, we similarly obtain the following two momenta: $$\overrightarrow{p_{c}}\left( t,t_{0}\right) =mC\overrightarrow{r_{0}}+D\left( \overrightarrow{p_{0}}+\frac{\hbar\overrightarrow{k}}{2}\right) +m\overrightarrow{\phi}\pm\frac{\hbar\overrightarrow{k}}{2}$$ which differ from each other by $\hbar\overrightarrow{k}$, as expected. Conversely, $S_{op}$ may have a non-negligible effect on the external state of the incident wave packet (change of central position and momentum, respectively due to the addition of a non-trivial group velocity and to the change of momentum distribution). In this case, one can show that the initial wave packet is split into two main wave packets, which evolve along two different trajectories (i.e. with two distinct group velocities) that form the atomic Borrmann fan. For each of these wave packets, we can do the same calculation as before and obtain the trajectories in the initial frame [@antoineThese]. Generalized ttt scheme\[part53\] -------------------------------- However, in both previous cases, it is noticeable that the expression (\[eq4\]) can be written as a product of three evolution operators: $$\left| \Psi_{sol}\left( t\right) \right\rangle =U_{1}\left( t,t_{1}\right) \text{{\Large S}}_{_{1}}\left( \overrightarrow{r_{op}},\overrightarrow{p_{op}},t,t_{0},t_{1}\right) U_{1}\left( t_{1},t_{0}\right) \left| \Psi\left( t_{0}\right) \right\rangle$$ where $U_{1}\left( t_{1},t_{0}\right) $ and $U_{1}\left( t,t_{1}\right) $ describe the evolution due to $H_{0}+H_{ext}$ only, and where [S]{}$_{_{1}}$ represents the evolution part depending on the splitting potential $V$. The aim of this arrangement is to clearly separate the effects of $H_{ext}$ and $V$, and to describe the interaction as an effective instantaneous interaction (generalization of the $ttt$ scheme introduced in [@borde04]).
In addition to the link with the infinitely thin modeling of atomic beam splitters, we aim to provide a clear and practical beam-splitter model, in the presence of the external fields described by $H_{ext}$, which can be used easily in atom interferometric phase shift calculations [@antoinepla; @antoinejopb; @antoineThese]. To clearly separate the splitting terms from the translation and phase terms in (\[eq5\]) and (\[eq6\]), we can transform the following expression: $$e^{\pm\frac{i}{2}\Phi_{op}\left( t_{0},t_{1}\right) }\text{ }S_{uv}\left( \overrightarrow{r_{op}},\overrightarrow{p_{op}}\right) \text{ }e^{\mp\frac {i}{2}\Phi_{op}\left( t_{0},t_{1}\right) }$$ into [@antoineThese]: $$S_{uv}\left( \overrightarrow{r_{op}}\mp\widetilde{B}\left( t_{0},t_{1}\right) \frac{\hbar\overrightarrow{k}}{2m},\overrightarrow{p_{op}}\pm\widetilde{A}\left( t_{0},t_{1}\right) \frac{\hbar\overrightarrow{k}}{2}\right)$$ thanks to the following algebraic properties:$$\begin{aligned} e^{A}.f\left( B\right) .e^{-A} & =f\left( e^{A}.B.e^{-A}\right) \\ e^{A}.B.e^{-A} & =B+\left[ A,B\right] +\frac{1}{2!}\left[ A,\left[ A,B\right] \right] +...\end{aligned}$$ where $A$ and $B$ refer to operators or square matrices, and where $f$ is a function. Finally, the scattering matrix [S]{}$_{1}$ can be written as: $$\text{{\Large S}}_{1}=\left( \begin{array} [c]{cc}\text{{\Large S}}_{1,bb} & \text{{\Large S}}_{1,ba}\\ \text{{\Large S}}_{1,ab} & \text{{\Large S}}_{1,aa}\end{array} \right)$$ where its elements are equal to ($a$ and $b$ are the lower and upper states respectively): 1. 
for the $a\longrightarrow a$ transition:$$\text{{\Large S}}_{1,aa}=e^{i\Phi_{aa}}e^{\frac{i}{\hbar}\overrightarrow {r_{op}}.\overrightarrow{p_{aa}}}e^{-\frac{i}{\hbar}\overrightarrow{p_{op}}.\overrightarrow{r_{aa}}}S_{aa}\left( \overrightarrow{r_{op}}-\widetilde {B}\left( t_{0},t_{1}\right) \frac{\hbar\overrightarrow{k}}{2m},\overrightarrow{p_{op}}+\widetilde{A}\left( t_{0},t_{1}\right) \frac {\hbar\overrightarrow{k}}{2}\right)$$ with: $$\begin{aligned} \overrightarrow{p_{aa}} & =-\frac{\widetilde{A}\left( t,t_{1}\right) -\widetilde{A}\left( t_{0},t_{1}\right) }{2}\hbar\overrightarrow{k}\\ \overrightarrow{r_{aa}} & =+\frac{\widetilde{B}\left( t,t_{1}\right) -\widetilde{B}\left( t_{0},t_{1}\right) }{2}\frac{\hbar\overrightarrow{k}}{m}\\ \Phi_{aa} & =+\frac{1}{2}\left[ \left( \omega-\omega_{0}\right) \left( t-t_{0}\right) -\overrightarrow{k}.\left( \overrightarrow{\xi}\left( t,t_{1}\right) -\overrightarrow{\xi}\left( t_{0},t_{1}\right) \right) \right] \\ & +\frac{\hbar\overrightarrow{k}}{8m}\left[ A\left( t,t_{1}\right) \widetilde{B}\left( t,t_{1}\right) +A\left( t_{0},t_{1}\right) \widetilde{B}\left( t_{0},t_{1}\right) -2A\left( t_{0},t_{1}\right) \widetilde{B}\left( t,t_{1}\right) \right] \overrightarrow{k}-\int_{t_{0}}^{t}\frac{\delta\left( t^{\prime},t_{1}\right) }{4}dt^{\prime}\end{aligned}$$ 2. 
for the $a\longrightarrow b$ transition:$$\text{{\Large S}}_{1,ba}=e^{i\Phi_{ba}}e^{\frac{i}{\hbar}\overrightarrow {r_{op}}.\overrightarrow{p_{ba}}}e^{-\frac{i}{\hbar}\overrightarrow{p_{op}}.\overrightarrow{r_{ba}}}S_{ba}\left( \overrightarrow{r_{op}}-\widetilde {B}\left( t_{0},t_{1}\right) \frac{\hbar\overrightarrow{k}}{2m},\overrightarrow{p_{op}}+\widetilde{A}\left( t_{0},t_{1}\right) \frac {\hbar\overrightarrow{k}}{2}\right)$$ with: $$\begin{aligned} \overrightarrow{p_{ba}} & =+\frac{\widetilde{A}\left( t,t_{1}\right) +\widetilde{A}\left( t_{0},t_{1}\right) }{2}\hbar\overrightarrow{k}\\ \overrightarrow{r_{ba}} & =-\frac{\widetilde{B}\left( t,t_{1}\right) +\widetilde{B}\left( t_{0},t_{1}\right) }{2}\frac{\hbar\overrightarrow{k}}{m}\\ \Phi_{ba} & =-\left[ \omega\frac{t+t_{0}}{2}-\omega_{0}\left( \frac{t+t_{0}}{2}-t_{1}\right) +\phi-\overrightarrow{k}.\frac{\overrightarrow {\xi}\left( t,t_{1}\right) +\overrightarrow{\xi}\left( t_{0},t_{1}\right) }{2}\right] \\ & +\frac{\hbar\overrightarrow{k}}{8m}\left[ A\left( t,t_{1}\right) \widetilde{B}\left( t,t_{1}\right) +A\left( t_{0},t_{1}\right) \widetilde{B}\left( t_{0},t_{1}\right) +2A\left( t_{0},t_{1}\right) \widetilde{B}\left( t,t_{1}\right) \right] \overrightarrow{k}-\int_{t_{0}}^{t}\frac{\delta\left( t^{\prime},t_{1}\right) }{4}dt^{\prime}\end{aligned}$$ 3. 
for the $b\longrightarrow a$ transition:$$\text{{\Large S}}_{1,ab}=e^{i\Phi_{ab}}e^{\frac{i}{\hbar}\overrightarrow {r_{op}}.\overrightarrow{p_{ab}}}e^{-\frac{i}{\hbar}\overrightarrow{p_{op}}.\overrightarrow{r_{ab}}}S_{ab}\left( \overrightarrow{r_{op}}+\widetilde {B}\left( t_{0},t_{1}\right) \frac{\hbar\overrightarrow{k}}{2m},\overrightarrow{p_{op}}-\widetilde{A}\left( t_{0},t_{1}\right) \frac {\hbar\overrightarrow{k}}{2}\right)$$ with: $$\begin{aligned} \overrightarrow{p_{ab}} & =-\frac{\widetilde{A}\left( t,t_{1}\right) +\widetilde{A}\left( t_{0},t_{1}\right) }{2}\hbar\overrightarrow{k}\\ \overrightarrow{r_{ab}} & =+\frac{\widetilde{B}\left( t,t_{1}\right) +\widetilde{B}\left( t_{0},t_{1}\right) }{2}\frac{\hbar\overrightarrow{k}}{m}\\ \Phi_{ab} & =+\left[ \omega\frac{t+t_{0}}{2}-\omega_{0}\left( \frac{t+t_{0}}{2}-t_{1}\right) +\phi-\overrightarrow{k}.\frac{\overrightarrow {\xi}\left( t,t_{1}\right) +\overrightarrow{\xi}\left( t_{0},t_{1}\right) }{2}\right] \\ & +\frac{\hbar\overrightarrow{k}}{8m}\left[ A\left( t,t_{1}\right) \widetilde{B}\left( t,t_{1}\right) +A\left( t_{0},t_{1}\right) \widetilde{B}\left( t_{0},t_{1}\right) +2A\left( t_{0},t_{1}\right) \widetilde{B}\left( t,t_{1}\right) \right] \overrightarrow{k}-\int_{t_{0}}^{t}\frac{\delta\left( t^{\prime},t_{1}\right) }{4}dt^{\prime}\end{aligned}$$ 4. 
for the $b\longrightarrow b$ transition:$$\text{{\Large S}}_{1,bb}=e^{i\Phi_{bb}}e^{\frac{i}{\hbar}\overrightarrow {r_{op}}.\overrightarrow{p_{bb}}}e^{-\frac{i}{\hbar}\overrightarrow{p_{op}}.\overrightarrow{r_{bb}}}S_{bb}\left( \overrightarrow{r_{op}}+\widetilde {B}\left( t_{0},t_{1}\right) \frac{\hbar\overrightarrow{k}}{2m},\overrightarrow{p_{op}}-\widetilde{A}\left( t_{0},t_{1}\right) \frac {\hbar\overrightarrow{k}}{2}\right)$$ with: $$\begin{aligned} \overrightarrow{p_{bb}} & =+\frac{\widetilde{A}\left( t,t_{1}\right) -\widetilde{A}\left( t_{0},t_{1}\right) }{2}\hbar\overrightarrow{k}\\ \overrightarrow{r_{bb}} & =-\frac{\widetilde{B}\left( t,t_{1}\right) -\widetilde{B}\left( t_{0},t_{1}\right) }{2}\frac{\hbar\overrightarrow{k}}{m}\\ \Phi_{bb} & =-\frac{1}{2}\left[ \left( \omega-\omega_{0}\right) \left( t-t_{0}\right) -\overrightarrow{k}.\left( \overrightarrow{\xi}\left( t,t_{1}\right) -\overrightarrow{\xi}\left( t_{0},t_{1}\right) \right) \right] \\ & +\frac{\hbar\overrightarrow{k}}{8m}\left[ A\left( t,t_{1}\right) \widetilde{B}\left( t,t_{1}\right) +A\left( t_{0},t_{1}\right) \widetilde{B}\left( t_{0},t_{1}\right) -2A\left( t_{0},t_{1}\right) \widetilde{B}\left( t,t_{1}\right) \right] \overrightarrow{k}-\int_{t_{0}}^{t}\frac{\delta\left( t^{\prime},t_{1}\right) }{4}dt^{\prime}\end{aligned}$$ The interpretation of these terms is simple and constitutes the core of the generalized $ttt$ scheme: 1. from $t_{0}$ to $t_{1}$, the initial ket $\left\vert \Psi\left( t_{0}\right) \right\rangle $ evolves with $H_{0}+H_{ext}$ only (as if there was no splitting potential); 2. at $t_{1}$, this evolved ket is subject to an effective instantaneous interaction which modifies its structure (structuring into several wave packets, see part \[part6\] and the illustration of the ttt scheme in figure \[figttt\]), its central position and momentum, and its phase; 3. from $t_{1}$ to $t$, the obtained wave packets evolve with $H_{0}+H_{ext}$ only, as before $t_{1}$. 
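The two algebraic properties quoted earlier, $e^{A}f(B)e^{-A}=f(e^{A}Be^{-A})$ and the Baker-Campbell-Hausdorff-type series $e^{A}Be^{-A}=B+[A,B]+\frac{1}{2!}[A,[A,B]]+\dots$, can be verified numerically on small matrices. The sketch below uses fixed, arbitrary $2\times2$ test matrices (chosen for illustration only) and a matrix exponential built from an eigendecomposition:

```python
import numpy as np

def expm2(A):
    """Matrix exponential of a diagonalizable 2x2 matrix via eigendecomposition."""
    w, V = np.linalg.eig(A)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

comm = lambda X, Y: X @ Y - Y @ X

# Fixed small test matrices (arbitrary, illustration only):
A = 0.02 * np.array([[1.0, 1.0j], [0.5, -1.0]])
B = np.array([[1.0, 2.0], [0.5j, -1.0]])
eA, emA = expm2(A), expm2(-A)

# Property 1:  e^A f(B) e^{-A} = f(e^A B e^{-A}), here with f(B) = B @ B
lhs = eA @ (B @ B) @ emA
rhs = (eA @ B @ emA) @ (eA @ B @ emA)

# Property 2 (truncated series):  e^A B e^{-A} ~ B + [A,B] + [A,[A,B]]/2!
exact = eA @ B @ emA
series = B + comm(A, B) + comm(A, comm(A, B)) / 2
```

The first property is exact; the truncated series is accurate here because $A$ is small, which is exactly the regime in which the translation-plus-phase factorization of the [S]{}$_{1}$ elements is read off.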
We can show that the $t_{1}$ value which most simplifies the previous expressions is the central time of the interaction: $$t_{1}=t_{1/2}=\frac{t+t_{0}}{2}$$ [Figure \[figttt\]: FigurePRAindia2.ps] Let us now examine the features of the wave packets which appear inside the beam splitters. Before studying this wave packet structuring in the general case (i.e. in the presence of trapping and gravito-inertial potentials), we will focus on the simple “free” case (i.e. without any such other external potential). Structure of the solutions in the free case\[part6\] ==================================================== The study of the free case is important for at least two reasons. First, as we have seen in part \[part3\], the effect of the quadratic terms of $H_{ext}$ (rotations, gradients, trap…) can often be neglected during the resolution of equation (\[eq3\]), and any uniform acceleration can be easily removed by modulating the laser frequency. In this case, equation (\[eq3\]) is equivalent to the one stemming from the free case. Second, as far as we know, there has been to date no global study of matter-wave beam splitters, even in the simplest free case. Indeed, each of the previously cited works studies only a few aspects of the atomic beam splitting: mechanical effect, internal and external splitting, group velocities… Actually, the question of what exactly comes out of an atomic beam splitter is still open, even for a simple laser potential (temporal square amplitude) and an incident Gaussian matter wave packet. It is therefore necessary to specify the exact structuring of this incident wave packet, namely the number of created wave packets (two main ones for each transition amplitude), their amplitude (Rabi oscillations, velocity selection, anomalous dispersion…), their group velocity (Borrmann fan), their central momentum and their phase, and finally how these quantities evolve in time.
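One of the listed features, velocity selection, can be sketched directly from the generalized Rabi formula for a square pulse: the $a\rightarrow b$ transfer probability is $P=(\Omega_{0}/\varepsilon)^{2}\sin^{2}(\varepsilon\tau)$ with $\varepsilon=\sqrt{\Omega_{0}^{2}+(\Delta/2)^{2}}$, and the momentum dependence of the detuning $\Delta$ makes the pulse address only a slice of the momentum distribution. The numbers below ($^{87}$Rb-like mass, effective wavevector, pulse duration) are assumed, illustrative values:

```python
import numpy as np

hbar = 1.054571817e-34
m, k = 1.44e-25, 1.6e7           # 87Rb-like mass, effective wavevector (assumed)

def transfer_probability(p, tau, omega0):
    """a -> b transfer for a square pulse of duration tau (generalized Rabi
    formula), with the laser tuned so that the detuning Delta vanishes at
    atomic momentum p = 0 (recoil term already compensated)."""
    Delta = -k * p / m
    eps = np.sqrt(omega0**2 + (Delta / 2)**2)
    return (omega0 / eps)**2 * np.sin(eps * tau)**2

tau = 50e-6                      # pulse duration [s] (assumed)
omega0 = np.pi / (2 * tau)       # full transfer at p = 0 in this convention,
                                 # since P = sin^2(Omega0 tau) on resonance
p_res = transfer_probability(0.0, tau, omega0)
p_off = transfer_probability(-10 * omega0 * m / k, tau, omega0)  # Delta = 10 Omega0
```

The selected momentum width is set by $\Omega_{0}$ (here through $\Delta\sim\Omega_{0}$): shorter, stronger pulses select a wider velocity class, longer, weaker pulses a narrower one.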
Group velocities and adiabatic states\[part61\] ----------------------------------------------- The laser-atom interaction induces particular states, the adiabatic or dressed states, which are nothing but the eigenstates of the interaction. If $\overline{F}\left( t,t_{1}\right) $ is a unit-amplitude square pulse between $t_{0}$ and $t$, these adiabatic states are simply the eigenvectors of the matrix $iM\left( 1/2,t\right) $. The two corresponding eigenenergies are therefore: $$\mp\varepsilon\left( \overrightarrow{p}\right) =\mp\hbar\sqrt{\Omega_{0}^{2}+\left[ \Delta_{l}\left( \overrightarrow{p}\right) /2\right] ^{2}}$$ with $\Delta_{l}\left( \overrightarrow{p}\right) =\omega-\omega _{0}-\overrightarrow{k}.\overrightarrow{p}/m$, and the corresponding group velocities are simply obtained by differentiating these dispersion relations with respect to the momentum: $$\pm\overrightarrow{v_{g}}\left( \overrightarrow{p}\right) =\mp\frac{y\left( \overrightarrow{p}\right) }{\sqrt{1+y\left( \overrightarrow{p}\right) ^{2}}}\frac{\hbar\overrightarrow{k}}{2m}$$ where $y\left( \overrightarrow{p}\right) $ is the well-known “off-Braggness parameter” introduced in neutron optics [@rauch78; @rauch00] and defined here as: $$y\left( \overrightarrow{p}\right) =\left( \omega-\omega_{0}-\overrightarrow {k}.\overrightarrow{p}/m\right) /2\Omega_{0}$$ As can be seen from (\[eq5\]) and (\[eq6\]), the matrices $S_{uv,op}$ act on a wave packet which is shifted by a global momentum of $+\hbar\overrightarrow {k}/2$ or $-\hbar\overrightarrow{k}/2$, depending on the initial internal state of the atoms.
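As a concrete numerical check (toy units, $\hbar=1$), the eigenenergies above can be compared with the spectrum of an assumed standard two-level coupling matrix, with the detuning $\Delta_{l}$ on the diagonal and the coupling $\Omega_{0}$ off the diagonal; this explicit matrix is our illustrative choice, consistent with the stated dispersion relation, and not the operator $iM\left( 1/2,t\right) $ itself.

```python
import numpy as np

# Toy check (hbar = 1): the adiabatic energies -/+ eps, with
# eps = sqrt(Omega0^2 + (Delta_l/2)^2), are the eigenvalues of an
# assumed two-level coupling matrix (illustrative parameter values).
OMEGA0, DELTA_L = 1.3, 0.8
H = np.array([[DELTA_L / 2.0, OMEGA0],
              [OMEGA0, -DELTA_L / 2.0]])
eps = np.sqrt(OMEGA0**2 + (DELTA_L / 2.0)**2)
assert np.allclose(np.sort(np.linalg.eigvalsh(H)), [-eps, eps])
```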
For the example previously described (atoms initially in the lower internal state), we obtain the following parameter: $$y\left( \overrightarrow{p}+\hbar\overrightarrow{k}/2\right) =y_{+}\left( \overrightarrow{p}\right) =\left( \omega-\omega_{0}-\overrightarrow {k}.\overrightarrow{p}/m-\delta\right) /2\Omega_{0} \label{eqy+}$$ which can be called “inelasticity parameter” as it refers to the way the resonance condition is fulfilled [@antoineThese]. In the initial representation, these group velocities become: $$\frac{\overrightarrow{p}}{m}+\frac{\hbar\overrightarrow{k}}{2m}\left( 1\pm\frac{y_{+}\left( \overrightarrow{p}\right) }{\sqrt{1+y_{+}\left( \overrightarrow{p}\right) ^{2}}}\right)$$ The examination of these velocities leads to several important conclusions. First, the difference between momentum and group velocity in the presence of an electromagnetic field is naturally confirmed. For a weak inelasticity parameter $\left\vert y_{+}\left( \overrightarrow{p}\right) \right\vert \ll1$, we obtain only one group velocity for both the adiabatic states (atomic Borrmann effect, see part \[part52\]): $$\frac{\overrightarrow{p}}{m}+\frac{\hbar\overrightarrow{k}}{2m}$$ whereas for $\left| y_{+}\left( \overrightarrow{p}\right) \right| \gg1$, we obtain the two extreme velocities (defining the Borrmann fan): $$\frac{\overrightarrow{p}}{m}\text{ \ and \ }\frac{\overrightarrow{p}}{m}+\frac{\hbar\overrightarrow{k}}{m}$$ But in this case, the beam splitter is inefficient as we will see thereafter. However, we can show [@antoineThese] that these group velocities are closely linked to the average momenta of the considered two level system, and that they are more precisely equal to the most probable momenta of adiabatic states (divided by $m$). 
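The two limiting regimes quoted above (atomic Borrmann effect and Borrmann fan) can be verified with a short numerical sketch; units are toy units $\hbar=m=1$ and the value of $k$ is illustrative.

```python
import numpy as np

# Adiabatic group velocities in the initial representation:
#   v_g+/- = p/m + (hbar*k / 2m) * (1 +/- y / sqrt(1 + y^2))
HBAR = M = 1.0
K = 2.0  # laser wave number (illustrative)

def group_velocities(p, y):
    """Return the two adiabatic group velocities for momentum p and
    inelasticity parameter y_+(p)."""
    recoil = HBAR * K / (2.0 * M)
    s = y / np.sqrt(1.0 + y**2)
    return p / M + recoil * (1.0 + s), p / M + recoil * (1.0 - s)

# Borrmann effect: for |y| << 1 both branches share the single velocity
# p/m + hbar*k/2m.
v1, v2 = group_velocities(p=1.0, y=1e-6)
assert abs(v1 - v2) < 1e-5
assert abs(v1 - (1.0 + HBAR * K / (2 * M))) < 1e-5

# Borrmann fan: for |y| >> 1 the branches approach p/m and p/m + hbar*k/m.
v1, v2 = group_velocities(p=1.0, y=1e6)
assert abs(max(v1, v2) - (1.0 + HBAR * K / M)) < 1e-5
assert abs(min(v1, v2) - 1.0) < 1e-5
```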
For $y\neq0$, two distinct atomic wave packets are created in the beam splitter, and their physical separation may be observable under certain conditions (more than a few $\mu$m after $10^{-4}\;$s for an initial atomic coherent state of $1\;\mu$m width). Finally, it is noteworthy that these group velocities depend on $\overrightarrow{p}$. Therefore, the (optical) medium in which the atoms evolve is dispersive, and this effect leads to the phenomenon of anomalous dispersion (see part \[part64\]). Rabi oscillations ----------------- Let us consider the free solution of equation (\[eq3\]). If the atoms are initially in the lower internal state, we obtain in the initial picture the following lower state: $$T_{aa}\left( \overrightarrow{p_{op}},t\right) \left| a\left( t_{0}\right) \right\rangle$$ with:$$T_{aa}\left( \overrightarrow{p},t\right) =\exp\left[ -i\left( \frac{E_{a}}{\hbar}+\left( \overrightarrow{p}+\frac{\hbar\overrightarrow{k}}{2}\right) ^{2}/2m\hbar+\frac{\delta}{4}-\frac{\omega-\omega_{0}}{2}\right) \left( t-t_{0}\right) \right] S_{aa}\left( \overrightarrow{p}+\frac{\hbar \overrightarrow{k}}{2}\right)$$ and the following upper state:$$e^{-i\left( \omega t_{0}-\overrightarrow{k}.\overrightarrow{r_{op}}+\phi\right) }T_{ba}\left( \overrightarrow{p_{op}},t\right) \left| a\left( t_{0}\right) \right\rangle$$ with:$$T_{ba}\left( \overrightarrow{p},t\right) =\exp\left[ -i\left( \frac{E_{b}}{\hbar}+\left( \overrightarrow{p}+\frac{\hbar\overrightarrow{k}}{2}\right) ^{2}/2m\hbar+\frac{\delta}{4}+\frac{\omega-\omega_{0}}{2}\right) \left( t-t_{0}\right) \right] S_{ba}\left( \overrightarrow{p}+\frac{\hbar \overrightarrow{k}}{2}\right)$$ where $S_{aa}$ and $S_{ba}$ have the usual expressions: $$S_{aa}\left( \overrightarrow{p},t-t_{0}\right) =\cos\left[ \varepsilon \left( \overrightarrow{p}\right) .\left( t-t_{0}\right) \right] -i\frac{y\left( \overrightarrow{p}\right) }{\sqrt{1+y\left( \overrightarrow {p}\right) ^{2}}}\sin\left[ \varepsilon\left( \overrightarrow{p}\right)
.\left( t-t_{0}\right) \right]$$ $$S_{ba}\left( \overrightarrow{p},t-t_{0}\right) =i\frac{1}{\sqrt{1+y\left( \overrightarrow{p}\right) ^{2}}}\sin\left[ \varepsilon\left( \overrightarrow{p}\right) .\left( t-t_{0}\right) \right]$$ The temporal evolution of the initial state is therefore sinusoidal, with an amplitude equal to $1/\left( 1+y_{+}\left( \overrightarrow{p}\right) ^{2}\right) $ and a frequency equal to $\Omega_{0}\sqrt{1+y_{+}\left( \overrightarrow{p}\right) ^{2}}/2\pi$. By tuning the interaction parameters, we can realize true atomic mirrors or $\pi$ pulses (transfer of all the atoms from one state to another) and semi-reflecting plates or $\pi/2$ pulses (50-50 splitting), whose efficiency depends crucially on the value of $y_{+}\left( \overrightarrow{p}\right) $ actually taken at the central momentum of the incident wave packet. Velocity selection ------------------ This effect comes from the fact that the two pre-factors $1/\sqrt {1+y_{+}\left( \overrightarrow{p}\right) ^{2}}$ and $y_{+}\left( \overrightarrow{p}\right) /\sqrt{1+y_{+}\left( \overrightarrow{p}\right) ^{2}}$, in the expressions of $S_{aa}$ and $S_{ba}$, depend on the momentum $\overrightarrow{p}$, and more particularly on its component collinear to the laser wave vector $\overrightarrow{k}$ (transverse momentum). Therefore, the terms $S_{uv}$ act as momentum filters on the incident atomic momentum distribution. For example, in the case of the $a\longrightarrow b$ transition, one obtains the well-known cardinal sine ($\mathrm{sinc}$) filter. It is characterized by a central lobe with an amplitude of $\sin\left[ \Omega_{0}\tau\right] $ and a total width of $4\Omega_{0}\left( \sqrt{\left( \pi/\Omega_{0}\tau\right) ^{2}-1}\right) /k$ (for a fixed interaction duration of $\tau=t-t_{0}$). This width may be less than the transverse width of the incident atomic momentum distribution.
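The transfer probability implied by $S_{ba}$, namely $P\left( \tau\right) =\sin^{2}\left[ \Omega_{0}\sqrt{1+y_{+}^{2}}\,\tau\right] /\left( 1+y_{+}^{2}\right) $, can be illustrated numerically; in the convention used here the on-resonance $\pi$ pulse corresponds to $\Omega_{0}\tau=\pi/2$ (toy units, illustrative values).

```python
import numpy as np

# Generalized Rabi oscillation: amplitude 1/(1 + y^2) and angular
# frequency Omega0 * sqrt(1 + y^2) in the convention of the text.
OMEGA0 = 1.0

def transfer_probability(tau, y):
    eps = OMEGA0 * np.sqrt(1.0 + y**2)
    return np.sin(eps * tau)**2 / (1.0 + y**2)

# On resonance (y = 0): full transfer ("pi pulse") at Omega0*tau = pi/2,
# 50-50 splitting ("pi/2 pulse") at Omega0*tau = pi/4.
assert abs(transfer_probability(np.pi / (2 * OMEGA0), 0.0) - 1.0) < 1e-12
assert abs(transfer_probability(np.pi / (4 * OMEGA0), 0.0) - 0.5) < 1e-12

# Off resonance the transfer never exceeds 1/(1 + y^2).
y = 2.0
taus = np.linspace(0.0, 10.0, 100001)
assert abs(transfer_probability(taus, y).max() - 1.0 / (1 + y**2)) < 1e-6
```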
The resulting (transverse) velocity selection is very useful in atom interferometry (to increase the fringe contrast) and constitutes the basis of Raman cooling [@kasevich92]. Furthermore, this central lobe is centered on the transverse momentum $p_{l}$: $$p_{l}=m\left( \omega-\omega_{0}-\delta\right) /k$$ which can be quite different from the initial transverse central momentum $\overrightarrow{p_{0}}.\overrightarrow{k}/k$. If so, the central momentum $\overrightarrow{p_{c}}$ of the filtered distribution is distinct from $\overrightarrow{p_{0}}$ and is bounded by $\overrightarrow{p_{0}}$ and $\overrightarrow{p_{l}}$. Consequently, the central momentum of the outgoing wave packets (in the excited state) is not $\overrightarrow{p_{0}}+\hbar \overrightarrow{k}$, as one would expect in view of the mechanical recoil effect of light, but $\overrightarrow{p_{c}}+\hbar\overrightarrow{k}$ (see Figure \[figveloselect\]). \[h\] [FigurePRAindia3.ps]{} Apart from the central lobe, the $\mathrm{sinc}$ filter has nodes and sidelobes whose amplitude decreases rapidly (this particular structure is linked to the pulse shape, and can be softened or removed by tailoring the laser pulses with apodization functions \[Blackman, for example\]). If the incident atomic distribution is sufficiently broad to encompass one or several sidelobes, the filtered matter wave packet will then be structured into several wave packets with central momenta quite different from $\overrightarrow{p_{0}}$, $\overrightarrow{p_{l}}$ or $\overrightarrow{p_{c}}$. For the other transition amplitude ($a\longrightarrow a$, i.e. without change of internal state), we obtain the complementary filter, and the structuring can be studied in the same way [@antoineThese]. Anomalous dispersion \[part64\] ------------------------------- This effect refers to the modification of the spreading of atomic wave packets inside a beam splitter.
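The quoted width of the central lobe can be recovered numerically from $\left\vert S_{ba}\right\vert ^{2}$. The sketch below (illustrative parameter values, SciPy assumed available) locates the first node of the lobe and compares the resulting total width, expressed in transverse velocity, with the formula $4\Omega_{0}\left( \sqrt{\left( \pi/\Omega_{0}\tau\right) ^{2}-1}\right) /k$.

```python
import numpy as np
from scipy.optimize import brentq

OMEGA0, K, TAU = 1.0, 2.0, np.pi / 2.0   # so that Omega0 * tau = pi/2

def filter_amp2(y):
    # |S_ba|^2 as a function of the inelasticity parameter y
    return np.sin(OMEGA0 * TAU * np.sqrt(1 + y**2))**2 / (1 + y**2)

# First node of the central lobe: Omega0 * tau * sqrt(1 + y^2) = pi.
y_star = brentq(lambda y: np.sin(OMEGA0 * TAU * np.sqrt(1 + y**2)), 0.5, 3.0)

# Convert y to transverse velocity (v = 2*Omega0*y/k) and double for the
# full lobe width, then compare with the closed-form expression.
width_numeric = 2 * (2 * OMEGA0 / K) * y_star
width_formula = 4 * OMEGA0 * np.sqrt((np.pi / (OMEGA0 * TAU))**2 - 1) / K
assert abs(width_numeric - width_formula) < 1e-9
```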
It is more convenient to examine it for an incident atomic wave packet which has a thin momentum distribution, although it can be studied and modeled for any momentum distribution (see [@antoineThese]). As we consider the wave packets which evolve inside the beam splitter, we have to go into the adiabatic states. In the adiabatic picture, these wave packets have the two energies (written here for the $a\longrightarrow b$ transition): $$\pm\hbar\Omega_{0}\sqrt{1+y_{+}\left( \overrightarrow{p}\right) ^{2}}$$ where the term $\sqrt{1+y_{+}\left( \overrightarrow{p}\right) ^{2}}$ can be Taylor expanded with respect to $\overrightarrow{k}.\left( \overrightarrow {p}-\overrightarrow{p_{0}}\right) /2m\Omega_{0}$ as: $$\sqrt{1+y_{+}\left( \overrightarrow{p}\right) ^{2}}=\sqrt{1+y_{0+}^{2}}-\frac{y_{0+}}{\sqrt{1+y_{0+}^{2}}}\frac{\overrightarrow{k}.\left( \overrightarrow{p}-\overrightarrow{p_{0}}\right) }{2m\Omega_{0}}+\frac {1}{\left( 1+y_{0+}^{2}\right) ^{3/2}}\frac{\left( \overrightarrow {p}-\overrightarrow{p_{0}}\right) .\overset{\Rightarrow}{\delta}.\left( \overrightarrow{p}-\overrightarrow{p_{0}}\right) }{4m\hbar\Omega_{0}^{2}}+...$$ where $y_{0+}$ is defined as: $$y_{0+}=y_{+}\left( \overrightarrow{p_{0}}\right)$$ and where $\overset{\Rightarrow}{\delta}$ is the complement matrix of the recoil term $\delta$: $$\overset{\Rightarrow}{\delta}=\frac{\hbar\overrightarrow{k}.\widetilde {\overrightarrow{k}}}{2m}=\frac{\hbar}{2m}\left( \begin{array} [c]{ccc}k_{x}^{2} & k_{x}k_{y} & k_{x}k_{z}\\ k_{x}k_{y} & k_{y}^{2} & k_{y}k_{z}\\ k_{x}k_{z} & k_{y}k_{z} & k_{z}^{2}\end{array} \right)$$ In the initial picture, the first order term of this expansion gives the group velocities obtained in part \[part61\]. As for the second order term, it corresponds to an additional dispersion and it indicates that one wave packet spreads more than the natural spreading, and that the other one spreads less. 
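In one dimension (with $\overrightarrow{k}$ along the momentum axis), the expansion above can be verified symbolically; the following sketch, a check rather than part of the derivation, reproduces the zeroth-, first- and second-order coefficients.

```python
import sympy as sp

# 1-D symbolic check of the Taylor expansion of sqrt(1 + y_+(p)^2)
# around p0, with y_+(p0 + dp) = y0 - k*dp/(2*m*Omega0).
y0, k, m, W, dp = sp.symbols('y0 k m Omega0 dp', positive=True)
y = y0 - k * dp / (2 * m * W)
series = sp.series(sp.sqrt(1 + y**2), dp, 0, 3).removeO()
expected = (sp.sqrt(1 + y0**2)
            - y0 / sp.sqrt(1 + y0**2) * k * dp / (2 * m * W)
            + k**2 * dp**2 / (8 * m**2 * W**2 * (1 + y0**2)**sp.Rational(3, 2)))
assert sp.simplify(series - expected) == 0
```

The quadratic coefficient agrees with the $\left( 1+y_{0+}^{2}\right) ^{-3/2}$ term of the expansion once the 1-D form of $\overset{\Rightarrow}{\delta}$ is inserted.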
In certain conditions, this spreading can even be stopped or changed into a contraction [@antoineThese; @eiermann03]. In the case of a non-thin incident atomic wave packet, the main results of this study are still valid provided $y_{0+}$ is changed into $y_{c+}=y_{+}\left( \overrightarrow{p_{c}}\right) $. One can then model the outgoing atomic wave packets. A simple but powerful way to do this is the Gaussian modeling which consists in writing these wave packets as Gaussians. This “strong field ttt modeling”, and more generally the use of Gaussian wave packets, is found to be particularly relevant in atom interferometry (see [@antoineThese]). Structure of the solutions in the general case\[part7\] ======================================================= We have already seen in part \[part4\] how to deal with the double non-commutation problem which appears in $\Delta_{op1}$ and in the resolution of (\[eq3\]). In particular, we have seen that one can often neglect, in the expression of $\Delta_{op1}$, the terms depending on the quadratic terms of $H_{ext}$, namely $\alpha$, $\beta-1$ and $\gamma$. In certain configurations (when $\overrightarrow{k}$ is orthogonal to $\overrightarrow{g}$ or when $\omega$ is modulated to compensate the gravity induced Doppler shift), the generalized detuning is even equal to the free one and it boils down to the free case in (and only in) the resolution of (\[eq3\]). In this case, we obtain the same previous adiabatic energies and consequently the same adiabatic group velocities, Rabi oscillations, velocity selection and anomalous dispersion effect as stated before. In the initial picture (i.e. 
in the lab frame), the elements of the previous [S]{}$_{_{1}}$ matrix are therefore expressed as ($u,v=a,b$): $$\text{{\Large S}}_{1,uv}=\exp\left[ i\Phi_{uv}\right] .\exp\left[ \frac {i}{\hbar}\overrightarrow{r_{op}}.\overrightarrow{p_{uv}}\right] .\exp\left[ -\frac{i}{\hbar}\overrightarrow{p_{op}}.\overrightarrow{r_{uv}}\right] .S_{uv}\left( \overrightarrow{p_{op}}\mp\frac{\hbar\overrightarrow{k}}{2}\right)$$ where the elements $S_{uv}$ are equal to the ones obtained in the free case (see the previous part). Finally, the only differences from the free case lie in the expressions of: 1. the effective incident wave packet at time $t_{1}$ (which is equal to the initial wave packet evolved from $t_{0}$ to $t_{1}$ under $H_{0}+H_{ext}$). In particular, its central position and momentum are no longer $\overrightarrow{r_{0}}+\overrightarrow{p_{0}}\left( t_{1}-t_{0}\right) /m$ and $\overrightarrow{p_{0}}$ but $\overrightarrow{R}\left( t_{1},t_{0},\overrightarrow{r_{0}},\overrightarrow{p_{0}}\right) $ and $\overrightarrow{P}\left( t_{1},t_{0},\overrightarrow{r_{0}},\overrightarrow {p_{0}}\right) $ (expressed with the $ABCD$ matrices, see part \[part3\]) 2. the terms $\overrightarrow{r_{uv}}$, $\overrightarrow{p_{uv}}$ and $\Phi_{uv}$ which are detailed in part \[part53\] 3. $U_{1}\left( t,t_{1}\right) $ which accounts for the evolution from $t_{1}$ to $t$ due to $H_{0}+H_{ext}$. Then, we can apply the same Gaussian modeling as in the free case studied previously (see [@antoineThese]). In some cases, however, we cannot neglect the effect of the (unavoidable) gravitational, inertial and trapping potentials in the resolution of (\[eq3\]), and it is important to examine how $H_{ext}$ modifies the properties listed before (group velocities, Rabi oscillations, velocity selection and anomalous dispersion). Moreover, we have already underlined that the key parameter is the inelasticity parameter $y$, which is proportional to the generalized detuning $\Delta$.
This $y$ parameter generally depends on the two canonical operators $\overrightarrow{r_{op}}$ and $\overrightarrow{p_{op}}$, but in the case of linear potentials (uniform acceleration for example) it depends on one operator ($\overrightarrow{p_{op}}$) only. Before dealing with the general case, let us consider the effect of a uniform (but time-dependent) acceleration $\overrightarrow{g}\left( t\right) $. From equation (\[eq3\]), which becomes scalar in the momentum representation, one can extract the adiabatic energies and then the adiabatic group velocities (written here for $t_{1}=t_{0}$): $$\pm\overrightarrow{v_{g0}}\left( t,t_{0}\right) =\mp\frac{y_{0}}{\sqrt{1+y_{0}^{2}}}\frac{\hbar\overrightarrow{k}}{2m}$$ with ($\overline{F}$ is taken constant and equal to $1$ for simplicity): $$y_{0}=y\left( \overrightarrow{p_{0}}\right) =y_{0free}-\overrightarrow {k}.\int_{t_{0}}^{t}\overrightarrow{g}\left( t^{\prime}\right) dt^{\prime }/2\Omega_{0}$$ where $y_{0free}$ is the inelasticity parameter obtained without $\overrightarrow{g}$ (free case, see (\[eqy+\])). In the initial picture, this result finally becomes: $$\frac{\overrightarrow{p_{0}}}{m}+\frac{\hbar\overrightarrow{k}}{2m}+\int_{t_{0}}^{t}\overrightarrow{g}\left( t^{\prime}\right) dt^{\prime}\pm\overrightarrow{v_{g0}}$$ where two sources of atomic trajectory bending can be identified: the usual gravitational bending coming from the third term $\int_{t_{0}}^{t}\overrightarrow{g}\left( t^{\prime}\right) dt^{\prime}$, and an anomalous gravity-induced Doppler bending coming from the time-dependent adiabatic group velocities. As in the free case, this anomalous bending can be described either in terms of effective mass tensors [@lammerzahl99; @eiermann03] or in terms of a position- (and time-) dependent effective refractive index inside the beam splitter [@eiermann03; @agrawal01].
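The time dependence of the gravity-modified inelasticity parameter, and the resulting bending of the adiabatic branch velocity, can be illustrated in one dimension (toy units $\hbar=m=1$; the parameter values are illustrative).

```python
import numpy as np

# 1-D sketch of the gravity-modified inelasticity parameter (constant g):
#   y0(t) = y_free - k * g * (t - t0) / (2 * Omega0)
# and of one adiabatic branch velocity v_g0(t) = -y0/sqrt(1+y0^2) * k/2.
K, OMEGA0, G, Y_FREE = 2.0, 1.0, 9.81, 0.5

def y0(t):
    return Y_FREE - K * G * t / (2.0 * OMEGA0)

def vg(t):
    return -y0(t) / np.sqrt(1.0 + y0(t)**2) * K / 2.0

# The Doppler term sweeps y0 through zero, so the branch velocity changes
# sign during the pulse: a time-dependent, "anomalous" bending.
t_turn = 2.0 * OMEGA0 * Y_FREE / (K * G)
assert vg(0.0) < 0.0 < vg(2 * t_turn)
assert abs(vg(t_turn)) < 1e-12
```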
Let us remark that the acceleration-induced Doppler term $-\overrightarrow {k}.\overrightarrow{g}$ may produce a non-trivial bending of the central atomic trajectories. Indeed, if $y_{0free}$ and the scalar $\overrightarrow {k}.\overrightarrow{g}$ are non-zero, and if $y_{0free}$ has the same sign as $\overrightarrow{k}.\overrightarrow{g}$, then the trajectory bending due to the $\overrightarrow{k}.\overrightarrow{g}$ term makes these trajectories approach each other (as usually considered in the literature). On the contrary, if $y_{0free}$ has the opposite sign of $\overrightarrow{k}.\overrightarrow{g}$, then these two main trajectories repel each other (see Figure \[figkg\]). In both cases, if the vectors $\overrightarrow{k}$ and $\overrightarrow{g}$ point in the same direction ($\overrightarrow{k}.\overrightarrow{g}>0$), downward for example, then one of the two main atomic wave packets will always be accelerated upward, even if all forces (acceleration and laser pulse) seem to push the atoms downward. \[h\] [FigurePRAindia4.ps]{} Furthermore, this particular behavior is not limited to the adiabatic picture and can be observed fully in the laboratory frame (i.e. in the initial picture). Indeed, if $\delta>2\Omega_{0}\left( 1+y_{0free}^{2}\right) ^{3/2}$ (which corresponds, for $y_{0free}=0$, to the Bragg regime, as defined in [@oberthaler99] in contrast with the channeling regime for which $\delta<2\Omega_{0}$), this anomalous upward acceleration exceeds the downward acceleration of gravity (for $t-t_{0}$ much less than $1/\Omega_{0}$). As a result, some atoms are accelerated upward in the lab frame, even though the action of $\overrightarrow{g}$ and of the laser beams is directed downward (see Figure \[figantigrav\]). Of course, this anomalous anti-gravitational bending can be explained simply by considering the conservation of momentum and energy during the interaction process.
\[h\] [FigurePRAindia5.ps]{} In the general case of an at-most-quadratic $H_{ext}$ (with quadratic terms like rotations, gradients of acceleration, trapping potentials, etc.), the generalized detuning $\Delta_{op1}$ depends on the two non-commuting canonical operators, and it cannot be made scalar in any representation. As we saw previously, one of the most relevant strategies is to approximate, in the resolution of equation (\[eq3\]), the effect of one of these two operators by considering its action on a typical atomic wave packet which evolves inside the beam splitter. In the free case, the Gaussian approximation of these typical wave packets, called here wp for convenience, leads to (in the adiabatic picture): $$\overrightarrow{r_{op}}\left( wp\right) \simeq\left( \overrightarrow{r_{0}}\pm\int_{t_{0}}^{t}\overrightarrow{v_{g0}}\left( t^{\prime},t_{0}\right) dt^{\prime}+\left( U_{0}\mp U_{AD}\right) .\left( \overrightarrow {p}-\overrightarrow{p_{0}}\right) \right) .wp \label{eq7}$$ where $U_{0}$ is linked to the complex momentum width of the initial atomic wave packet, and where $U_{AD}$ accounts for the anomalous dispersion phenomenon. In fact, we can make $U_{AD}$ arbitrarily small (for a sufficiently short interaction duration), incorporate the time-independent matrix $U_{0}$ into the expression of $y$ (see (\[eqdeltaop\]) and (\[eqy+\])), and consider only the first two terms of (\[eq7\]).
Finally, the quantity $\overrightarrow{x}\left( t\right) =\int_{t_{0}}^{t}\overrightarrow{v_{g0}}\left( t^{\prime},t_{0}\right) dt^{\prime}$ satisfies the following first-order differential equation (which can be solved numerically): $$\frac{d}{dt}\overrightarrow{x}=-\frac{\overline{y_{0}}-\overrightarrow {k}.\overset{\cdot}{A}\left( t,t_{0}\right) .\overrightarrow{x}/2\Omega _{0}\overline{F}\left( t\right) }{\sqrt{1+\left( \overline{y_{0}}-\overrightarrow{k}.\overset{\cdot}{A}\left( t,t_{0}\right) .\overrightarrow{x}/2\Omega_{0}\overline{F}\left( t\right) \right) ^{2}}}\overset{\cdot}{\widetilde{B\left( t,t_{0}\right) }}\frac{\hbar \overrightarrow{k}}{2m}$$ with: $$\overline{y_{0}}=\left( \omega-\omega_{0}-\delta-\overrightarrow{k}.\overset{\cdot}{A}\left( t,t_{0}\right) .\overrightarrow{r_{0}}-\overrightarrow{k}.\overset{\cdot}{B}\left( t,t_{0}\right) .\overrightarrow{p_{0}}/m-\overrightarrow{k}.\overset{\cdot}{\overrightarrow {\xi}}\left( t,t_{0}\right) \right) /2\Omega_{0}\overline{F}\left( t\right)$$ A simpler approximation (the WKB approximation) amounts to also neglecting $\int_{t_{0}}^{t}\overrightarrow{v_{g0}}\left( t^{\prime},t_{0}\right) dt^{\prime}$ in the expression (\[eq7\]). In this case, we readily obtain: $$\pm\overrightarrow{v_{g0}}\left( t,t_{0}\right) =\mp\frac{\overline{y_{0}}}{\sqrt{1+\overline{y_{0}}^{2}}}\overset{\cdot}{\widetilde{B\left( t,t_{0}\right) }}\frac{\hbar\overrightarrow{k}}{2m}$$ which gives a good approximation of the group velocities inside a matter wave beam splitter when the latter is subject to the various external potentials described by $H_{ext}$.
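The differential equation above can indeed be integrated numerically. The sketch below (1-D, toy units $\hbar=m=1$, $\overline{F}=1$; parameter values illustrative) uses the $ABCD$ coefficients of an acceleration gradient $\gamma$, for which $A=\cosh\left( \gamma^{1/2}t\right) $ and $B=\gamma^{-1/2}\sinh\left( \gamma^{1/2}t\right) $, and checks the free-case limit against the exact linear solution.

```python
import numpy as np
from scipy.integrate import solve_ivp

# 1-D sketch of  dx/dt = -Y/sqrt(1+Y^2) * Bdot(t) * k/2,
# with Y(t, x) = ybar0 - k * Adot(t) * x / (2 * Omega0).
K, OMEGA0, YBAR0, GAMMA = 2.0, 1.0, 0.5, 4.0

def rhs(t, x, gamma):
    if gamma > 0:                    # acceleration gradient gamma
        rg = np.sqrt(gamma)
        adot, bdot = rg * np.sinh(rg * t), np.cosh(rg * t)
    else:                            # free case: A = 1, B = t
        adot, bdot = 0.0, 1.0
    Y = YBAR0 - K * adot * x[0] / (2 * OMEGA0)
    return [-Y / np.sqrt(1 + Y**2) * bdot * K / 2]

T = 0.3
sol_free = solve_ivp(rhs, (0, T), [0.0], args=(0.0,), rtol=1e-10, atol=1e-12)
# Free-case check: constant group velocity, x(T) = -y/sqrt(1+y^2)*(k/2)*T.
x_exact = -YBAR0 / np.sqrt(1 + YBAR0**2) * K / 2 * T
assert abs(sol_free.y[0, -1] - x_exact) < 1e-8

# With the gradient, Bdot >= 1 and |Y| grows, so the displacement is larger.
sol_grad = solve_ivp(rhs, (0, T), [0.0], args=(GAMMA,), rtol=1e-10, atol=1e-12)
assert abs(sol_grad.y[0, -1]) > abs(sol_free.y[0, -1])
```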
For example, for a non-uniform (but time-independent) acceleration $\left( \overrightarrow{g},\gamma\right) $ (or in the case of a trapping potential, for which the sign of $\gamma$ has to be reversed), one obtains: $$\begin{aligned} \overline{y_{0}} & =\left( \omega-\omega_{0}-\delta-\overrightarrow {k}.\cosh\left( \gamma^{1/2}\left( t-t_{0}\right) \right) .\overrightarrow {p_{0}}/m\right. \\ & \left. -\overrightarrow{k}.\gamma^{1/2}\sinh\left( \gamma^{1/2}\left( t-t_{0}\right) \right) .\overrightarrow{r_{0}}-\overrightarrow{k}.\gamma^{-1/2}\sinh\left( \gamma^{1/2}\left( t-t_{0}\right) \right) .\overrightarrow{g}\right) /2\Omega_{0}\overline{F}\left( t\right)\end{aligned}$$ Similarly, in the case of a (time-independent) rotation $\overrightarrow {\Omega}$, one obtains: $$\overline{y_{0}}=\left( \omega-\omega_{0}-\delta-\overrightarrow {k}.\mathcal{R}\left( t,t_{0}\right) .\left[ \overrightarrow{p_{0}}/m-\overrightarrow{\Omega}\times\left( \overrightarrow{r_{0}}+\overrightarrow{p_{0}}\left( t-t_{0}\right) /m\right) \right] \right) /2\Omega_{0}\overline{F}\left( t\right)$$ where the rotation matrix $\mathcal{R}\left( t,t_{0}\right) $ can be seen as acting either on the atomic wave packet or on the wave vector $\overrightarrow {k}$. Conclusion ========== In conclusion, we have shown in this paper how to solve the problem of matter wave splitting in the presence of various gravitational, inertial and trapping potentials. In particular, we have seen how the resonance condition between the splitting potential and the effective two-level atoms has to be changed. Then, we have shown how to express this triple interaction matter - splitting potential - other external potentials as an equivalent instantaneous interaction (generalized $ttt$ scheme). 
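The rotation-modified parameter can be evaluated directly. In the sketch below (toy units, illustrative values) the rotation is taken about the $z$-axis; the orientation convention of $\mathcal{R}\left( t,t_{0}\right) $ is our assumption for the illustration, and the check is simply that $\overline{y_{0}}$ reduces to the free inelasticity parameter when the rotation rate vanishes.

```python
import numpy as np

# ybar0 = (w - w0 - delta - k . R(t,t0).[p0/m - Omega x (r0 + p0 t/m)]) / (2 Omega0)
OMEGA0, DETUNE, DELTA = 1.0, 0.3, 0.2       # Omega0, (w - w0), recoil delta
K = np.array([2.0, 0.0, 0.0])
P0 = np.array([0.5, 0.2, 0.0])
R0 = np.array([1.0, 0.0, 0.0])
M = 1.0

def ybar0(Omega_z, t):
    c, s = np.cos(Omega_z * t), np.sin(Omega_z * t)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])  # about z
    Omega = np.array([0.0, 0.0, Omega_z])
    v = P0 / M - np.cross(Omega, R0 + P0 * t / M)
    return (DETUNE - DELTA - K @ (R @ v)) / (2 * OMEGA0)

y_free = (DETUNE - DELTA - K @ P0 / M) / (2 * OMEGA0)
assert abs(ybar0(0.0, t=0.7) - y_free) < 1e-12       # no rotation: free case
assert abs(ybar0(0.5, t=0.7) - y_free) > 0.1         # rotation shifts ybar0
```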
Finally, we have investigated in detail the dispersive structuring of an incident atomic wave packet inside such beam splitters, both in the free case (for which $H_{ext}$ is reduced to $\overrightarrow{p}^{2}/2m$) and in the general case of $H_{ext}$. Several significant features of the solutions have been studied: group velocities, generalized Rabi oscillations, velocity selection, anomalous dispersion effects... In the light of this study, the generalized $ttt$ scheme leads to a very practical and efficient (Gaussian) modeling of atomic beam splitters which is particularly relevant for atom interferometric signal calculations [@antoinejopb; @antoineThese]. It is worth pointing out that these results stem from a strong field theory for both the splitting potential and the other external potentials described by $H_{ext}$. However, several points still have to be cleared up: for instance, the problem of the effective mass change which occurs when the atomic internal state is changed, and which leads to non-trivial (small) relativistic corrections. Furthermore, it is necessary to extend our formalism to additional external potentials which are more than quadratic (in position and momentum) if we want to investigate the effect of van der Waals, Casimir or Yukawa-type potentials on the matter wave splitting. More generally speaking, it would be interesting to go beyond the various approximations listed in part \[part2\], and in particular beyond the two-beam approximation. [99]{} T.L. Gustavson, A. Landragin and M.A. Kasevich, Class. Quantum Grav. 17, 2385 (2000); A. Peters, K.Y. Chung and S. Chu, Metrologia 38, 25 (2001); J.M. McGuirk, G.T. Foster, J.B. Fixler, M.J. Snadden, and M.A. Kasevich, Phys. Rev. A 65, 033608 (2002); G. Wilpers, T. Binnewies, C. Degenhardt, U. Sterr, J. Helmcke, and F. Riehle, Phys. Rev. Lett. 89, 230801 (2002). Ch. J. Bordé, N. Courtier, F. du Burck, A.N. Goncharov and M. Gorlicki, Phys. Lett. A 188, 187 (1994). Ch.
Antoine and Ch.J. Bordé, J. Opt. B: Quantum Semiclass. Opt. 5, S199 (2003). Ch. Antoine, Contribution à la théorie des interféromètres atomiques, Ph.D. thesis (in French), Université Pierre et Marie Curie, Paris (2004). F.T. Hioe and C.E. Caroll, Phys. Rev. A 32, 1541 (1985). K.-A. Suominen and B.M. Garraway, Phys. Rev. A 45, 374 (1992). J. Ishikawa, F. Riehle, J. Helmcke and Ch.J. Bordé, Phys. Rev. A 49, 4794 (1994). L. Carmel and A. Mann, Phys. Rev. A 61, 052113 (2000). A.M. Ishkhanyan, Opt. Comm. 176, 155 (2000). M.K. Oberthaler, R. Abfalterer, S. Bernet, J. Schmiedmayer, and A. Zeilinger , Phys. Rev. Lett. 77, 4980 (1996); Ch.J. Bordé and C. Lämmerzahl, Ann. Phys. (Leipzig) 8, 83 (1999). B. Eiermann, P. Treutlein, Th. Anker, M. Albiez, M. Taglieber, K.-P. Marzlin and M. K. Oberthaler, Phys. Rev. Lett. 91, 060402 (2003); B. Eiermann, Th. Anker, M. Albiez, M. Taglieber, P. Treutlein, K.-P. Marzlin and M. K. Oberthaler, Phys. Rev. Lett. 92, 230401 (2004). C. Lämmerzahl and Ch.J. Bordé, Gen. Rel. Grav. 31, 635 (1999). C. Lämmerzahl and Ch.J. Bordé, Phys. Lett. A 203, 59 (1995); K.-P. Marzlin and J. Audretsch, Phys. Rev. A 53, 1004 (1996). Ch.J. Bordé, Gen. Rel. Grav. 36, 475 (2004). M.A . Horne, Physica A 137, 260 (1986). H. Rauch and S.A. Werner, *Neutron Interferometry* (Clarendon Press, Oxford, 2000). O. Carnal and J. Mlynek, Phys. Rev. Lett. 66, 2689 (1991); F. Shimizu, K. Shimizu and H. Takuma, Phys. Rev. A 46, R17 (1992). D.W. Keith, M.L. Schattenburg, H.I. Smith, and D.E. Pritchard , Phys. Rev. Lett. 61, 1580 (1988). J.F. Clauser, Physica B 151, 262 (1988). F. Shimizu, Phys. Rev. Lett. 86, 987 (2001); T.A. Pasquini, Y. Shin, C. Sanner, M. Saba, A. Schirotzek, D.E. Pritchard, and W. Ketterle, Phys. Rev. Lett. 93, 223201 (2004). Yu.L. Sokolov, Sov. Phys. JETP 36, 243 (1973). J. Robert, Ch. Miniatura, S. Le Boiteux, J. Reinhardt, V. Bocvarski, J. Baudon, J. Europhys. Lett., 16, 29 (1991); Ch. Miniatura, J. Robert, S. Le Boiteux, J. Reinhardt and J. 
Baudon, Appl. Phys. B 54, 347 (1992). G.I. Opat, S.J. Wark, and A. Cimmino, Appl. Phys. B 54, 396 (1992); T.M. Roach, H. Abele, M.G. Boshier, H.L. Grossman, K.P. Zetie and E.A. Hinds, Phys. Rev. Lett. 75, 629 (1995); A.I. Sidorov, R.J. McLean, G.I. Opat, W.J. Rowlands, D.C. Lau, J.E. Murphy, M. Walkiewicz and P. Hannaford, Quantum Semiclass. Opt. 8, 713 (1996). S.J. Wark and G.I. Opat, J. Phys. B: At. Mol. Opt. Phys. 25, 4229 (1992); S.A. Schulz, H.L. Bethlem, J. van Veldhoven, J. Kupper, H. Conrad and G. Meijer, Phys. Rev. Lett. 93, 020406 (2004). D. Cassettari, B. Hessmo, R. Folman, T. Maier and J. Schmiedmayer, Phys. Rev. Lett. 85, 5483 (2000). E. Arimondo, H. Lew and T. Oka, Phys. Rev. Lett. 43, 753 (1979); P.E. Moscowitz, P.L. Gould, S.R. Atlas and D.E. Pritchard, Phys. Rev. Lett. 51, 370 (1983); E. M. Rasel, M.K. Oberthaler, H. Batelaan, J. Schmiedmayer and A. Zeilinger, Phys. Rev. Lett. 75, 2633 (1995); D. M. Giltner, R.W. McGowan and S.A. Lee, Phys. Rev. Lett. 75, 2638 (1995); M. Kozuma, L. Deng, E.W. Hagley, J. Wen, R. Lutwak, K. Helmerson, S.L. Rolston and W.D. Phillips, Phys. Rev. Lett. 82, 871 (1999); S. Wu, Y.-J. Wang, Q. Diot and M. Prentiss, Phys. Rev. A 71, 043602 (2005). Y.V. Baklanov, B.Y. Dubetsky and V.P. Chebotayev, Appl. Phys. A 9, 171 (1976); Ch.J. Bordé, Ch. Salomon, S. Avrillier, A. Van Lerberghe, Ch. Bréant, D. Bassi and G. Scoles, Phys. Rev. A 30, 1836 (1984). M. Kasevich and S. Chu, Phys. Rev. Lett. 67, 181 (1991). R.J. Cook and R.K. Hill, Opt. Comm. 43, 258 (1982); J.V. Hajnal and G.I. Opat, Opt. Comm. 71, 119 (1989); M. Christ, A. Scholz, M. Sciffer, R. Deutschmann and W. Ertmer, Opt. Comm. 107, 211 (1994); A. Steane, P. Szriftgiser, P. Desbiolles and J. Dalibard, Phys. Rev. Lett. 74, 4972 (1995); R. Brouri, R. Asimov, M. Gorlicki, S. Feron, J. Reinhardt, V. Lorent and H. Haberland, Opt. Comm. 124, 448 (1996); A. Landragin, G. Labeyrie, C. Henkel, R. Kaiser, N. Vansteenkiste, C. I. Westbrook and A. Aspect, Opt. Lett. 
21, 1591 (1996); C. Henkel, K. Molmer, R. Kaiser, N. Vansteenkiste, C.I. Westbrook and A. Aspect, Phys. Rev. A 55, 1160 (1997). A.P. Kazantsev, Sov. Phys. JETP 40, 825 (1975); T. Sleator, T. Pfau, V. Balykin, O. Carnal and J. Mlynek, Phys. Rev. Lett. 68, 1996 (1992). J. Oreg, F.T. Hioe and J. H. Eberly, Phys. Rev. A 29, 690 (1984); P. Marte, P. Zoller and J. L. Hall, Phys. Rev. A 44, R4418 (1991); P. Pillet, C. Valentin, R.-L. Yuan and J. Yu, Phys. Rev. A 48, 845 (1993); J. Lawall and M. Prentiss, Phys. Rev. Lett. 72, 993 (1994); P.D. Featonby, G.S. Summy, J.L. Martin, H. Wu, K.P. Zetie, C.J. Foot and K. Burnett, Phys. Rev. A 53, 373 (1996). U. Gaubatz, P. Rudecki, M. Becker, S. Schiemann, M. Kulz and K. Bergmann, Chem. Phys. Lett. 149, 463 (1988); K. Bergmann, H. Theuer, and B.W. Shore, Rev. Mod. Phys. 70, 1003 (1998). Y.B. Band, Phys. Rev. A 47, 4970 (1993); V.S. Malinovsky and P.R. Berman, Phys. Rev. A 68, 023610 (2003). T. Pfau, Ch. Kurtsiefer, C.S. Adams, M. Sigel, and J. Mlynek, Phys. Rev. Lett. 71, 3427 (1993). O. Houde, D. Kadio and L. Pruvost, Phys. Rev. Lett. 85, 5543 (2000); R. Dumke, T. Muther, M. Volk, W. Ertmer and G. Birkl, Phys. Rev. Lett. 89, 220402 (2002). B.W. Shore, K. Bergmann, J. Oreg and S. Rosenwaks, Phys. Rev. A 44, 7442 (1991); K. Moler, D.S. Weiss, M. Kasevich and S. Chu, Phys. Rev. A 45, 342 (1992). Ch.J. Bordé, in *Atom interferometry*, edited by P. Berman (Academic, New York, 1997). C. Cohen-Tannoudji, J. Dupont-Roc and G. Grynberg, *Atom-Photon Interactions: Basic Processes and Applications* (Wiley, New York, 1992). Ch.J. Bordé, in *Fundamental Systems in Quantum Optics*, edited by J. Dalibard, J.M. Raimond and J. Zinn-Justin (Elsevier, 1991); Ch.J. Bordé, C. R. Acad. Sci. Paris, t. 2 (Série IV), 509 (2001). M. Kasevich and S. Chu, Appl. Phys. B 54, 321 (1992); B. Young, M. Kasevich and S. Chu, in *Atom interferometry*, edited by P. Berman (Academic, New York, 1997). N. Rosen and C. Zener, Phys. Rev. 40, 502 (1932). Yu.N. 
Demkov and M. Kunike, Vest. Leningr. Univ. Fiz. Khim. 16, 39 (1969). S. Autler and C. Townes, Phys. Rev. 100, 70 (1955); S. Guérin and H.R. Jauslin, Adv. Chem. Phys. 125, 1 (2003). T.-S. Ho and S.-I. Chu, J. Phys. B 17, 2101 (1984). U. Peskin and N. Moiseyev, J. Chem. Phys. 99, 4590 (1993). V.S. Letokhov and V.G. Minogin, Zh. Eksp. Teor. Fiz. 74, 1318 (1978); Y. Castin and J. Dalibard, Europhys. Lett. 14, 761 (1991); M. Büchner, R. Delhuille, A. Miffre, C. Robilliard, J. Vigué and C. Champenois, Phys. Rev. A 68, 013607 (2003). W. P. Schleich, *Quantum Optics in Phase Space* (Wiley, Berlin, 2001). A. Iserles, H.Z. Munthe-Kaas, S.P. Nørsett and A. Zanna, Acta Numerica, 215 (2000); R. Suarez and L. Saenz, J. Math. Phys. 42, 4582 (2001). P.C. Moan and J.A. Oeo, J. Math. Phys. 42, 501 (2001). X. Miao, Phys. Lett. A 271, 296 (2000). M.V. Berry, Proc. R. Soc. London A 429, 61 (1990); K. Drese and M. Holthaus, Eur. Phys. J. D 5, 119 (1999). Ch. Lubich, NIC Series 10, 459 (2002). G. Borrmann, Phys. Z. 42, 157 (1941); J. M. Cowley, *Diffraction Physics* (North-Holland, Amsterdam, 1990). J. W. Knowles, Acta Cryst. 9, 61 (1956). H. Rauch and D. Petrascheck, in *Neutron Diffraction* (Springer, New York, 1978). Ch. Antoine and Ch.J. Bordé, Phys. Lett. A 306, 277 (2003). M. Kasevich and S. Chu, Phys. Rev. Lett. 69, 1741 (1992); N. Davidson, H.J. Lee, M. Kasevich and S. Chu, Phys. Rev. Lett. 72, 3158 (1994). G.P. Agrawal, *Applications of Nonlinear Fiber Optics* (Academic Press, San Diego, 2001); *Nonlinear Fiber Optics* (Academic Press, San Diego, 1995). M. K. Oberthaler, R. Abfalterer, S. Bernet, C. Keller, J. Schmiedmayer and A. Zeilinger, Phys. Rev. A 60, 456 (1999).
--- abstract: 'In this paper we study Dixmier traces of powers of Hankel operators in Lorentz ideals. We extend results of Engliš-Zhang to the case of powers $p\geq 1$ and general Lorentz ideals starting from abstract extrapolation results of Gayral-Sukochev. In the special case $p=2,4,6$ we give an exact formula for the Dixmier trace. For general $p$, we give upper and lower bounds on the Dixmier trace. We also construct, for any $p$ and any Lorentz ideal, examples of non-measurable Hankel operators.' address: 'Email: goffeng@chalmers.se, usachev@chalmers.se. Department of Mathematical Sciences, Chalmers University of Technology and University of Gothenburg, SE-412 96 Gothenburg, Sweden' author: - 'Magnus Goffeng, Alexandr Usachev' title: Estimating Dixmier traces of Hankel operators in Lorentz ideals --- Introduction ============ The construction of Dixmier traces goes back to work of Dixmier [@Dixie], who was motivated by the problem of finding a non-normal trace on the von Neumann algebra of bounded operators. Since then Dixmier traces have taken a prominent role in Connes’ program for noncommutative geometry [@C_book] and found applications in the analysis of rough structures such as Julia sets [@juliaconnes], limit sets of quasi-Fuchsian groups [@quasiconnes] and in complex geometry [@EZe; @GG0; @GG]. The non-normality of the Dixmier trace and the non-separability of its domain of definition make computations and estimates of Dixmier traces a challenging problem. In this paper we propose a methodology to estimate Dixmier traces of powers of Hankel operators, building on work of Gayral-Sukochev [@GS]. The inspiration for this work is a paper by Engliš-Zhang [@EZ], where Dixmier traces of Hankel operators in the Lorentz ideal $\mathcal{M}_{1,\infty}$ were estimated by means of Besov norms.
Recent work in fractal geometry [@juliaconnes; @quasiconnes] and the questions posed in [@EZ Section 7.3] lead us to ask for an extension of the estimates in [@EZ] to powers $p\geq 1$ and more general Lorentz ideals. The approach we take in this paper differs from that of [@EZ]. Our method consists of a rather straightforward application of extrapolation results of Gayral-Sukochev [@GS]. In the classical examples, naturally appearing physical and geometrical operators are measurable, that is, all Dixmier traces take the same value on such operators. An example of a non-measurable pseudo-differential operator with symbol of Hörmander type $(1,0)$ can be found in [@LSZ Proposition 11.3.22]. Engliš-Zhang [@EZ Theorem 4] constructed a non-measurable Hankel operator from $\mathcal{M}_{1,\infty}$. We show that there are non-measurable Hankel operators in any ($p$-convexified) Lorentz ideal. Let us summarize our main results in a theorem. For a function $f$ on the circle $S^1$, we let $H_{\bar{f}}$ denote the associated Hankel operator on the Hardy space $H^2(S^1)$ (see Section \[hanekopdldl\] below). We let $\mathcal{M}^{(p)}_\psi$ denote the $p$-convexification of the Lorentz ideal $\mathcal{M}_\psi$ and $\mathcal{M}^{(p)}_{\psi,0}$ its separable subspace (see Subsection \[opidelalsos\] below), and let $\mathrm{Tr}_{\omega,\psi}: \mathcal{M}_\psi\to {{\mathbb C}}$ be the Dixmier trace associated with an exponentiation invariant extended limit $\omega$. We write $A\sim B$ if there is a universal constant $C>0$ such that $C^{-1}A\leq B\leq CA$. When saying universal, we still allow for a dependence on $p$ and $\psi$.
\[mainthmintro\] Let $p\geq 1$, $(\|\cdot\|_{B^{1/q}_{q,q},*})_{q\geq p}$ a family of norms on the Besov spaces $B^{1/q}_{q,q}(S^1)$ for $q\geq p$ satisfying the conditions of Corollary \[equivbesovgeneerl\], and $\psi:[0,\infty)\to [0,\infty)$ be an increasing concave function with regular variation of index $0$ satisfying $\psi(0)=0$, $\lim_{t\to \infty}\psi(t)=\infty$ and the conditions and . Then for any holomorphic function $f$ the following holds: 1. $H_{\bar{f}}\in \mathcal{M}_{\psi}^{(p)}$ if and only if $\sup_{q>p}\frac{1}{\psi(\mathrm{e}^{(q-p)^{-1}})} \|f\|_{B^{1/q}_{q,q},*}^q<\infty$. 2. For any exponentiation invariant extended limit $\omega$, $$\mathrm{Tr}_{\omega,\psi}(|H_{\bar{f}}|^p)\sim\lim_{q-p\to \tilde{\omega}}\frac{1}{\psi(\mathrm{e}^{(q-p)^{-1}})} \|f\|_{B^{1/q}_{q,q},*}^q.$$ Here $\tilde{\omega}$ is defined as in Equation (see page ). 3. It holds that $$\mathrm{d}_{\mathcal{M}_\psi}(|H_{\bar{f}}|^p,\mathcal{M}_{\psi,0}):=\inf_{A\in \mathcal{M}_{\psi,0}}\||H_{\bar{f}}|^p-A\|_{\mathcal{M}_\psi}\sim\limsup_{q\searrow p} \frac{1}{\psi(\mathrm{e}^{(q-p)^{-1}})}\|f\|_{B^{1/q}_{q,q},*}^q$$ Moreover, if $\psi$ satisfies that $A_\psi(\alpha)\neq 1$ for some $\alpha>1$ (see Equation ), there are holomorphic functions $f\in \cap_{q>p} B^{1/q}_{q,q}(S^1)$ such that $|H_{\bar{f}}|^p\in \mathcal{M}_\psi$ is non-measurable. Since we only consider $p$:th powers of operators, our results extend mutatis mutandis to $0<p<1$. We restrict our attention to $p\geq 1$ in order to avoid quasi-normed Banach spaces. In Section \[prelsection\] we provide an overview of the theory of Lorentz ideals from an extrapolation point of view. The general form of Theorem \[mainthmintro\] will be considered in Section \[hanekopdldl\]. We consider the special case $p=2,4,6$ of Theorem \[mainthmintro\] in Section \[p246sec\] where a result of Janson-Upmeier-Wallsten allows us to give exact formulas for the Dixmier trace $\mathrm{Tr}_{\omega,\psi}(|H_{\bar{f}}|^p)$. 
Finally, in Section \[Non-meas\] we construct holomorphic functions $f\in \cap_{q>p} B^{1/q}_{q,q}(S^1)$ such that $|H_{\bar{f}}|^p\in \mathcal{M}_\psi$ is non-measurable.\ [**Acknowledgements:**]{} We are grateful to Genkai Zhang for interesting discussions and particularly for introducing the work [@JUW] to us. We also thank Fedor Sukochev and Evgeniy Semenov for helpful comments on the distance formula to the separable part of a Lorentz ideal. We also acknowledge support from the Swedish Research Council Grant 2015-00137 and Marie Sklodowska Curie Actions, Cofund, Project INCA 600398. This work was finalized at the Erwin Schrödinger Institute in Vienna during the program on “Bivariant $K$-theory in Geometry and Physics”. Lorentz spaces and extrapolation {#prelsection} ================================ We will in this section provide an overview of Lorentz ideals and Hankel operators. Most results in this section can be found in the literature, and the remainder are variations on well-known results. Operator ideals {#opidelalsos} --------------- The operators we consider in this paper will in general belong to some ideal of operators on a Hilbert space. The general theory of operator ideals can be developed starting from a semi-finite von Neumann algebra. While this introduces some additional technicalities, it allows us to treat ideals of operators on the same footing as $L^p$-spaces on a measure space. We will not go through this theory beyond its salient points. The reader is referred to [@LSZ] for a more thorough presentation. Let $\mathcal{N}$ denote a semi-finite von Neumann algebra and $\tau$ a normal, faithful, semi-finite tracial weight on $\mathcal{N}$.
The two main examples to keep in mind are $\mathcal{N}=\mathbb{B}(\mathcal{H})$ – the bounded operators on a separable Hilbert space – with $\tau$ being the operator trace and $\mathcal{N}=L^\infty(X,\mu)$ with $\tau(a):=\int_Xa\mathrm{d}\mu$ for a $\sigma$-finite measure space $(X,\mu)$. By definition, a von Neumann algebra is a weak operator closed $*$-subalgebra of $\mathbb{B}(\mathcal{H})$ for a Hilbert space $\mathcal{H}$. We will tacitly assume that $\mathcal{N}$ has a separable pre-dual, which is equivalent to $\mathcal{H}$ being separable. For any closed densely defined positive operator $T$ affiliated with $\mathcal{N}$, we define its singular value function $$\begin{aligned} \mu_T(t):&=\inf\{\|PT\|_{\mathcal{N}}: P\in \mathcal{N} \,\mbox{a projection with} \;\tau(1-P)\leq t\}=\\ &=\inf\{s\geq 0: \tau(\chi_{[s,\infty)}(T))\leq t\}\end{aligned}$$ There is a rich theory of so called symmetrically normed operator ideals, see more in [@LSZ Chapter 3], which carries over to the theory of ideals in $L^\infty(0,\infty)$ by means of the singular value function. We are mainly interested in the following two classes.\ [**$L^p$-spaces.**]{} The noncommutative $L^p$-space $\mathcal{L}^p$ is defined as the set of operators affiliated with $\mathcal{N}$ such that $\mu_T\in L^p(0,\infty)$. The space $\mathcal{L}^p$ is a symmetrically normed operator ideal, in particular a Banach space, in the norm $$\|T\|_{\mathcal{L}^p}:=\|\mu_T\|_{L^p(0,\infty)}.$$ In the case that $\mathcal{N}=\mathbb{B}(\mathcal{H})$, we write $\mathcal{L}^p(\mathcal{H})$ for the associated noncommutative $L^p$-space. The space $\mathcal{L}^p(\mathcal{H})$ coincides with the $p$:th Schatten ideal with the same norm. 
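For $\mathcal{N}=\mathbb{B}(\mathcal{H})$ and a finite-rank operator, $\mu_T$ is the nonincreasing step function running through the singular values of $T$, and the $\mathcal{L}^p$-norm above is the familiar Schatten norm $(\mathrm{Tr}|T|^p)^{1/p}$. A minimal numerical sketch of this identity (numpy; the $3\times 3$ matrix is an arbitrary illustration, not taken from the text):

```python
import numpy as np

# For N = B(H) and a finite-rank T, mu_T is the step function through the
# singular values of T, and ||T||_{L^p} = (Tr |T|^p)^{1/p}.
T = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])   # arbitrary illustrative matrix

mu = np.linalg.svd(T, compute_uv=False)   # singular values, nonincreasing

p = 3.0
schatten = np.sum(mu ** p) ** (1.0 / p)   # Schatten p-norm built from mu_T

# Cross-check: Tr |T|^p computed from the eigenvalues of T* T,
# since the singular values of T are the square roots of those eigenvalues.
eigs = np.linalg.eigvalsh(T.conj().T @ T)
trace_form = np.sum(np.maximum(eigs, 0.0) ** (p / 2.0)) ** (1.0 / p)

assert abs(schatten - trace_form) < 1e-10
```

The two computations agree to machine precision, reflecting that $\|T\|_{\mathcal{L}^p}$ depends on $T$ only through its singular value function.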
In the case that $\mathcal{N}=L^\infty(X,\mu)$ with $\tau(a):=\int_Xa\mathrm{d}\mu$ for a $\sigma$-finite measure space $(X,\mu)$, it holds that $\mathcal{L}^p=L^p(X,\mu)$ with the same norm.\ [**Lorentz ideals.**]{} Let $\psi:[0,\infty)\to [0,\infty)$ be an increasing concave function with $\psi(0)=0$, $\lim_{t\to\infty} \psi(t)=\infty$. For later purposes of Dixmier trace computations, we often assume a condition which is slightly stronger than that in the original Dixmier paper. This condition is that the limit $$\label{apsicond} A_{\psi}(\alpha):=\lim_{t\to \infty} \frac{\psi(t^\alpha)}{\psi(t)}\quad\mbox{exists for all $\alpha>1$}.$$ Since $\psi$ is increasing, $A_\psi(\alpha)\geq 1$ for all $\alpha$. Condition guarantees that $\psi$ has regular variation of index $0$. Recall that a function $\psi$ has regular variation of index $\rho\in {{\mathbb R}}$ if $$\lim_{t\to\infty}\frac{\psi(\lambda t)}{\psi(t)}=\lambda^\rho, \quad\forall \lambda>0.$$ By [@RegVar Theorem 1.8.2] we can without restrictions assume that $\psi$ is smooth. For the purpose of extrapolation results, the following condition on $\psi$ often comes into play: $$\label{Phi_as1} \|\psi'\|_p \le C \psi(e^\frac1{p-1}), \ \forall p>1.$$ We define the Lorentz ideal $\mathcal{M}_\psi$ to consist of operators affiliated with $\mathcal{N}$ such that $$\|T\|_{\mathcal{M}_\psi}:=\sup_{t>0}\frac{1}{\psi(t)}\int_0^t\mu_T(s)\mathrm{d}s<\infty.$$ The norm $\|\cdot\|_{\mathcal{M}_\psi}$ makes $\mathcal{M}_\psi$ into a symmetrically normed operator ideal. If the function $\psi$ satisfies condition , the ideal $\mathcal{M}_\psi$ carries a plethora of singular traces, with Dixmier traces being those of most relevance to this paper. For $\alpha\geq 1$ we define $P_\alpha:L^\infty(0,\infty)\to L^\infty(0,\infty)$ by $P_\alpha f(t):=f(t^\alpha)$. If $\omega\in L^\infty(0,\infty)^*$ is a state satisfying that $\omega(f)=0$ if $\lim_{t\to \infty}f(t)=0$ we say that $\omega$ is an extended limit at $\infty$. 
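Returning to the Lorentz ideal just defined, the norm $\|T\|_{\mathcal{M}_\psi}$ can be probed numerically. As an illustrative assumption (not part of the text's development), take $\mathcal{N}=\mathbb{B}(\mathcal{H})$, the diagonal operator $T=\mathrm{diag}(1,1/2,1/3,\dots)$ and $\psi(t)=\log(1+t)$:

```python
import numpy as np

# mu_T(t) = 1/n on [n-1, n) for T = diag(1, 1/2, 1/3, ...), psi(t) = log(1+t).
# At integer points t = N the quotient (1/psi(t)) * int_0^t mu_T(s) ds equals
# H_N / log(1+N), with H_N the N-th harmonic number.
a = 1.0 / np.arange(1, 100002)     # a_n = 1/n
partial = np.cumsum(a)             # harmonic numbers H_1, H_2, ...
N = np.arange(1, 100002)
ratio = partial / np.log(1.0 + N)

norm = ratio.max()                 # the supremum sits at t = 1 and is 1/log 2
assert abs(norm - 1.0 / np.log(2.0)) < 1e-9

# The quotient tends to 1 from above (like 1 + gamma/log t), so T lies in
# M_psi, while the nonzero limsup keeps it out of the separable part.
assert 1.0 < ratio[-1] < 1.1
```

This is the classical example $\mathcal{M}_{1,\infty}$ discussed below, for which the Dixmier traces of $T$ all equal $1$.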
By an abuse of notation, we write $\lim_{t\to\omega} f(t):=\omega(f)$ for an extended limit $\omega$ and $f\in L^\infty(0,\infty)$. If $\omega=\omega\circ P_\alpha$ for all $\alpha\geq 1$, we say that $\omega$ is an exponentiation invariant extended limit. Associated with an exponentiation invariant extended limit $\omega$ there is a Dixmier trace $\mathrm{Tr}_{\omega,\psi}:\mathcal{M}_\psi\to {{\mathbb C}}$ defined by $$\mathrm{Tr}_{\omega,\psi}(T):=\lim_{t\to \omega} \frac{1}{\psi(t)}\int_0^t \mu_T(s)\mathrm{d}s,$$ for positive $T\in \mathcal{M}_\psi$ and extending to $\mathcal{M}_\psi$ by linearity (see [@GS Proposition 1.12] for the proof). The $p$:th convexification $\mathcal{M}_\psi^{(p)}$ is defined as the set of operators $T$ for which $|T|^p\in \mathcal{M}_\psi$; it is normed by $\|T\|_{\mathcal{M}_\psi^{(p)}}:=\||T|^p\|_{\mathcal{M}_\psi}^{1/p}$. The separable part $\mathcal{M}_{\psi,0}^{(p)}$ is defined as the closure in $\mathcal{M}_\psi^{(p)}$ of the finite trace operators in $\mathcal{N}$. The most studied example of Lorentz ideals comes from the function $\psi(t):=\log(1+t)$. In this case, one often writes $\mathcal{M}_{1,\infty}:=\mathcal{M}_\psi$ and $\mathcal{M}_{p,\infty}:=\mathcal{M}_\psi^{(p)}$. The reader should note that in [@EZ], the Lorentz ideal $\mathcal{M}_{1,\infty}$ associated with $\mathcal{N}=\mathbb{B}(\mathcal{H})$ is denoted by $\mathcal{S}^{Dixm}$. Technical results on extrapolation and Dixmier traces ----------------------------------------------------- The following result takes its starting point in work of Gayral-Sukochev [@GS]. The first statement is found in [@GS Theorem 3.3] and the second statement in [@GS Proposition 2.17]. The third statement will be proven below, and is inspired by work of Engliš-Zhang [@EZ]. \[allthethingsGSsaid\] Let $\psi: [0,\infty)\to [0,\infty)$ be an increasing concave function satisfying and , $\psi(0)=0$, $\lim_{t\to\infty} \psi(t)=\infty$. We set $k_\psi := \log(A_{\psi}(e))$. 1.
For any exponentiation invariant extended limit $\omega\in (L^\infty)^*$ and $T\in \mathcal{M}_\psi^{(p)}$, the formula $$\mathrm{Tr}_{\omega,\psi}(|T|^p)=\frac1{\Gamma(1+k_\psi)}\cdot\lim_{h\to \tilde{\omega}} \frac{1}{\psi(\mathrm{e}^{1/h})}\|T\|_{p+h}^{p+h}$$ holds where $\tilde{\omega}\in (L^\infty)^*$ is the extended limit at $0$ given by $$\label{omegatildedef} \lim_{h\to \tilde{\omega}} x(t) := \lim_{t\to \omega} x(\frac1{\log(t)}).$$ 2. For any $T\in \mathcal{N}$, $$\|T\|_{ \mathcal{M}_\psi^{(p)}}\sim \sup_{h>0} \frac{1}{\psi(\mathrm{e}^{1/h})}\|T\|_{p+h}^{p+h}.$$ In particular, $T\in \mathcal{M}_\psi^{(p)}$ if and only if $\|T\|_{p+h}^{p+h}=O(\psi(\mathrm{e}^{1/h}))$. 3. Assume that $\mathcal{N}$ is atomic. For any $T\in \mathcal{M}_\psi^{(p)}$, we have that $$\begin{aligned} \limsup_{h\searrow 0} \frac{1}{\psi(\mathrm{e}^{1/h})}\|T\|_{p+h}^{p+h}&\leq \mathrm{d}_{\mathcal{M}_\psi}(|T|^p,\mathcal{M}_{\psi,0})\leq \mathrm{e} \limsup_{h\searrow 0} \frac{1}{\psi(\mathrm{e}^{1/h})}\|T\|_{p+h}^{p+h}.\end{aligned}$$ Before proving the third statement of this theorem, we need two lemmas. The following result is an extension of [@EZ Proposition 7]. \[prop7\] Let $\psi: [0,\infty)\to [0,\infty)$ be an increasing concave function satisfying the conditions and and moreover that $\psi(0)=0$, $\lim_{t\to\infty} \psi(t)=\infty$. For a function $f\in \cap_{0<h<\delta}L^{p+h}(0,\infty)$ for some $\delta>0$ we define the quantities $$\|f\|_{p,\limsup} := \limsup_{h\searrow0} \frac{\|f\|_{p+h}^{1+h}}{\sqrt[p]{\psi(\mathrm{e}^{\frac{1}{h}})}}\quad \mbox{and} \quad \|f\|_{p, \lim \psi} := \limsup_{t\to \infty} \sqrt[p]{\frac{1}{\psi(t)}\int_0^t |f(s)|^p \mathrm{d}s}$$ It then holds that $$\|f\|_{p,\limsup}\leq \|f\|_{p, \lim \psi}\leq \mathrm{e} \|f\|_{p,\limsup}.$$ If $\psi$ satisfies the conditions and , then so does $\psi^{1/p}$ for any $p\geq 1$.
Indeed condition is readily verified for $\psi^{1/p}$ and condition for $\psi^{1/p}$ follows from the fact that $\psi$ has regular variation of index $0$ and [@GS Proposition 2.17 and 2.23]. We can therefore replace $f$ by $H:=|f|^p\geq 0$ and $\psi$ by $\psi^p$, and thus assume that $p=1$. For any $C>\|H\|_{1,\limsup}$ there is $q_0>0$ such that $$\frac{\|H\|_{1+h}}{\psi(\mathrm{e}^\frac{1}{h})} <C, \quad\mbox{for}\; 0<h<q_0.$$ Using the Hölder inequality, for any $0<q<q_0$ we obtain $$\begin{aligned} \int_0^t H(s) \mathrm{d} s &\le \left( \int_0^t H(s)^{1+q} \mathrm{d}s \right)^\frac1{1+q} \left( \int_0^t \mathrm{d}s \right)^\frac{q}{1+q}\\ &\le C \cdot \psi(\mathrm{e}^\frac1{q})\cdot t^\frac{q}{1+q} \le C \cdot\psi(\mathrm{e}^\frac1{q})\cdot t^q.\end{aligned}$$ If $t>\mathrm{e}^{1/q_0}$ (that is, $q_0>1/\log t$), one can take $q=1/\log t$. Thus, $$\int_0^t H(s) \mathrm{d}s \le C \mathrm{e} \cdot\psi(t), \quad\mbox{for}\; t>\mathrm{e}^{1/q_0}.$$ Therefore, $$\|H\|_{1,\lim \psi} \le \mathrm{e} \|H\|_{1,\limsup}.$$ Conversely for $C> \|H\|_{1,\lim \psi}$ there exists $t_0>0$ such that $$\label{eq1} \frac1{\psi(t)} \int_0^t H(s) \mathrm{d}s \le C, \quad\forall t\ge t_0.$$ Equivalently, $$\int_0^t H(s) \mathrm{d}s \le \int_0^t C \psi'(s) \mathrm{d}s, \quad \forall t\ge t_0.$$ For the function $$G(t):=\begin{cases} H(t), &t\ge t_0\\ \min\{H(t), C \psi'(t)\}, &t<t_0. \end{cases}$$ we clearly have $$\int_0^t G(s) \mathrm{d}s \le \int_0^t C \psi'(s) \mathrm{d}s,\quad \forall t> 0.$$ This means that the function $G$ is submajorised by the function $C \psi'$ (in the sense of Hardy-Littlewood).
Thus, for every $h>0$ one has $$\int_0^\infty G(s)^{1+h} \mathrm{d}s \le \int_0^\infty (C \psi'(s))^{1+h} \mathrm{d}s.$$ Since the function $\psi$ satisfies , it follows that $$\int_0^\infty G(s)^{1+h} \mathrm{d}s \le C^{1+h} (\psi(\mathrm{e}^{\frac{1}{h}}))^{1+h},$$ or, equivalently, $$\label{G} \limsup_{h\searrow0} \frac{\|G\|_{1+h}}{\psi(\mathrm{e}^{\frac{1}{h}})} \le C.$$ First, $$\frac{1}{\psi(\mathrm{e}^{\frac{1}{h}})} \left(\int_0^{t_0} G(s)^{1+h} \mathrm{d}s\right)^{\frac{1}{1+h}} \le \frac{1}{\psi(\mathrm{e}^{\frac{1}{h}})} \left(\int_0^{t_0} (C \psi'(s))^{1+h} \mathrm{d}s\right)^{\frac{1}{1+h}} \mathop{\longrightarrow}\limits_{h\searrow0} 0,$$ since $\psi(\infty)=\infty$ and $\psi'\in L^{1+h}(0,\infty)$ for every $h>0$. Second, by the Lebesgue Monotone Convergence Theorem and  we obtain $$\left(\int_0^{t_0} H(s)^{1+h} \mathrm{d}s\right)^{\frac{1}{1+h}} \mathop{\longrightarrow}\limits_{h\searrow0}\int_0^{t_0} H(s) \mathrm{d}s \le C \psi(t_0).$$ Therefore, $$\limsup_{h\searrow0} \frac{1}{\psi(\mathrm{e}^{\frac{1}{h}})} \left(\int_0^{t_0} G(s)^{1+h} \mathrm{d}s\right)^{\frac{1}{1+h}} = \limsup_{h\searrow0}\frac{1}{\psi(\mathrm{e}^{\frac{1}{h}})} \left(\int_0^{t_0} H(s)^{1+h} \mathrm{d}s\right)^{\frac{1}{1+h}}=0.$$ Since $H(t)=G(t)$ for $t\ge t_0$, it follows from  that $$\limsup_{h\searrow0} \frac{\|H\|_{1+h}}{\psi(\mathrm{e}^{\frac{1}{h}})}\le C.$$ This proves that $$\|H\|_{1,\limsup} \le \|H\|_{1,\lim \psi}.$$ The following result is well-known at least in the commutative setting (see e.g. [@DPSSS Proposition 2.1]). For the convenience of the reader we provide a short proof. \[prop7.5\] Let $\psi: [0,\infty)\to [0,\infty)$ be an increasing concave function satisfying the conditions and and moreover that $\psi(0)=0$, $\lim_{t\to\infty} \psi(t)=\infty$. Assume that $\mathcal{N}$ is atomic.
For any $T\in \mathcal{M}_\psi^{(p)}$, we have that $$\mathrm{d}_{\mathcal{M}_\psi}(|T|^p,\mathcal{M}_{\psi,0})= \limsup_{t\to \infty} \frac1{\psi(t)}\int_0^t \mu_T(s)^p \mathrm{d}s.$$ Let $\mathcal{M}'_{\psi,0}$ denote the norm closure of the space of elements $T\in \mathcal{M}_\psi$ with compactly supported singular value function. The assumption that $\mathcal{N}$ is atomic ensures that $\mathcal{M}_{\psi,0}=\mathcal{M}_{\psi,0}'$. Our proof will in fact consist of showing that for a general $\mathcal{N}$, it holds that $$\label{disisfifit} \mathrm{d}_{\mathcal{M}_\psi}(|T|^p,\mathcal{M}_{\psi,0}')= \limsup_{t\to \infty} \frac1{\psi(t)}\int_0^t \mu_T(s)^p \mathrm{d}s, \quad\forall T\in \mathcal{M}_\psi^{(p)}$$ for any function $\psi$ additionally satisfying $\lim_{t\to0} \frac{t}{\psi(t)}=0$. Since the original statement is for atomic $\mathcal{N}$, we can always guarantee that this condition holds. It follows from [@Chi_Suk] that for every $T\in \mathcal{M}_\psi^{(p)}$ there exists a rearrangement-preserving (and thus, isometric) embedding $i_T$ of $\mathcal{M}_\psi^{(p)}(0,\infty)$ into $\mathcal{M}_\psi^{(p)}$ such that $i_T(\mu(T))=T$. Thus, following the argument in [@CRSS Page 267], it is sufficient to prove the formula for every $x=\mu(x) \in \mathcal{M}_\psi^{(p)}(0,\infty)$. For every $x=\mu(x) \in \mathcal{M}_\psi^{(p)}(0,\infty)$ and every $n\in\mathbb N$ the function $x^p\chi_{(0,n)} \in \mathcal{M}'_{\psi,0}(0,\infty)$. 
Hence, for every $n\in\mathbb N$ we have $$\mathrm{d}_{\mathcal{M}_\psi(0,\infty)}(x^p,\mathcal{M}'_{\psi,0}(0,\infty))=\mathrm{d}_{\mathcal{M}_\psi(0,\infty)}(x^p\chi_{[n,\infty)},\mathcal{M}'_{\psi,0}(0,\infty)).$$ Therefore, $$\begin{aligned} \mathrm{d}_{\mathcal{M}_\psi(0,\infty)}(x^p,\mathcal{M}'_{\psi,0}(0,\infty)) &\le\lim_{n\to\infty}\|x^p\chi_{[n,\infty)}\|_{\mathcal{M}_\psi(0,\infty)}\\ &=\lim_{n\to\infty}\sup_{t>0} \frac{1}{\psi(t)} \int_0^t \mu(x^p\chi_{[n,\infty)})(s) \ \mathrm{d}s\\ &=\lim_{n\to\infty}\sup_{t>0} \frac{1}{\psi(t)} \int_0^t (x(s+n))^p \ \mathrm{d}s\\ &=\lim_{n\to\infty}\sup_{t>0} \frac{1}{\psi(t)} \int_n^{t+n} (x(s))^p \ \mathrm{d}s.\end{aligned}$$ By the definition of supremum for every $n\in {{\mathbb N}}$ there exists $t_n>0$ such that $$\sup_{t>0} \frac{1}{\psi(t)} \int_n^{t+n} (x(s))^p \ \mathrm{d}s \le \frac{1}{\psi(t_n)} \int_n^{t_n+n} (x(s))^p \ \mathrm{d}s +\frac1n.$$ Denote for brevity $$a:=\limsup_{t\to \infty} \frac1{\psi(t)}\int_0^t (x(s))^p \mathrm{d}s.$$ 1\. If $\limsup_{n\to\infty} t_n=\infty$, then $$\mathrm{d}_{\mathcal{M}_\psi(0,\infty)}(x^p,\mathcal{M}'_{\psi,0}(0,\infty))\le \lim_{n\to\infty}\sup_{t>n} \frac{1}{\psi(t)} \int_n^{t+n} (x(s))^p \ \mathrm{d}s\le a,$$ since $x=\mu(x)$. 2\. If $0 < \liminf_{n\to\infty} t_n \le \limsup_{n\to\infty} t_n<\infty$, then $$\frac{1}{\psi(t_n)} \int_n^{t_n+n} (x(s))^p \ \mathrm{d}s \le \frac{t_n (x(n))^p}{\psi(t_n)}\mathop{\longrightarrow}\limits_{n\to\infty} 0,$$ since $x=\mu(x)$ and $x(n)\to 0$ as $n\to\infty$. Hence, $\mathrm{d}_{\mathcal{M}_\psi(0,\infty)}(x^p,\mathcal{M}'_{\psi,0}(0,\infty))=0\le a.$ 3\. If $0 = \liminf_{n\to\infty} t_n \le \limsup_{n\to\infty} t_n<\infty$, then $$\frac{1}{\psi(t_n)} \int_n^{t_n+n} (x(s))^p \ \mathrm{d}s \le \frac{t_n (x(n))^p}{\psi(t_n)}\mathop{\longrightarrow}\limits_{n\to\infty} 0,$$ since $x$ is bounded and $\frac{t}{\psi(t)}\to 0$ as $t\to0$. 
Hence, $\mathrm{d}_{\mathcal{M}_\psi(0,\infty)}(x^p,\mathcal{M}'_{\psi,0}(0,\infty))=0\le a.$ On the other hand, for every $x=\mu(x) \in \mathcal{M}_\psi^{(p)}(0,\infty)$ and $y\in \mathcal{M}'_{\psi,0}(0,\infty)$ by [@KPS Theorem II.3.1] we have $$\begin{aligned} \|x^p-y\|_{\mathcal{M}_\psi(0,\infty)}&\ge \|\mu(x^p)-\mu(y)\|_{\mathcal{M}_\psi(0,\infty)}\\ &\ge \limsup_{t\to \infty} \frac1{\psi(t)}\int_0^t (\mu(x^p)-\mu(y))(s) \mathrm{d}s\\ &= \limsup_{t\to \infty} \frac1{\psi(t)}\int_0^t \mu(x^p)(s) \mathrm{d}s,\end{aligned}$$ since $y\in \mathcal{M}'_{\psi,0}(0,\infty)$. Taking the infimum over $y$ proves the assertion. Set $f=\mu_T$. Assuming that $\mathcal{N}$ is atomic, Lemma \[prop7.5\] ensures that $\mathrm{d}_{\mathcal{M}_\psi}(|T|^p,\mathcal{M}_{\psi,0})= \|f\|_{p,\lim \psi}^p$. By definition, $$\limsup_{h\searrow 0} \frac{1}{\psi(\mathrm{e}^{1/h})}\|T\|_{p+h}^{p+h}=\|f\|_{p,\limsup}^p.$$ We conclude the inequality stated in the third statement of Theorem \[allthethingsGSsaid\] from Lemma \[prop7\]. The aspect of Theorem \[allthethingsGSsaid\] relevant to this paper lies in its implications for Hankel operators. To formalize this, we state an immediate corollary of Theorem \[allthethingsGSsaid\]. If $(X_h)_{h\in [0,1]}$ is a family of Banach spaces with $X_h\subseteq X_{h'}$ continuously for $h<h'$, we define the extrapolation space $X_\psi\subseteq \cap_{h\in (0,1]}X_h$ to be the set of all elements $x\in\cap_{h\in (0,1]}X_h$ for which $$\|x\|_{X_\psi}:=\sup_{h>0} \frac{1}{\psi(\mathrm{e}^{1/h})}\|x\|^{1+h}_{X_h}<\infty.$$ \[equivalencescor\] Let $\psi: [0,\infty)\to [0,\infty)$ be an increasing concave function satisfying the conditions and and moreover that $\psi(0)=0$, $\lim_{t\to\infty} \psi(t)=\infty$. Consider the following data: - A family of Banach spaces $(X_h)_{h\in [0,1]}$ with $X_h\subseteq X_{h'}$ continuously for $h<h'$.
- A mapping $T:X_1\to \mathcal{L}^{p+1}$ restricting to a continuous mapping $T_h:=T|_{X_h}:X_h\to \mathcal{L}^{p+h}$, for $h\in [0,1]$, such that there are measurable functions $$c_0,c_1:[0,1]\to [r,R],\quad \mbox{for some $0<r\leq R<\infty$},$$ with $$c_0(h)\|x\|_{X_h}\leq \|T_h(x)\|_{\mathcal{L}^{p+h}}\leq c_1(h)\|x\|_{X_h},\quad \forall h\in [0,1], \; x\in X_h.$$ Then $T$ defines a continuous mapping $T:X_\psi \to \mathcal{M}_\psi^{(p)}$ such that 1. For any exponentiation invariant extended limit $\omega\in (L_\infty)^*$ $$\lim_{h\to \tilde{\omega}} \frac{c_0(h)^{p}}{\psi(\mathrm{e}^{1/h})}\|x\|_{X_{h}}^{p+h}\leq \mathrm{Tr}_{\omega,\psi}(|T(x)|^p)\leq \lim_{h\to \tilde{\omega}} \frac{c_1(h)^p}{\psi(\mathrm{e}^{1/h})}\|x\|_{X_{h}}^{p+h},$$ where $\tilde{\omega}$ is defined as in Equation . In particular, if $\lim_{h\to 0}\frac{c_0(h)}{c_1(h)}=1$, then $$\mathrm{Tr}_{\omega,\psi}(|T(x)|^p)= \lim_{h\to \tilde{\omega}} \frac{c_0(h)^p}{\psi(\mathrm{e}^{1/h})}\|x\|_{X_{h}}^{p+h}.$$ 2. For any $x\in X_\psi$ we have that $$r\|x\|_{X_\psi}\leq \|T(x)\|_{\mathcal{M}_\psi^{(p)}}\leq R\|x\|_{X_\psi}.$$ 3. Assume that $\mathcal{N}$ is atomic. For any $x\in X_\psi$ we have that $$r\limsup_{h\searrow 0} \frac{1}{\psi(\mathrm{e}^{1/h})}\|x\|^{p+h}_{X_h}\leq \mathrm{d}_{\mathcal{M}_\psi^{(p)}}(|T(x)|^p,\mathcal{M}_{\psi,0})\leq \mathrm{e}R\limsup_{h\searrow 0} \frac{1}{\psi(\mathrm{e}^{1/h})}\|x\|^{p+h}_{X_h}.$$ In the setup of Corollary \[equivalencescor\], we note that the norms $\|x\|_{X_h}':=\|T(x)\|_{\mathcal{L}^{p+h}}$ on $X_h$ are equivalent to the norms $\|\cdot \|_{X_h}$. After this change of norms, we can take $c_0=c_1=1$ in which case Corollary \[equivalencescor\] is a trivial reformulation of Theorem \[allthethingsGSsaid\]. 
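To see the extrapolation formula of Theorem \[allthethingsGSsaid\] in action, consider the illustrative choices $\psi(t)=\log(1+t)$ (so $A_\psi(\alpha)=\alpha$, $k_\psi=1$ and $\Gamma(1+k_\psi)=1$), $p=1$, and the diagonal operator $T=\mathrm{diag}(1,1/2,1/3,\dots)$, whose Dixmier traces all equal $1$. Then $\|T\|_{1+h}^{1+h}=\zeta(1+h)$ and the quotient $\zeta(1+h)/\psi(\mathrm{e}^{1/h})$ tends to $1$ as $h\searrow 0$. A numerical sketch of this convergence (numpy; tail-corrected partial sums stand in for $\zeta$):

```python
import numpy as np

# Extrapolation check for psi(t) = log(1+t): k_psi = 1, Gamma(1+k_psi) = 1,
# and T = diag(1, 1/2, 1/3, ...) has ||T||_{1+h}^{1+h} = zeta(1+h).
def zeta_1_plus_h(h, N=10**6):
    """Approximate zeta(1+h) by a partial sum plus the integral tail N^{-h}/h."""
    n = np.arange(1, N + 1, dtype=float)
    return np.sum(n ** (-(1.0 + h))) + N ** (-h) / h

def extrapolated(h):
    # ||T||_{1+h}^{1+h} / psi(e^{1/h});  psi(e^{1/h}) = log(1 + e^{1/h}) is
    # evaluated overflow-free via logaddexp.
    return zeta_1_plus_h(h) / np.logaddexp(0.0, 1.0 / h)

vals = [extrapolated(h) for h in (0.1, 0.01, 0.001)]
# The quotient approaches the common Dixmier trace value Tr_omega(T) = 1.
assert abs(vals[-1] - 1.0) < 1e-2
assert abs(vals[-1] - 1.0) < abs(vals[0] - 1.0)
```

This reflects the classical expansion $\zeta(1+h)=1/h+\gamma+O(h)$ against $\psi(\mathrm{e}^{1/h})\approx 1/h$.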
*The relevance of Corollary \[equivalencescor\] lies in the fact that it is often possible to estimate the norms $\|x\|_{X_h}$ in situations where it is not possible to estimate $\|T(x)\|_{\mathcal{L}^{p+h}}$ directly.* We will utilize this fact below for Hankel operators. In the first part of Corollary \[equivalencescor\], we can obtain equivalences that are independent of $\omega$. Indeed, the upper and lower bounds on $c_0$ and $c_1$ imply that, under the assumptions of Corollary \[equivalencescor\], $$r\lim_{h\to \tilde{\omega}} \frac{1}{\psi(\mathrm{e}^{1/h})}\|x\|_{X_{h}}^{p+h}\leq \mathrm{Tr}_{\omega,\psi}(|T(x)|^p)\leq R\lim_{h\to \tilde{\omega}} \frac{1}{\psi(\mathrm{e}^{1/h})}\|x\|_{X_{h}}^{p+h}.$$ Hankel operators and Peller’s characterization {#hanekopdldl} ============================================== We now turn our focus to Hankel operators on the Hardy space. The reader can recall that the Hardy space $H^2(S^1)\subseteq L^2(S^1)$ is defined as the subspace of functions with a holomorphic extension to the interior of the unit disc. We here consider $S^1$ to be the boundary of the unit disc in the complex plane. The orthogonal projection $P:L^2(S^1)\to H^2(S^1)$ is called the Szegö projection. For $f\in L^\infty(S^1)$, the associated Hankel operator is defined as $$H_f:=(1-P)fP.$$ Clearly, if $f$ is the restriction of a holomorphic function in the unit disc, $H_f=0$. In fact, $H_f$ is a well defined bounded operator for $f\in BMO(S^1)$. The space of symbols $f$ for which $H_f\in \mathcal{L}^p(L^2(S^1))$ has been characterized in terms of Besov spaces by Peller [@peller]. We let $B^{1/p}_{p,p}(S^1)$ denote the Besov space on $S^1$; we will review this space and various equivalent norms on it below. For now we fix a particular choice of norms on the scale of Besov spaces, defined in terms of Littlewood-Paley theory.
Let $W_0:=1$ and for $n\in {{\mathbb N}}_+$ we define $$W_n(z):=\sum_{k=2^{n-1}}^{2^{n+1}} \min\left(\frac{k-2^{n-1}}{2^n-2^{n-1}},\frac{2^{n+1}-k}{2^{n+1}-2^{n}}\right) (z^k+z^{-k}).$$ The polynomials $W_n$ are characterized by the property that their Fourier coefficients $(\hat{W}_n(k))_{k\in {{\mathbb Z}}}$ interpolate linearly between $\hat{W}_n(2^{n-1})=\hat{W}_n(2^{n+1})=0$ and $\hat{W}_n(2^n)=1$, with $\hat{W}_n(k)=\hat{W}_n(-k)$. In particular, $\sum_{n=0}^\infty \hat{W}_n(k)=1$ for any $k$. For a function $f$ on $S^1$, we define $$\Phi_n(f):=W_n*f.$$ A well-known result from Littlewood-Paley theory guarantees that for any function $f$ on $S^1$, $$\|f\|_{L^p(S^1)}\sim \|(\Phi_n f)_{n\in {{\mathbb N}}}\|_{L^p(S^1\times {{\mathbb N}})}.$$ We define $$\|f\|_{B^{1/p}_{p,p}(S^1)}:=\|(2^{n/p}\Phi_n f)_{n\in {{\mathbb N}}}\|_{L^p(S^1\times {{\mathbb N}})}.$$ \[pellerhem\] Let $f$ be a function on $S^1$ extending holomorphically to the unit disc with $f(0)=0$. Then $H_{\overline{f}}\in \mathcal{L}^p(L^2(S^1))$ if and only if $f\in B^{1/p}_{p,p}(S^1)$. Moreover, for any $p_0>1$ there is a constant $C>0$ such that $$C^{-1}\|f\|_{B^{1/p}_{p,p}(S^1)}\leq \|H_{\overline{f}}\|_{\mathcal{L}^p(L^2(S^1))}\leq C\|f\|_{B^{1/p}_{p,p}(S^1)}, \quad \forall p\in [1,p_0].$$ The reader can note that the statement in [@peller Chapter 6.2, Theorem 2.1] does not give a uniform constant, but the existence of a uniform constant follows from the fact that the proof is by interpolation. We shall use Peller’s theorem to compute and estimate Dixmier traces. To do so, it will be important to keep track of the norms used on the Besov spaces. Let us state a general result regarding the estimates of Dixmier traces of Hankel operators. This statement is a direct consequence of Corollary \[equivalencescor\] and Theorem \[pellerhem\]. \[equivbesovgeneerl\] Let $p\geq 1$ and $\psi$ be a function as in Corollary \[equivalencescor\].
Assume that $(\|\cdot\|_{B^{1/q}_{q,q},*})_{q\geq p}$ is a family of norms on the Besov spaces $B^{1/q}_{q,q}(S^1)$ for $q\geq p$ such that there is a $p_0>p$ and a constant $C_0>0$ such that $$C_0^{-1} \|f\|_{B^{1/q}_{q,q},*}\leq \|f\|_{B^{1/q}_{q,q}}\leq C_0 \|f\|_{B^{1/q}_{q,q},*}, \quad \forall q\in [p,p_0].$$ Then for any holomorphic function $f$, $$H_{\bar{f}}\in \mathcal{M}_\psi^{(p)}\quad\Leftrightarrow \sup_{q>p}\frac{\|f\|_{B^{1/q}_{q,q},*}^q}{\psi(\mathrm{e}^{(q-p)^{-1}})} <\infty.$$ Moreover, there is a constant $C>0$ (independent of $f$) such that for any exponentiation invariant extended limit $\omega$, $$C^{-1}\lim_{q-p\to \tilde{\omega}}\frac{1}{\psi(\mathrm{e}^{(q-p)^{-1}})} \|f\|_{B^{1/q}_{q,q},*}^q\leq \mathrm{Tr}_{\omega,\psi}(|H_{\bar{f}}|^p)\leq C\lim_{q-p\to \tilde{\omega}}\frac{1}{\psi(\mathrm{e}^{(q-p)^{-1}})} \|f\|_{B^{1/q}_{q,q},*}^q.$$ Finally, for any holomorphic function $f\in \cap_{q>p} B^{1/q}_{q,q}(S^1)$ $$\mathrm{d}_{\mathcal{M}_\psi}(|H_{\bar{f}}|^p,\mathcal{M}_{\psi,0})\sim\limsup_{q\searrow p} \frac{1}{\psi(\mathrm{e}^{(q-p)^{-1}})}\|f\|_{B^{1/q}_{q,q},*}^{q}$$ Corollary \[equivbesovgeneerl\] can be applied to a variety of different norms on the scale of Besov spaces. Let $f$ be a function on $S^1$ extending holomorphically to the unit disc and $p\in [1,\infty)$. By an abuse of notation, we identify $f$ with its holomorphic extension $f:\mathbb{D}\to {{\mathbb C}}$, where $\mathbb{D}$ denotes the unit disc. Let $\mu$ denote the measure on $\mathbb{D}$ given by $\mathrm{d}\mu(z)=(1-|z|^2)^{-2}\mathrm{d}m(z)$ where $m$ denotes the Lebesgue measure. We define $$\|f\|_{B^{1/p}_{p,p},\mathbb{D}}:=\|(1-|z|^2)^2f''\|_{L^p(\mathbb{D},\mu)}.$$ The next result can also be found in [@peller Appendix 2.6]. 
For any $p_0>1$ there is a constant $C>0$ such that for all holomorphic $f$ with $f(0)=f'(0)=0$ $$C^{-1}\|f\|_{B^{1/p}_{p,p},\mathbb{D}}\leq \|f\|_{B^{1/p}_{p,p}(S^1)}\leq C\|f\|_{B^{1/p}_{p,p},\mathbb{D}}, \quad \forall p\in [1,p_0].$$ We remark that the condition $f(0)=f'(0)=0$ plays no role once we pass to the extrapolation space, because we can write any $f=f_0+g_0$ where $f_0(0)=f'_0(0)=0$ and $g_0=f'(0)z+f(0)$ satisfies that $H_{\bar{g}_0}$ is finite rank. For a holomorphic $f\in \cap_{q>p} B^{1/q}_{q,q}(S^1)$ we define $F_f\in \cap_{q>p}L^q(0,\infty)$ as the decreasing rearrangement of the function $(1-|z|^2)^2f''$ on $\mathbb{D}$ with respect to the measure $\mu$. We also define $\Phi_f\in \cap_{q>p}L^q(0,\infty)$ as the decreasing rearrangement of the function $S^1\times {{\mathbb N}}\ni (\theta,n)\mapsto W_n*f(\mathrm{e}^{i\theta})$ with respect to the product measure $\nu$ on $S^1\times {{\mathbb N}}$ of the Lebesgue measure on $S^1$ with the weighted counting measure $2^n$ on ${{\mathbb N}}$ (matching the weight $2^{n/q}$ in the Besov norm). It follows from the well-known fact that the $L^q$-norm of a function coincides with the $L^q(0,\infty)$-norm of its decreasing rearrangement that for $q>p$ $$\begin{aligned} \|f\|_{B^{1/q}_{q,q}(S^1)}&=\|W_n*f\|_{L^q(S^1\times {{\mathbb N}},\nu)}=\|\Phi_f\|_{L^q(0,\infty)}\quad \mbox{and}\\ \|f\|_{B^{1/q}_{q,q},\mathbb{D}}&=\|(1-|z|^2)^2f''\|_{L^q(\mathbb{D},\mu)}=\|F_f\|_{L^q(0,\infty)}.\end{aligned}$$ \[Thm1\] Let $p\geq 1$ and $\psi$ be a function as in Corollary \[equivalencescor\]. Assume that $f$ is holomorphic. Then the following assertions are equivalent: 1. $\limsup_{h\searrow 0} \left(\frac{1}{\psi(\mathrm{e}^{\frac{1}{h}})} \int_{\mathbb{D}} |f''(z)|^{p+h} (1-|z|^2)^{2p+2h-2} \mathrm{d}m(z)\right)^{\frac{p}{p+h}} <\infty;$ 2. $\limsup_{t\to \infty} \frac1{\psi(t)} \int_0^t F_f(s)^p \mathrm{d}s < \infty;$ 3. $\limsup_{h\searrow 0} \left(\frac{1}{\psi(\mathrm{e}^{\frac{1}{h}})} \int_{S^1\times {{\mathbb N}}} |(f*W_n)(\mathrm{e}^{i\theta})|^{p+h} \mathrm{d}\nu(\theta, n)\right)^{\frac{p}{p+h}} <\infty;$ 4. $\limsup_{t\to \infty} \frac1{\psi(t)} \int_0^t \Phi_f(s)^p \mathrm{d}s < \infty;$ 5.
$H_{\overline{f}} \in \mathcal{M}_\psi^{(p)}.$ The quantities in (1)-(4) are all equivalent to $\mathrm{d}_{\mathcal{M}_\psi}(|H_{\overline{f}}|^p,\mathcal{M}_{\psi,0})$. The proof of this result is a straightforward repetition of that of [@EZ Theorem 1] with the replacement of $\log t$, $p-1$ and the use of [@EZ Proposition 7] by $\psi(t)$, $\psi(e^\frac1{h})$ and the use of Proposition \[prop7\], respectively. Again, as in Proposition \[prop7\] we can reduce the proof to $p=1$. \[prop8\] Let $\psi$ satisfy . Let $H=H^*\in L^q(0,\infty)$ for all $1<q<1+\delta$ for some $\delta>0$. Let $\omega$ be an exponentiation invariant extended limit on $L^\infty(0,\infty)$ and $\hat{\omega}:=\omega \circ {\rm exp}$. \(a) For every $\alpha>1$ and sufficiently large $t>0$ one has $\mu_H(1/t) \le t^\alpha;$ \(b) One has $$\lim_{t\to \omega} \frac1{\psi(t)} \int_0^t H(s) ds = \lim_{t\to \omega} \frac1{\psi(t)} \int_0^{\mu_H(1/t)} H(s) ds;$$ \(c) One has $$\lim_{r\to \hat{\omega}} \frac{\|H\|_{1+1/r}}{\psi(\mathrm{e}^{r})}= \lim_{t\to \omega} \frac1{\psi(t)} \int_0^t H(s) ds.$$ \(a) Denote for brevity $a:=\mu_H(1/t).$ For sufficiently large $t>0$ we have $$c_H := \sup_{t>2} \frac1{\psi(t)} \int_0^t H(s) ds \ge \frac1{\psi(a)} \int_0^a H(s) ds \ge \frac{a H(a)}{\psi(a)}= \frac{a}{t \psi(a)},$$ since $H$ is nonincreasing and $H(\mu_H(1/t))=1/t$. Since the function $\psi$ is slowly varying, it follows that for every $0<\varepsilon <1$ there exists $C>0$ such that $\psi(t)\le C t^\varepsilon$ for all $t>0$. 
Hence, $$c_H \ge \frac{a}{t C a^\varepsilon}=\frac{a^{1-\varepsilon}}{C t}.$$ Therefore, $$\mu_H(1/t) \le (C c_H t)^\frac{1}{1-\varepsilon}.$$ Since this inequality holds for every $0<\varepsilon <1$, it follows that for every $\alpha>1$ and sufficiently large $t>0$ one has $\mu_H(1/t) \le t^\alpha.$ \(b) For sufficiently large $t>0$ one has $$\int_0^t H(s) ds \le \int_0^{\mu_H(1/t)} H(s) ds +1 \le \int_0^{t^\alpha} H(s) ds+1,$$ where the first inequality was proved in [@EZ Proposition 8] and the second one was proved above. Dividing by $\psi(t)$ and applying extended limits yields $$\label{eq2} \lim_{t\to \omega} \frac1{\psi(t)} \int_0^t H(s) ds\le\lim_{t\to \omega} \frac1{\psi(t)} \int_0^{\mu_H(1/t)} H(s) ds \le\lim_{t\to \omega} \frac1{\psi(t)} \int_0^{t^\alpha} H(s) ds.$$ Since $\psi$ satisfies , it follows from the property of extended limits that $$\begin{aligned} \lim_{t\to \omega} \frac1{\psi(t)} \int_0^{t^\alpha} H(s) ds&= \lim_{t\to \omega} \frac{\psi(t^\alpha)}{\psi(t)} \frac1{\psi(t^\alpha)} \int_0^{t^\alpha} H(s) ds\\ &=A_\psi(\alpha) \lim_{t\to \omega} \frac1{\psi(t^\alpha)} \int_0^{t^\alpha} H(s) ds.\end{aligned}$$ Since $\omega$ is exponentiation invariant, it follows that $$\label{eq3} \lim_{t\to \omega} \frac1{\psi(t)} \int_0^{t^\alpha} H(s) ds\le A_\psi(\alpha) \lim_{t\to \omega} \frac1{\psi(t)} \int_0^{t} H(s) ds.$$ Combining  and , we obtain that $$\lim_{t\to \omega} \frac1{\psi(t)} \int_0^t H(s) ds\le\lim_{t\to \omega} \frac1{\psi(t)} \int_0^{\mu_H(1/t)} H(s) ds \le A_\psi(\alpha) \lim_{t\to \omega} \frac1{\psi(t)} \int_0^{t} H(s) ds$$ holds for every $\alpha>1$. It follows from [@GS Lemma 1.3] that $A_\psi(\alpha)\to 1$ as $\alpha \searrow 1$. This proves part (b). \(c) The proof of part (c) is a straightforward repetition of [@EZ Proposition 8 (c)] with the only difference that instead of the classical Karamata theorem one has to use its generalisation proved in [@GS Proposition 3.2].
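To make part (c) of Proposition \[prop8\] concrete, the following small numerical sketch (our illustration, not part of the proof) uses the model choices $\psi(t)=\log t$ and $H(s)=(1+s)^{-1}$, with ordinary limits standing in for the extended limit $\omega$; both sides of the identity in part (c) tend to the same value, here $1$.

```python
import math

# Sketch of Proposition (c) with psi(t) = log t and H(s) = 1/(1+s)
# (both are assumptions made for illustration).

def lhs(r):
    # ||H||_{1+1/r} / psi(e^r); for H(s) = 1/(1+s) the L^q(0,infty) norm is
    # (1/(q-1))^{1/q} with q = 1 + 1/r, and psi(e^r) = r.
    q = 1.0 + 1.0 / r
    return (1.0 / (q - 1.0)) ** (1.0 / q) / r

def rhs(t):
    # (1/psi(t)) * int_0^t H(s) ds = log(1+t)/log(t)
    return math.log(1.0 + t) / math.log(t)

# lhs(r) -> 1 as r -> infinity, and rhs(t) -> 1 as t -> infinity
```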
\[Thm2\] Let $p\geq 1$, $\psi$ be a function as in Corollary \[equivalencescor\] and $\omega$ an exponentiation invariant extended limit on $L^\infty(0,\infty)$. Assume that $f\in \cap_{q>p} B^{1/q}_{q,q}(S^1)$ is holomorphic. The following quantities are equivalent: 1. $$\lim_{h\to\tilde{\omega}} \left(\frac{1}{\psi(\mathrm{e}^{\frac{1}{h}})} \int_{\mathbb{D}} |f''(z)|^{p+h} (1-|z|^2)^{2p+2h-2} dz\right)^{\frac{p}{p+h}} = \lim_{t\to \omega} \frac1{\psi(t)} \int_0^t F_f(s)^p ds;$$ 2. $$\lim_{h\to\tilde{\omega}} \left(\frac{1}{\psi(\mathrm{e}^{\frac{1}{h}})} \int_{S^1\times {{\mathbb N}}} |(f *W_n)(\mathrm{e}^{i\theta})|^{p+h} d\nu(\theta, n)\right)^{\frac{p}{p+h}} = \lim_{t\to \omega} \frac1{\psi(t)} \int_0^t \Phi_f(s)^p ds;$$ 3. ${\rm Tr}_{\omega,\psi} |H_{\overline{f}}|^p.$\ Here $\tilde{\omega}$ is defined as in Equation . The proof of this result is a straightforward repetition of that of [@EZ Theorem 2] with the replacement of $\log t$, $1/r$ and the use of [@EZ Proposition 8] by $\psi(t)$, $\psi(\mathrm{e}^{1/h})$ and the use of Proposition \[prop8\], respectively. Let us place Theorem \[Thm2\] in context. Let $B^{1/q+}_{q,q}(S^1)$ denote the subspace of $B^{1/q}_{q,q}(S^1)$ consisting of holomorphic functions. By the results above, we can define two continuous linear mappings $$\begin{aligned} T_{LP}:B^{1/q+}_{q,q}(S^1)&\to L^q(S^1\times {{\mathbb N}},\nu), \quad T_{LP}f(z,n):=W_n*f(z),\quad\mbox{and}\\ T_{\mathbb{D}}:B^{1/q+}_{q,q}(S^1)&\to L^q(\mathbb{D},\mu), \quad T_{\mathbb{D}}f(z):=(1-|z|^2)^2f''(z).\end{aligned}$$ We define the spaces $\mathcal{M}_\psi^{(p)}(S^1\times {{\mathbb N}},\nu)\subseteq \cap_{q>p} L^q(S^1\times {{\mathbb N}},\nu)$ and $\mathcal{M}_\psi^{(p)}(\mathbb{D},\mu)\subseteq \cap_{q>p} L^q(\mathbb{D},\mu)$ from the families $(L^q(S^1\times {{\mathbb N}},\nu))_{q>p}$ and $(L^q(\mathbb{D},\mu))_{q>p}$, respectively, by means of extrapolation.
For any exponentiation invariant extended limit $\omega$, we can define Dixmier traces $\mathrm{tr}_{\omega,\psi}:\mathcal{M}_\psi(S^1\times {{\mathbb N}},\nu)\to {{\mathbb C}}$ and $\mathrm{tr}_{\omega,\psi}:\mathcal{M}_\psi(\mathbb{D},\mu)\to {{\mathbb C}}$. We write $\mathrm{tr}_{\omega,\psi}$ to emphasize that the Dixmier trace is defined on a commutative von Neumann algebra. Applying Corollary \[equivalencescor\], we can reformulate Theorem \[Thm2\] as the statement that $${\rm Tr}_{\omega,\psi} |H_{\overline{f}}|^p\sim \mathrm{tr}_{\omega,\psi}(|T_{LP}f|^p)\sim\mathrm{tr}_{\omega,\psi}(|T_{\mathbb{D}}f|^p) .$$ The special case $p=2,4,6$ {#p246sec} ========================== A beautiful result of Janson-Upmeier-Wallstén [@JUW] computes the operator trace of $|H_{\overline{f}}|^p$ for $p=2,4,6$ in terms of a particular Besov norm. Indeed, [@JUW Theorem 1] states that for $p=2,4,6$ and $f$ holomorphic in $\mathbb{D}$ we have that $$\label{p246} \mathrm{Tr}(|H_{\overline{f}}|^p)=c_p\int_{S^1\times S^1} \frac{|f(z)-f(w)|^{p}}{|z-w|^2}\mathrm{d}V(z,w),$$ where $c_2=1$, $c_4=\frac{1}{2}$ and $c_6=\frac{1}{6}$. In fact, [@JUW Theorem 1] states that $p=2,4,6$ are the only possible values for which an identity of this type could hold true. \[SIdefn\] For $p>1$ we define $$\|f\|_{B^{1/p}_{p,p},SI}:=\left(\int_{S^1\times S^1} \frac{|f(z)-f(w)|^p}{|z-w|^2}\mathrm{d}V(z,w)\right)^{1/p}.$$ The reader should note that Equation is equivalent to the identity $$\|H_{\overline{f}}\|_{\mathcal{L}^p(L^2(S^1))}^p=c_p\|f\|_{B^{1/p}_{p,p},SI}^p, \quad\mbox{for $p=2,4,6$}.$$ The following norm equivalence is found in [@peller Appendix 2.6]. \[sivsdnorm\] For any $p_0\geq q_0>1$ there is a constant $C>0$ such that $$C^{-1}\|f\|_{B^{1/p}_{p,p}}\leq \|f\|_{B^{1/p}_{p,p},SI}\leq C\|f\|_{B^{1/p}_{p,p}}, \quad \forall p\in [q_0,p_0].$$ The result of Janson-Upmeier-Wallstén together with Theorem \[pellerhem\] and Proposition \[sivsdnorm\] allows us to deduce the following proposition.
\[equiforjuw\] There are constants $0<r< R<\infty$ and measurable functions $c_0,c_1:[3/2,8]\to [r,R]$ such that $$c_0(p)\|f\|_{B^{1/p}_{p,p},SI}\leq \|H_{\overline{f}}\|_{\mathcal{L}^p(L^2(S^1))}\leq c_1(p)\|f\|_{B^{1/p}_{p,p},SI}.$$ Moreover, we can choose $c_0$ and $c_1$ so that $$\lim_{h\to 0} c_0(p+h)^{p}=\lim_{h\to 0} c_1(p+h)^{p}=c_p\quad\mbox{for $p=2,4,6$}.$$ For $p>1$ and the scale of spaces $(B^{1/q}_{q,q}(S^1))_{q\in [p,p+1]}$ we let $B_{p,\psi}(S^1)$ denote the associated extrapolation space. Using Corollary \[equivalencescor\], we deduce the following theorem from Proposition \[equiforjuw\]. For $p=2,4,6$, and a holomorphic $f\in B_{p,\psi}(S^1)$, we have that $$\mathrm{Tr}_{\omega,\psi}(|H_{\overline{f}}|^p)= c_p\lim_{h\to \tilde{\omega}} \frac{1}{\psi(\mathrm{e}^{1/h})}\int_{S^1\times S^1} \frac{|f(z)-f(w)|^{p+h}}{|z-w|^2}\mathrm{d}V(z,w),$$ where $c_2=1$, $c_4=\frac{1}{2}$ and $c_6=\frac{1}{6}$. The special case $p=2$ and $f\in C^{1/2}(S^1)$ was considered in [@GG], where explicit formulas for $\mathrm{Tr}_\omega(|H_{\overline{f}}|^2)$ were given in terms of the Fourier series of $f$. Non-measurability {#Non-meas} ================= The estimates for Dixmier traces will allow us to construct an abundance of non-measurable Hankel operators by means of lacunary Fourier series. Our approach is based on results from [@EZ Section 6]. For $p\in [1,\infty)$ and $c\in \ell^\infty({{\mathbb N}})$ we define the function $\gamma_{p,c}$ on $S^1$ by $$\gamma_{p,c}(z):=\sum_{j=0}^\infty 2^{-j/p}c_j z^{2^j}.$$ The function $\gamma_{p,c}$ is holomorphic in $\mathbb{D}$. We can compute that, up to rearrangement, $$\Phi(t)=2^{-j/p} |c_j|, \quad t\in [2^j-1,2^{j+1}-1),$$ where $\Phi:=\Phi_{\gamma_{p,c}}$. Therefore, $\|\gamma_{p,c}\|_{B^{1/p}_{p,p}}\sim \|\Phi\|_{L^p(0,\infty)}\sim \|c\|_{\ell^p({{\mathbb N}})}$.
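As a quick numerical sanity check (ours, using a finitely supported coefficient sequence $c$ as an assumption): the step function $\Phi$ above integrates exactly to $\|c\|_{\ell^p}^p$, since the interval $[2^j-1,2^{j+1}-1)$ has length $2^j$.

```python
def phi_integral(p, c):
    # each interval [2^j - 1, 2^{j+1} - 1) has length 2^j, and Phi takes the
    # value 2^{-j/p}|c_j| there, so the contribution of level j is |c_j|^p
    return sum((2.0 ** (-j / p) * abs(cj)) ** p * 2 ** j for j, cj in enumerate(c))

c = [1.0, -0.5, 0.25, 2.0]   # finitely supported coefficient sequence
p = 3.0
# phi_integral(p, c) equals sum_j |c_j|^p
```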
Moreover, we can as in [@EZ Page 351] compute that for $t\in [2^j-1,2^{j+1}-1)$ $$\label{equistuff} \frac{\sum_{k=0}^{j-1} |c_k|^p}{\psi(2^{j+1}-1)}\lesssim \frac{1}{\psi(t)}\int_0^t \Phi(s)^p\mathrm{d}s\lesssim\frac{\sum_{k=0}^{j} |c_k|^p}{\psi(2^j-1)}.$$ Define the function $\tilde{\psi}(t):=\psi(2^t-1)$. This is again an increasing function with $\tilde{\psi}(0)=0$ and $\lim_{t\to \infty}\tilde{\psi}(t)=\infty$. If $\psi$ satisfies , then $\tilde{\psi}$ has regular variation of index $k_\psi:=\log A_\psi(e)$. We write $\mathfrak{m}_{\tilde{\psi}}^{(p)}({{\mathbb N}}):=\mathcal{M}_{\tilde{\psi}}^{(p)}(\ell^\infty({{\mathbb N}}))$. The inequalities imply that $$\|c\|_{\mathfrak{m}_{\tilde{\psi}}^{(p)}}^p\sim \sup_{t>0}\frac{1}{\psi(t)}\int_0^t \Phi(s)^p\mathrm{d}s\sim \|\gamma_{p,c}\|_{B_{p,\psi}}^p,$$ so the map $c\mapsto \gamma_{p,c}$ defines a continuous mapping $$\gamma:\mathfrak{m}_{\tilde{\psi}}^{(p)}({{\mathbb N}})\to B_{p,\psi}.$$ It follows from Theorem \[Thm2\] and the inequalities that for any exponentiation invariant extended limit $\omega$ we have $$\begin{aligned} \mathrm{Tr}_{\omega,\psi}(|H_{\overline{\gamma_{p,c}}}|^p) &\sim \lim_{t\to \omega} \frac{1}{\psi(t)}\int_0^t \Phi(s)^p\mathrm{d}s \sim \lim_{t\to \omega} \frac{\sum_{k=0}^{\lfloor \log_2 t \rfloor} |c_k|^p}{\psi(2^{\lfloor \log_2 t \rfloor+1}-1)} \\ &= \lim_{t\to \omega\circ \log_2} \frac{\sum_{k=0}^{\lfloor t \rfloor} |c_k|^p}{\psi(2^{\lfloor t \rfloor+1}-1)}\sim\lim_{t\to \omega\circ \log_2} \frac{\sum_{k=0}^{\lfloor t \rfloor} |c_k|^p}{\tilde\psi( t )} .\end{aligned}$$ Denote $$\begin{aligned} \label{small_DT} \mathrm{tr}_{\omega\circ \log_2,\tilde{\psi}}(x):= \lim_{t\to\omega\circ \log_2} \frac{\sum_{k=0}^{\lfloor t \rfloor} x_k}{\tilde\psi( t )}, \quad x \in \mathfrak{m}_{\tilde{\psi}}({{\mathbb N}})_+.\end{aligned}$$ Here we note that $\mathrm{tr}_{\omega\circ \log_2,\tilde{\psi}}$ extends to a singular linear functional on $\mathfrak{m}_{\tilde{\psi}}({{\mathbb N}})$ because it is the
Dixmier trace $\mathrm{tr}_{\omega,\psi}$ on $\mathfrak{m}_{\psi}({{\mathbb N}})$ pulled back along the isometric order preserving embedding $\mathfrak{m}_{\tilde{\psi}}({{\mathbb N}})\hookrightarrow \mathfrak{m}_{\psi}({{\mathbb N}})$ defined by $b=(b_n)_{n\in {{\mathbb N}}}\mapsto (b_{\log_2(n)}\chi_{2^{{\mathbb N}}}(n))_{n\in {{\mathbb N}}}$. Here $2^{{\mathbb N}}=\{1,2,4,8,16,\ldots\}$ denotes the dyadic natural numbers. It should be pointed out that the ideal $\mathfrak{m}_{\tilde{\psi}}({{\mathbb N}})$ supports Dixmier traces defined directly from $\tilde{\psi}$ if and only if $A_\psi(e)=1$ (in which case $\tilde{\psi}$ has regular variation of index $k_\psi=0$). Summing up, there are constants $\alpha_0,\alpha_1>0$ such that for any exponentiation invariant extended limit $\omega$, and $c\in \mathfrak{m}_{\tilde{\psi}}^{(p)}$ $$\label{estofrtrTr} \alpha_0 \mathrm{tr}_{\omega\circ \log_2,\tilde{\psi}}(|c|^p)\leq \mathrm{Tr}_{\omega,\psi}(|H_{\overline{\gamma_{p,c}}}|^p)\leq \alpha_1 \mathrm{tr}_{\omega\circ \log_2,\tilde{\psi}}(|c|^p).$$ For a function $g\in L^\infty(0,\infty)$ define the function $\bar{g}\in L^\infty(0,\infty)$ by the formula $$\bar{g}(t):=\int_{\lfloor t \rfloor }^{\lfloor t \rfloor+1}g(s)\mathrm{d} s.$$ Set $C:=-\liminf_{t\to \infty} g(t)$ and define $$\label{cnfromg} c_n:=\left( |\bar{g}(n)+C|\cdot \tilde \psi'(n)\right)^{1/p}, \ n\ge0.$$ It is easy to see that $c=|c|\in \mathfrak{m}_{\tilde{\psi}}^{(p)}({{\mathbb N}})$. \[integracom\] Assume that $g\in L^\infty(0,\infty)$ satisfies, for some $\beta>0$, $$\label{conditionongorfor} g(t)-\bar{g}( t)= O(t^{-\beta}), \quad \mbox{as $t\to \infty$}.$$ For $c=(c_n)_{n\in {{\mathbb N}}}\in\mathfrak{m}^{(p)}_{\tilde{\psi}}({{\mathbb N}})$ defined as in Equation , it holds that $$\mathrm{tr}_{\omega\circ \log_2,\tilde{\psi}}(|c|^p)=\lim_{t\to\omega\circ \log_2} \frac{1}{\tilde{\psi}(t)}\int_0^t g(s)\cdot \tilde \psi'(s) \mathrm{d}s+C,$$ where $C=-\liminf_{t\to \infty} g(t)$.
By the definition of liminf, we have that $g+C-|g+C|=o(1)$. It follows that the function $$\frac{1}{\tilde{\psi}(t)}\int_0^t \sum_{j=0}^\infty|\bar{g}(j)+C|\cdot \tilde \psi'(j) \chi_{(j,j+1]}(s) \mathrm{d}s - \frac{1}{\tilde{\psi}(t)}\int_0^t \sum_{j=0}^\infty(\bar{g}(j)+C)\cdot \tilde \psi'(j) \chi_{(j,j+1]}(s)\mathrm{d}s$$ is $o(1)$ as $t\to\infty$. We therefore have $$\begin{aligned} \mathrm{tr}_{\omega\circ \log_2,\tilde{\psi}}(|c|^p)&=\lim_{t\to\omega\circ \log_2} \frac{1}{\tilde{\psi}(t)}\int_0^t \sum_{j=0}^\infty|c_j|^p \chi_{(j,j+1]}(s) \mathrm{d}s\\ &=\lim_{t\to\omega\circ \log_2} \frac{1}{\tilde{\psi}(t)}\int_0^t \sum_{j=0}^\infty| \bar{g}(j)+C|\cdot \tilde \psi'(j) \chi_{(j,j+1]}(s) \mathrm{d}s\\ &=\lim_{t\to\omega\circ \log_2}\frac{1}{\tilde{\psi}(t)}\int_0^t \sum_{j=0}^\infty(\bar{g}(j)+C)\cdot \tilde \psi'(j) \chi_{(j,j+1]}(s)\mathrm{d}s.\end{aligned}$$ The function $\tilde{\psi}$ has regular variation, so [@RegVar Theorem 1.5.11] implies that $\frac{\tilde \psi'(t)}{\tilde \psi(t)}=o(1)$ as $t\to \infty$. In particular, $$\frac{1}{\tilde{\psi}(t)}\int_0^t \sum_{j=0}^\infty(\bar{g}(j)+C)\cdot \tilde \psi'(j) \chi_{(j,j+1]}(s) \mathrm{d}s-\frac{1}{\tilde{\psi}(t)}\int_0^t (\bar{g}(s)+C)\cdot \tilde \psi'(s) \mathrm{d}s=o(1).$$ Consider $$\begin{aligned} &\left|\frac{1}{\tilde{\psi}(t)}\int_0^t (\bar{g}(s)+C)\cdot \tilde \psi'(s) \mathrm{d}s - \frac{1}{\tilde{\psi}(t)}\int_0^t (g(s)+C)\cdot \tilde \psi'(s) \mathrm{d}s\right|\\ &\leq \frac{1}{\tilde{\psi}(t)}\int_0^t |g(s)-\bar{g}(s)| \tilde \psi'(s)\mathrm{d}s.
\end{aligned}$$ Since $|g(t)-\bar{g}(t)|\leq \rho t^{-\beta}$ for $t\ge t_0$ for some $t_0>0$ and constant $\rho$, it follows that $$\begin{aligned} &\left|\frac{1}{\tilde{\psi}(t)}\int_0^t (\bar{g}(s)+C)\cdot \tilde \psi'(s) \mathrm{d}s - \frac{1}{\tilde{\psi}(t)}\int_0^t (g(s)+C)\cdot \tilde \psi'(s) \mathrm{d}s\right|\\ &\leq \frac{2\|g\|_{L^\infty}\tilde{\psi}(t_0)}{\tilde{\psi}(t)}+\frac{\rho}{\tilde{\psi}(t)}\int_{t_0}^ts^{-\beta}\tilde \psi'(s)\mathrm{d}s.\end{aligned}$$ Since $\tilde{\psi}$ has regular variation of index $k_\psi$, it follows that $\tilde{\psi}'$ has regular variation of index $k_\psi - 1$ and by [@RegVar Theorem 1.5.11] we have $$\label{eq111} \lim_{t\to\infty}\frac{\int_{t_0}^ts^{-\beta}\tilde \psi'(s)\mathrm{d}s}{t^{1-\beta}\tilde \psi'(t)}=\frac{1}{k_\psi-\beta}.$$ Of course, $\beta$ can be chosen to be less than $k_\psi$. Hence, $$\begin{aligned} &\left|\frac{1}{\tilde{\psi}(t)}\int_0^t (\bar{g}(s)+C)\cdot \tilde \psi'(s) \mathrm{d}s - \frac{1}{\tilde{\psi}(t)}\int_0^t (g(s)+C)\cdot \tilde \psi'(s) \mathrm{d}s\right|\\ &=o(1)+O\left( \frac{t^{1-\beta}\tilde \psi'(t)}{\tilde{\psi}(t)}\right).\end{aligned}$$ Since $k_\psi\neq 0$, it follows from  that the latter estimate is, in fact, $o(1)$, and we conclude that the condition  on $g$ ensures that $$\frac{1}{\tilde{\psi}(t)}\int_0^t (\bar{g}(s)+C)\cdot \tilde \psi'(s) \mathrm{d}s - \frac{1}{\tilde{\psi}(t)}\int_0^t (g(s)+C)\cdot \tilde \psi'(s) \mathrm{d}s=o(1),$$ as $t\to \infty$. Using the properties of extended limits, we obtain $$\begin{aligned} \mathrm{tr}_{\omega\circ \log_2,\tilde{\psi}}(|c|^p)&=\lim_{t\to\omega\circ \log_2} \frac{1}{\tilde{\psi}(t)}\int_0^t (g(s)+C)\cdot \tilde \psi'(s) \mathrm{d}s\\ &=\lim_{t\to\omega\circ \log_2} \frac{1}{\tilde{\psi}(t)}\int_0^t g(s)\tilde \psi'(s) \mathrm{d}s+C.\end{aligned}$$ Let $\mathrm{Lip}[0,\infty)$ denote the space of Lipschitz continuous functions on $[0,\infty)$.
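The Karamata-type limit used in the proof of Lemma \[integracom\] can be illustrated numerically. The following sketch (our illustration; the choice $\tilde\psi(t)=t^k$, which has regular variation of index $k$, and the values of $k$, $\beta$, $t_0$ are assumptions) checks that $\int_{t_0}^t s^{-\beta}\tilde\psi'(s)\,\mathrm{d}s \sim t^{1-\beta}\tilde\psi'(t)/(k-\beta)$ for $\beta<k$.

```python
def karamata_ratio(k, beta, t, t0=1.0, steps=200000):
    # midpoint-rule quadrature of int_{t0}^t s^{-beta} * psi_tilde'(s) ds,
    # normalized by t^{1-beta} * psi_tilde'(t); psi_tilde(t) = t^k
    dpsi = lambda s: k * s ** (k - 1.0)
    h = (t - t0) / steps
    total = 0.0
    for i in range(steps):
        s = t0 + (i + 0.5) * h
        total += s ** (-beta) * dpsi(s) * h
    return total / (t ** (1.0 - beta) * dpsi(t))

# for k = 2, beta = 0.5, the ratio approaches 1/(k - beta) = 2/3 as t grows
```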
We define the space $$\mathcal{W}:=\left\{h\in L^\infty(0,\infty)\cap \mathrm{Lip}[0,\infty): \; h'(t)=O\left(\frac{1}{t}\right), \; \mbox{as $t\to \infty$}\right\}.$$ \[solvingceseq\] Let $\psi: [0,\infty)\to [0,\infty)$ be a smooth increasing concave function satisfying the conditions and and moreover that $\psi(0)=0$, $\lim_{t\to\infty} \psi(t)=\infty$. Assume that $A_\psi({\mathrm{e}})\neq 1$ (see ). Then $h\in \mathcal{W}$ if and only if $h\in L^\infty(0,\infty)$ and there exists a function $g\in L^\infty(0,\infty)$ such that $$\label{cphitildeeq} h(t)=\frac{1}{\tilde{\psi}(t)}\int_0^t g(s)\tilde \psi'(s) \mathrm{d}s \quad\mbox{for a.e. $t$}.$$ If $h\in \mathcal{W}$ there is a unique solution $g\in L^\infty(0,\infty)$ to Equation . As remarked above, it poses no restriction to assume that $\psi$ is smooth by [@RegVar Theorem 1.8.2]. Uniqueness of solutions is clear: differentiating Equation gives $$g(t) =\frac{(\tilde{\psi}\cdot h)'(t)}{\tilde \psi'(t)}= h(t)+ \frac{\tilde{\psi}(t)\cdot h'(t)}{\tilde \psi'(t)}.$$ We conclude that Equation has a solution $g\in L^\infty(0,\infty)$ if and only if $h\in \mathrm{Lip}[0,\infty)$ and $$h'(t)=O\left(\frac{\tilde \psi'(t)}{\tilde \psi(t)}\right).$$ Note that $A_\psi(\alpha)\neq 1$ for some $\alpha$ if and only if $A_\psi(\alpha)\neq 1$ for all $\alpha$. Moreover, $\tilde{\psi}$ has regular variation of index $k_\psi:=\log A_\psi({\mathrm{e}})$. By [@RegVar Theorem 1.5.11], we have $$\frac{\tilde \psi'(t)}{\tilde \psi(t)}= \frac{k_\psi}{t}+o\left(\frac{1}{t}\right), \quad\mbox{as $t\to \infty$.}$$ In particular, if $k_\psi\neq 0$ then $h\in L^\infty(0,\infty)\cap \mathrm{Lip}[0,\infty)$ satisfies $h'(t)=O\left(\frac{\tilde \psi'(t)}{\tilde \psi(t)}\right)$ if and only if $h\in \mathcal{W}$. Let $C^{1,1}[0,\infty)$ denote the space of functions $h\in C^1[0,\infty)$ such that $h'\in \mathrm{Lip}[0,\infty)$.
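The explicit formula $g=h+(\tilde\psi/\tilde\psi')h'$ from the proof of Proposition \[solvingceseq\] can be sanity-checked numerically. The following sketch (our illustration) uses the model choices $\tilde\psi(t)=t^k$ and $h(t)=\sin(\log(1+t))$, for which $h'(t)=O(1/t)$, and recovers $h$ from $g$ through the integral equation.

```python
import math

# Model choices (assumptions): psi_tilde(t) = t^k, h(t) = sin(log(1+t))
k = 2.0
h  = lambda t: math.sin(math.log(1.0 + t))
dh = lambda t: math.cos(math.log(1.0 + t)) / (1.0 + t)
g  = lambda t: h(t) + (t / k) * dh(t)    # psi_tilde / psi_tilde' = t/k

def recovered_h(t, steps=100000):
    # midpoint rule for (1/t^k) * int_0^t g(s) * k s^{k-1} ds
    dt = t / steps
    total = 0.0
    for i in range(steps):
        s = (i + 0.5) * dt
        total += g(s) * k * s ** (k - 1.0) * dt
    return total / t ** k

# recovered_h(t) matches h(t) up to quadrature error
```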
For $\beta>0$, we define the space $$\mathcal{W}_\beta:=\left\{h\in\mathcal{W}\cap C^{1,1}[0,\infty): \; h''(t)=O(t^{-1-\beta}), \; \mbox{as $t\to \infty$}\right\}.$$ \[gsatisfiesmean\] Let $h\in L^\infty[0,\infty)$, $\beta\in [0,1]$ and $\psi$ a function as in Proposition \[solvingceseq\]. The equation has a solution $g\in \mathrm{Lip}[0,\infty)$ with $g'(t)=O(t^{-\beta})$ if and only if $h\in \mathcal{W}_\beta$. In particular, if $h\in \mathcal{W}_\beta$ and $g$ solves the equation then $g$ fulfils Condition . We compute that $$g'(t)=2h'(t)+\frac{\tilde{\psi}(t)}{\tilde{\psi}'(t)}h''(t)-\frac{\tilde{\psi}(t)\tilde{\psi}''(t)}{\tilde{\psi}'(t)^2}h'(t).$$ Since has a solution, $h'(t)=O(t^{-1})$ by Proposition \[solvingceseq\]. Moreover, by the same argument as in Proposition \[solvingceseq\], $\frac{\tilde{\psi}(t)}{\tilde{\psi}'(t)}=O(t)$ whenever $k_\psi\neq 0$ and since $\tilde{\psi}'$ has regular variation, we can also conclude that $\frac{\tilde{\psi}''(t)}{\tilde{\psi}'(t)}=O(t^{-1})$. In particular, $$g'(t)=\frac{\tilde{\psi}(t)}{\tilde{\psi}'(t)}h''(t)+O(t^{-1})=O(t)h''(t)+O(t^{-1}).$$ It follows that $g'(t)=O(t^{-\beta})$ if and only if $h\in \mathcal{W}_\beta$. Finally, the mean value theorem for integrals guarantees that for some $\xi\in [\lfloor t\rfloor, \lfloor t\rfloor+1]$, $\bar{g}(t)=g(\xi)$. The mean value theorem for derivatives guarantees that if $g$ satisfies $g'(t)=O(t^{-\beta})$ then $$|\bar{g}(t)-g(t)|\leq \sup_{s\in [\lfloor t\rfloor,\lfloor t\rfloor+1]}|g'(s)|=O(t^{-\beta}).$$ For $b> 1$, we write $\exp_b(x):=b^x$ with the obvious notation $\exp=\exp_{\mathrm{e}}$. For any translation invariant extended limit $\eta$ on $L^\infty$ we define the extended limit $\omega$ by $$\omega(f):=\eta(f\circ \exp \circ \exp_2), \ f \in L^\infty.$$ Recall the notation $(P_\alpha f)(t)=f(t^\alpha)$ for $\alpha\ge1$. We also consider the operator $T_l : L^\infty \to L^\infty$, $(T_lf)(t)=f(t+l)$ defined for $l>0$. 
We say that $\eta$ is translation invariant if $\eta\circ T_l=\eta$ for all $l>0$. For every $\alpha\geq 1$ we have $$\begin{aligned} \label{expinvariance} \omega(P_\alpha f)&=\eta((P_\alpha f)\circ \exp \circ \exp_2)=\eta(\sigma^\alpha(f\circ \exp) \circ \exp_2)\\ \nonumber &=\eta(T_{\log_2 \alpha}(f\circ \exp \circ \exp_2)).\end{aligned}$$ Hence, $\omega$ is exponentiation invariant if and only if $\eta$ is translation invariant. We define the space $$\mathcal{E}:=\{h\in L^\infty(0,\infty) \ : \ h(t+l)-h(t)=o(1), \ t\to\infty, \; \forall l>0\}.$$ The reader should note that we have the inclusion $\mathcal{W}\subseteq \mathcal{E}$. Moreover, $\mathcal{E}\subseteq L^\infty(0,\infty)$ is by definition a closed subspace. \[proponlimif\] For any $h\in \mathcal{E}$ there are translation invariant extended limits $\eta_1$ and $\eta_2$ such that $$\lim_{t\to\eta_1} h(t) = \liminf_{t\to \infty} h(t), \quad\mbox{and}\quad \lim_{t\to\eta_2} h(t) = \limsup_{t\to \infty} h(t).$$ By the Hahn-Banach theorem we can find singular states $\eta_1',\eta_2'\in \mathcal{E}^*$ such that $$\eta_1'(h) = \liminf_{t\to \infty} h(t), \quad\mbox{and}\quad \eta_2'(h) = \limsup_{t\to \infty} h(t).$$ The action by translations preserves $\mathcal{E}$ and acts trivially modulo the closure of the compactly supported elements of $L^\infty(0,\infty)$. Therefore, the invariant Hahn-Banach theorem (see e.g. [@Edwards Theorem 3.3.1]) implies that $\eta_1',\eta_2'\in \mathcal{E}^*$ extend to translation invariant extended limits $\eta_1,\eta_2\in L^\infty(0,\infty)^*$ with $\eta_1'=\eta_1|_{\mathcal{E}}$ and $\eta_2'=\eta_2|_{\mathcal{E}}$. The proposition follows. Let us summarize the outcome of the above results on Dixmier traces. \[smalldixcomp\] Let $p\in [1,\infty)$ and let $\psi: [0,\infty)\to [0,\infty)$ be an increasing concave function satisfying the conditions and and moreover that $\psi(0)=0$, $\lim_{t\to\infty} \psi(t)=\infty$. Assume that $k_\psi\neq 0$.
Let $\beta\in (0,1]$ and assume that $h\in \mathcal{W}_\beta$ satisfies that $h\circ \exp\in \mathcal{E}$, and let $c=(c_n)_{n\in{{\mathbb N}}}$ be given as in by $$c_n:=(|\bar{g}(n)+C|\cdot \tilde{\psi}'(n))^{1/p},$$ where $g$ solves and $C:=-\liminf_{t\to \infty} g(t)$. Then there are exponentiation invariant extended limits $\omega_1,\omega_2\in L^\infty(0,\infty)^*$ such that $$\mathrm{tr}_{\omega_j\circ \log_2,\tilde{\psi}}(|c|^p)=\begin{cases} \liminf_{t\to \infty} h(t)-\liminf_{t\to \infty} g(t), \quad &j=1,\\ &\\ \limsup_{t\to \infty} h(t)-\liminf_{t\to \infty} g(t), \quad &j=2.\end{cases}$$ We note that $g$ exists by Proposition \[solvingceseq\]. Since $h\in \mathcal{W}_\beta$ for some $\beta>0$, $g$ satisfies that $\bar{g}(t)-g(t)=O(t^{-\beta})$ by Proposition \[gsatisfiesmean\]. By Lemma \[integracom\], for any exponentiation invariant extended limit $\omega$, $$\label{transdmdmda} \mathrm{tr}_{\omega\circ \log_2,\tilde{\psi}}(|c|^p)=\lim_{t\to\omega\circ \log_2} h(t)+C.$$ Since $h\circ \exp\in \mathcal{E}$ we can take translation invariant extended limits $\eta_1$ and $\eta_2$ as in Proposition \[proponlimif\] such that $$\begin{aligned} \label{transdmdmdatwo} \lim_{t\to\eta_1\circ \exp} h(t) &= \liminf_{t\to \infty} h({\mathrm{e}}^t)=\liminf_{t\to \infty} h(t), \quad\mbox{and}\\ \nonumber \lim_{t\to\eta_2\circ \exp} h(t) &= \limsup_{t\to \infty} h({\mathrm{e}}^t)=\limsup_{t\to \infty} h(t).\end{aligned}$$ Define the extended limits $\omega_j:=\eta_j\circ \exp\circ\exp_2$, $j=1,2$, which are exponentiation invariant because $\eta_1$ and $\eta_2$ are translation invariant. We conclude the proposition from combining the two statements and with the fact that $\omega_j\circ \log_2=\eta_j\circ \exp$. \[cnonstructionofh\] Let $\psi: [0,\infty)\to [0,\infty)$ be an increasing concave function satisfying the conditions and and such that $\psi(0)=0$, $\lim_{t\to\infty} \psi(t)=\infty$, $k_\psi\neq 0$.
For every $h_0\in C^{1,1}[0,\infty)$ such that $h_0, h_0', h_0''\in L^\infty(0,\infty)$, the function $$h(t):=h_0(\log(1+\log (1+t))), \ t>0$$ belongs to $\mathcal{W}_1$ and satisfies the following: - $h\circ \exp\in \mathcal{E}$, - $\liminf_{t\to\infty} h(t)=\liminf_{t\to \infty} g(t)$. Here $g$ denotes the solution to . Moreover, the function $h_0$ does not converge as $t\to\infty$ if and only if $$\limsup_{t\to\infty} h(t)>\liminf_{t\to \infty} g(t).$$ Since $$h'(t)=h_0'(\log(1+\log (1+t)))\frac{1}{1+\log (1+t)}\frac{1}{1+t},$$ we have that $h\in \mathcal{W}$. Moreover, $$\begin{aligned} h''(t)&=h_0''(\log(1+\log (1+t)))\frac{1}{(1+\log (1+t))^2}\frac{1}{(1+t)^2}\\ &-h_0'(\log(1+\log (1+t)))\frac{1}{(1+t)^2}\left(\frac{1}{(1+\log (1+t))^2}+\frac{1}{1+\log (1+t)}\right)=O(t^{-2}),\end{aligned}$$ so $h\in \mathcal{W}_1$. Since $$(h\circ \exp)'(t)=h_0'(\log(1+\log (1+{\mathrm{e}}^t)))\frac{1}{1+\log (1+{\mathrm{e}}^t)}\frac{{\mathrm{e}}^t}{1+{\mathrm{e}}^t}=O\left(\frac{1}{t}\right),$$ it follows that $h\circ \exp\in \mathcal{W}\subseteq \mathcal{E}$. Solving equation for $g$, we obtain $$\begin{aligned} g(t) &=\frac{(\tilde{\psi}\cdot h)'(t)}{\tilde \psi'(t)}=h(t)+\frac{\tilde{\psi}(t)}{\tilde \psi'(t)}h'(t).\end{aligned}$$ Since $k_\psi\neq 0$, we have $\frac{\tilde{\psi}(t)}{\tilde \psi'(t)}=O(t)$. Thus, the fact that $h'(t)=o(t^{-1})$ implies that $g(t)=h(t)+o(1)$. Therefore $$\liminf_{t\to\infty} g(t)=\liminf_{t\to\infty} h(t)=\liminf_{t\to \infty} h_0(t).$$ It is clear that $$\limsup_{t\to\infty} h(t)=\limsup_{t\to\infty} h_0(t).$$ We can conclude that the function $h_0$ does not converge as $t\to\infty$ if and only if $\limsup_{t\to\infty} h(t)>\liminf_{t\to \infty} g(t)$. \[nonmeasthm\] Let $p\in [1,\infty)$, $\psi: [0,\infty)\to [0,\infty)$ be as in Proposition \[smalldixcomp\] and assume that $h_0\in C^{1,1}[0,\infty)$ is such that $h_0, h_0', h_0''\in L^\infty(0,\infty)$ and $h_0$ does not converge as $t\to\infty$.
Define the holomorphic function $$f(z):=\sum_{n=0}^\infty 2^{-n/p}c_n z^{2^n},$$ where $c=(c_n)_{n\in{{\mathbb N}}}$ is given as in from the solution $g$ to for $h(t):=h_0(\log(1+\log(1+t)))$. Then $f\in B_{p,\psi}(S^1)$ and moreover $|H_{\overline{f}}|^p\in \mathcal{M}_\psi$ is non-measurable. More precisely, there are exponentiation invariant extended limits $\omega_1$ and $\omega_2$ such that $$\mathrm{Tr}_{\omega_1,\psi}(|H_{\overline{f}}|^p)=0\quad \mbox{and}\quad \mathrm{Tr}_{\omega_2,\psi}(|H_{\overline{f}}|^p)>0.$$ By Lemma \[cnonstructionofh\], $h\in \mathcal{W}_1$ satisfies $h\circ \exp\in \mathcal{E}$, and $g$ satisfies $\liminf_{t\to\infty} h(t)=\liminf_{t\to \infty} g(t)$ and $\limsup_{t\to\infty} h(t)>\liminf_{t\to \infty} g(t)$. It follows from Proposition \[smalldixcomp\] that there are exponentiation invariant extended limits $\omega_1,\omega_2\in L^\infty(0,\infty)^*$ such that $$\mathrm{tr}_{\omega_1\circ \log_2,\tilde{\psi}}(|c|^p)=0\quad\mbox{and}\quad \mathrm{tr}_{\omega_2\circ \log_2,\tilde{\psi}}(|c|^p)>0.$$ By positivity of the Dixmier trace and , we have that $$\begin{aligned} 0\leq &\mathrm{Tr}_{\omega_1,\psi}(|H_{\overline{f}}|^p)\leq \alpha_1\mathrm{tr}_{\omega_1\circ \log_2,\tilde{\psi}}(|c|^p)=0\quad\mbox{and}\\ &\mathrm{Tr}_{\omega_2,\psi}(|H_{\overline{f}}|^p)\geq \alpha_0\mathrm{tr}_{\omega_2\circ \log_2,\tilde{\psi}}(|c|^p)>0.\end{aligned}$$ Theorem \[nonmeasthm\] extends [@EZ Theorem 4] to general $p$ and general $\psi$ with $k_\psi\neq 0$. Our proof is longer, and not only because we work in a more general setting. The reason for the length of the proof is two-fold. Firstly, we wanted to better understand the mechanism that creates non-measurability in terms of functions $h$ as in Proposition \[smalldixcomp\].
Secondly, we wanted to improve the construction of the two exponentiation invariant extended limits $\omega_1$ and $\omega_2$ that realize the non-measurability, as is done in Proposition \[smalldixcomp\]. The construction in the proof of [@EZ Theorem 4] starts from an extended limit $\eta\in \ell^\infty({{\mathbb N}})^*$ and is used to construct two different extended limits $\omega_1$ and $\omega_2$ on $L^\infty(0,\infty)$. The process of going from sequences to functions is delicate when it comes to extended limits. In [@EZ], starting from a translation invariant extended limit $\eta\in \ell^\infty({{\mathbb N}})^*$ and the mappings $b_j:{{\mathbb N}}\to {{\mathbb R}}_+$, $b_j(n):=a^{(2n+j)\pi}$ for an $a>1$ and $j=1,2$, Engliš-Zhang [@EZ] defined extended limits $$\omega_j(f):=\eta((M(f\circ \exp))\circ b_j), \quad \mbox{for $f\in L^\infty(0,\infty)$}.$$ Here $M$ denotes the logarithmic Cesaro mean. Since $\eta$ is only invariant for translations by natural numbers, a computation as in Equation shows that $\omega_j$ need only satisfy $\omega_j\circ P_\alpha=\omega_j$ for $\alpha$ in the multiplicative subgroup of ${{\mathbb R}}_+$ generated by $a^2$. To our knowledge, one needs full exponentiation invariance in order for a relation as in Theorem \[allthethingsGSsaid\] part i) to hold. It is unclear to us how the conclusion of [@EZ Theorem 4] is reached from only knowing invariance with respect to $P_{a^2}$. Proposition \[smalldixcomp\] above circumvents this problem. [10]{} , vol. 27 of [*Encyclopedia of Mathematics and its Applications*]{}. Cambridge University Press, Cambridge, 1989. The [D]{}ixmier trace and asymptotics of zeta functions. , 2 (2007), 253–283. V. Chilin, F. Sukochev, Weak convergence in non-commutative symmetric spaces. [*J. Operator Theory 31*]{}, 1 (1994), 35–65. . Academic Press Inc., San Diego, CA, 1994. A. Connes, E. McDonald, F. Sukochev, and D.
Zanin, Conformal trace theorem for Julia sets of quadratic polynomials, Ergodic Theory and Dynamical Systems, 1-26, 2017. A. Connes, F. Sukochev, and D. Zanin, Trace theorem for quasi-Fuchsian groups, (Russian) Mat. Sb. 208 (2017), no. 10, 59–90; translation in Sb. Math. 208 (2017), no. 10, 1473–1502. Existence de traces non normales. (1966), A1107–A1108. P. G. Dodds, B. de Pagter, A. A. Sedaev, E. M. Semenov, and F. A. Sukochev. Singular symmetric functionals. , 290 (Issled. po Linein. Oper. i Teor. Funkts. 30), 178 (2002), 42–71. . Corrected reprint of the 1965 original. Dover Publications, Inc., New York, 1995. M. Engliš, and G. Zhang, Hankel operators and the Dixmier trace on strictly pseudoconvex domains. Doc. Math. 15 (2010), 601–622. Miroslav Engliš and Genkai Zhang. Hankel operators and the Dixmier trace on the Hardy space. *J. Lond. Math. Soc.* 94:2 337–356, 2016. Dixmier traces and extrapolation description of noncommutative [L]{}orentz spaces. , 10 (2014), 6256–6317. Commutator estimates on contact manifolds and applications, arXiv:1312.7677, to appear in Journal of Noncommutative Geometry. Nonclassical spectral asymptotics and [D]{}ixmier traces: from circles to contact manifolds. (2017), e3, 57. S. Janson, H. Upmeier, R. Wallstén, *Schatten-norm identities for Hankel operators*, J. Funct. Anal. 119 (1994), no. 1, 210–216. S. Krein, Yu. Petunin, E. Semenov, *Interpolation of linear operators*, Translations of Mathematical Monographs, Amer. Math. Soc. 54, 1982. , vol. 46 of [ *Studies in Mathematics*]{}. De Gruyter, 2012. V. V. Peller, *Hankel operators and their applications*, Springer Monographs in Mathematics. Springer-Verlag, New York, 2003.
In the two-dimensional (2D) metal-insulator transition (MIT) regime, both the Coulomb interactions between electrons and the disorder are expected to be strong, leading to the formation of an electron glass [@Review; @Rice]. Recent experiments in 2D electron systems have revealed changes in the characteristics and the amplitude of the conduction noise as the charge density of the system is varied from a high charge density metallic phase to a lower charge density insulating phase [@Popovic; @Sn; @Noise]. This has been interpreted as evidence for an onset of glassy dynamics near the insulating phase. These studies also find that the noise has a $1/f^{\alpha}$ characteristic, with $\alpha = 1.0$ in the metallic phase changing over to $\alpha \approx 1.8$ in the glassy phase. Other experiments have found similar results with $\alpha = 0.75$ in the metallic phase and $\alpha =1.3$ near the insulating phase [@N2d]. The glassy phase is signified by a large increase in the noise power along with the change in $\alpha$ [@Popovic; @Sn]. In addition, the noise power was observed to [*decrease*]{} with temperature [@Sn; @Noise], in contrast to single electron models with thermally activated trapping [@Weissman] and other models [@Voss] that predict an increase in the noise power with $T$. This suggests the importance of electron-electron interactions at the MIT. Other recent experiments near the 2D MIT have also found $1/f^{\alpha}$ noise, strong increases in noise power with decreasing charge density, and decreasing noise with increasing $T$ [@N2d; @Tourbot]. Additionally, $1/f^{\alpha}$ noise fluctuations in thin granular films have been interpreted as evidence for a glassy electron state [@Wu]. In theoretical studies, it was proposed that at the 2D MIT a freezing from an electron liquid to a partially ordered Wigner glass [@Kivelson] or a more strongly disordered electron glass [@Thakur; @Glass] may occur.
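The $1/f^{\alpha}$ exponents quoted above are typically extracted from the slope of the noise power spectrum in log-log coordinates. The following sketch (our illustration, using a synthetic periodogram rather than the experimental pipeline) recovers $\alpha=1.8$ by linear least squares:

```python
import math, random

# Synthetic periodogram P(f) ~ f^{-alpha} with multiplicative scatter
# (alpha = 1.8 and the scatter level are assumptions for illustration).
random.seed(1)
freqs = list(range(1, 513))
P = [k ** -1.8 * math.exp(random.gauss(0.0, 0.3)) for k in freqs]

# linear least-squares fit of log P against log f; the slope gives -alpha
xs = [math.log(f) for f in freqs]
ys = [math.log(p) for p in P]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
alpha_est = -slope   # close to the input exponent 1.8
```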
Other theories suggest that an intermediate metallic glass phase appears between the liquid and insulating phases [@Pastor]. It is also possible that the metallic glass phase may consist of solid phase insulating regions coexisting with string-like liquid regions. Studies of glassy systems often find cooperative string-like motions or dynamical heterogeneities [@Glotzer]. Such motions can give rise to correlated dynamics and large fluctuations near a glass transition. It is, however, unclear what the origin of such cooperativity would be in the electron glass systems. The strong enhancement of the noise observed when passing from a liquid to a glassy phase in the 2D charge system has noticeable similarities to the noise found in studies of vortex matter in superconductors [@Higgins]. In the vortex system, the existence of a transition from a weakly pinned liquid phase to a more strongly pinned glassy phase has been established. Noise studies in low temperature superconductors have shown that, as the glassy phase is approached from the liquid state, the voltage noise has a $1/f^{\alpha}$ characteristic with increasing $\alpha$, corresponding to an increase in the noise power [@Higgins]. The vortex system studies suggest that similar physics may be occurring in the 2D electron systems when there is a transition from a liquid-like phase with low noise power to a pinned glassy phase with high noise power. This is also consistent with the theoretical prediction that the electron liquid freezes into a 2D disordered solid. In this work we propose a simple model for a classical 2D electron system consisting of interacting electrons with random disorder and temperature. We monitor the fluctuations and noise characteristics of the current as a function of electron density or temperature.
The advantage of our model is that a large number of interacting electrons can be conveniently simulated, while a full quantum mechanical model of similar size would be computationally prohibitive. Despite the limitations of this model, we show that this approach captures many of the key experimental observations. Additionally, although our primary focus is to gain insight into the physics near the 2D MIT, our model is also relevant for other classical charge systems undergoing transitions from glass to liquid states, such as charged colloids interacting with random disorder. Our model is similar to previous studies of 2D classical electron systems with disorder [@Fertig; @Reichhardt]; however, these previous studies focused on the microscopics of the defects in the lattice [@Fertig] or the sliding dynamics [@Reichhardt]. In the present work we focus on the noise fluctuations in the strongly disordered phase as it changes from a liquid to a frozen state as a function of electron density for a fixed amount of disorder. Our model consists of a 2D system of $N_{s}$ interacting electrons with periodic boundary conditions in the $x$ and $y$-directions. There are also $N_{p}$ defect sites which attract the electrons. We assume the electron motion is at finite temperature and the time evolution occurs through Langevin dynamics. The damping on the electrons comes from their interactions with phonons or small scattering sites. The equation of motion for an electron $i$ is $$\eta {\bf v}_{i}={\bf f}_{i}=-\sum_{j\neq i}^{N_{s}}\nabla U(r_{ij}) + {\bf f}_{i}^{s} + {\bf f}_{i}^{T} + {\bf f}_{d}$$ Here $\eta= 1$ is the damping constant and $U(r_{ij}) = q^2/r_{ij}$, with $q=1$, is the repulsive electron-electron interaction potential, treated as in [@Jensen]. The term ${\bf f}_{i}^{s}$ comes from the $N_p$ randomly spaced defect sites modeled as parabolic traps of radius $r_{p}=0.2$ and strength $f_{p} = 1.0$.
The thermal noise ${\bf f}_{i}^{T}$ arises from random Langevin kicks with $<f^{T}(t)> = 0$ and $<f^{T}_i(t)f^{T}_j(t^{\prime})> = 2\eta k_B T \delta_{ij} \delta(t - t^{\prime})$. The driving term ${\bf f}_{d}=f_d \hat x$ comes from an applied voltage, and we take $f_d= 0.1$. We start at a high temperature where the charges are diffusing rapidly and cool to a lower temperature. We then wait for $10^4$ simulation time steps to reduce transient effects before applying the drive and measuring the average velocity $v$ of the electrons, which is proportional to the conductance. We do this for a series of electron densities at fixed disorder strength. We have considered samples with constant pin densities $n_{p}$ for different system sizes such that $N_{p}$ ranges from $317$ to $1200$. We first consider the average electron velocity as a function of temperature and charge density for a system with a fixed $N_{p}=1200$. In Fig. 1 we show a series of conductance curves vs $T$ for charge density varied over nearly an order of magnitude, $0.4 \leq N_{s}/N_{p} \leq 2.67$. For high $N_{s}/N_{p} > 0.6$ the conductance is finite down to $T = 0$, while for $N_{s}/N_{p} \leq 0.6$, the electron velocity drops to zero within our resolution, indicating that all the electrons are strongly pinned in an insulating phase. As $N_{s}/N_{p}$ increases above 1.0, the downward curvature of $v$ at low $T$ decreases. These curves appear very similar to those typically observed in 2D MIT studies [@Popovic]. One difference is that we do not find a charge density $n_{s}^{up}$ above which the slope of the velocities turns up slightly at low $T$, as in the experiments. This may be due to the fact that in our model we do not directly include phonons. In the experimental regime of interest to us here, the large noise increases occur at charge densities $n_s<n^{up}_s$, where the velocity curves bend down at low temperature.
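A single time step of the dynamics described above can be sketched as follows. This is a minimal illustration under the stated parameter values ($\eta = 1$, $f_d = 0.1$, $r_p = 0.2$, $f_p = 1.0$), not the code used for the simulations; in particular, the linear restoring profile of the parabolic traps and the bare (unscreened) Coulomb sum are our assumptions.

```python
import numpy as np

def langevin_step(pos, pins, L, dt, T, eta=1.0, fd=0.1, rp=0.2, fp=1.0):
    """One overdamped Langevin step for N electrons in a periodic L x L box.

    pos  : (N, 2) array of electron positions
    pins : (Np, 2) array of parabolic trap centers (radius rp, strength fp)
    Returns the updated positions and the instantaneous mean drift velocity
    along the drive direction (proportional to the conductance).
    """
    N = len(pos)
    f = np.zeros_like(pos)

    # Repulsive Coulomb force f_ij = q^2 \hat{r}_ij / r_ij^2 with q = 1,
    # using the minimum-image convention for the periodic box.
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    r2 = (d ** 2).sum(axis=-1)
    np.fill_diagonal(r2, np.inf)        # no self-interaction
    f += (d / r2[..., None] ** 1.5).sum(axis=1)

    # Parabolic pinning: a linear restoring force toward the trap center
    # for electrons inside radius rp (the exact trap profile is our assumption).
    if len(pins):
        dp = pos[:, None, :] - pins[None, :, :]
        dp -= L * np.round(dp / L)
        inside = (dp ** 2).sum(axis=-1) < rp ** 2
        f -= (fp / rp) * np.where(inside[..., None], dp, 0.0).sum(axis=1)

    # Uniform drive along x, plus Gaussian thermal kicks with
    # <f^T f^T> = 2 eta k_B T / dt per component (k_B = 1).
    f[:, 0] += fd
    f += np.sqrt(2.0 * eta * T / dt) * np.random.randn(N, 2)

    pos = (pos + (f / eta) * dt) % L
    return pos, f[:, 0].mean() / eta
```

With no pins and $T = 0$ the mean drift velocity equals $f_d/\eta$ exactly, since the pairwise Coulomb forces cancel in the sum; pinning and thermal kicks then reduce or scatter this drift.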
We next consider the relative fluctuations in the velocities, $\delta v(t)=(v(t) - <v>)$ for varied $N_{s}/N_{p}$ at a fixed $T = 0.09$ for the system in Fig. 1. This analysis is similar to that performed in experiments [@Popovic; @Sn; @Noise]. In Fig. 2 we show the time traces of the relative velocity fluctuations for $N_{s}/N_{p} = 0.5, 0.7, 1.05, 1.64$ and $2.67$. Here the fluctuations increase as $N_s$ drops, in agreement with the experiments [@Popovic]. For $N_{s}/N_{p} < 0.44$, the system is pinned and there are no fluctuations. It is possible that, over a longer time interval such as that accessible experimentally, there would be even larger fluctuations at these small $N_s$ values; however, this is beyond the time scale we can access with simulations. A similar series of time traces can be obtained for $\delta v$ at fixed $N_{s}/N_{p}$ for increasing temperature (not shown). Here the fluctuations are reduced at the higher temperatures, in agreement with experiments [@Popovic]. From the fluctuations $\delta v$ we measure the power spectrum $$S(\nu) = \left|\int \delta v(t)e^{-2\pi i\nu t}dt\right|^2 .$$ In Fig. 3 we plot $S(\nu)$ for two different charge densities. At $N_{s}/N_{p} = 0.5$ (upper curve), the spectrum shows a $1/f^{\alpha}$ characteristic with $\alpha=1.37$ over a few orders of magnitude in the frequency. In contrast, for $N_{s}/N_{p} = 1.67$ (lower curve), the noise power at lower frequencies is considerably reduced and the spectrum is white with $\alpha \approx 0$. We note that it is the lower frequencies which will be most readily accessible in experiment. For fixed $N_{s}/N_{p} = 0.5$, we find that the power spectrum becomes white upon increasing $T$. We note that our results differ quantitatively from the experimental noise measurements [@Popovic] which find $1/f^{\alpha}$ noise with $\alpha = 1.0$ in the metallic phase and $\alpha = 1.8$ in the glassy regime.
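In practice the power spectrum above is estimated from a discretely sampled velocity time series. A minimal sketch of such an estimate (hypothetical helper functions, not the analysis code used for the figures), with the exponent $\alpha$ obtained from a least-squares fit of $\log S$ against $\log\nu$:

```python
import numpy as np

def noise_spectrum(v, dt=1.0):
    """Discrete estimate of S(nu) = |FFT of dv|^2 for dv = v - <v>."""
    dv = v - v.mean()
    S = np.abs(np.fft.rfft(dv)) ** 2
    nu = np.fft.rfftfreq(len(v), d=dt)
    return nu[1:], S[1:]                 # drop the zero-frequency bin

def fit_alpha(nu, S):
    """Exponent alpha from a least-squares fit of S ~ 1/nu^alpha (log-log)."""
    slope, _ = np.polyfit(np.log(nu), np.log(S), 1)
    return -slope
```

For the simulation data one would restrict the fit to the low-frequency window where the power law holds; a white-noise series gives $\alpha \approx 0$, as in the lower curve of Fig. 3.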
Our exponent $\alpha=1.37$ is close to the $\alpha = 1.3$ found in the glassy regime in other experiments [@N2d], where $\alpha=0.75$ in the metallic regime. It is possible that the exponents are not universal but depend on the details of the disorder strength; nevertheless, our results are in qualitative agreement with the experiments. In Fig. 4(a) we show the noise power $S_{0}$ integrated over the first octave vs $N_{s}/N_{p}$ for a fixed $T = 0.09$. The noise power increases by four orders of magnitude as $N_{s}/N_p$ is reduced. At low $N_{s}/N_{p}$, the noise power decreases almost exponentially with charge density and begins to saturate at high $N_{s}/N_p$. Both these observations are in agreement with the experimental results [@Sn; @Noise]. In the inset of Fig. 4(a) we plot the noise spectrum exponent $\alpha$ vs $N_{s}/N_{p}$. A large increase in $\alpha$ occurs near $N_{s}/N_{p} = 0.7$. A similar sharp increase in the exponent is also observed in experiments [@Sn; @Noise; @N2d] as a function of charge density and has been interpreted as the glassy freezing transition. In Fig. 4(b) we plot $S_{0}$ vs $T$ for fixed $N_{s}/N_{p} = 0.5$. Here the noise power drops exponentially over four orders of magnitude with increasing $T$, which is in agreement with the experiments [@Sn; @Noise]. We note that most single electron models predict an [*increase*]{} in the noise power with temperature [@Weissman; @Voss]. Other models in the hopping regime predict a noise power that is independent of $T$ or has a power law decrease with $T$ [@Kozub]. These discrepancies suggest that the noise in the experiment is not due to single electron hopping events, but is instead caused by correlated electron motions. In the inset of Fig. 4(b) we plot $\alpha$ vs $T$, showing that a sharp increase in $\alpha$ occurs near $T = 0.125$ at the onset of the glassy freezing. We have also measured the non-Gaussian nature of the noise. 
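The octave-integrated noise power $S_{0}$ used above can be computed from a discrete spectrum as follows. This is a schematic helper, not the authors' code; the convention that the first octave spans $[\nu_{\min}, 2\nu_{\min}]$, with $\nu_{\min}$ the lowest resolved frequency, is our assumption.

```python
import numpy as np

def first_octave_power(nu, S):
    """Trapezoidal integral of S(nu) over the first octave [nu_min, 2*nu_min].

    nu : ascending array of (nonzero) frequency bins
    S  : power spectrum values at those bins
    """
    nu_min = nu[0]                       # lowest resolved frequency
    mask = (nu >= nu_min) & (nu <= 2.0 * nu_min)
    x, y = nu[mask], S[mask]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))
```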
At high $N_{s}$ and $T$ the noise fluctuations are Gaussian; however, in the regions of high noise power we find non-Gaussian noise fluctuations with a skewed distribution. Experiments have also found evidence for non-Gaussian fluctuations in the glassy regimes [@Noise]. Next we show evidence that the large noise is due to correlated regions of string-like electron flow, and that within these regions the electrons move in 1D or quasi-1D channels. Because of the reduced dimensionality the electron motion is more correlated. In Fig. 5(a) we show the trajectories of the electrons for a fixed period of time for a system with $T = 0.09$ at $N_{s}/N_{p} = 1.67$. Here the electrons can flow freely throughout the sample, although there are some areas where electrons become temporarily trapped by a defect site. In Fig. 5(b) at $N_{s}/N_{p} = 1.37$, where the noise power is larger than in the system shown in Fig. 5(a), larger pinned regions appear and the electron motion consists of a mixture of 2D and 1D regions. If the trajectories are followed over longer times, motion occurs throughout the entire sample. In Fig. 5(c) at $N_{s}/N_{p} = 0.5$, where the noise power and $\alpha$ are both maximum, the electron motion occurs mostly in the form of 1D channels that percolate through the sample. There are also regions where the electron motion occurs in small rings. The channel structures change very slowly with time, with a channel occasionally shutting off while another emerges elsewhere. It is the intermittent opening of the 1D channels which gives rise to the large noise fluctuations in this regime. When a percolating 1D channel opens, all the electrons in that channel move in a correlated fashion leading to a large increase in the conduction. Conversely, if a percolating channel closes all the electrons in that channel cease to move. It is well known that fluctuations in 1D are much more strongly enhanced than in 2D.
As $T$ or $N_{s}$ is increased, the motion becomes increasingly 2D in nature and the strong correlations of the electron motion are lost. We also note that the appearance of string-like motions in the large noise regions is consistent with studies in glassy systems, where dynamical heterogeneities in the form of 1D stringlike motions of particles have been observed in conjunction with large noise [@Glotzer]. In Fig. 5(d) at $N_{s}/N_{p} = 0.3$, deep in the insulating regime, there are no channels. Instead, the infrequent motion of electrons occurs only by small jumps from defect to defect. In conclusion, we have presented a simple model for the glassy freezing of interacting electrons in 2D with random disorder. For high electron density or high temperatures, the electrons form a 2D liquid state and we find low conduction noise power with a white spectrum. As the density of the electrons is lowered for fixed temperature, or conversely, as the temperature is lowered for fixed low electron density, there is a crossover to $1/f^{\alpha}$ noise with large low-frequency power and $\alpha = 1.37$. In this glassy regime, the electrons move in 1D intermittent stringlike paths which percolate throughout the sample. Similar stringlike motions are also observed in other glass forming systems. For low electron density, all the electrons are frozen by the defect sites and the motion occurs only by single electron hopping events. We find that the noise power decreases exponentially with temperature, in agreement with experiment. Many of our results are in qualitative agreement with recent experiments on 2D electron systems near the metal insulator transition. This work was supported by the US DoE under Contract No. W-7405-ENG-36. For a review, see: E. Abrahams, S.V. Kravchenko, and M.P. Sarachik, Rev. Mod. Phys. [**73**]{}, 251 (2001). J.H. Davies, P.A. Lee, and T.M. Rice, Phys. Rev. Lett. [**49**]{}, 758 (1982). S. Bogdanovich and D. Popović, Phys. Rev. Lett.
[**88**]{}, 236401 (2002). S. Bogdanovich and D. Popović, Physica E [**12**]{}, 604 (2002). J. Jaroszyński, D. Popović, and T.M. Klapwijk, Phys. Rev. Lett. [**89**]{}, 276401 (2002). S. Kar [*et al.*]{}, Phys. Rev. Lett. [**91**]{}, 216603 (2003). M.B. Weissman, Rev. Mod. Phys. [**60**]{}, 537 (1988); P. Dutta [*et al.*]{}, Rev. Mod. Phys. [**53**]{}, 497 (1981). R.F. Voss and J. Clarke, Phys. Rev. B [**13**]{}, 556 (1976); O. Cohen and Z. Ovadyahu, Phys. Rev. B [**50**]{}, 10442 (1994). R. Leturcq [*et al.*]{}, Phys. Rev. Lett. [**90**]{}, 076402 (2003). E. Bielejec and W. Wu, Phys. Rev. Lett. [**87**]{}, 256601 (2001). S. Chakravarty, S. Kivelson, C. Nayak, and K. Voelker, Philos. Mag. B [**79**]{}, 859 (1999). J.S. Thakur and D. Neilson, Phys. Rev. B [**59**]{}, R5280 (1999). A.A. Pastor and V. Dobrosavljevic, Phys. Rev. Lett. [**83**]{}, 4642 (1999). V. Dobrosavljevic, D. Tanaskovic, and A.A. Pastor, Phys. Rev. Lett. [**90**]{}, 016402 (2003). For reviews, see: S.C. Glotzer, J. Non-Cryst. Solids [**274**]{}, 342 (2000); R. Richert, J. Phys. Cond. Mat. [**14**]{}, R703 (2002). A.C. Marley, M.J. Higgins, and S. Bhattacharya, Phys. Rev. Lett. [**74**]{}, 3029 (1995). M.-C. Cha and H.A. Fertig, Phys. Rev. Lett. [**73**]{}, 870 (1994). M.-C. Cha and H.A. Fertig, Phys. Rev. B [**50**]{}, 14368 (1994); C. Reichhardt [*et al.*]{}, Phys. Rev. Lett. [**86**]{}, 4354 (2001). N. Grønbech-Jensen, Int. J. Mod. Phys. C [**8**]{}, 1287 (1997). B.I. Shklovskii, Solid State Commun. [**33**]{}, 273 (1980); V.I. Kozub, Solid State Commun. [**97**]{}, 843 (1996); K. Shtengel and C.C. Yu, Phys. Rev. B [**67**]{}, 165106 (2003).
--- abstract: 'Let $G=G_1\ast\dots\ast G_k\ast F$ be a countable group which splits as a free product, where all groups $G_i$ are freely indecomposable and not isomorphic to $\mathbb{Z}$, and $F$ is a finitely generated free group. If for all $i\in\{1,\dots,k\}$, both $G_i$ and its outer automorphism group $\text{Out}(G_i)$ satisfy the Tits alternative, then $\text{Out}(G)$ satisfies the Tits alternative. As an application, we prove that the Tits alternative holds for outer automorphism groups of right-angled Artin groups, and of torsion-free groups that are hyperbolic relative to a finite family of virtually polycyclic groups.' author: - Camille Horbez bibliography: - '/Users/Camille/Documents/Bibliographie.bib' title: The Tits alternative for the automorphism group of a free product --- Introduction {#introduction .unnumbered} ============ In his celebrated 1972 paper [@Tit72], Tits proved that any subgroup of a finitely generated linear group over an arbitrary field is either virtually solvable, or contains a rank two free subgroup. This dichotomy has since been shown to hold for various classes of groups, such as hyperbolic groups (Gromov [@Gro87]), mapping class groups of compact surfaces (Ivanov [@Iva84], McCarthy [@McC85]), outer automorphism groups $\text{Out}(F_N)$ of finitely generated free groups (Bestvina, Feighn and Handel [@BFH00; @BFH05]), groups acting freely and properly on a CAT(0) cube complex (Sageev and Wise [@SW05]), the group of polynomial automorphisms of $\mathbb{C}^2$ (Lamy [@Lam01]), groups of bimeromorphic automorphisms of compact complex Kähler manifolds (Oguiso [@Ogu06]), groups of birational transformations of compact complex Kähler surfaces (Cantat [@Can11]).
For the first four classes of groups mentioned above, as well as in Oguiso’s theorem, a slightly stronger result than Tits’ actually holds, since virtually solvable subgroups can be shown to be finitely generated and virtually abelian, with a bound on the index of the abelian subgroup (see [@BLM83] for the mapping class group case, see [@Ali02; @BFH04] for the $\text{Out}(F_N)$ case, see [@BH99] for the case of groups acting on a CAT(0) cube complex). A group $G$ *satisfies the Tits alternative* if every subgroup of $G$ (finitely generated or not) is either virtually solvable, or contains a rank two free subgroup. More generally, we will make the following definition. Let $\mathcal{C}$ be a collection of groups. A group $G$ *satisfies the Tits alternative relative to $\mathcal{C}$* if every subgroup of $G$ either belongs to $\mathcal{C}$, or contains a rank two free subgroup. The classical Tits alternative corresponds to the case where $\mathcal{C}$ is the class of virtually solvable groups. It is often interesting to show stability results for the Tits alternative: when a group $G$ is built in some way out of simpler subgroups $G_i$, it is worth knowing that one can deduce the Tits alternative for $G$ from the Tits alternative for the $G_i$’s. The Tits alternative is known to be stable under some basic group-theoretic constructions, such as passing to subgroups or to finite index supergroups; it is also stable under extensions – we insist that it is important here to allow for subgroups of $G$ that are not finitely generated in the definition of the Tits alternative. Antolín and Minasyan established stability results of the Tits alternative for graph products of groups [@AM13].\ \ Our main result is about deducing the Tits alternative for the outer automorphism group of a free product of groups $G_i$, under the assumption that all groups $G_i$ and $\text{Out}(G_i)$ satisfy it.
A celebrated theorem of Grushko [@Gru40] states that any finitely generated group $G$ splits as a free product of the form $$G=G_1\ast\dots\ast G_k\ast F,$$ where for all $i\in\{1,\dots,k\}$, the group $G_i$ is nontrivial, not isomorphic to $\mathbb{Z}$, and freely indecomposable, and $F$ is a finitely generated free group. This *Grushko decomposition* is unique in the sense that both the number $k$ of indecomposable factors, and the rank of the free group $F$, are uniquely determined by $G$, and the conjugacy classes of the freely indecomposable factors are also uniquely determined, up to permutation. Our main result reduces the study of the Tits alternative of the outer automorphism group of any finitely generated group to that of its indecomposable pieces. It answers a question of Charney and Vogtmann, who were interested in the Tits alternative for outer automorphisms of right-angled Artin groups. \[Tits-intro\] Let $G$ be a finitely generated group, and let $$G:=G_1\ast\dots\ast G_k\ast F$$ be the Grushko decomposition of $G$. Assume that for all $i\in\{1,\dots,k\}$, both $G_i$ and $\text{Out}(G_i)$ satisfy the Tits alternative.\ Then $\text{Out}(G)$ satisfies the Tits alternative. Again, we insist on the fact that when we assume that the groups $G_i$ and $\text{Out}(G_i)$ satisfy the Tits alternative, it is important to consider all their subgroups (finitely generated or not) in the definition of the Tits alternative, even if we are only interested in establishing this alternative for finitely generated subgroups of $\text{Out}(G)$. Under the assumptions of Theorem \[Tits-intro\], since the Tits alternative is stable under extensions, the full automorphism group $\text{Aut}(G)$ also satisfies the Tits alternative. When $k=0$, we get a new, shorter proof of the Tits alternative for the outer automorphism group $\text{Out}(F_N)$ of a finitely generated free group, that was originally established by Bestvina, Feighn and Handel [@BFH00; @BFH05]. 
In particular, this gives a new proof of the Tits alternative for the mapping class group of a compact surface with nonempty boundary. More generally, if $\mathcal{C}$ is a collection of groups that is stable under isomorphisms, contains $\mathbb{Z}$, and is stable under passing to subgroups, to extensions, and to finite index supergroups, we show that $\text{Out}(G)$ satisfies the Tits alternative relative to $\mathcal{C}$, as soon as all $G_i$ and $\text{Out}(G_i)$ do, see Theorem \[Tits\]. This applies for example to the class of virtually polycyclic groups. Bestvina, Feighn and Handel actually proved the Tits alternative for $\text{Out}(F_N)$ relative to the collection of all abelian groups [@BFH04], which does not follow from our main result. More generally, it would be of interest to know whether the version of Theorem \[Tits-intro\] relative to the class of abelian groups holds.\ \ Theorem \[Tits-intro\] can be applied to prove the Tits alternative for outer automorphism groups of various interesting classes of groups. In [@CV11], Charney and Vogtmann proved the Tits alternative for the outer automorphism group of a right-angled Artin group $A_{\Gamma}$ associated to a finite simplicial graph $\Gamma$, under a homogeneity assumption on $\Gamma$. As noticed in [@CV11 Section 7], Theorem \[Tits-intro\] enables us to remove this assumption. This was Charney and Vogtmann’s original motivation for asking the question about the Tits alternative for the outer automorphism group of a free product. Basically, when $\Gamma$ is disconnected, the group $A_{\Gamma}$ splits as a free product of the subgroups $A_{\Gamma_i}$ associated to its connected components, and Theorem \[Tits-intro\] enables us to argue by induction on the number of vertices of $\Gamma$, using Charney and Vogtmann’s results from [@CV11]. \[Tits-raag\] For all finite simplicial graphs $\Gamma$, the group $\text{Out}(A_{\Gamma})$ satisfies the Tits alternative. 
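To make the inductive step for right-angled Artin groups concrete (in our notation, using the standard fact that a connected graph with at least one edge yields a freely indecomposable $A_{\Gamma_i}$): if $\Gamma$ has connected components $\Gamma_1,\dots,\Gamma_m$, of which $\Gamma_{r+1},\dots,\Gamma_m$ are isolated vertices, then $$A_{\Gamma}\;\cong\;A_{\Gamma_1}\ast\dots\ast A_{\Gamma_r}\ast F_{m-r},$$ where $F_{m-r}$ is free of rank $m-r$. This is the Grushko decomposition of $A_{\Gamma}$, so Theorem \[Tits-intro\] reduces the Tits alternative for $\text{Out}(A_{\Gamma})$ to that of the groups $\text{Out}(A_{\Gamma_i})$ associated to the connected components with at least one edge.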
Theorem \[Tits-intro\] also applies to the outer automorphism group of a torsion-free group $G$ that is hyperbolic relative to a finite collection $\mathcal{P}$ of virtually polycyclic subgroups. Indeed, it enables us to restrict to the case where $G$ is freely indecomposable relative to $\mathcal{P}$, i.e. $G$ does not split as a free product of the form $G=A\ast B$, where all subgroups in $\mathcal{P}$ are conjugate into either $A$ or $B$. In the freely indecomposable case, the group of outer automorphisms of $G$ was described by Guirardel and Levitt as being built out of mapping class groups and subgroups of linear groups [@GL14]. \[Tits-relhyp\] Let $G$ be a torsion-free group that is hyperbolic relative to a finite collection of virtually polycyclic subgroups. Then $\text{Out}(G)$ satisfies the Tits alternative. More generally, if $G$ is a torsion-free group that is hyperbolic relative to a finite family of finitely generated parabolic subgroups, we show that if all parabolic subgroups, as well as their outer automorphism groups, satisfy the Tits alternative, then the subgroup of $\text{Out}(G)$ made of those automorphisms that preserve the conjugacy classes of all parabolic subgroups also satisfies the Tits alternative. We refer to Theorem \[tits-rh\] for a precise statement.\ \ We now describe the main ideas in our proof of Theorem \[Tits-intro\]. In the case of the mapping class group $\text{Mod}(S)$ of a compact surface $S$, one way of proving the Tits alternative is to start by proving the following trichotomy: every subgroup $H\subseteq\text{Mod}(S)$ either - contains two pseudo-Anosov diffeomorphisms of $S$ that generate a rank two free subgroup of $H$, or - is virtually cyclic, virtually generated by a pseudo-Anosov diffeomorphism, or - virtually fixes the isotopy class of a simple closed curve on $S$. This trichotomy was proved by Ivanov in [@Iva92], and independently by McCarthy and Papadopoulos in [@McP89].
They started by proving that every subgroup of $\text{Mod}(S)$ either contains a pseudo-Anosov diffeomorphism, or virtually fixes the isotopy class of a simple closed curve on $S$, before studying subgroups of $\text{Mod}(S)$ that contain a pseudo-Anosov diffeomorphism. Once the above trichotomy is established, a second step in the proof of the Tits alternative consists in arguing by induction, in the case where $H$ preserves the isotopy class of a simple closed curve $\gamma$. In this case, by cutting $S$ along $\gamma$, we get a collection of subsurfaces. The Tits alternative is proved by induction, by considering the restrictions of the diffeomorphisms in $H$ to these subsurfaces.\ \ Our proof of Theorem \[Tits-intro\] follows the same strategy. For the inductive step, we will need to work with decompositions of $G$ into free products that are not necessarily equal to the Grushko decomposition. From now on, we let $G$ be a countable group that splits as a free product of the form $$G:=G_1\ast\dots\ast G_k\ast F,$$ where $F$ is a finitely generated free group, and all $G_i$ are nontrivial. We do not require this decomposition to be the Grushko decomposition of $G$: some factors $G_i$ can be equal to $\mathbb{Z}$, or be freely decomposable. We actually do not even require $G$ to be finitely generated: some $G_i$ might be infinitely generated (however the number $k$ of factors arising in the splitting is finite, and $F$ is finitely generated). We denote by $\mathcal{F}:=\{[G_1],\dots,[G_k]\}$ the finite set of all $G$-conjugacy classes of the $G_i$’s, which we call a *free factor system* of $G$. We denote by $\text{Out}(G,\mathcal{F})$ the subgroup of $\text{Out}(G)$ made of those outer automorphisms of $G$ that send each $G_i$ to a conjugate. Theorem \[Tits-intro\] is a particular case of the following version, which is suitable for our inductive arguments. \[theo-Tits-2\] Let $G$ be a countable group, and let $\mathcal{F}$ be a free factor system of $G$. 
Assume that for all $i\in\{1,\dots,k\}$, both $G_i$ and $\text{Out}(G_i)$ satisfy the Tits alternative relative to $\mathcal{C}$, where $\mathcal{C}$ is a collection of groups that is stable under isomorphisms, contains $\mathbb{Z}$, and is stable under subgroups, extensions, and passing to finite index supergroups.\ Then $\text{Out}(G,\mathcal{F})$ satisfies the Tits alternative relative to $\mathcal{C}$. As mentioned above, our proof of Theorem \[theo-Tits-2\] will consist in two steps: establishing a trichotomy for subgroups $H\subseteq\text{Out}(G,\mathcal{F})$, and applying an inductive argument. The induction step consists in dealing with the case where $H$ virtually preserves the conjugacy class of a proper *$(G,\mathcal{F})$-free factor*. A *$(G,\mathcal{F})$-free factor* is a subgroup $A\subseteq G$ such that $G$ splits as a free product of the form $G=A\ast B$, and for all $i\in\{1,\dots,k\}$, the group $G_i$ is conjugate into either $A$ or $B$. A $(G,\mathcal{F})$-free factor is *proper* if it is nontrivial, not conjugate to any of the $G_i$’s, and not equal to $G$. When $H$ preserves the conjugacy class of a proper free factor $A$, the group $H$ is contained in $\text{Out}(G,\mathcal{F}')$, where $\mathcal{F'}$ is the free factor system of $G$ obtained from $\mathcal{F}$ by removing all subgroups in $\mathcal{F}$ that are conjugate into $A$, and replacing them by the $G$-conjugacy class of the factor $A$. When passing from $(G,\mathcal{F})$ to $(G,\mathcal{F}')$, some measure of complexity decreases, which enables us to argue by induction.\ \ We now describe our analogue of Ivanov’s trichotomy for subgroups of $\text{Out}(G,\mathcal{F})$. We first state an analogous trichotomy for subgroups of $\text{Out}(F_N)$. We recall that an automorphism $\Phi\in\text{Out}(F_N)$ is *fully irreducible* if no nontrivial power of $\Phi$ preserves the conjugacy class of a proper free factor of $F_N$. 
Every subgroup of $\text{Out}(F_N)$ (finitely generated or not) either - contains two fully irreducible automorphisms that generate a rank two free subgroup, or - is virtually cyclic, virtually generated by a fully irreducible automorphism, or - virtually fixes the conjugacy class of a proper free factor of $F_N$. In [@HM09], Handel and Mosher proved that any finitely generated subgroup of $\text{Out}(F_N)$ either contains a fully irreducible automorphism, or virtually fixes the conjugacy class of a proper free factor. Their proof uses the same kinds of techniques as Bestvina, Feighn and Handel’s proof of the Tits alternative [@BFH00], so it cannot be used to get a new proof of the Tits alternative for $\text{Out}(F_N)$. The study of subgroups of $\text{Out}(F_N)$ that contain a fully irreducible element is due to Bestvina, Feighn and Handel [@BFH97]; another approach is due to Kapovich and Lustig [@KL11]. In [@Hor14-3], we gave a new, shorter proof of the above trichotomy, independent from the work in [@BFH00], that also works for non finitely generated subgroups of $\text{Out}(F_N)$. Our proof of this statement uses the action of $\text{Out}(F_N)$ on the *free factor complex* $FF_N$, whose hyperbolicity was originally proved by Bestvina and Feighn [@BF12]. Bestvina and Feighn also proved that an automorphism $\Phi\in\text{Out}(F_N)$ acts loxodromically on $FF_N$ if and only if $\Phi$ is fully irreducible. In terms of the action of $\text{Out}(F_N)$ on $FF_N$, the above trichotomy can be restated as follows: every subgroup of $\text{Out}(F_N)$ either - contains a rank two free subgroup generated by two loxodromic isometries of $FF_N$, or - is virtually cyclic, virtually generated by a loxodromic isometry of $FF_N$, or - has a finite orbit in $FF_N$.
More generally, given a group $G$ acting by isometries on a (possibly non-proper) hyperbolic space $X$, it follows from a classification of groups of isometries of hyperbolic spaces due to Gromov [@Gro87] that either $G$ - contains a rank two free subgroup, generated by two loxodromic isometries of $X$, or - has a fixed point in the Gromov boundary $\partial_{\infty}X$, or - has a bounded orbit in $X$. The key point for deducing the above trichotomy statement for subgroups of $\text{Out}(F_N)$ from Gromov’s statement consists in showing that if $H$ has a bounded orbit in $FF_N$, then $H$ has a finite orbit in $FF_N$. This is not obvious because $FF_N$ is not locally finite. To bypass this difficulty, we studied stationary measures on the compact closure of Culler and Vogtmann’s outer space $CV_N$, and projected them to the Gromov boundary of the complex of free factors. In our proof of the above trichotomy, we also need to understand stabilizers of points in $\partial_{\infty}FF_N$ for dealing with the second case in Gromov’s theorem.\ \ We prove a similar trichotomy for subgroups of $\text{Out}(G,\mathcal{F})$, with $(G,\mathcal{F})$ as above. To this end, we work with relative outer space $P\mathcal{O}(G,\mathcal{F})$, and the complex of relative cyclic splittings $FZ(G,\mathcal{F})$. The geometry of these complexes was investigated in a series of previous papers [@Hor14-5; @Hor14-6]. In [@Hor14-5], we described a compactification $\overline{P\mathcal{O}(G,\mathcal{F})}$ of the relative outer space in terms of very small actions of $G$ on $\mathbb{R}$-trees. In [@Hor14-6], we proved the hyperbolicity of the complex of relative cyclic splittings, and described its Gromov boundary as a quotient of a subspace $P\mathcal{X}(G,\mathcal{F})$ of $\overline{P\mathcal{O}(G,\mathcal{F})}$. Assume that the pair $(G,\mathcal{F})$ is *nonsporadic*, i.e. we do not have $G=G_1\ast G_2$ and $\mathcal{F}=\{[G_1],[G_2]\}$, or $G=G_1\ast\mathbb{Z}$ and $\mathcal{F}=\{[G_1]\}$.
The trichotomy that we prove for subgroups of $\text{Out}(G,\mathcal{F})$ is the following: every subgroup $H\subseteq\text{Out}(G,\mathcal{F})$ (finitely generated or not) either

- contains a rank two free subgroup, generated by two loxodromic isometries of $FZ(G,\mathcal{F})$, or
- virtually fixes a tree with trivial arc stabilizers in $\partial P\mathcal{O}(G,\mathcal{F})$, or
- virtually preserves the conjugacy class of a proper $(G,\mathcal{F})$-free factor.

Again, the key point is to understand subgroups of $\text{Out}(G,\mathcal{F})$ with bounded orbits in $FZ(G,\mathcal{F})$. We show that if a subgroup $H\subseteq \text{Out}(G,\mathcal{F})$ does not virtually preserve the conjugacy class of any proper $(G,\mathcal{F})$-free factor, then the $H$-orbit of any point of $FZ(G,\mathcal{F})$ has a limit point in the Gromov boundary. Our argument relies on techniques coming from the theory of random walks on groups. Given a probability measure $\mu$ on $\text{Out}(G,\mathcal{F})$ whose support generates the subgroup $H$, we consider *$\mu$-stationary* measures $\nu$ on $\overline{P\mathcal{O}(G,\mathcal{F})}$, i.e. probability measures that satisfy $$\nu(E)=\sum_{\Phi\in\text{Out}(G,\mathcal{F})}\mu(\Phi)\nu(\Phi^{-1}E)$$ for all $\nu$-measurable subsets $E\subseteq \overline{P\mathcal{O}(G,\mathcal{F})}$. Compactness of $\overline{P\mathcal{O}(G,\mathcal{F})}$ yields the existence of a $\mu$-stationary measure on $\overline{P\mathcal{O}(G,\mathcal{F})}$ that describes the distribution of accumulation points of sample paths of the random walk on $\text{Out}(G,\mathcal{F})$, realized on $P\mathcal{O}(G,\mathcal{F})$ via the action. This is the Markov chain whose position at time $n$ is obtained by multiplying on the right $n$ independent automorphisms, all distributed with law $\mu$. We prove that any $\mu$-stationary measure $\nu$ on $\overline{P\mathcal{O}(G,\mathcal{F})}$ is supported on the subspace $P\mathcal{X}(G,\mathcal{F})$.
The measure $\nu$ therefore projects to a $\mu$-stationary measure on the Gromov boundary of $FZ(G,\mathcal{F})$. The closure of the $H$-orbit of any point in $FZ(G,\mathcal{F})$ meets the support of $\nu$, which shows the existence of a limit point in the Gromov boundary. To prove the Tits alternative for $\text{Out}(G,\mathcal{F})$, we also need to understand subgroups of $\text{Out}(G,\mathcal{F})$ that stabilize a tree with trivial arc stabilizers in $\partial P\mathcal{O}(G,\mathcal{F})$, which is made possible by work of Guirardel and Levitt [@GL14-2]. When $H$ fixes the conjugacy class of a proper free factor, we argue by induction, as explained above.\ \ As we are considering invariant free factors (and not invariant splittings) for the inductive step, it might seem more natural to work directly in the complex of proper $(G,\mathcal{F})$-free factors, whose hyperbolicity was recently proved by Handel and Mosher [@HM14-2], and to try to prove that every subgroup of $\text{Out}(G,\mathcal{F})$ either has a finite orbit, or has a limit point in the Gromov boundary. However, describing the Gromov boundary of the complex of proper $(G,\mathcal{F})$-free factors is still an open problem. We bypass this difficulty by working in the complex $FZ(G,\mathcal{F})$, whose Gromov boundary was described in [@Hor14-6].\ \ The paper is organized as follows. In Section \[sec-1\], we review basic facts about Gromov hyperbolic spaces, free products of groups, and relative spaces associated to them. In Section \[sec-2\], we deal with the *sporadic* cases where either $G=G_1\ast G_2$ and $\mathcal{F}=\{[G_1],[G_2]\}$, or $G=G_1\ast\mathbb{Z}$ and $\mathcal{F}=\{[G_1]\}$. In Section \[sec-3\], we state Guirardel and Levitt's theorem about stabilizers of trees in $\overline{P\mathcal{O}(G,\mathcal{F})}$ that is needed in our proof of Theorem \[theo-Tits-2\].
Section \[sec-4\] contains a study of *arational* $(G,\mathcal{F})$-trees, which is used in Section \[sec-5\] to establish the trichotomy for subgroups of $\text{Out}(G,\mathcal{F})$. Section \[sec-6\] is devoted to the inductive arguments. The reader will also find complete versions of our various statements of the Tits alternative in this section. Finally, in Section \[sec-7\], we give applications of our main result to automorphism groups of right-angled Artin groups, and of relatively hyperbolic groups.

Acknowledgements {#acknowledgements .unnumbered}
================

It is a great pleasure to thank my advisor Vincent Guirardel for the many interesting discussions we had together. I acknowledge support from ANR-11-BS01-013 and from the Lebesgue Center of Mathematics.

Review {#sec-1}
======

Gromov hyperbolic spaces
------------------------

A geodesic metric space $(X,d)$ is *Gromov hyperbolic* if there exists $\delta>0$ such that for all $x,y,z\in X$, and all geodesic segments $[x,y],[y,z]$ and $[x,z]$, we have $[x,z]\subseteq N_{\delta}([x,y])\cup N_{\delta}([y,z])$ (where given $Y\subseteq X$, we denote by $N_{\delta}(Y)$ the $\delta$-neighborhood of $Y$ in $X$). The *Gromov boundary* $\partial_{\infty} X$ of $X$ is the space of equivalence classes of quasi-geodesic rays in $X$, two rays being equivalent if their images lie at bounded Hausdorff distance (we recall that a *quasi-geodesic ray* is a map $\gamma:\mathbb{R}_+\to X$ for which there exist $K,L>0$ such that for all $s,t\in\mathbb{R}_+$, we have $\frac{1}{K}|t-s|-L\le d(\gamma(s),\gamma(t))\le K|t-s|+L$). An isometry $\phi$ of $X$ is *loxodromic* if for all $x\in X$, we have $$\lim_{n\to +\infty}\frac{1}{n}d(x,\phi^nx)>0.$$ Given a group $G$ acting by isometries on $X$, we denote by $\Lambda_XG$ the *limit set* of $G$ in $\partial_{\infty} X$, which is defined as the intersection of $\partial_{\infty} X$ with the closure of the orbit of any point in $X$ under the $G$-action.
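A simple worked instance of the loxodromic condition (standard, not from the text): take $X$ to be the Cayley graph of $F_2=\langle a,b\rangle$ with respect to $\{a,b\}$, a simplicial tree (hence $0$-hyperbolic), and let $\phi$ be left translation by $a$.

```latex
% For the base vertex x = 1, the word metric gives d(1, a^n) = n, so
\lim_{n\to+\infty}\frac{1}{n}\,d(x,\phi^n x)
=\lim_{n\to+\infty}\frac{n}{n}=1>0,
% and phi is loxodromic. For any other vertex x, the distance d(x, a^n x)
% differs from n by at most twice the distance from x to the axis of a,
% so the limit is again 1.
```

By contrast, an isometry fixing a vertex has bounded orbits, so the limit vanishes and it is not loxodromic.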
The following theorem, essentially due to Gromov, gives a classification of isometry groups of (possibly nonproper) Gromov hyperbolic spaces. A sketch of proof can be found in [@CCMT13 Proposition 3.1], see also [@Ham13 Theorem 2.7].

(Gromov [@Gro87 Section 8.2])\[Gromov-1\] Let $X$ be a hyperbolic geodesic metric space, and let $G$ be a group acting by isometries on $X$. Then $G$ is either

- *bounded*, i.e. all $G$-orbits in $X$ are bounded; in this case $\Lambda_X G=\emptyset$, or
- *horocyclic*, i.e. $G$ is not bounded and contains no loxodromic element; in this case $\Lambda_X G$ is reduced to one point, or
- *lineal*, i.e. $G$ contains a loxodromic element, and any two loxodromic elements have the same fixed points in $\partial_{\infty} X$; in this case $\Lambda_X G$ consists of these two points, or
- *focal*, i.e. $G$ is not lineal, contains a loxodromic element, and any two loxodromic elements have a common fixed point in $\partial_{\infty} X$; in this case $\Lambda_X G$ is uncountable and $G$ has a fixed point in $\Lambda_X G$, or
- *of general type*, i.e. $G$ contains two loxodromic elements with no common endpoints; in this case $\Lambda_X G$ is uncountable and $G$ has no finite orbit in $\partial_{\infty} X$. In addition, the group $G$ contains two loxodromic isometries that generate a rank two free subgroup.

In particular, we have the following result.

(Gromov [@Gro87 Section 8.2])\[Gromov\] Let $X$ be a hyperbolic geodesic metric space, and let $G$ be a group acting by isometries on $X$. If $\Lambda_X G\neq\emptyset$, and $G$ has no finite orbit in $\partial_{\infty} X$, then $G$ contains a rank two free subgroup generated by two loxodromic isometries.

Free factor systems and relative complexes {#sec-relative}
------------------------------------------

#### Free factor systems. {#free-factor-systems. .unnumbered}

Let $G$ be a countable group that splits as a free product of the form $$G:=G_1\ast\dots\ast G_k\ast F,$$ where $F$ is a finitely generated free group. We let $\mathcal{F}:=\{[G_1],\dots,[G_k]\}$ be the finite collection of all $G$-conjugacy classes of the $G_i$'s. We fix a free basis $\{g_1,\dots,g_N\}$ of $F$, and we let $T^{\text{def}}$ be the $G$-tree defined as the Bass–Serre tree of the graph of groups decomposition of $G$ depicted on Figure \[fig-def\]. The rank of the free group $F$ arising in the splitting of $G$ only depends on $\mathcal{F}$. We call it the *free rank* of $(G,\mathcal{F})$ and denote it by $\text{rk}_f(G,\mathcal{F})$. The *Kurosh rank* of $(G,\mathcal{F})$ is defined as $\text{rk}_K(G,\mathcal{F}):=|\mathcal{F}|+\text{rk}_f(G,\mathcal{F})$. Subgroups of $G$ which are conjugate into one of the subgroups of $\mathcal{F}$ will be called *peripheral* subgroups. A *$(G,\mathcal{F})$-free splitting* is a minimal, simplicial $G$-tree $T$ in which all peripheral subgroups are *elliptic* (i.e. they fix a point in $T$), and edge stabilizers are trivial.

#### Subgroups of free products. {#subgroups-of-free-products. .unnumbered}

Subgroups of free products were studied by Kurosh in [@Kur34]. Let $H$ be a subgroup of $G$. By considering the $H$-minimal subtree in the tree $T^{\text{def}}$ (see the definition in Section \[sec-o\] below), we get the existence of a (possibly infinite) set $J$, together with an integer $i_j\in\{1,\dots,k\}$, a nontrivial subgroup $H_{j}\subseteq G_{i_j}$ and an element $g_{j}\in G$ for each $j\in J$, and a (not necessarily finitely generated) free subgroup $F'\subseteq G$, so that $$H=\ast_{j\in J}~ g_{j}H_{j}g_{j}^{-1}\ast F'.$$ This splitting will be called the *Kurosh decomposition* of $H$. The *Kurosh rank* of $H$ is equal to $\text{rk}_K(H):=|J|+\text{rk}(F')$, and its *free rank* is $\text{rk}_f(H):=\text{rk}(F')$. Both can be infinite in general.
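A small worked example of the Kurosh decomposition (the specific subgroup is illustrative, not taken from the text): let $G=G_1\ast G_2\ast F$ with $F=F(a,b)$ free of rank $2$, so that $\mathcal{F}=\{[G_1],[G_2]\}$, $\text{rk}_f(G,\mathcal{F})=2$ and $\text{rk}_K(G,\mathcal{F})=4$.

```latex
% The subgroup generated by b G_1 b^{-1} and a decomposes as
H=b\,G_1\,b^{-1}\ast\langle a\rangle,
% a Kurosh decomposition with J = {1}, H_1 = G_1, g_1 = b and F' = <a>, whence
\text{rk}_K(H)=|J|+\text{rk}(F')=1+1=2,
\qquad
\text{rk}_f(H)=\text{rk}(F')=1.
```

That the two factors generate their free product inside $G$ can be checked with the normal form theorem for free products.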
We also let $\mathcal{F}_H$ denote the set of $H$-conjugacy classes of the subgroups $g_{j}H_{j}g_{j}^{-1}$, which might also be infinite in general. We note that $\text{rk}_f(H)$ and $\mathcal{F}_H$ (and hence $\text{rk}_K(H)$) only depend on $H$ and $\mathcal{F}$, and not on our initial choice of $T^{\text{def}}$.

#### Free factors. {#free-factors. .unnumbered}

A *$(G,\mathcal{F})$-free factor* is a subgroup of $G$ that is a point stabilizer in some $(G,\mathcal{F})$-free splitting. A $(G,\mathcal{F})$-free factor is *proper* if it is nonperipheral (in particular nontrivial), and not equal to $G$. The Kurosh decomposition of a proper $(G,\mathcal{F})$-free factor reads as $$H=G'_{i_1}\ast\dots\ast G'_{i_r}\ast F',$$ where each of the subgroups $G'_{i_j}$ is conjugate in $G$ to one of the factors in $\mathcal{F}$ (with no repetition in the indices, i.e. the $G'_{i_j}$'s are pairwise nonconjugate in $G$), and $F'$ is a finitely generated free group. In particular, the Kurosh rank of $H$ is finite. The group $G$ then splits as $$G=H\ast G'_{i_{r+1}}\ast\dots\ast G'_{i_k}\ast F'',$$ where $F''$ is a finitely generated free subgroup of $G$, and the $G'_{i_j}$'s are conjugate to the factors in $\mathcal{F}$ that do not arise in the Kurosh decomposition of $H$. The finite collection $\mathcal{F}':=\{[H],[G'_{i_{r+1}}],\dots,[G'_{i_k}]\}$ (where we consider $G$-conjugacy classes) is a free factor system of $G$, and we have $$\label{J} |\mathcal{F}'|+|\mathcal{F}_H|=|\mathcal{F}|+1,$$ and $$\label{rkf} \text{rk}_f(G,\mathcal{F}')+\text{rk}_f(H)=\text{rk}_f(G,\mathcal{F}),$$ whence $$\label{rkK} \text{rk}_K(G,\mathcal{F}')+\text{rk}_K(H)=\text{rk}_K(G,\mathcal{F})+1.$$ Let $H$ and $H'$ be two $(G,\mathcal{F})$-free factors, and let $T$ be a $(G,\mathcal{F})$-free splitting, one of whose elliptic subgroups is equal to $H$.
By looking at the $H'$-minimal subtree of $T$, we see that $H\cap H'$ is an $(H',\mathcal{F}_{H'})$-free factor, so it is a $(G,\mathcal{F})$-free factor. This implies that the intersection of any family of $(G,\mathcal{F})$-free factors is again a free factor. In particular, any subgroup $A\subseteq G$ is contained in a smallest $(G,\mathcal{F})$-free factor, obtained as the intersection of all $(G,\mathcal{F})$-free factors that contain $A$. We denote it by $\text{Fill}(A)$.

#### Relative automorphisms. {#relative-automorphisms. .unnumbered}

Let $G$ be a countable group, and $\mathcal{F}$ be a free factor system of $G$. We denote by $\text{Out}(G,\mathcal{F})$ the subgroup of $\text{Out}(G)$ consisting of those automorphisms that preserve the conjugacy classes in $\mathcal{F}$. We denote by $\text{Out}(G,\mathcal{F}^{(t)})$ the subgroup of $\text{Out}(G)$ consisting of those automorphisms that act as a conjugation by an element of $G$ on each peripheral subgroup. For all $i\in\{1,\dots,k\}$, the group $G_i$ is equal to its normalizer in $G$. Therefore, any element of $\text{Out}(G)$ that preserves the conjugacy class of $G_i$ induces a well-defined outer automorphism of $G_i$. In other words, there is a morphism $$\text{Out}(G,\{[G_i]\})\to\text{Out}(G_i).$$ By taking the product over all groups $G_i$, we thus get a (surjective) morphism $$\text{Out}(G,\mathcal{F})\to\prod_{i=1}^k\text{Out}(G_i),$$ whose kernel is equal to $\text{Out}(G,\mathcal{F}^{(t)})$. More generally, suppose that we are given a collection of subgroups $A_i\subseteq\text{Out}(G_i)$ for all $i\in\{1,\dots,k\}$, and let $\mathcal{A}=\{A_1,\dots,A_k\}$. We can define the subgroup $\text{Out}(G,\mathcal{F}^{\mathcal{A}})$ of $\text{Out}(G)$ consisting of those automorphisms that preserve all conjugacy classes in $\mathcal{F}$, and which induce an element of $A_i$ in restriction to $G_i$ for all $i\in\{1,\dots,k\}$.
As above, there is a (surjective) morphism $$\text{Out}(G,\mathcal{F}^{\mathcal{A}})\to\prod_{i=1}^k A_i,$$ whose kernel is equal to $\text{Out}(G,\mathcal{F}^{(t)})$.

Relative outer spaces {#sec-o}
---------------------

An *$\mathbb{R}$-tree* is a metric space $(T,d_T)$ in which any two points $x,y\in T$ are joined by a unique embedded topological arc, which is isometric to a segment of length $d_T(x,y)$. A $(G,\mathcal{F})$-tree is an $\mathbb{R}$-tree equipped with a minimal, isometric action of $G$, in which all peripheral subgroups of $G$ are *elliptic*. We recall that an action on a tree is termed *minimal* if there is no proper and nontrivial invariant subtree. Whenever a group $G$ acts on an $\mathbb{R}$-tree $T$, and some element of $G$ does not fix any point in $T$, there is a unique subtree of $T$ on which the $G$-action is minimal. In particular, whenever $H$ is a subgroup of $G$ that contains a hyperbolic element, we can consider the minimal subtree for the induced action of $H$ on $T$, which we call the *$H$-minimal subtree* of $T$. The action of $H$ on $T$ is *simplicial* if the $H$-minimal subtree is homeomorphic (when equipped with the topology defined by the metric) to a simplicial tree. We say that the action of $H$ on $T$ is *relatively free* if all point stabilizers of the $H$-minimal subtree of $T$ are conjugate into $\mathcal{F}_H$. A *Grushko $(G,\mathcal{F})$-tree* is a simplicial $(G,\mathcal{F})$-tree with trivial edge stabilizers, all of whose elliptic subgroups are peripheral. Two $(G,\mathcal{F})$-trees are *equivalent* if there exists a $G$-equivariant isometry between them. The *unprojectivized outer space* $\mathcal{O}(G,\mathcal{F})$, introduced by Guirardel and Levitt in [@GL07], is defined to be the space of all equivalence classes of Grushko $(G,\mathcal{F})$-trees. *Outer space* $P\mathcal{O}(G,\mathcal{F})$ is defined as the space of homothety classes of trees in $\mathcal{O}(G,\mathcal{F})$.
Outer space, as well as its unprojectivized version, comes equipped with a right action of $\text{Out}(G,\mathcal{F})$, given by precomposing the actions (this can be turned into a left action by letting $\Phi.T:=T.\Phi^{-1}$ for all $T\in \mathcal{O}(G,\mathcal{F})$ and all $\Phi\in\text{Out}(G,\mathcal{F})$). For all $g\in G$ and all $T\in \mathcal{O}(G,\mathcal{F})$, the *translation length* of $g$ in $T$ is defined to be $$||g||_T:=\inf_{x\in T}d_T(x,gx).$$ Culler and Morgan have shown in [@CM87] that the map $$\begin{array}{cccc} i:&\mathcal{O}(G,\mathcal{F})&\to &\mathbb{R}^{G}\\ &T&\mapsto &(||g||_T)_{g\in G} \end{array}$$ is injective. We equip $\mathcal{O}(G,\mathcal{F})$ with the topology induced by this embedding, which is called the *axes topology*. Outer space is then embedded as a subspace of the projective space $\mathbb{PR}^{G}$, and is equipped with the quotient topology. Its closure $\overline{P\mathcal{O}(G,\mathcal{F})}$, whose lift to $\mathbb{R}^{G}$ we denote by $\overline{\mathcal{O}(G,\mathcal{F})}$, is compact (see [@CM87 Theorem 4.2] and [@Hor14-5 Proposition 1.2]). We let $\partial P\mathcal{O}(G,\mathcal{F}):=\overline{P\mathcal{O}(G,\mathcal{F})}\smallsetminus P\mathcal{O}(G,\mathcal{F})$, and similarly $\partial\mathcal{O}(G,\mathcal{F}):=\overline{\mathcal{O}(G,\mathcal{F})}\smallsetminus\mathcal{O}(G,\mathcal{F})$. A $(G,\mathcal{F})$-tree $T$ is *very small* if its arc stabilizers are either trivial, or maximally-cyclic and nonperipheral, and its tripod stabilizers are trivial. In [@Hor14-5 Theorem 0.1], we identified the space $\overline{P\mathcal{O}(G,\mathcal{F})}$ with the space of very small, minimal, projective $(G,\mathcal{F})$-trees. We also proved that it has finite topological dimension equal to $3\text{rk}_f(G,\mathcal{F})+2|\mathcal{F}|-4$.

The cyclic splitting graph {#sec-hyp-fz}
--------------------------

Let $G$ be a countable group, and let $\mathcal{F}$ be a free factor system of $G$.
A *$\mathcal{Z}$-splitting* of $(G,\mathcal{F})$ is a minimal, simplicial $(G,\mathcal{F})$-tree, all of whose edge stabilizers are either trivial, or cyclic and nonperipheral. It is a *one-edge* splitting if it has exactly one $G$-orbit of edges. Two $\mathcal{Z}$-splittings are *equivalent* if there exists a $G$-equivariant homeomorphism between them. Given two $(G,\mathcal{F})$-trees $T$ and $T'$, a map $f:T\to T'$ is *alignment-preserving* if the $f$-image of every segment in $T$ is a segment in $T'$. If there exists a $G$-equivariant alignment-preserving map from $T$ to $T'$, we say that $T$ is a *refinement* of $T'$. The *cyclic splitting graph* $FZ(G,\mathcal{F})$ is the graph whose vertices are the equivalence classes of one-edge $\mathcal{Z}$-splittings of $(G,\mathcal{F})$, two distinct vertices being joined by an edge if the corresponding splittings admit a common refinement. The graph $FZ(G,\mathcal{F})$ admits a natural right action of $\text{Out}(G,\mathcal{F})$, by precomposition of the actions. In [@Hor14-6], we proved hyperbolicity of the graph $FZ(G,\mathcal{F})$.

(Horbez [@Hor14-6 Theorem 3.1]) Let $G$ be a countable group, and let $\mathcal{F}$ be a free factor system of $G$. Then the graph $FZ(G,\mathcal{F})$ is Gromov hyperbolic.

We also described the Gromov boundary of $FZ(G,\mathcal{F})$. A tree $T\in\overline{\mathcal{O}(G,\mathcal{F})}$ is *$\mathcal{Z}$-compatible* if it is compatible with some $\mathcal{Z}$-splitting of $(G,\mathcal{F})$, and *$\mathcal{Z}$-incompatible* otherwise. It is *$\mathcal{Z}$-averse* if it is not compatible with any $\mathcal{Z}$-compatible tree $T'\in\overline{\mathcal{O}(G,\mathcal{F})}$ (see [@Hor14-6 Section 5.6.1] for examples of $\mathcal{Z}$-incompatible trees that are not $\mathcal{Z}$-averse). We denote by $\mathcal{X}(G,\mathcal{F})$ the subspace of $\overline{\mathcal{O}(G,\mathcal{F})}$ consisting of $\mathcal{Z}$-averse trees.
Two trees $T,T'\in\mathcal{X}(G,\mathcal{F})$ are *equivalent*, which we denote by $T\sim T'$, if they are both compatible with a common tree in $\overline{\mathcal{O}(G,\mathcal{F})}$. There is a natural, coarsely well-defined map $\psi:\mathcal{O}(G,\mathcal{F})\to FZ(G,\mathcal{F})$.

(Horbez [@Hor14-6 Theorem 0.2])\[boundary-fz\] Let $G$ be a countable group, and let $\mathcal{F}$ be a free factor system of $G$. Then there exists a unique $\text{Out}(G,\mathcal{F})$-equivariant homeomorphism $$\partial{\psi}:\mathcal{X}(G,\mathcal{F})/{\sim}\to\partial_{\infty} FZ(G,\mathcal{F}),$$ so that for all $T\in\mathcal{X}(G,\mathcal{F})$, and all sequences $(T_i)_{i\in\mathbb{N}}\in \mathcal{O}(G,\mathcal{F})^{\mathbb{N}}$ converging to $T$, the sequence $(\psi(T_i))_{i\in\mathbb{N}}$ converges to $\partial{\psi}(T)$.

We also proved that every $\sim$-class of $\mathcal{Z}$-averse trees contains a unique simplex of mixing representatives. A tree $T\in\overline{\mathcal{O}(G,\mathcal{F})}$ is *mixing* if for all finite subarcs $I,J\subseteq T$, there exist $g_1,\dots,g_k\in G$ such that $J\subseteq g_1I\cup\dots\cup g_kI$, and for all $i\in\{1,\dots,k-1\}$, we have $g_iI\cap g_{i+1}I\neq\emptyset$. Two $\mathbb{R}$-trees $T$ and $T'$ are *weakly homeomorphic* if there exist maps $f:T\to T'$ and $g:T'\to T$ that are continuous in restriction to segments, and inverses of each other.

(Horbez [@Hor14-6 Proposition 5.3])\[mixing-representative\] For all $T\in\mathcal{X}(G,\mathcal{F})$, there exists a mixing tree $\overline{T}\in\mathcal{X}(G,\mathcal{F})$ onto which all trees $T'\in\mathcal{X}(G,\mathcal{F})$ that are equivalent to $T$ collapse. In addition, any two such trees are $G$-equivariantly weakly homeomorphic. Any tree $T\in\overline{\mathcal{O}(G,\mathcal{F})}$ that is both $\mathcal{Z}$-incompatible and mixing, is $\mathcal{Z}$-averse.

We also mention the following fact about $\mathcal{Z}$-splittings of $(G,\mathcal{F})$.
(Horbez [@Hor14-5 Lemma 5.11])\[stab-cyclic\] Let $S$ be a $\mathcal{Z}$-splitting of $(G,\mathcal{F})$. Then every edge stabilizer in $S$ is trivial, or contained in a proper $(G,\mathcal{F})$-free factor.

Transverse families, transverse coverings, graphs of actions
------------------------------------------------------------

Let $T$ be a $(G,\mathcal{F})$-tree. A *transverse family* in $T$ is a $G$-invariant collection $\mathcal{Y}$ of nondegenerate (i.e. nonempty and not reduced to a point) subtrees of $T$, such that for all $Y\neq Y'\in\mathcal{Y}$, the intersection $Y\cap Y'$ contains at most one point. A *transverse covering* of $T$ is a transverse family $\mathcal{Y}$ in $T$, all of whose elements are closed subtrees of $T$, such that every finite arc in $T$ can be covered by finitely many elements of $\mathcal{Y}$. A transverse covering $\mathcal{Y}$ of $T$ is *trivial* if $\mathcal{Y}=\{T\}$. The *skeleton* of a transverse covering $\mathcal{Y}$ is the bipartite simplicial tree $S$, whose vertex set is $V(S)=V_0(S)\cup\mathcal{Y}$, where $V_0(S)$ is the set of points of $T$ which belong to at least two distinct trees in $\mathcal{Y}$, with an edge between $x\in V_0(S)$ and $Y\in\mathcal{Y}$ whenever $x\in Y$ [@Gui04 Definition 4.8].\ \ Let $G$ be a countable group, and $\mathcal{F}$ be a free factor system of $G$.
A *$(G,\mathcal{F})$-graph of actions* consists of

- a metric graph of groups $\mathcal{G}$ (in which we allow some edges to have length $0$), with an isomorphism from $G$ to the fundamental group of $\mathcal{G}$, such that all peripheral subgroups are conjugate into vertex groups of $\mathcal{G}$, and
- an isometric action of every vertex group $G_v$ on a $G_v$-tree $T_v$ (possibly reduced to a point), in which all intersections of $G_v$ with peripheral subgroups of $G$ are elliptic, and
- a point $p_e\in T_{t(e)}$ fixed by $i_e(G_e)\subseteq G_{t(e)}$ for every oriented edge $e$, where $i_e:G_e\to G_{t(e)}$ denotes the inclusion morphism from the edge group $G_e$ into the adjacent vertex group $G_{t(e)}$.

A $(G,\mathcal{F})$-graph of actions is *nontrivial* if $\mathcal{G}$ is not reduced to a point. Associated to any $(G,\mathcal{F})$-graph of actions $\mathcal{G}$ is a $(G,\mathcal{F})$-tree $T(\mathcal{G})$. Informally, the tree $T(\mathcal{G})$ is obtained from the Bass–Serre tree of the underlying graph of groups by equivariantly attaching each vertex tree $T_v$ at the corresponding vertex $v$, an incoming edge being attached to $T_v$ at the prescribed attaching point. The reader is referred to [@Gui98 Proposition 3.1] for a precise description of the tree $T(\mathcal{G})$. We say that a $(G,\mathcal{F})$-tree $T$ *splits as a $(G,\mathcal{F})$-graph of actions* if there exists a $(G,\mathcal{F})$-graph of actions $\mathcal{G}$ such that $T=T({\mathcal{G}})$.

(Guirardel [@Gui08 Lemma 1.5])\[skeleton\] A $(G,\mathcal{F})$-tree splits as a nontrivial $(G,\mathcal{F})$-graph of actions if and only if it admits a nontrivial transverse covering.

Knowing that a $(G,\mathcal{F})$-tree $T$ is compatible with a simplicial $(G,\mathcal{F})$-tree $S$ provides a nontrivial transverse covering of $T$, defined in the following way (see the discussion in [@Hor14-6 Section 4.7]).
Since $T$ and $S$ are compatible, their length functions sum up to the length function of a $(G,\mathcal{F})$-tree, denoted by $T+S$, which comes with $1$-Lipschitz alignment-preserving maps $\pi_T:T+S\to T$ and $\pi_S:T+S\to S$, see [@GL10-2 Section 3.2]. Then the family $\mathcal{Y}$ made of all nondegenerate $\pi_S$-preimages of vertices of $S$, and of the closures of $\pi_S$-preimages of open edges of $S$, is a transverse covering of $T+S$. Its image $\pi_T(\mathcal{Y})$ is a nontrivial transverse covering of $T$. We now mention a result, due to Levitt [@Lev94], which gives a canonical way of splitting any very small $(G,\mathcal{F})$-tree as a $(G,\mathcal{F})$-graph of actions, whose vertex actions have dense orbits.

(Levitt [@Lev94])\[Levitt\] Every $(G,\mathcal{F})$-tree $T\in\overline{\mathcal{O}(G,\mathcal{F})}$ splits uniquely as a $(G,\mathcal{F})$-graph of actions, all of whose vertex trees have dense orbits for the action of their stabilizer (they might be reduced to points), and all of whose edges have positive length, and have either trivial, or maximally-cyclic and nonperipheral stabilizer.

We call this splitting the *Levitt decomposition* of $T$ as a graph of actions. We note in particular that if $T\in\overline{\mathcal{O}(G,\mathcal{F})}$ is a very small $(G,\mathcal{F})$-tree, and $H\subseteq G$ is a subgroup of $G$ of finite Kurosh rank, then the $H$-minimal subtree of $T$ admits a Levitt decomposition.

\[nonsimplicial\] Let $T\in\overline{\mathcal{O}(G,\mathcal{F})}$ be a tree with dense orbits. Let $\mathcal{Y}$ be a transverse family in $T$, and let $Y\in\mathcal{Y}$. If $\text{rk}_K(\text{Stab}(Y))<+\infty$, then the action of $\text{Stab}(Y)$ on $Y$ has dense orbits. If $\text{Stab}(Y)$ is contained in a proper $(G,\mathcal{F})$-free factor $H$, then the $H$-minimal subtree of $T$ is not a Grushko $(H,\mathcal{F}_H)$-tree.

Assume that one of the conclusions of the lemma fails.
Then $Y$ has a nontrivial simplicial part, which contains a simplicial edge $e$. There is a finite number of $G$-orbits of directions at branch points in $T$ [@Hor14-5 Corollary 4.8]. As $T$ has dense orbits, the arc $e$ contains two distinct branch points $x$ and $x'$ of $T$, and two directions $d$ (resp. $d'$) at $x$ (resp. $x'$), such that there exists $g\in G\smallsetminus\{1\}$ with $gd=d'$. In particular, the intersection $gY\cap Y$ is nondegenerate (i.e. nonempty and not reduced to a point). As $\mathcal{Y}$ is a transverse family, this implies that $g\in\text{Stab}(Y)$. So $ge$ is a simplicial edge of $Y$ that meets $e$, and therefore $ge=e$. This implies that $T$ contains an arc with nontrivial stabilizer, which is impossible because $T$ has dense orbits [@Hor14-5 Proposition 4.17].

Trees of surface type
---------------------

A tree $T\in\overline{\mathcal{O}(G,\mathcal{F})}$ is *of surface type* if it admits a transverse covering by trees that are either simplicial arcs, or are dual to arational measured foliations on compact $2$-orbifolds.

(Horbez [@Hor14-5 Proposition 5.10])\[outer-limits\] Let $T$ be a minimal, very small $(G,\mathcal{F})$-tree of surface type, and let $\mathcal{Y}$ be the associated transverse covering of $T$. Then either

- there exists an element of $G$, represented by a boundary curve of one of the orbifolds dual to a tree in $\mathcal{Y}$, that is nonperipheral, and not conjugate into any edge group of the skeleton of $\mathcal{Y}$, or
- the tree $T$ splits as a $(G,\mathcal{F})$-graph of actions over a one-edge $(G,\mathcal{F})$-free splitting $S$, such that all stabilizers of subtrees in $\mathcal{Y}$ dual to arational foliations on compact $2$-orbifolds are elliptic in $S$.

(Horbez [@Hor14-5 Lemma 5.8])\[surface-type\] Let $T\in\overline{\mathcal{O}(G,\mathcal{F})}$. If there exists a subgroup $H\subseteq G$ that is elliptic in $T$, and not contained in any proper $(G,\mathcal{F})$-free factor, then $T$ is of surface type.
Sporadic cases {#sec-2}
==============

Let $G$ be a countable group, and let $\mathcal{F}$ be a free factor system of $G$. We say that $(G,\mathcal{F})$ is *sporadic* if either $G=G_1\ast G_2$ and $\mathcal{F}=\{[G_1],[G_2]\}$, or $G=G_1\ast\mathbb{Z}$ and $\mathcal{F}=\{[G_1]\}$. Otherwise $(G,\mathcal{F})$ is *nonsporadic*. We noticed in [@Hor14-6 Corollary 5.8] that the graph $FZ(G,\mathcal{F})$ is unbounded if and only if $(G,\mathcal{F})$ is nonsporadic. Given a group $A$, we denote by $Z(A)$ its center. The following propositions, which describe $\text{Out}(G,\mathcal{F}^{(t)})$ when $(G,\mathcal{F})$ is sporadic, are particular cases of Levitt's work about automorphisms of graphs of groups [@Lev04].

\[sporadic-1\] Let $G_1$ and $G_2$ be nontrivial countable groups. Then $\text{Out}(G_1\ast G_2,\{[G_1],[G_2]\}^{(t)})$ is isomorphic to $G_1/Z(G_1)\times G_2/Z(G_2)$.

\[sporadic-2\] Let $G_1$ be a countable group. Then $\text{Out}(G_1\ast\mathbb{Z},\{[G_1]\}^{(t)})$ has a subgroup of index $2$ that is isomorphic to $(G_1\times G_1)/Z(G_1)$, where $Z(G_1)$ sits as a subgroup of $G_1\times G_1$ via the diagonal inclusion map.

Stabilizers of trees in $\overline{\mathcal{O}(G,\mathcal{F})}$ {#sec-3}
===============================================================

Let $G$ be a countable group, and let $\mathcal{F}$ be a free factor system of $G$. Given $T\in\overline{\mathcal{O}(G,\mathcal{F})}$ (resp. $[T]\in\overline{P\mathcal{O}(G,\mathcal{F})}$), we denote by $\text{Out}(T)$ (resp. $\text{Out}([T])$) the subgroup of $\text{Out}(G,\mathcal{F}^{(t)})$ consisting of those automorphisms that fix $T$ (resp. $[T]$). Notice that $\text{Out}(T)$ sits inside $\text{Out}([T])$ as a normal subgroup. There is a natural morphism $$\lambda:\text{Out}([T])\to\mathbb{R}_+^{\ast},$$ where $\lambda(\Phi)$ is defined as the unique real number such that $T.\Phi=\lambda(\Phi)T$. The kernel of $\lambda$ is equal to $\text{Out}(T)$, so $\text{Out}([T])$ is an abelian extension of $\text{Out}(T)$.
One can actually show that the image of $\lambda$ is a cyclic subgroup of $\mathbb{R}_+^{\ast}$ [@GL14-2].\ \ In [@Hor14-5 Corollary 3.5], we proved the following about point stabilizers of trees in $\overline{\mathcal{O}(G,\mathcal{F})}$.

(Horbez [@Hor14-5 Corollary 3.5])\[per-finite\] Let $T\in\overline{\mathcal{O}(G,\mathcal{F})}$ be a tree with trivial arc stabilizers. Then there are finitely many orbits of points in $T$ with nontrivial stabilizer. For all $v\in T$, we have $\text{rk}_K(\text{Stab}(v))<\text{rk}_K(G,\mathcal{F})$.

Let $T\in\overline{\mathcal{O}(G,\mathcal{F})}$ be a tree with trivial arc stabilizers. Let $V$ be the collection of $G$-orbits of points with nontrivial stabilizer in $T$. Let $\{G_v\}_{v\in V}$ be a set of representatives of the $G$-conjugacy classes of point stabilizers in $T$. We define $\text{Out}(T,\{[G_v]\}_{v\in V}^{(t)})$ to be the subgroup of $\text{Out}(T)$ consisting of those automorphisms that are a conjugation by an element of $G$ in restriction to every point stabilizer of $T$.

(Guirardel–Levitt [@GL14-2])\[Guirardel-Levitt\] Let $T\in\overline{\mathcal{O}(G,\mathcal{F})}$ be a tree with trivial arc stabilizers. Let $V$ be the collection of orbits of points in $T$ with nontrivial stabilizer, and let $\{G_v\}_{v\in V}$ be the collection of point stabilizers in $T$. Then $\text{Out}(T,\{[G_v]\}^{(t)})$ has a finite index subgroup $\text{Out}^0(T,\{[G_v]\}^{(t)})$ which admits an injective morphism $$\text{Out}^0(T,\{[G_v]\}^{(t)})\hookrightarrow\prod_{v\in V}G_v^{d_v}/Z(G_v),$$ where $d_v$ denotes the degree of $v$ in $T$, and $Z(G_v)$ denotes the center of $G_v$, which sits as a subgroup of $G_v^{d_v}$ via the diagonal inclusion map.

A consequence of Guirardel and Levitt's theorem is the following fact.

\[Tits-stabilizer\] Let $T\in\overline{\mathcal{O}(G,\mathcal{F})}$ be a tree with trivial arc stabilizers.
Let $V$ be the collection of orbits of points in $T$ with nontrivial stabilizer, and let $\{G_v\}_{v\in V}$ be the collection of point stabilizers in $T$. If $G$ satisfies the Tits alternative, then $\text{Out}(T,\{[G_v]\}^{(t)})$ satisfies the Tits alternative. Arational $(G,\mathcal{F})$-trees {#sec-4} ================================= Let $G$ be a countable group, and let $\mathcal{F}:=\{[G_1],\dots,[G_k]\}$ be a free factor system of $G$. We recall that a $(G,\mathcal{F})$-free factor is *proper* if it is nonperipheral (in particular nontrivial), and not equal to $G$. A $(G,\mathcal{F})$-tree $T\in\overline{\mathcal{O}(G,\mathcal{F})}$ is *arational* if $T\in\partial \mathcal{O}(G,\mathcal{F})$ and for every proper $(G,\mathcal{F})$-free factor $H\subset G$, the factor $H$ is not elliptic in $T$, and the $H$-minimal subtree $T_H$ of $T$ is a Grushko $(H,\mathcal{F}_{H})$-tree, i.e. the action of $H$ on $T_H$ is simplicial and relatively free. We denote by $\mathcal{AT}(G,\mathcal{F})$ the subspace of $\overline{\mathcal{O}(G,\mathcal{F})}$ consisting of arational $(G,\mathcal{F})$-trees. Arational surface $(G,\mathcal{F})$-trees ----------------------------------------- We describe a way of constructing arational $(G,\mathcal{F})$-trees, illustrated in Figure \[fig-arational\]. We first need the following fact. \[finite-index\] Let $T$ be a tree dual to an arational measured foliation on a compact $2$-orbifold $\mathcal{O}$ with conical singularities, and let $H\subseteq\pi_1(\mathcal{O})$ be a finitely generated subgroup of $\pi_1(\mathcal{O})$ of infinite index. Then the $H$-minimal subtree of $T$ is simplicial. A proof of Proposition \[finite-index\] appears in [@Rey11-2] in the case where $\mathcal{O}$ is a compact surface, and it adapts to the case where $\mathcal{O}$ is a $2$-orbifold. 
Proposition \[finite-index\] can also be deduced from the surface case by using Selberg’s Lemma, which states that $\pi_1(\mathcal{O})$ has a finite-index subgroup which is the fundamental group of a compact surface.\ \ Let $\mathcal{O}$ be a compact $2$-orbifold of genus $g$ with conical singularities, having $s+1$ boundary curves $b_0,b_1,\dots,b_s$, and $q$ conical points $b_{s+1},\dots,b_{s+q}$, equipped with an arational measured foliation. We build a graph of groups $\mathcal{G}'$ in the following way. One of the vertex groups of $\mathcal{G}'$ is the fundamental group of the orbifold $\mathcal{O}$, and the others are the peripheral subgroups $G_i$. For all $i\in\{1,\dots,s+q\}$, we choose $j_i\in\{1,\dots,k\}$, and an element $g_i\in G_{j_i}$, of the same order as $b_i$. We put an edge between the vertex of $\mathcal{G}'$ associated to $\mathcal{O}$ and the vertex associated to $G_{j_i}$, and we amalgamate $b_i$ with $g_i$. Choices are made in such a way that the graph $\mathcal{G}'$ we get is connected. We then define a graph of groups $\mathcal{G}$ as the minimal subgraph of groups of $\mathcal{G}'$, i.e. $\mathcal{G}$ is obtained from $\mathcal{G}'$ by removing those vertices which have exactly one incident edge and whose vertex group $G_{j_i}$ is cyclic, generated by $g_i$. Notice that the element of $\pi_1(\mathcal{O})$ corresponding to the boundary curve $b_0$ does not fix any edge in $\mathcal{G}$. The fundamental group of $\mathcal{G}$ is isomorphic to $G:=G_1\ast\dots\ast G_k\ast F_N$, where $N=2g+b_1(\mathcal{G})$ if $\mathcal{O}$ is orientable, and $N=g+b_1(\mathcal{G})$ if $\mathcal{O}$ is nonorientable. Dual to the foliation on $\mathcal{O}$ is a $\pi_1(\mathcal{O})$-tree $Y$. We form a graph of actions over $\mathcal{G}$: vertex trees are the $\pi_1(\mathcal{O})$-tree $Y$, and a trivial $G_i$-tree for all $i\in\{1,\dots,k\}$, attaching points in $Y$ are the points fixed by the $b_i$’s, and edges have length $0$.
We denote by $T$ the $(G,\mathcal{F})$-tree defined in this way. A $(G,\mathcal{F})$-tree obtained by the above construction is called an *arational surface* $(G,\mathcal{F})$-tree. We claim that the $(G,\mathcal{F})$-tree $T$ we have built is an arational $(G,\mathcal{F})$-tree, which justifies our terminology. We start by making the following remarks: all point stabilizers in $Y$ are peripheral, except the one generated by $b_0$. The element $b_0$ is not contained in any proper $(G,\mathcal{F})$-free factor. Indeed, otherwise, there would exist a $(G,\mathcal{F})$-free splitting $S$ in which $b_0$ is elliptic, and all other boundary components of $\mathcal{O}$ would also be elliptic in $S$ because they are peripheral. The splitting $S$ would then restrict to a free splitting of $\pi_1(\mathcal{O})$ in which all boundary components are elliptic. Such a splitting does not exist, so we have reached a contradiction. Let now $H$ be a proper $(G,\mathcal{F})$-free factor. Assume towards a contradiction that the $H$-minimal subtree of $T$ is not a Grushko $(H,\mathcal{F}_H)$-tree. The action of $H$ on $T$ is relatively free because $b_0$ is not contained in any proper $(G,\mathcal{F})$-free factor; since the $H$-minimal subtree is not a Grushko $(H,\mathcal{F}_H)$-tree, the action of $H$ is therefore not discrete. The transverse covering of $T$ consisting of the translates of the $\pi_1(\mathcal{O})$-minimal subtree of $T$ induces a transverse covering of the $H$-minimal subtree of $T$, whose nontrivial elements are $H\cap\pi_1(\mathcal{O})^g$-trees, for some $g\in G$. Therefore, there exists a conjugate $H^g$ of $H$ so that $H^g\cap\pi_1(\mathcal{O})\neq\{e\}$, and the action of $H^g\cap\pi_1(\mathcal{O})$ on its minimal subtree is non-simplicial. By Proposition \[finite-index\], this implies that $H^g\cap\pi_1(\mathcal{O})$ has finite index in $\pi_1(\mathcal{O})$. As $H$ is elliptic in a $(G,\mathcal{F})$-free splitting $S$, so is $\pi_1(\mathcal{O})$: the group $\pi_1(\mathcal{O})$ fixes a unique point in $S$.
All other vertex stabilizers of the Bass–Serre tree $S_0$ of $\mathcal{G}$ are peripheral, so each of them fixes a unique point in $S$. Since edge stabilizers of $S_0$ are peripheral, the stabilizers of any two adjacent vertices in $S_0$ contain a common peripheral element. This implies that they have the same fixed point in $S$, because no peripheral element fixes an arc in $S$. Therefore, all vertex groups of $S_0$ fix the same point in $S$. Hence $G$ is elliptic in $S$, a contradiction. A classification result ----------------------- The goal of this section is to provide a classification result for trees in $\overline{\mathcal{O}(G,\mathcal{F})}$. When $T\in\overline{\mathcal{O}(G,\mathcal{F})}$ is not arational, a proper $(G,\mathcal{F})$-free factor is a *dynamical free factor* for $T$ if it acts with dense orbits on its minimal subtree but does not fix any point in $T$. The following proposition is an extension of [@Hor14-4 Proposition 2.1] to the context of $(G,\mathcal{F})$-trees. \[classification\] Let $G$ be a countable group, and let $\mathcal{F}$ be a free factor system of $G$. Then for all $(G,\mathcal{F})$-trees $T\in\overline{\mathcal{O}(G,\mathcal{F})}$, either - we have $T\in\mathcal{O}(G,\mathcal{F})$, or - the tree $T$ is arational, or - the tree $T$ has a dynamical free factor, or - the tree $T$ has no dynamical free factor, and there exists $x\in T$ whose stabilizer is nonperipheral, and is contained in a proper $(G,\mathcal{F})$-free factor. \[dense\] Let $T$ be a $(G,\mathcal{F})$-tree with trivial arc stabilizers. Let $H\subseteq G$ be a nonperipheral subgroup of $G$ that is contained in a proper $(G,\mathcal{F})$-free factor. If $H$ fixes a point in $T$, then $T$ is not arational. If the $H$-minimal subtree of $T$ is not simplicial, then $T$ has a dynamical proper free factor. Let $F$ be a proper $(G,\mathcal{F})$-free factor that contains $H$. 
If $H$ fixes a point in $T$, then the action of $F$ is not relatively free, which implies that $T$ is not arational. By Proposition \[Levitt\], the $F$-minimal subtree $T_F$ of $T$ splits as a graph of actions $\mathcal{G}$ with trivial edge stabilizers, in which all vertex actions have dense orbits (they may be trivial). Vertex groups of $\mathcal{G}$ are $(G,\mathcal{F})$-free factors. If the $H$-minimal subtree of $T$ is non-simplicial, then $T_F$ is non-simplicial, so one of the vertex groups of $\mathcal{G}$ is a dynamical proper $(G,\mathcal{F})$-free factor of $T$. \[classif\] Let $T$ be a $(G,\mathcal{F})$-tree with trivial arc stabilizers. Assume that $T$ is not relatively free. Then either - the tree $T$ is an arational surface tree (in particular, all elliptic subgroups in $T$ are either cyclic or peripheral), or - the tree $T$ has a dynamical proper free factor, or - there exists a nonperipheral point stabilizer in $T$ that is contained in a proper $(G,\mathcal{F})$-free factor, and all noncyclic, nonperipheral point stabilizers in $T$ are contained in proper $(G,\mathcal{F})$-free factors. If all elliptic subgroups of $T$ are contained in proper $(G,\mathcal{F})$-free factors, then the last assertion holds. Otherwise, Lemma \[surface-type\] implies that $T$ is a tree of surface type. Let $\mathcal{Y}$ be the transverse covering of $T$ provided by the definition of trees of surface type. If the stabilizer of a tree in $\mathcal{Y}$ dual to an arational measured foliation on a compact $2$-orbifold is contained in a proper $(G,\mathcal{F})$-free factor, then the second assertion holds by Lemma \[dense\]. This occurs in particular if the skeleton of $\mathcal{Y}$ contains an edge with trivial stabilizer, so we can assume that this is not the case.
Otherwise, Proposition \[outer-limits\] implies that there exists an element of $G$, represented by a boundary curve $c$ of an orbifold $\Sigma$ dual to a tree in $\mathcal{Y}$, that is nonperipheral, and not conjugate into any edge group of the skeleton of $\mathcal{Y}$. If the transverse covering $\mathcal{Y}$ contains at least two orbits of nondegenerate trees, then an arc on $\Sigma$ whose endpoints lie on $c$ determines a $(G,\mathcal{F})$-free splitting, in which the other orbifold groups are elliptic, and hence contained in a proper $(G,\mathcal{F})$-free factor. Again, the second assertion of the lemma holds. Similarly, if there exists a point in $T$ whose stabilizer is nonperipheral and not conjugate to $\langle c\rangle$, then the third conclusion of the lemma holds. In the remaining case, the skeleton of $\mathcal{Y}$ contains a single orbit of vertices $v$ associated to a tree $T_0$ dual to an arational lamination on a $2$-orbifold $\mathcal{O}$. All vertices $v'$ adjacent to $v$ have stabilizer isomorphic to some $G_i$. The edge joining $v'$ to $v$ has nontrivial stabilizer, so it is attached in $T_0$ to a point corresponding to a boundary curve or a conical point of $\mathcal{O}$. In addition, all boundary curves (and conical points) of $\Sigma$ distinct from $c$ are peripheral. This implies that $T$ is an arational surface $(G,\mathcal{F})$-tree. Let $T\in\partial{\mathcal{O}(G,\mathcal{F})}$ be a tree which is not arational, and has no dynamical proper $(G,\mathcal{F})$-free factor. Then the $G$-action on $T$ is not relatively free. If $T$ has trivial arc stabilizers, then the conclusion follows from Lemma \[classif\]. We now assume that $T$ contains an arc $e$ with nontrivial stabilizer, and let $S$ be the very small simplicial $(G,\mathcal{F})$-tree obtained by collapsing to points all vertex trees in the Levitt decomposition of $T$ as a graph of actions (Proposition \[Levitt\]). The stabilizer $G_e$ of $e$ in $T$ also stabilizes an edge in $S$.
By Lemma \[stab-cyclic\], the group $G_e$ is contained in a proper $(G,\mathcal{F})$-free factor, and in addition $G_e$ is nonperipheral because $T$ is very small. We can thus choose for $x$ some interior point of $e$. Arational $(G,\mathcal{F})$-trees are $\mathcal{Z}$-averse. ----------------------------------------------------------- \[ATX\] Let $G$ be a countable group, and let $\mathcal{F}$ be a free factor system of $G$. Then $\mathcal{AT}(G,\mathcal{F})\subseteq\mathcal{X}(G,\mathcal{F})$. In view of Proposition \[mixing-representative\], it is enough to show that any tree $T\in\mathcal{AT}(G,\mathcal{F})$ is both $\mathcal{Z}$-incompatible and mixing. This will be done in Lemmas \[at-incompatible\] and \[at-mixing\]. \[at-incompatible\] Every arational $(G,\mathcal{F})$-tree is $\mathcal{Z}$-incompatible. Let $T\in\overline{\mathcal{O}(G,\mathcal{F})}$ be a $\mathcal{Z}$-compatible tree. It follows from the discussion below Proposition \[skeleton\] that $T$ splits as a $(G,\mathcal{F})$-graph of actions $\mathcal{G}$, whose edge groups are either trivial, or cyclic and nonperipheral. If $\mathcal{G}$ contains a nontrivial edge group $G_e$, then $G_e$ must be elliptic in $T$. The group $G_e$ is contained in a proper $(G,\mathcal{F})$-free factor $F$ (Lemma \[stab-cyclic\]), and it is nonperipheral because $T$ is very small. By Lemma \[dense\], the tree $T$ is not arational. If all edge groups of $\mathcal{G}$ are trivial, then all vertex groups of $\mathcal{G}$ are proper $(G,\mathcal{F})$-free factors. If all vertex actions of $\mathcal{G}$ are Grushko $(G_v,\mathcal{F}_{G_v})$-trees, then $T$ is simplicial, with trivial edge stabilizers. So either $T$ is a Grushko $(G,\mathcal{F})$-tree, or some vertex stabilizer of $T$ is a proper free factor that acts elliptically on $T$. In both cases, the tree $T$ is not arational. 
The following lemma was proved by Reynolds in [@Rey12 Proposition 8.3] in the case of $F_N$-trees in the closure of Culler and Vogtmann’s outer space. \[at-mixing\] Every arational $(G,\mathcal{F})$-tree is mixing. Let $T,\overline{T}\in\overline{\mathcal{O}(G,\mathcal{F})}$. We say that $T$ *collapses* onto $\overline{T}$ if there exists a $G$-equivariant map $p:T\to \overline{T}$ that sends segments of $T$ onto segments of $\overline{T}$. The following lemma follows from work by Guirardel and Levitt [@GL14-2], together with [@Hor14-6 Proposition 5.17]. \[mixing-collapse\] Let $T\in\overline{\mathcal{O}(G,\mathcal{F})}$ be a tree with dense $G$-orbits, and let $Y\varsubsetneq T$ be a proper subtree, such that for all $g\in G$, either $gY=Y$, or $gY\cap Y=\emptyset$. Then either $T$ is compatible with a $(G,\mathcal{F})$-free splitting, or else $T$ collapses onto a mixing tree $\overline{T}\in\overline{\mathcal{O}(G,\mathcal{F})}$ in which $\text{Stab}(Y)$ is elliptic. Let $T\in\mathcal{AT}(G,\mathcal{F})$. Then $T$ has dense orbits, otherwise any simplicial edge in $T$ would be dual to a $\mathcal{Z}$-splitting that is compatible with $T$, contradicting Lemma \[at-incompatible\]. Assume towards a contradiction that $T$ is not mixing, and let $I\subset T$ be a segment. Define $Y_I$ to be the subtree of $T$ consisting of all points $x\in T$ such that there exists a finite set of elements $\{g_0=e,g_1,\dots,g_r\}\subset G$, with $x\in g_rI$, and $g_iI\cap g_{i+1}I\neq\emptyset$ for all $i\in\{0,\dots,r-1\}$. Then for all $g\in G$, we either have $gY_I=Y_I$, or $gY_I\cap Y_I=\emptyset$. As $T$ is not mixing, there exists a nondegenerate arc $I\subset T$ such that $Y_I$ is a proper subtree of $T$. By Lemma \[mixing-collapse\], either $T$ is compatible with a $(G,\mathcal{F})$-free splitting, or else $T$ collapses onto a mixing tree $\overline{T}\in\overline{\mathcal{O}(G,\mathcal{F})}$, in which $\text{Stab}(Y_I)$ is elliptic. 
The first case is excluded by Lemma \[at-incompatible\], so we assume that we are in the second case. As $T$ has dense orbits, the stabilizer $\text{Stab}(Y_I)$ is not cyclic by Lemma \[nonsimplicial\]. It thus follows from Lemma \[classif\] that either $\overline{T}$ has a dynamical proper $(G,\mathcal{F})$-free factor $F$ (if the second situation of Lemma \[classif\] occurs), or else $\text{Stab}(Y_I)$ is contained in a proper $(G,\mathcal{F})$-free factor (if the third situation of this lemma occurs). In the first case, the $F$-minimal subtree $T_F$ of $T$ cannot be a Grushko $(F,\mathcal{F}_F)$-tree, because $T_F$ collapses to a nontrivial tree with dense orbits in $\overline{T}$. This contradicts arationality of $T$. Hence the second case occurs, i.e. $\text{Stab}(Y_I)$ is contained in a proper $(G,\mathcal{F})$-free factor $F$. By Lemma \[nonsimplicial\], the $F$-minimal subtree of $T$ is not a Grushko $(F,\mathcal{F}_{F})$-tree, again contradicting arationality of $T$. Finite sets of reducing factors associated to non-arational $(G,\mathcal{F})$-trees ----------------------------------------------------------------------------------- Given a $(G,\mathcal{F})$-tree $T\in\overline{P\mathcal{O}(G,\mathcal{F})}$, we denote by $\text{Dyn}(T)$ the set of minimal (with respect to inclusion) conjugacy classes of dynamical proper $(G,\mathcal{F})$-free factors for $T$. We denote by $\text{Ell}(T)$ the set of nonperipheral conjugacy classes of point stabilizers in $T$. Recall that given a subgroup $H\subseteq G$, we denote by $\text{Fill}(H)$ the smallest $(G,\mathcal{F})$-free factor that contains $H$. For all $\Phi\in\text{Out}(G,\mathcal{F}^{(t)})$, we have $\Phi \text{Dyn}(T)=\text{Dyn}(\Phi T)$, and $\Phi \text{Fill}({\text{Ell}}(T))=\text{Fill}(\text{Ell}(\Phi T))$. It follows from Proposition \[per-finite\] that $\text{Ell}(T)$ is finite; we will now show that $\text{Dyn}(T)$ is also finite.
\[dyn-finite\] For all $T\in\overline{P\mathcal{O}(G,\mathcal{F})}$, the set $\text{Dyn}(T)$ is finite. Let $T\in\overline{\mathcal{O}(G,\mathcal{F})}$. A finite subtree $K\subseteq T$ (i.e. the convex hull of a finite set of points) is a *supporting subtree* of $T$ if for all segments $J\subseteq T$, there exists a finite subset $\{g_1,\dots,g_r\}\subseteq G$ such that $J\subseteq g_1K\cup\dots\cup g_rK$. \[cost-1\] Let $T\in\overline{\mathcal{O}(G,\mathcal{F})}$ be a tree with dense orbits. For all $\epsilon>0$, there exists a finite supporting subtree $K\subseteq T$ whose volume is at most $\epsilon$. As $T$ has dense orbits, it follows from [@Hor14-5 Theorem 5.3] that there exists a sequence $(T_n)_{n\in\mathbb{N}}\in\mathcal{O}(G,\mathcal{F})^{\mathbb{N}}$, such that the volume of the quotient graph $T_n/G$ converges to $0$, and for all $n\in\mathbb{N}$, there exists a $1$-Lipschitz $G$-equivariant map $f_n:T_n\to T$. Letting $K_n$ be a finite supporting subtree of $T_n$, with volume converging to $0$ as $n$ goes to $+\infty$, the images $f_n(K_n)$ are finite supporting subtrees of $T$ whose volumes converge to $0$. Given a finite system $S=(F,A)$ of partial isometries of a finite forest $F$, we define $m(S)$ as the volume of $F$, and $d(S)$ as the sum of the volumes of the domains of the partial isometries in $A$. We say that $S$ has *independent generators* if no reduced word in the partial isometries in $A$ and their inverses defines a partial isometry of $F$ that fixes a nondegenerate arc. Gaboriau, Levitt and Paulin have shown in [@GLP94 Proposition 6.1] that if $S$ has independent generators, then $m(S)-d(S)\ge 0$. The following proposition is a generalization of [@Rey11-2 Lemma 3.10] to the context of $(G,\mathcal{F})$-trees. \[stab\] Let $T\in\overline{\mathcal{O}(G,\mathcal{F})}$ be a tree with dense orbits, and let $H$ be a $(G,\mathcal{F})$-free factor. Assume that $H$ acts with dense orbits on its minimal subtree $T_H$ in $T$.
Then $\text{Stab}(T_H)=H$, and $\{gT_H|g\in G\}$ is a transverse family in $T$. Let $g\in G\smallsetminus H$. Assume towards a contradiction that $gT_H\cap T_H$ contains a nondegenerate arc $I$ of length $L>0$. Let $\epsilon>0$, with $\epsilon<\frac{L}{2}$. Lemma \[cost-1\] applied to the $(H,\mathcal{F}_H)$-tree $T_H$ ensures the existence of a finite tree $F_{\epsilon}\subseteq T_H$ of volume smaller than $\epsilon$, such that $I$ is covered by finitely many translates of $F_{\epsilon}$, and we can choose $F_{\epsilon}$ to be disjoint from $I$. We can therefore subdivide $I$ into finitely many subsegments $I_1,\dots,I_k$ such that for all $i\in\{1,\dots,k\}$, there exists $g_i\in H$ with $g_iI_i\subseteq F_{\epsilon}$. Similarly, there exists a finite forest $F'_{\epsilon}\subseteq gT_H$ of volume smaller than $\epsilon$, such that $I$ is covered by finitely many translates of $F'_{\epsilon}$, and again we can choose $F'_{\epsilon}$ to be disjoint from both $I$ and $F_{\epsilon}$ in $T$. We similarly have a subdivision $I'_1,\dots, I'_l$ of $I$, and an element $g'_j\in H^g$ for each $j\in\{1,\dots,l\}$, so that $g'_jI'_j\subseteq F'_{\epsilon}$. We build a system of partial isometries $S$ on the forest $I\cup F_{\epsilon}\cup F'_{\epsilon}$, with an isometry $\phi_i$ from $I_i$ to $F_{\epsilon}$ corresponding to the action of $g_i$ for all $i\in\{1,\dots,k\}$, and an isometry $\phi'_j$ from $I'_j$ to $F'_{\epsilon}$ corresponding to the action of $g'_j$ for all $j\in\{1,\dots,l\}$. Then $m(S)\le L+2\epsilon$, while $d(S)=2L$. Therefore $m(S)-d(S)<0$, and hence the system of isometries $S$ does not have independent generators [@GLP94 Proposition 6.1]. This means that there exists a reduced word $w$ in the partial isometries $\phi_i$, $\phi'_j$ and their inverses, associated to an element of $G$ which fixes an arc in $T$.
It follows from the construction of the system of isometries that up to cyclic conjugation, the word $w$ is a concatenation of $2$-letter words of the form $\phi_{i_1}\circ\phi_{i_2}^{-1}$ and $\phi'_{j_1}\circ{\phi'_{j_2}}^{-1}$, with $i_1\neq i_2$ and $j_1\neq j_2$, and these two types of subwords alternate in $w$. So the element of $G$ defined by $w$ is of the form $h_1h_2^g\dots h_{s-1}h_{s}^g$, where $h_i\in H$ is a nontrivial element for all $i\in\{1,\dots,s\}$. Since $H$ is a proper $(G,\mathcal{F})$-free factor, and $g\in G\smallsetminus H$, we have $\langle H,H^g\rangle = H\ast H^g$, so this element is nontrivial. This contradicts the fact that $T$ has dense orbits, and hence trivial arc stabilizers [@Hor14-5 Proposition 4.17]. Therefore, for all $g\in G\smallsetminus H$, the intersection $gT_H\cap T_H$ consists of at most one point. This implies that $\text{Stab}(T_H)=H$, and that $\{gT_H\}_{g\in G}$ is a transverse family in $T$. \[minimal-transverse\] Let $T\in\overline{\mathcal{O}(G,\mathcal{F})}$ be a tree with dense orbits. Then the collection $\{gT_H|H\in\text{Dyn}(T),g\in G\}$ is a transverse family in $T$. Let $H,H'\in\text{Dyn}(T)$, and assume that $T_H\cap T_{H'}$ contains a nondegenerate arc. By Proposition \[stab\], since $H$ and $H'$ are proper $(G,\mathcal{F})$-free factors, we have $\text{Stab}(T_H)=H$ and $\text{Stab}(T_{H'})=H'$. The collections $\{gT_H\}_{g\in G}$ and $\{gT_{H'}\}_{g\in G}$ are transverse families in $T$ (Proposition \[stab\]), hence so is the collection of nondegenerate intersections of the form $gT_H\cap g'T_{H'}$ for $g,g'\in G$. If $g\in G$ stabilizes $T_H\cap T_{H'}$, then $gT_H\cap T_H$ and $gT_{H'}\cap T_{H'}$ both contain a nondegenerate arc, and hence $gT_H=T_H$ and $gT_{H'}=T_{H'}$. So we have $\text{Stab}(T_H\cap T_{H'})=\text{Stab}(T_H)\cap\text{Stab}(T_{H'})=H\cap H'$. By Lemma \[nonsimplicial\], the $(G,\mathcal{F})$-free factor $H\cap H'$ acts with dense orbits on the minimal subtree of $T_H\cap T_{H'}$.
By minimality of the factors in $\text{Dyn}(T)$, this implies that $H=H'$ and $T_H=T_{H'}$. So $\{gT_H|H\in\text{Dyn}(T),g\in G\}$ is a transverse family in $T$. Finiteness of $\text{Dyn}(T)$ for all trees $T\in\overline{P\mathcal{O}(G,\mathcal{F})}$ follows from Proposition \[minimal-transverse\], since every transverse family in a tree with dense orbits contains boundedly many orbits of trees (where the bound is given by the number of orbits of directions at branch points in $T$). Nonelementary subgroups of $\text{Out}(G,\mathcal{F})$, and a trichotomy for subgroups of $\text{Out}(G,\mathcal{F})$ {#sec-5} ===================================================================================================================== Let $G$ be a countable group, and let $\mathcal{F}$ be a free factor system of $G$, such that $(G,\mathcal{F})$ is nonsporadic. A subgroup $H\subseteq\text{Out}(G,\mathcal{F})$ is *nonelementary* if - it does not preserve any finite set of proper $(G,\mathcal{F})$-free factors, and - it does not preserve any finite set of points in $\partial_{\infty} FZ(G,\mathcal{F})$. We now aim to show that any nonelementary subgroup of $\text{Out}(G,\mathcal{F})$ contains a rank two free subgroup. \[nonelementary\] Let $G$ be a countable group, and let $\mathcal{F}$ be a free factor system of $G$, so that $(G,\mathcal{F})$ is nonsporadic. Then any nonelementary subgroup of $\text{Out}(G,\mathcal{F})$ contains a free subgroup of rank two, generated by two loxodromic isometries of $FZ(G,\mathcal{F})$. As a consequence of Theorem \[nonelementary\] and of our description of the Gromov boundary $\partial_{\infty}FZ(G,\mathcal{F})$ of $FZ(G,\mathcal{F})$, we get the following trichotomy for subgroups of $\text{Out}(G,\mathcal{F})$. \[trichotomy\] Let $G$ be a countable group, and let $\mathcal{F}$ be a free factor system of $G$, so that $(G,\mathcal{F})$ is nonsporadic.
Then every subgroup of $\text{Out}(G,\mathcal{F})$ either - contains a rank two free subgroup generated by two loxodromic isometries of $FZ(G,\mathcal{F})$, or - virtually fixes a tree with trivial arc stabilizers in $\partial P\mathcal{O}(G,\mathcal{F})$, or - virtually fixes the conjugacy class of a proper $(G,\mathcal{F})$-free factor. Let $H$ be a subgroup of $\text{Out}(G,\mathcal{F})$. If $H$ is nonelementary, Theorem \[trichotomy\] follows from Theorem \[nonelementary\]. Otherwise, either $H$ virtually fixes the conjugacy class of a proper $(G,\mathcal{F})$-free factor, or $H$ virtually fixes a point $\xi\in\partial_{\infty}FZ(G,\mathcal{F})$. In the latter case, the group $H$ preserves the simplex of length measures in $\overline{P\mathcal{O}(G,\mathcal{F})}$ corresponding to a mixing representative of $\xi$, provided by Proposition \[mixing-representative\], and this simplex has finite dimension by [@Gui00 Corollary 5.4] (the extension of Guirardel’s result concerning finite dimensionality of this simplex to the case of free products is made possible by the fact that $\overline{P\mathcal{O}(G,\mathcal{F})}$ has finite topological dimension [@Hor14-5 Theorem 0.2]). So $H$ virtually fixes any extremal point of this simplex, which is a tree with trivial arc stabilizers. Our proof of Theorem \[nonelementary\] uses techniques coming from the theory of random walks on groups. These were already used in [@Hor14-3] for giving a new proof of a result of Handel and Mosher [@HM09], which establishes a dichotomy for subgroups of $\text{Out}(F_N)$, namely: every subgroup of $\text{Out}(F_N)$ (finitely generated or not) either contains a fully irreducible automorphism, or virtually fixes the conjugacy class of a proper free factor of $F_N$. All topological spaces will be equipped with their Borel $\sigma$-algebra. Let $\mu$ be a probability measure on $\text{Out}(G,\mathcal{F})$. 
A probability measure $\nu$ on $\overline{P\mathcal{O}(G,\mathcal{F})}$ is *$\mu$-stationary* if $\mu\ast\nu=\nu$, i.e. for all $\nu$-measurable subsets $E\subseteq \overline{P\mathcal{O}(G,\mathcal{F})}$, we have $$\nu(E)=\sum_{\Phi\in\text{Out}(G,\mathcal{F})}\mu(\Phi)\nu(\Phi^{-1}E).$$ We denote by $P\mathcal{AT}(G,\mathcal{F})$ the image of $\mathcal{AT}(G,\mathcal{F})$ in $\overline{P\mathcal{O}(G,\mathcal{F})}$. Our first goal will be to show that given a probability measure $\mu$ on $\text{Out}(G,\mathcal{F})$, any $\mu$-stationary measure on $\overline{P\mathcal{O}(G,\mathcal{F})}$ is supported on $P\mathcal{AT}(G,\mathcal{F})$. Since $\mathcal{AT}(G,\mathcal{F})\subseteq\mathcal{X}(G,\mathcal{F})$ (Proposition \[ATX\]), it follows that any $\mu$-stationary measure on $\overline{P\mathcal{O}(G,\mathcal{F})}$ pushes to a $\mu$-stationary measure on $\partial_{\infty} FZ(G,\mathcal{F})$ via the map $\partial\psi$ provided by Theorem \[boundary-fz\] (this map factors through $\overline{P\mathcal{O}(G,\mathcal{F})}$). We will make use of the following classical lemma, whose proof is based on a maximum principle argument. The following version of the statement appears in [@Hor14-3 Lemma 3.3]. We denote by $gr(\mu)$ the subgroup of $\text{Out}(G,\mathcal{F})$ generated by the support of the measure $\mu$. \[disjoint-translations\] (Ballmann [@Bal89]) Let $\mu$ be a probability measure on a countable group $G$, and let $\nu$ be a $\mu$-stationary probability measure on a $G$-space $X$. Let $D$ be a countable $G$-set, and let $\Theta:X\to D$ be a measurable $G$-equivariant map. If $E\subseteq X$ is a $G$-invariant measurable subset of $X$ satisfying $\nu(E)>0$, then $\Theta(E)$ contains a finite $gr(\mu)$-orbit. We now define a $G$-equivariant map $\Theta$ from $\overline{P\mathcal{O}(G,\mathcal{F})}$ to the (countable) set $D$ of finite collections of conjugacy classes of proper $(G,\mathcal{F})$-free factors. 
Given a tree $T\in{P\mathcal{O}(G,\mathcal{F})}$, we define $\text{Red}(T)$ to be the finite collection of proper $(G,\mathcal{F})$-free factors that occur as vertex groups of trees obtained by equivariantly collapsing some of the edges of $T$ to points. The collection $\text{Red}(T)$ is nonempty because $(G,\mathcal{F})$ is nonsporadic. Given $T\in\partial P\mathcal{O}(G,\mathcal{F})$, the set of conjugacy classes of point stabilizers in $T$ is finite [@Jia91]. Every point stabilizer $G_v$ is contained in a unique minimal (possibly non proper) $(G,\mathcal{F})$-free factor $\text{Fill}(G_v)$. We let $\text{Per}(T)$ be the (possibly empty) finite set of conjugacy classes of proper $(G,\mathcal{F})$-free factors that arise in this way, and we set $$\Theta(T):=\left\{ \begin{array}{ll} \emptyset &\text{~if~} T\in P\mathcal{AT}(G,\mathcal{F})\\ \text{Red}(T) &\text{~if~} T\in P\mathcal{O}(G,\mathcal{F})\\ \text{Dyn}(T)\cup\text{Per}(T) &\text{~if~} T\in\partial P\mathcal{O}(G,\mathcal{F})\smallsetminus P\mathcal{AT}(G,\mathcal{F}) \end{array}\right..$$ Proposition \[classification\] implies that $\Theta(T)=\emptyset$ if and only if $T\in P\mathcal{AT}(G,\mathcal{F})$. The following lemma was proved in [@Hor14-3 Lemma 3.4]. Its proof adapts to the context of $(G,\mathcal{F})$-trees. \[Theta-measurable\] The set $P\mathcal{AT}(G,\mathcal{F})$ is measurable, and $\Theta$ is measurable. \[stationary\] Let $G$ be a countable group, and $\mathcal{F}$ be a free factor system of $G$. Let $\mu$ be a probability measure on $\text{Out}(G,\mathcal{F})$, whose support generates a nonelementary subgroup of $\text{Out}(G,\mathcal{F})$. Then every $\mu$-stationary measure on $\overline{P\mathcal{O}(G,\mathcal{F})}$ is concentrated on $P\mathcal{AT}(G,\mathcal{F})$. Let $\nu$ be a $\mu$-stationary measure on $\overline{P\mathcal{O}(G,\mathcal{F})}$. Let $E:=\overline{P\mathcal{O}(G,\mathcal{F})}\smallsetminus P\mathcal{AT}(G,\mathcal{F})$. 
By Proposition \[classification\], the image $\Theta(E)$ does not contain the empty set. However, nonelementarity of $gr(\mu)$ implies that the only finite $gr(\mu)$-orbit in $D$ is the orbit of the empty set. Lemma \[disjoint-translations\] thus implies that $\nu(E)=0$, or in other words $\nu$ is concentrated on $P\mathcal{AT}(G,\mathcal{F})$. \[nonempty-limit-set\] Let $H\subseteq\text{Out}(G,\mathcal{F})$ be a nonelementary subgroup of $\text{Out}(G,\mathcal{F})$. Then the $H$-orbit of any point $x_0\in P\mathcal{O}(G,\mathcal{F})$ has a limit point in $P\mathcal{AT}(G,\mathcal{F})$. Let $\mu$ be a probability measure on $\text{Out}(G,\mathcal{F})$ such that $\text{gr}(\mu)=H$. An example of such a measure is obtained by giving a positive weight $\mu(h)>0$ to every element $h\in H$, in such a way that $$\sum_{h\in H}\mu(h)=1$$ (and $\mu(\Phi)=0$ if $\Phi\in\text{Out}(G,\mathcal{F})\smallsetminus H$). Let $\delta_{x_0}$ be the Dirac measure at $x_0$. Since $\overline{P\mathcal{O}(G,\mathcal{F})}$ is compact [@Hor14-5 Proposition 3.1], the sequence of the Cesàro averages of the convolutions $\mu^{\ast n}\ast\delta_{x_0}$ has a weak-$\ast$ limit point $\nu$, which is a $\mu$-stationary measure on $\overline{P\mathcal{O}(G,\mathcal{F})}$, see [@KM96 Lemma 2.2.1]. We have $\nu(\overline{Hx_0})=1$, where $Hx_0$ denotes the $H$-orbit of $x_0$ in $\overline{P\mathcal{O}(G,\mathcal{F})}$, and Proposition \[stationary\] implies that $\nu(P\mathcal{AT}(G,\mathcal{F}))=1$. This implies that $\overline{Hx_0}\cap P\mathcal{AT}(G,\mathcal{F})$ is nonempty. As a consequence of Theorem \[boundary-fz\] and Corollary \[nonempty-limit-set\], we get the following fact. \[limit-set-fz\] Let $H\subseteq\text{Out}(G,\mathcal{F})$ be a nonelementary subgroup of $\text{Out}(G,\mathcal{F})$. Then the $H$-orbit of any point in $FZ(G,\mathcal{F})$ has a limit point in $\partial_{\infty} FZ(G,\mathcal{F})$.
Let $\mathcal{F}$ be a free factor system of $G$, and let $H$ be a nonelementary subgroup of $\text{Out}(G,\mathcal{F})$. Corollary \[nonempty-limit-set\] shows that the $H$-orbit of any point in $FZ(G,\mathcal{F})$ has a limit point in $\partial_{\infty} FZ(G,\mathcal{F})$. As $H$ does not fix any element in $\partial_{\infty} FZ(G,\mathcal{F})$, the conclusion follows from the classification of subgroups of isometries of Gromov hyperbolic spaces (Theorem \[Gromov\]). The inductive argument {#sec-6} ====================== Variations over the Tits alternative ------------------------------------ We recall from the introduction that a group $G$ is said to satisfy the Tits alternative relative to a class $\mathcal{C}$ of groups if every subgroup of $G$ either belongs to $\mathcal{C}$, or contains a rank two free subgroup. Our main result is the following. A group $H$ is *freely indecomposable* if it does not split as a free product of the form $H=A\ast B$, where both $A$ and $B$ are nontrivial. \[Tits\] Let $\{G_1,\dots,G_k\}$ be a finite collection of freely indecomposable countable groups, not isomorphic to $\mathbb{Z}$, let $F$ be a finitely generated free group, and let $$G:=G_1\ast\dots\ast G_k\ast F.$$ Let $\mathcal{C}$ be a collection of groups that is stable under isomorphisms, contains $\mathbb{Z}$, and is stable under subgroups, extensions, and passing to finite index supergroups. Assume that for all $i\in\{1,\dots,k\}$, both $G_i$ and $\text{Out}(G_i)$ satisfy the Tits alternative relative to $\mathcal{C}$.\ Then $\text{Out}(G)$ and $\text{Aut}(G)$ satisfy the Tits alternative relative to $\mathcal{C}$. In particular, Theorem \[Tits\] applies to the case where $\mathcal{C}$ is either the class of virtually solvable groups (see [@Can11 Lemme 6.11] for stability of $\mathcal{C}$ under extensions), or the class of virtually polycyclic groups. Theorem \[Tits\] will be a consequence of the following relative version. 
For all $i\in\{1,\dots,k\}$, let $A_i\subseteq\text{Out}(G_i)$, and let $\mathcal{A}:=(A_1,\dots,A_k)$. We recall from Section \[sec-relative\] that $\text{Out}(G,\mathcal{F}^{\mathcal{A}})$ denotes the subgroup of $\text{Out}(G)$ consisting of those automorphisms that preserve the conjugacy classes of all subgroups $G_i$, and induce an outer automorphism in $A_i$ in restriction to each $G_i$. \[Tits-2\] Let $G$ be a countable group, let $\mathcal{F}$ be a free factor system of $G$, and let $\mathcal{A}$ be as above. Let $\mathcal{C}$ be a collection of groups that is stable under isomorphisms, contains $\mathbb{Z}$, and is stable under subgroups, extensions, and passing to finite index supergroups. Assume that for all $i\in\{1,\dots,k\}$, both $G_i$ and $A_i$ satisfy the Tits alternative relative to $\mathcal{C}$.\ Then $\text{Out}(G,\mathcal{F}^{\mathcal{A}})$ satisfies the Tits alternative relative to $\mathcal{C}$. When all subgroups in $\mathcal{A}$ are trivial, Theorem \[Tits-2\] specializes as follows. \[Tits-relative\] Let $G$ be a countable group, and let $\mathcal{F}$ be a free factor system of $G$. Let $\mathcal{C}$ be a collection of groups that is stable under isomorphisms, contains $\mathbb{Z}$, and is stable under subgroups, extensions, and passing to finite index supergroups. Assume that all peripheral subgroups of $G$ satisfy the Tits alternative relative to $\mathcal{C}$.\ Then $\text{Out}(G,\mathcal{F}^{(t)})$ satisfies the Tits alternative relative to $\mathcal{C}$. In the classical case where $\mathcal{C}$ is the class of virtually solvable groups, we also mention that our proof of Theorem \[Tits\] provides a bound on the degree of solvability of the finite-index solvable subgroup arising in the statement. If all groups $G_i$ and $\text{Out}(G_i)$ satisfy the Tits alternative relative to the class of virtually abelian groups, does $\text{Out}(G)$ also satisfy the Tits alternative relative to this class?
Similarly, if all groups $G_i$ satisfy the Tits alternative relative to the class of virtually abelian groups, does $\text{Out}(G,\mathcal{F}^{(t)})$ also satisfy the Tits alternative relative to this class? The issue here is that this class is not stable under extensions. Our question is motivated by the classical case of finitely generated free groups, for which Bestvina, Feighn and Handel have proved that every virtually solvable subgroup of $\text{Out}(F_N)$ is actually virtually abelian and finitely generated, with a bound on the index of the abelian subgroup that only depends on $N$ ([@BFH05], see also [@Ali02]). We first explain how to derive Theorems \[Tits-2\] and \[Tits\] from Theorem \[Tits-relative\], before proving Theorem \[Tits-relative\] in the next section. There is a morphism from $\text{Out}(G,\mathcal{F}^{\mathcal{A}})$ to the direct product $A_1\times\dots\times A_k$, whose kernel is equal to $\text{Out}(G,\mathcal{F}^{(t)})$. Since $\mathcal{C}$ is stable under extensions, the class of groups satisfying the Tits alternative relative to $\mathcal{C}$ is stable under extensions, so Theorem \[Tits-2\] follows from Theorem \[Tits-relative\]. Let $\mathcal{F}:=\{[G_1],\dots,[G_k]\}$. As all $G_i$’s are freely indecomposable, the group $\text{Out}(G)$ permutes the conjugacy classes in $\mathcal{F}$. Therefore, there exists a finite-index subgroup $\text{Out}^0(G)$ of $\text{Out}(G)$ which preserves all conjugacy classes in $\mathcal{F}$. For all $i\in\{1,\dots,k\}$, the group $G_i$ is equal to its own normalizer in $G$, so every element $\Phi\in\text{Out}^{0}(G)$ induces a well-defined element of $\text{Out}(G_i)$. In other words, the subgroup $\text{Out}^0(G)$ is a subgroup of $\text{Out}(G,\mathcal{F}^{\mathcal{A}})$, with $A_i=\text{Out}(G_i)$ for all $i\in\{1,\dots,k\}$.
Theorem \[Tits\] thus follows from Theorem \[Tits-2\] (the statement for the group $\text{Aut}(G)$ also follows, because if both $G$ and $\text{Out}(G)$ satisfy the Tits alternative relative to $\mathcal{C}$, then so does $\text{Aut}(G)$). Proof of Theorem \[Tits-relative\] ---------------------------------- The proof is by induction on the pair $(\text{rk}_K(G,\mathcal{F}),\text{rk}_f(G,\mathcal{F}))$, for the lexicographic order. Let $\mathcal{F}$ be a free factor system of $G$. The conclusion holds if $\text{rk}_K(G,\mathcal{F})=1$: in this case, the group $G$ is either peripheral, or isomorphic to $\mathbb{Z}$. It also holds in the sporadic cases by Propositions \[sporadic-1\] and \[sporadic-2\]. We now assume that $(G,\mathcal{F})$ is nonsporadic, and let $H$ be a subgroup of $\text{Out}(G,\mathcal{F}^{(t)})$. We will show that either $H$ contains a rank two free subgroup, or $H\in\mathcal{C}$. Using Theorem \[trichotomy\], we can assume that either $H$ preserves a finite set of conjugacy classes of proper $(G,\mathcal{F})$-free factors, or that $H$ virtually fixes a tree with trivial arc stabilizers in $\partial P\mathcal{O}(G,\mathcal{F})$.\ \ We first assume that $H$ has a finite index subgroup $H^0$ which preserves the conjugacy class of a proper $(G,\mathcal{F})$-free factor $G'$. We denote by $\text{Out}(G,\mathcal{F}^{(t)},G')$ the subgroup of $\text{Out}(G,\mathcal{F}^{(t)})$ made of those elements that preserve the conjugacy class of $G'$ (so $H^0$ is a subgroup of $\text{Out}(G,\mathcal{F}^{(t)},G')$). Since $G'$ is equal to its own normalizer in $G$, every element $\Phi\in\text{Out}(G,\mathcal{F}^{(t)},G')$ induces by restriction a well-defined outer automorphism $\Phi_{G'}$ of $G'$. 
The automorphism $\Phi_{G'}$ coincides with a conjugation by an element $g\in G$ in restriction to every factor in $\mathcal{F}_{G'}$ (where we recall that $\mathcal{F}_{G'}$ is the collection of $G'$-conjugacy classes of subgroups in $\mathcal{F}$ that are contained in $G'$). Since $G'$ is malnormal, we have $g\in G'$. In other words, there is a restriction morphism $$\Psi:\text{Out}(G,\mathcal{F}^{(t)},G')\to\text{Out}(G',\mathcal{F}_{G'}^{(t)}).$$ Since $G'$ is a $(G,\mathcal{F})$-free factor, there exist $i_1<\dots<i_s$ such that $G$ splits as $$G=G'\ast G'_{i_1}\ast\dots\ast G'_{i_s}\ast F',$$ where $G'_{i_j}$ is conjugate to $G_{i_j}$ for all $j\in\{1,\dots,s\}$, and $F'$ is a finitely generated free group. We let $\mathcal{F}':=\{[G'],[G'_{i_1}],\dots,[G'_{i_s}]\}$. Then the kernel of $\Psi$ is equal to $\text{Out}(G,{\mathcal{F}'}^{(t)})$. Recall from the rank equations in Section \[sec-relative\] that $\text{rk}_f(G',\mathcal{F}_{G'})+\text{rk}_f(G,\mathcal{F}')=\text{rk}_f(G,\mathcal{F})$, and $\text{rk}_K(G',\mathcal{F}_{G'})+\text{rk}_K(G,\mathcal{F}')=\text{rk}_K(G,\mathcal{F})+1$. Since $G'$ is a proper $(G,\mathcal{F})$-free factor, we either have $\text{rk}_K(G',\mathcal{F}_{G'})\ge 2$, in which case $\text{rk}_K(G,\mathcal{F}')<\text{rk}_K(G,\mathcal{F})$, or else $\text{rk}_K(G',\mathcal{F}_{G'})=\text{rk}_f(G',\mathcal{F}_{G'})=1$, in which case $\text{rk}_K(G,\mathcal{F}')=\text{rk}_K(G,\mathcal{F})$ and $\text{rk}_f(G,\mathcal{F}')<\text{rk}_f(G,\mathcal{F})$. Since $G'$ is a proper $(G,\mathcal{F})$-free factor, we also have $\text{rk}_K(G',\mathcal{F}_{G'})<\text{rk}_K(G,\mathcal{F})$. Our induction hypothesis therefore implies that both $\text{Out}(G',\mathcal{F}_{G'}^{(t)})$ and $\text{Out}(G,{\mathcal{F}'}^{(t)})$ satisfy the Tits alternative relative to $\mathcal{C}$. Since $\mathcal{C}$ is stable under extensions, the class of groups satisfying the Tits alternative relative to $\mathcal{C}$ is stable under extensions.
So $H^0$, and hence $H$, satisfies the Tits alternative relative to $\mathcal{C}$.\ \ We now assume that $H$ has a finite index subgroup $H^0$ which fixes the projective class of a tree $[T]\in\overline{P\mathcal{O}(G,\mathcal{F})}$ with trivial arc stabilizers. Then $H^0$ is a cyclic extension of a subgroup $H'$ that fixes a nonprojective tree $T\in\overline{\mathcal{O}(G,\mathcal{F})}$ [@GL14-2]. It is enough to show that $\text{Out}(T)$ satisfies the Tits alternative relative to $\mathcal{C}$. Denote by $V$ the finite set of $G$-orbits of points with nontrivial stabilizer in $T$, and by $\{G_v\}_{v\in V}$ the collection of point stabilizers in $T$. As any element of $\text{Out}(T)$ induces a permutation of the finite set $V$, some finite index subgroup $\text{Out}^0(T)$ of $\text{Out}(T)$ preserves the conjugacy classes of all groups $G_v$ with $v\in V$. As $T$ has trivial arc stabilizers, all point stabilizers in $T$ are equal to their normalizer in $G$. As above, there is a morphism from $\text{Out}^0(T)$ to the direct product of all $\text{Out}(G_v,\mathcal{F}_{G_v}^{(t)})$, whose kernel is contained in $\text{Out}(T,\{[G_v]\}^{(t)})$. Corollary \[Tits-stabilizer\] shows that $\text{Out}(T,\{[G_v]\}^{(t)})$ satisfies the Tits alternative relative to $\mathcal{C}$. Since $T$ has trivial arc stabilizers, Proposition \[per-finite\] implies that $\text{rk}_K(G_v,\mathcal{F}_{G_v})\le\text{rk}_K(G,\mathcal{F})-1$ for all $v\in V$. Therefore, our induction hypothesis implies that $\text{Out}(G_v,\mathcal{F}_{G_v}^{(t)})$ satisfies the Tits alternative relative to $\mathcal{C}$. As the class of groups satisfying the Tits alternative relative to $\mathcal{C}$ is stable under extensions, we deduce that $\text{Out}(T)$, and hence $H$, satisfies the Tits alternative relative to $\mathcal{C}$.
Applications {#sec-7} ============ Outer automorphisms of right-angled Artin groups ------------------------------------------------ Given a finite simplicial graph $\Gamma$, the *right-angled Artin group* $A_{\Gamma}$ is the group defined by the following presentation. Generators of $A_{\Gamma}$ are the vertices of $\Gamma$, and relations are given by commutation of any two generators that are joined by an edge in $\Gamma$. As a consequence of Theorem \[Tits\] and of work by Charney and Vogtmann [@CV11], we show that the outer automorphism group of any right-angled Artin group satisfies the Tits alternative. \[tits-raag\] For all finite simplicial graphs $\Gamma$, the group $\text{Out}(A_{\Gamma})$ satisfies the Tits alternative. Let $N$ be the number of components of $\Gamma$ consisting of a single point, and let $\Gamma_1,\dots,\Gamma_k$ be the connected components of $\Gamma$ consisting of more than one point. Then we have $A_{\Gamma}=A_{\Gamma_1}\ast\dots\ast A_{\Gamma_k}\ast F_N$. All subgroups $A_{\Gamma_i}$ of this decomposition are freely indecomposable and not isomorphic to $\mathbb{Z}$: it is the Grushko decomposition of $A_{\Gamma}$. Theorem \[tits-raag\] was first proven by Charney, Crisp and Vogtmann in the case where $\Gamma$ is connected and triangle-free [@CCV07], then extended by Charney and Vogtmann in [@CV11] to the case of graphs satisfying some homogeneity condition, where it was noticed that the full version would follow from Theorem \[Tits\]. We now explain how to make this deduction. The reader is referred to [@Cha07] for a survey paper on right-angled Artin groups, and to [@CCV07; @CV09; @CV11] for a study of their automorphism groups. Let $\Gamma$ be a finite simplicial connected graph. Let $v\in\Gamma$ be a vertex of $\Gamma$. The *link* of $v$, denoted by $lk(v)$, is the full subgraph of $\Gamma$ spanned by all vertices adjacent to $v$. The *star* of $v$, denoted by $st(v)$, is the full subgraph of $\Gamma$ spanned by $v$ and $lk(v)$. 
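As a purely illustrative aside (not from the paper), the free decomposition $A_{\Gamma}=A_{\Gamma_1}\ast\dots\ast A_{\Gamma_k}\ast F_N$ described above can be read directly off the connected components of $\Gamma$: isolated vertices contribute the free factor $F_N$, and every larger component contributes a freely indecomposable factor $A_{\Gamma_i}$. A minimal Python sketch, assuming a simple (hypothetical) encoding of $\Gamma$ as a vertex set plus a set of two-element frozensets:

```python
def components(vertices, edges):
    """Connected components of a simplicial graph, given a vertex set
    and edges encoded as frozenset({u, v})."""
    adj = {v: set() for v in vertices}
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for v in vertices:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            w = stack.pop()
            if w in comp:
                continue
            comp.add(w)
            stack.extend(adj[w] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def grushko_shape(vertices, edges):
    """Return (k, N): the number of non-singleton components of Gamma
    (freely indecomposable factors A_{Gamma_i}) and the number of
    isolated vertices (the rank N of the free factor F_N)."""
    comps = components(vertices, edges)
    singletons = [c for c in comps if len(c) == 1]
    return len(comps) - len(singletons), len(singletons)
```

For example, a path on $\{a,b,c\}$ together with two isolated vertices yields $k=1$ and $N=2$, i.e. $A_{\Gamma}\cong A_{\text{path}}\ast F_2$.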
The relation $\le$ defined on the set of vertices of $\Gamma$ by setting $v\le w$ whenever $lk(v)\subseteq st(w)$ is transitive, and induces a partial ordering on the set of equivalence classes of vertices $[v]$, where $w\in [v]$ if and only if $v\le w$ and $w\le v$ [@CV09 Lemma 2.2]. A vertex $v$ of $\Gamma$ is *maximal* if its equivalence class is maximal for this relation. The *link* $lk(\Theta)$ of a subgraph $\Theta$ of $\Gamma$ is the intersection of the links of all vertices in $\Theta$. The *star* $st(\Theta)$ of $\Theta$ is the full subgraph of $\Gamma$ spanned by both $\Theta$ and its link. Given a full subgraph $\Theta$ of $\Gamma$, the group $A_{\Theta}$ embeds as a subgroup of $A_{\Gamma}$. Laurence [@Lau95], extending work of Servatius [@Ser89], gave a finite generating set of $\text{Out}(A_{\Gamma})$, consisting of graph automorphisms, inversions of a single generator, transvections $v\mapsto vw$ with $v\le w$, and partial conjugations by a generator $v$ on one component of $\Gamma\smallsetminus st(v)$. The subgroup $\text{Out}^0(A_{\Gamma})$ of $\text{Out}(A_{\Gamma})$ generated by inversions, transvections and partial conjugations, has finite index in $\text{Out}(A_{\Gamma})$. Assume that $\Gamma$ is connected, and let $v$ be a maximal vertex. Then any element of $\text{Out}^0(A_{\Gamma})$ has a representative $f_v$ which preserves both $A_{[v]}$ and $A_{st[v]}$ [@CV09 Proposition 3.2]. Restricting $f_v$ to $A_{st[v]}$ gives a *restriction morphism* $$R_v:\text{Out}^0(A_{\Gamma})\to\text{Out}^0(A_{st[v]})$$ [@CV09 Corollary 3.3]. 
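The combinatorics just defined — links, stars, the relation $v\le w$ given by $lk(v)\subseteq st(w)$, and the maximal equivalence classes of vertices — are directly computable. A small illustrative sketch (our own, not from the paper), with $\Gamma$ encoded as a hypothetical adjacency dictionary mapping each vertex to the set of its neighbors:

```python
def lk(v, adj):
    """Link of v: the set of vertices adjacent to v."""
    return adj[v]

def st(v, adj):
    """Star of v: v together with its link."""
    return adj[v] | {v}

def leq(v, w, adj):
    """The relation v <= w, defined by lk(v) contained in st(w)."""
    return lk(v, adj) <= st(w, adj)

def maximal_classes(adj):
    """Equivalence classes [v] (v <= w and w <= v) that are maximal
    for the induced partial order on classes."""
    verts = list(adj)
    classes = []
    for v in verts:
        cls = frozenset(w for w in verts
                        if leq(v, w, adj) and leq(w, v, adj))
        if cls not in classes:
            classes.append(cls)
    maximal = []
    for cls in classes:
        v = next(iter(cls))  # any representative works, by transitivity
        if not any(leq(v, w, adj) and not leq(w, v, adj) for w in verts):
            maximal.append(cls)
    return maximal
```

For the path $a$–$b$–$c$, the classes are $\{a,c\}$ and $\{b\}$, and only $[b]$ is maximal: $lk(a)=\{b\}\subseteq st(b)$, while $lk(b)=\{a,c\}\not\subseteq st(a)$.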
The map from $A_{\Gamma}$ to $A_{\Gamma\smallsetminus [v]}$ that sends each generator in $[v]$ to the identity induces an *exclusion morphism* $$E_v:\text{Out}^0(A_{\Gamma})\to\text{Out}^0(A_{\Gamma\smallsetminus [v]}).$$ Since $v$ is a maximal vertex for the subgraph $st[v]$, and since $lk[v]=st[v]\smallsetminus [v]$, we can compose the restriction morphism on $A_{\Gamma}$ with the exclusion morphism on $A_{st[v]}$ to get a *projection morphism* $$P_v:\text{Out}^0(A_{\Gamma})\to \text{Out}^0(A_{lk[v]})$$ [@CV09 Corollary 3.3]. By combining the projection morphisms for all maximal equivalence classes of vertices of $\Gamma$, we get a morphism $$P:\text{Out}^0(A_{\Gamma})\to\prod\text{Out}^0(A_{lk[v]}),$$ where the product is taken over the set of maximal equivalence classes of vertices of $\Gamma$. (Charney–Vogtmann [@CV09 Theorem 4.2]) \[amalgamated-projection\] If $\Gamma$ is a connected graph that contains at least two equivalence classes of maximal vertices, then the kernel of $P$ is a free abelian subgroup of $\text{Out}^0(A_{\Gamma})$. (Charney–Vogtmann [@CV09 Proposition 4.4]) \[single-maximal\] If $\Gamma$ is a connected graph that contains a single equivalence class $[v]$ of maximal vertices, then $A_{[v]}$ is abelian, and there is a surjective morphism $$\text{Out}(A_{\Gamma})\to GL(A_{[v]})\times\text{Out}(A_{lk([v])}),$$ whose kernel is a free abelian subgroup of $\text{Out}(A_{\Gamma})$. \[Proof of Theorem \[tits-raag\]\] The proof is by induction on the number of vertices of $\Gamma$. The case of a graph having a single vertex is obvious. Thanks to Theorem \[Tits\] and the description of the Grushko decomposition of $A_{\Gamma}$, we can assume that $\Gamma$ is connected. Let $v$ be a maximal vertex of $\Gamma$. As $lk[v]$ has strictly fewer vertices than $\Gamma$, it follows from the induction hypothesis that $\text{Out}(A_{lk[v]})$ satisfies the Tits alternative, and so does $\text{Out}^0(A_{lk[v]})$.
If $\Gamma$ contains a single equivalence class of maximal vertices, then it follows from Proposition \[single-maximal\], and from Tits’ original version of the alternative for linear groups [@Tit72], that $\text{Out}(A_{\Gamma})$ satisfies the Tits alternative. If $\Gamma$ contains at least two equivalence classes of maximal vertices, then it follows from Proposition \[amalgamated-projection\] that $\text{Out}(A_{\Gamma})$ satisfies the Tits alternative. Outer automorphisms of relatively hyperbolic groups --------------------------------------------------- Let $G$ be a group, and $\mathcal{P}$ be a finite collection of subgroups of $G$. Following Bowditch [@Bow12] (see [@Hru10; @Osi06] for equivalent definitions), we say that $G$ is *hyperbolic relative to $\mathcal{P}$* if $G$ admits a simplicial action on a connected graph $\mathcal{K}$ such that - the graph $\mathcal{K}$ is Gromov hyperbolic, and for all $n\in\mathbb{N}$, every edge of $\mathcal{K}$ is contained in finitely many simple circuits of length $n$, and - the edge stabilizers for the action of $G$ on $\mathcal{K}$ are finite, and there are finitely many orbits of edges, and - the set $\mathcal{P}$ is a set of representatives of the conjugacy classes of the infinite vertex stabilizers. \[tits-rh\] Let $G$ be a torsion-free group, which is hyperbolic relative to a finite family $\mathcal{P}$ of finitely generated subgroups. Let $\mathcal{C}$ be a collection of groups that is stable under isomorphisms, contains $\mathbb{Z}$, and is stable under subgroups, extensions, and passing to finite index supergroups. Assume that for all $H\in\mathcal{P}$, both $H$ and $\text{Out}(H)$ satisfy the Tits alternative relative to $\mathcal{C}$.\ Then $\text{Out}(G,\mathcal{P})$ satisfies the Tits alternative relative to $\mathcal{C}$. 
The peripheral subgroups $G_i$ arising in the Grushko decomposition of $G$ relative to $\mathcal{P}$ (see [@GL10] for a definition of the relative Grushko decomposition) are torsion-free, freely indecomposable relative to $\mathcal{P}_{G_i}$ (i.e. they do not split as a free product in which all subgroups in $\mathcal{P}_{G_i}$ are conjugate into one of the factors), and hyperbolic relative to $\mathcal{P}_{G_i}$. Each subgroup $G_i$ satisfies the Tits alternative relative to $\mathcal{C}$ as soon as all groups in $\mathcal{P}$ do (this follows from [@Gro87]). Our main result (Theorem \[Tits\]) therefore enables us to reduce to the case where $G$ is freely indecomposable relative to $\mathcal{P}$. In this case, we can use the description of $\text{Out}(G,\mathcal{P})$ stated below, which is due to Guirardel and Levitt. Since the Tits alternative holds for mapping class groups of compact surfaces (Ivanov [@Iva84], McCarthy [@McC85]), we deduce the Tits alternative for $\text{Out}(G,\mathcal{P})$. (Guirardel–Levitt [@GL14 Theorem 1.4]) \[out-relhyp\] Let $G$ be a torsion-free group, which is hyperbolic relative to a finite family $\mathcal{P}$ of finitely generated subgroups, and freely indecomposable relative to $\mathcal{P}$. Then some finite index subgroup $\text{Out}^0(G,\mathcal{P})$ of $\text{Out}(G,\mathcal{P})$ fits in an exact sequence $$1\to\mathcal{T}\to\text{Out}^0(G,\mathcal{P})\to\prod_{i=1}^p MCG(\Sigma_i)\times\prod_{H\in\mathcal{P}}\text{Out}(H),$$ where $\mathcal{T}$ is finitely generated free abelian, and $MCG(\Sigma_i)$ is the mapping class group of a compact surface $\Sigma_i$. When the parabolic subgroups are virtually polycyclic, we get the following result. \[tits-relhyp\] Let $G$ be a torsion-free group, which is hyperbolic relative to a finite family of virtually polycyclic subgroups. Then $\text{Out}(G)$ satisfies the Tits alternative relative to the class of virtually polycyclic groups. 
We first recall that the outer automorphism group $\text{Out}(P)$ of a virtually polycyclic group $P$ satisfies the Tits alternative relative to the class of virtually polycyclic groups. Indeed, a theorem of Auslander [@Aus67] establishes that $\text{Out}(P)$ embeds as a subgroup of $SL_N(\mathbb{Z})$ for some $N\in\mathbb{N}$. Tits’ original statement of the Tits alternative [@Tit72] implies that $\text{Out}(P)$ satisfies the Tits alternative relative to the class of virtually solvable groups (every linear group over a field of characteristic $0$, finitely generated or not, satisfies the Tits alternative). In addition, a theorem of Mal’cev states that solvable subgroups of $SL_N(\mathbb{Z})$ are polycyclic [@Mal51]. Hence $\text{Out}(P)$ satisfies the Tits alternative relative to the class of virtually polycyclic groups. Denote by $\mathcal{P}$ the collection of parabolic subgroups. We can assume that $\mathcal{P}$ does not contain any virtually cyclic subgroup. Then every element of $\text{Out}(G)$ induces a permutation of the conjugacy classes of the subgroups in $\mathcal{P}$. Indeed, subgroups in $\mathcal{P}$ can be characterized as the maximal subgroups which do not contain a free subgroup of rank $2$, and are not virtually cyclic. Therefore, the group $\text{Out}(G,\mathcal{P})$ is a finite index subgroup of $\text{Out}(G)$. Theorem \[tits-relhyp\] thus follows from Theorem \[tits-rh\].
--- abstract: 'In the last few years, generative adversarial networks (GAN) have shown tremendous potential for a number of applications in computer vision and related fields. With the current pace of progress, it is a sure bet they will soon be able to generate high-quality images and videos, virtually indistinguishable from real ones. Unfortunately, realistic GAN-generated images pose serious threats to security, to begin with a possible flood of fake multimedia, and multimedia forensic countermeasures are urgently needed. In this work, we show that each GAN leaves its specific fingerprint in the images it generates, just like real-world cameras mark acquired images with traces of their photo-response non-uniformity pattern. Source identification experiments with several popular GANs show such fingerprints to represent a precious asset for forensic analyses.' author: - | Francesco Marra, Diego Gragnaniello, Luisa Verdoliva, Giovanni Poggi\ DIETI – University Federico II of Naples\ Via Claudio 21, 80125 Napoli – ITALY\ {francesco.marra, diego.gragnaniello, verdoliv, poggi}@unina.it\ bibliography: - 'refs.bib' title: 'Do GANs leave artificial fingerprints?' --- Introduction ============ Generative adversarial networks are pushing the limits of image manipulation. A skilled individual can easily generate realistic images sampled from a desired distribution [@Salimans2016; @Gulrajani2017; @Berthelot2017], or convert original images to fit a new context of interest [@Thies2016; @Isola2017; @Zhu2017; @Liu2017; @Choi2018]. With progressive GANs [@karras2018], images of arbitrary resolution can be created, further improving the level of photorealism.
[Figure 1: Sample images generated by Pro-GAN (a), Cycle-GAN (b), and Star-GAN (c). Top: easily detected bad results. Bottom: photorealistic results.]

There is widespread concern about the possible impact of this technology in the wrong hands. Well-crafted fake multimedia add further momentum to the already alarming phenomenon of fake news, if “seeing is believing”, as they say. Although today’s GAN-based manipulations often present artifacts that raise the suspicion of observers, see Fig.1 (top), this is not always the case (bottom), and it is only a matter of time before GAN-generated images will consistently pass visual scrutiny. Therefore, suitable multimedia forensic tools are required to detect such fakes. In recent years, a large number of methods have been proposed to single out fake visual data, relying on their semantic, physical, or statistical inconsistencies [@Farid2016].
[Figure 2: GAN fingerprints estimated over a growing number of residuals, N = 2, 8, 32, 128, 512: Cycle-GAN orange-to-apple (top row) and Pro-GAN kitchen (bottom row).]

Statistical-based approaches, in particular, rely on the long trail of subtle traces left in each image by the acquisition devices, traces that can hardly be disguised even by a skilled attacker. In fact, each individual device, due to manufacturing imperfections, leaves a unique and stable mark on each acquired photo, the photo-response non-uniformity (PRNU) pattern [@Lukas2006], which can be estimated and used as a sort of [*device*]{} fingerprint.
Likewise, each individual acquisition model, due to its peculiar in-camera processing suite (demosaicking, compression, etc.), leaves further model-related marks on the images, which can be used to extract a [*model*]{} fingerprint [@Cozzolino2018]. Such fingerprints can be used to perform image attribution [@Lukas2006; @Chen2008], as well as to detect and localize image manipulations [@Chen2008; @Cozzolino2018], and represent one of the strongest tools in the hands of the forensic analyst. GANs have little in common with conventional acquisition devices, and GAN-generated images will not show the same camera-related marks. Still, they are the outcome of complex processing systems involving a large number of filtering processes, which may well leave their own distinctive marks on output images. So the question[^1] is: do GANs leave artificial fingerprints? That is, do the images generated by a given GAN share a common and stable pattern that allows one to establish their origin? And, if this is the case, how reliable will such a fingerprint be? How robust to defensive measures? And how discriminative about the image origin? In this paper we investigate this issue, and provide a first answer to the above questions. Our experiments with several popular GAN architectures and datasets show that GANs do leave specific fingerprints on the images they generate, which can be used to carry out reliable forensic analyses. Related Work ============ Recently there has been a growing interest in distinguishing GAN-generated images from real ones. As shown in Fig.1, the current state of the art in GANs is far from perfection, and generated images often exhibit strong visual artifacts that can be exploited for forensic use. For example, to detect fake faces, [@Matern2019] exploits visual features regarding eyes, teeth and facial contours. Tellingly, the authors observe that in GAN-generated images the colors of the left and right eyes are often inconsistent.
Color information is also used in [@McCloskey2018; @Li2018]. In particular, [@McCloskey2018] proposes to use some features shared by different GAN architectures, related to the synthesis of RGB color channels. Other methods rely on deep learning. Several architectures have been tested so far [@Marra2018; @Mo2018; @Tariq2018], showing a good accuracy in detecting GAN-generated images, even on compressed images. Unfortunately, if a network is trained on a specific architecture, its performance degrades sharply when used to detect images generated by another architecture [@Cozzolino2018FT]. This observation suggests the presence of different artifacts peculiar to each specific GAN model. Recently, it has also been shown [@Yu2018] that a deep network can reliably discriminate images generated with different architectures. However, the network requires intensive training on an aligned dataset, and there is no hint, let alone exploitation, of the presence of GAN-induced fingerprints. Exposing GAN fingerprints ========================= In this Section we show evidence of the existence of GAN fingerprints. This goal is pursued in a minimal experimental setting, considering only two GANs, a Cycle-GAN trained to convert orange images into apple images and a Progressive-GAN (Pro-GAN) trained to generate kitchen images; call them GANs A and B from now on. Lacking any statistical model, we consider an extraction pipeline similar to that of the PRNU pattern. For the generic image $X_i$ generated by a given GAN, the fingerprint represents a disturbance, unrelated to the image semantics.
Therefore, we first estimate the high-level image content, $\widehat{X}_i=f(X_i)$, through a suitable denoising filter $f(\cdot)$, then subtract it from the original image to extract the noise residual
$$R_i = X_i-f(X_i)$$
Then, we assume the residual to be the sum of a [*non-zero*]{} deterministic component, the fingerprint $F$, and a random noise component $W_i$
$$R_i = F+W_i$$
Accordingly, the fingerprint is estimated by a simple average over the available residuals
$$\widehat{F} = \frac{1}{N} \sum_{i=1}^N R_i$$
Fig.2 shows (suitably amplified) the fingerprints of the two GANs, estimated over a growing number of residuals, $N=2,8,32,128,512$. Of course, for low values of $N$, the estimates are dominated by image-related noise. However, as $N$ grows, the additive noise component tends to vanish and both estimates converge to stable quasi-periodic patterns, which we regard as accurate approximations of the true GAN fingerprints. In Fig.3 we show the energy $E(N)$ of these estimated fingerprints as a function of $N$, together with the best-fitting curve of the type
$$\widehat{E}(N) = E_{\infty}+E_0\times2^{-N}$$
The fit is clearly very accurate for large values of $N$, and the $E_{\infty}$ value estimates the energy of the limit fingerprint: 0.0377 and 0.0088, respectively. Fig.4, instead, shows the autocorrelation functions of the two estimates for $N$=512, whose clear quasi-periodic patterns provide further evidence of the non-random nature of these signals.
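As a concrete illustration, the residual extraction and fingerprint averaging above can be sketched in a few lines of NumPy/SciPy. The Gaussian blur below is only a stand-in for the (here unspecified) denoising filter $f(\cdot)$; the function names and parameter values are our own choices, not the authors':

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(image, sigma=2.0):
    """R_i = X_i - f(X_i): subtract a content estimate from the image.
    A Gaussian blur stands in for the denoising filter f()."""
    return image - gaussian_filter(image, sigma=sigma)

def estimate_fingerprint(images, sigma=2.0):
    """F_hat = (1/N) * sum_i R_i: average the residuals so that the
    random component W_i tends to cancel while the deterministic
    fingerprint F survives."""
    return np.mean([noise_residual(x, sigma) for x in images], axis=0)
```

In practice a stronger content-suppressing denoiser would sharpen the estimate, but the averaging logic is unchanged.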
![True energy and fitting curve for the Cycle-GAN and Pro-GAN fingerprints of Figure 2.[]{data-label="fig:Energies"}](figure/Energy_O2A.png "fig:"){width="40mm" height="36mm"} ![True energy and fitting curve for the Cycle-GAN and Pro-GAN fingerprints of Figure 2.[]{data-label="fig:Energies"}](figure/Energy_K.png "fig:"){width="40mm" height="36mm"}

![Autocorrelation matrices of the Cycle-GAN and Pro-GAN fingerprints ($N$=512) of Figure 2.[]{data-label="fig:Autocorrelations"}](figure/CycleGAN_O2A_F512_autocorrelation.png "fig:"){width="40mm" height="36mm"} ![Autocorrelation matrices of the Cycle-GAN and Pro-GAN fingerprints ($N$=512) of Figure 2.[]{data-label="fig:Autocorrelations"}](figure/ProGAN_Kitchen_F512_autocorrelation.png "fig:"){width="40mm" height="36mm"}

We now take a more application-oriented point of view, examining the ability of these fingerprints to tell apart images of different origins. If the fingerprints are meaningful, they should allow one to decide, based on image-to-fingerprint correlation or similar indicators, which of the two GANs generated a given image. Let
$${\rm corr}(X,Y) = {\widetilde{X}}\odot {\widetilde{Y}}$$
be the correlation index between images $X$ and $Y$, where ${\widetilde{X}}$ is the zero-mean, unit-norm version of $X$ and $\odot$ indicates the inner product. For both GANs under analysis, we regard the estimates obtained with $N=2^{9}$ as the “true” fingerprints, $F_A$ and $F_B$, respectively.
Then, we compute the correlation indices between the residuals $R^A_i, i=1,\ldots,M$ generated by GAN $A$ (and not used to estimate the fingerprint) and the same-GAN ($F_A$) and cross-GAN ($F_B$) fingerprints, that is,
$$\rho^A_{i,{\rm same}} = {\rm corr}(R^A_i,F_A)$$
and
$$\rho^A_{i,{\rm cross}} = {\rm corr}(R^A_i,F_B)$$

![Correlation of Cycle-GAN (left) and Pro-GAN (right) residuals with same/cross-GAN fingerprints.[]{data-label="fig:Rho"}](figure/Correlation_CycleGAN_O2A_residuals.png "fig:"){width="40mm" height="36mm"} ![Correlation of Cycle-GAN (left) and Pro-GAN (right) residuals with same/cross-GAN fingerprints.[]{data-label="fig:Rho"}](figure/Correlation_ProGAN_Kitchen_residuals.png "fig:"){width="40mm" height="36mm"}

Fig.5(left) shows the histograms of the same-GAN (green) and cross-GAN (red) correlations. Cross-GAN correlations are evenly distributed around zero, indicating no correlation between generated images and the unrelated fingerprint. On the contrary, same-GAN correlations are markedly larger than zero, attesting to a significant correlation with the correct fingerprint.
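The correlation index used here is simply a normalized inner product. A minimal sketch (the naming is ours) is:

```python
import numpy as np

def corr(x, y):
    """corr(X, Y) = X~ . Y~, where X~ denotes the zero-mean,
    unit-norm version of X."""
    xt = x - x.mean()
    yt = y - y.mean()
    return float(np.sum(xt * yt) / (np.linalg.norm(xt) * np.linalg.norm(yt)))
```

By construction the index is invariant to affine rescalings of either argument, so it measures pattern similarity rather than energy.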
The behavior is very similar when GAN-B residuals are considered and the roles are reversed; see Fig.5(right). Moreover, in both cases the two distributions are well separated, allowing reliable discrimination. The corresponding receiver operating characteristic (ROC) curves are nearly perfect, with areas under the curve (AUC) of 0.990 and 0.998, respectively. We carried out similar experiments for many other GANs, differing in architecture and/or training set, always obtaining similar results. These results provide a convincing answer to our fundamental question, showing that each GAN leaves a distinctive mark on each image it generates, which can legitimately be called a fingerprint.

Source identification experiments
=================================

Let us now consider a more challenging experimental setting, carrying out larger-scale source identification tests. We consider three GAN architectures: Cycle-GAN, Pro-GAN, and Star-GAN. Cycle-GAN was proposed in [@Zhu2017] to perform image-to-image translation. The generator takes an input image from the source domain and transforms it into a new image in the target domain ([*e.g.*]{}, apples to oranges). To improve the photorealism of generated images, a cycle-consistency constraint is enforced. Here, we consider several Cycle-GAN networks, trained by the authors on different source/target domains. The second architecture, Progressive-GAN [@karras2018], uses progressively growing generator and discriminator networks to create arbitrary-size images which mimic images of the target domain. In this case too, six different target domains are considered. Like Cycle-GAN, Star-GAN [@Choi2018] performs image-to-image translation, but adopts a unified approach in which a single generator is trained to map an input image to one of multiple target domains, selectable by the user. By sharing the generator weights among different domains, a dramatic reduction in the number of parameters is achieved.
Finally, we also include two sets of images acquired by real cameras, from the RAISE dataset [@RAISE], so as to compare the behavior of real-world and GAN fingerprints. Table I lists all networks and cameras, with the corresponding abbreviations. For each dataset ${\cal A}$, we generate/take 1000 RGB images of 256$\times$256 pixels, extract the residuals, use $N$=512 of them to estimate the fingerprint $F_A$, and keep the remaining $M$=488, $\{R^A_1,\ldots,R^A_M\}$, for testing.

![Average residual-fingerprint correlations.[]{data-label="fig:correlations"}](figure/Residual-to-fingerprint_correlations.png){width="84mm" height="80mm"}

First of all, we compute the average correlation between all sets of residuals and all fingerprints, that is,
$$\langle\rho\rangle^A_B = \frac{1}{M} \sum_{i=1}^M {\rm corr}(R^A_i,F_B)$$
with $A,B$ spanning all sets. Fig.6 shows a false-color representation of all such correlations. Diagonal entries are in general much larger than off-diagonal ones, confirming that the residuals of a dataset correlate well only with the fingerprint of that same dataset, be it GAN or natural. There is also a clear block structure, showing that some (weaker) correlation exists between the residuals of a dataset and the fingerprints of “sibling” datasets associated with the same GAN architecture. This holds especially for the Star-GAN datasets, since the weights of a single generator are shared among all target domains. Finally, as expected, no significant correlation exists between real and GAN-generated images, which can easily be told apart based on their respective fingerprints. We now perform source attribution. For each image, we compute the distance between the corresponding residual and all fingerprints, attributing the image with a minimum-distance rule. In Fig.7 we show the resulting ROC curves, and in Fig.8 the confusion matrix (entries below 1% are omitted to improve readability).
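The minimum-distance attribution rule itself is a one-liner; the sketch below illustrates it on synthetic fingerprints (all names and data are illustrative, not taken from the paper's datasets):

```python
import numpy as np

def attribute(residual, fingerprints):
    """Assign a residual to the source whose fingerprint lies at
    minimum Euclidean distance, as in the minimum-distance rule.

    `fingerprints` maps source names to fingerprint arrays.
    """
    return min(fingerprints,
               key=lambda name: np.linalg.norm(residual - fingerprints[name]))
```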
Despite the 2$\times$ zooming, the ROC figure is hard to read, as all curves cluster in the upper-left corner. This, however, is itself the main message of the figure: attribution is very accurate in all cases, the only exceptions being the Star-GAN male and smiling networks. This observation is reinforced by the confusion matrix, which shows almost perfect attribution in all cases (with the same exceptions as before), with a slightly worse performance for the real cameras, characterized by a lower-energy PRNU. Since real cameras usually compress images at high quality to save storage space, we also repeated the attribution experiment after JPEG-compressing all GAN-generated images at QF=95, observing a negligible loss in accuracy, from 90.3% to 90.1%.

![One-vs.-all source identification ROCs.[]{data-label="fig:ROCs"}](figure/ROCs_zoom.png "fig:"){width="75mm" height="75mm"}

![image](figure/Identification_confusion_matrix.png){width="160mm" height="80mm"}

We conclude this Section by reporting very briefly on the results obtained in the “Forensics GAN Challenge” [@GANchallenge2018], organized in June-July 2018 by the US National Institute of Standards and Technology in the context of the Medifor program. The goal was to classify as real or GAN-generated 1000 images of widely varying resolution, from 52$\times$256 to 4608$\times$3072 pixels. As a baseline method we used a deep network trained on a large number of images retrieved from the Internet.
However, we also tested the GAN-fingerprint idea, following the scheme outlined in Fig.9. We computed fingerprints for several popular GANs and eventually identified a large cluster of 1024$\times$1024 images generated with the same GAN. This allowed us to improve the deep network's accuracy with a simple fusion rule, for a final AUC of 0.999.

![GAN-fingerprints for the Forensics Challenge.[]{data-label="fig:Challenge"}](figure/Challenge_scheme.png){width="84mm" height="50mm"}

Conclusions and future work
===========================

The goal of this work was to demonstrate the existence of GAN fingerprints and their value for reliable forensic analyses. We believe both facts are supported by sufficient experimental evidence. This answers our fundamental question, but raises many more questions and interesting topics which deserve further investigation. First of all, it is important to understand how the fingerprint depends on the network, both on its architecture (number and type of layers) and on its specific parameters (filter weights). This may allow one to improve the fingerprint quality or, from the attacker's point of view, to find ways to remove the fingerprint from generated images as a counter-forensic measure. Along the same path, our preliminary results suggest that training the same architecture with different datasets gives rise to clearly distinct fingerprints. Is this true in general? Will fine-tuning produce similar effects? From a more practical point of view, further studies are necessary to assess the potential of GAN fingerprints in multimedia forensics. Major aims, besides source identification, are the discrimination between real and GAN-generated images, and the localization of GAN-generated material spliced into real images. It is also important to study the robustness of such fingerprints to subsequent processing, such as JPEG compression, resizing, blurring, and noise addition.
Finally, it is worth assessing the dependence of performance on the number and size of the images used for fingerprint estimation, with blind attribution and clustering as an interesting limiting case.

[^1]: A nod to Philip K. Dick's novel “Do Androids Dream of Electric Sheep?”
---
abstract: |
  We report on the results of a visual search for galaxy-scale strong gravitational lenses over of *HST*/ACS imaging in the Extended Groth Strip (EGS). These deep F606W- and F814W-band observations are in the DEEP2-EGS field. In addition to a previously-known Einstein Cross also found by our search (the “Cross,” with $z_{\rm lens}=0.8106$ and a published $z_{\rm source}=3.40$), we identify two new strong galaxy-galaxy lenses with multiple extended arcs. The first (the “Dewdrop”; $z_{\rm lens}=0.5798$) lenses two distinct extended sources into two pairs of arcs ($z_{\rm source}=0.9818$ by nebular \[O[II]{}\] emission), while the second (the “Anchor”; $z_{\rm lens}=0.4625$) produces a single pair of arcs (source redshift not yet known). Four less convincing arc/counter-arc and two-image lens candidates are also found and presented for completeness. All three definite lenses are fit reasonably well by simple singular isothermal ellipsoid models including external shear, giving $\chi^2_{\nu}$ values close to unity. Using the three-dimensional line-of-sight (LOS) information on galaxies from the DEEP2 data, we calculate the convergence and shear contributions $\kappa_{\rm los}$ and $\gamma_{\rm los}$ to each lens, assuming singular isothermal sphere halos truncated at 200$h^{-1}$kpc. These are compared against a robust measure of local environment, $\delta_{3}$, a normalized density that uses the distance to the third-nearest neighbor. We find that even strong lenses in demonstrably underdense local environments may be considerably affected by LOS contributions, which in turn, under the adopted assumptions, may be underestimates of the effect of large scale structure.
author:
- 'Leonidas A. Moustakas'
- 'Phil Marshall'
- 'Jeffrey A. Newman'
- 'Alison L. Coil'
- 'Michael C. Cooper'
- 'Marc Davis'
- 'Christopher D. Fassnacht'
- 'Puragra Guhathakurta'
- 'Andrew Hopkins'
- 'Anton Koekemoer'
- 'Nicholas P. Konidaris'
- 'Jennifer M.
Lotz'
- 'Christopher N. A. Willmer'
title: 'A Strong-Lens Survey in AEGIS: the influence of large scale structure'
---

Introduction {#sec:intro}
============

Galaxy-scale gravitational lenses have many astrophysical and cosmological applications. These rely on the ability to construct robust and accurate gravitational lens models. However, the contribution of the large-scale structure along the line of sight (LOS) between the observer and the source is often unknown, though it may be significant. In particular, though lens models may detect the influence of the distorting effects of environmental shear ($\gamma$) in a preferred direction, models of even the most richly-constrained Einstein rings with *Hubble Space Telescope* images [e.g. @dye:05; @wayth:05; @koopmans:06] are still subject to the mass-sheet degeneracy due to extra field convergence ($\kappa$), which can lead to incorrect lens masses [e.g. @kochanek:04]. Indeed, lens galaxies are often massive early-type galaxies, which are generally found in groups or clusters. The most famous example is the two-image lensed QSO Q0957+561 [@walsh:79; @young:80]. The determination of $H_0$ from this system depends crucially on correctly modeling the galaxy cluster surrounding the primary lensing galaxy [e.g. @keeton:00]. Several other lens-galaxy groups and environments have been studied in detail [@kundic:97a; @kundic:97b; @tonry:98; @tonry:99; @fassnacht:02; @morgan:05; @williams:05; @momcheva:06; @fassnacht:06; @auger:06], with sometimes inconclusive results. In analyses such as that of @keeton:04, mock lens realizations show how the local environment may affect key applications of lenses. They argue that $H_0$ and $\Omega_{\Lambda}$ may be overestimated, the expected ratio of four-image to two-image lenses may be underestimated, and predictions for millilensing by dark matter substructure may be off by significant amounts.
Other theoretical work [@barkana:96; @metcalf:05; @wambsganss:05] suggests that *all* matter along a line of sight can be important. In the emerging era of large-solid-angle, densely-sampled spectroscopic surveys that may include strong lenses, both environmental and large scale structure effects can be explored quantitatively. The DEIMOS spectroscopy of the Extended Groth Strip (EGS) is particularly well suited to this task, and is employed here both to discover new strong galaxy lenses and to begin addressing quantitatively the effect of environment on their behavior. The DEEP2-EGS field, a 120$\times$30arcmin strip that is the focus of the “All-wavelength EGS International Survey” (AEGIS), includes deep CFHT $BRI$ imaging [@coil:04b] and Keck/DEIMOS spectroscopy of nearly 14000 galaxies to date. The spectroscopy is $\sim75\%$ complete to $R_{\rm AB}<24.1$. For the analysis here, we only employ the most secure redshift assignments [@coil:04a]. Deep *HST*/ACS imaging over 63 stitched tiles reaches $\Vband=28.75$ and $\Iband=28.10$ (AB, 5$\sigma$ point source; Davis et al., this issue). These data lend themselves to two different techniques for searching for heretofore-unknown gravitational lenses: spectroscopic and visual. The spectroscopic redshifts are supplemented as necessary with photometric redshifts measured from deep KPNO $UBVRI$ imaging (A. Hopkins et al., in prep). The spectroscopic approach of searching for “anomalous” emission lines in early-type spectra has some history [e.g. @warren:96], and has recently proved spectacularly successful when applied to SDSS spectroscopy [@bolton:04; @willis:05] with *HST*/ACS followup [@bolton:05; @bolton:06; @treu:06]. Explicitly spectroscopic searches for lenses in the DEEP2 data will be explored elsewhere. In the imaging domain, one may search for lens candidates either with an automated algorithm or by visual inspection [e.g. @ratnatunga:95; @zepf:97; @fassnacht:04].
The more quantitative and objective automated approach may eventually be preferred (especially for datasets larger than the one considered here), but would require a training set. The EGS ACS data described here are used for just this purpose in a separate work (Marshall et al., in prep) as a precursor to searching the entire *HST* imaging dataset.[^1] Toward that goal we have undertaken a search for lenses by purely visual inspection. The lens-search methodology is described in §\[sec:data\]. The newly discovered lenses and the modeling results are given in §\[sec:lensmod\], while measurements of the local and LOS environments of the lenses are given in §\[sec:lensenv\]. Discussion and conclusions are the subject of §\[sec:discuss\]. A concordance flat cosmology with $\Omega_{\Lambda}=1-\Omega_{\rm m}=0.7$ and $H_0=100\,h$kms$^{-1}$Mpc$^{-1}$ with $h=0.7$ is used throughout. Unless otherwise stated, all magnitudes are in the AB system.

Lens-search methodology {#sec:data}
=======================

The search for gravitational lens candidates was conducted by eye. Three-color images of all of the ACS tiles were built following the @lupton:04 algorithm, using the photometric zeropoints to provide the relative scale factors, and using the mean of the F606W and F814W images for the green channel. The full ACS dataset was inspected repeatedly in the color images at full resolution, with plausible candidates classified with grades of “A” or “B” and marked for further inspection. Object coordinates were then matched against the DEEP2 spectroscopic catalog, which includes a “serendipitous feature” flag, for possible anomalous, higher-redshift emission lines. Emission from a source behind the Dewdrop lens (described below) was found in this way.

Lenses & Models {#sec:lensmod}
===============

In addition to a previously known Einstein Cross, we find two new unambiguous strong galaxy-galaxy lenses (Fig. \[fig:lenses\]). Four additional plausible lens candidates (Fig.
\[fig:otherlenses\]) are also reported. Here we describe the lens modeling and the model results for each lens.

Lens modeling and source reconstruction
---------------------------------------

The lensed sources in the EGS all appear blue and extended, and are likely star-forming galaxies at high redshift ($z \sim 1$). We therefore take the image pixels as our data (rather than simply image-centroid positions), and predict the image by ray tracing forward from the source plane, followed by a PSF convolution. We first subtract the lens galaxy light using a tilted 2D Moffat profile,[^2] and mask the very center of the lens galaxy, where some residual flux remains. It is important that the unmasked region contain not only the lensed images but also the clean pixels that do *not* have lensed features. These clean pixels contain at least as much information as the ones with lensed flux, vetoing models that predict images where there are none. For the projected mass profile of the lens we adopt a singular isothermal ellipsoid [SIE; @kormann:94] model, plus an external shear component. Using a Markov chain Monte Carlo procedure presented in detail elsewhere (Marshall et al., in prep), the position, ellipticity, orientation and mass of the lens, the external shear amplitude and direction, and the position, ellipticity, orientation and Sersic profile parameters of the source are all fit to the data. Since we are interested in accurate estimation of the lens environment, we apply a prior on the orientation of the lens ellipticity to reflect the expected correlation with the lens light [e.g. @koopmans:06].

(A1 – Cross)
------------

This lens was originally discovered by @ratnatunga:95 through visual inspection of the *HST*/WFPC2 Medium Deep Survey (MDS) data. The lens redshift is $z_{\rm lens}=0.8106$ (Table \[tab:lenses\]), and the source is at $z_{\rm source}=3.4$ [@crampton:96].
The large Einstein radius $\theta_{\rm E}=1.447$arcsec and the four-image configuration require a large enclosed mass and a significant amount of external shear, $\gamma_{\rm mod}=0.080$, a result consistent with @treu:04. The best-fit model shows very small residuals at the two outer images, a feature corrected by @treu:04 with a potential gradient that is presumably associated with a nearby structure. The mass and external shear are not affected by this correction.

(A2 – Dewdrop)
--------------

The Dewdrop lens at $z_{\rm lens}=0.5798$ lenses two distinct sources into two pairs of arcs. The Keck/DEIMOS spectrum of the system reveals anomalous \[O[II]{}\] nebular emission at $z_{\rm source}=0.9818$ (Fig. \[fig:dewspec\]). The sources in the Dewdrop system are part of a remarkable irregular and loose association of star formation knots and diffuse emitting material that extends over more than 10arcsec, or more than 80kpc comoving in size.

(A3 – Anchor)
-------------

The Anchor system exhibits a pair of arcs created by a lens at a redshift of $z_{\rm lens}=0.4625$. The best-fitting lens model requires a significant external shear contribution (see Table \[tab:lenses\]), as might be expected from the position and shape of the counter-image to the main arc.

Additional lens candidates
--------------------------

In Fig. \[fig:otherlenses\] and Table \[tab:lenses\] we identify four additional visually-identified lens candidates. Only two of the four presently have measured redshifts, and further spectroscopic followup is required. These are presented for completeness, and do not affect the scope or results of this paper.

The Environments of the Lenses {#sec:lensenv}
==============================

We explore the environments of the lenses in two different ways. The first makes use of a relatively unbiased measure of the very local environment of any one galaxy, dubbed $\delta_{3}$ and explored in detail in @cooper:05 [@cooper:06].
This parameter is derived from the distance to the third-nearest neighbor among the galaxies within 1000kms$^{-1}$ along the line of sight, and scales as the inverse cube of this distance. Thus, more concentrated environments have larger values of $\delta_3$. The typical uncertainties in individual measures of $\delta_{3}$ are $\sim0.5$dex. We only compute this measure for galaxies with spectroscopic redshifts. As a second probe of the lens environment, we model the contribution to the lensing potential from individual neighboring galaxies using simple analytic mass distributions. We calculate the line-of-sight convergence $\kappa_{\rm los}$ and shear $\gamma_{\rm los}$ contributed by all galaxies within a projected separation of 200$h^{-1}$kpc from the lens galaxies, out to the redshift of the source. We treat each galaxy as an isolated halo, undoubtedly neglecting the effect of group halos and other structures. Assuming that we can approximate each galaxy $i$ as a singular isothermal sphere (SIS), we have $\kappa_i = b_i / 2 r_i$, where $r$ is the projected distance from the lens and $b$ is the “lens strength” for a background source at angular diameter distances of $D_{\rm s}$ from the observer and $D_{\rm ls}$ from the lens,
$$b = 4\pi\left( {{\sigma_{\rm dm}}\over{c}} \right)^2 {{D_{\rm ls}}\over{D_{\rm s}}}.$$
The central dark matter velocity dispersion $\sigma_{\rm dm}$ of each galaxy is assumed to be the same as the central stellar velocity dispersion, which is derived from the estimated rest-frame $B$-band (Vega) magnitude of each galaxy using the Faber-Jackson relation as given in @mitchell:05 (see also @jonsson:06). (We neglect the scatter in this relation.) The total shear contribution is the “headless-vector” sum of the shears, $\bar{\gamma}_{\rm los}=\Sigma\bar{\gamma_{i}}$, while the total convergence is a scalar sum: $\kappa_{\rm los}=\Sigma\kappa_{i}$.
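Under these SIS assumptions, the per-galaxy terms and their sums can be sketched as follows. Shears add as spin-2 “headless vectors” (components weighted by $\cos 2\phi$ and $\sin 2\phi$, with $\phi$ the position angle of each perturber), while convergences add as scalars; for an SIS perturber the shear magnitude equals the convergence, $|\gamma_i| = \kappa_i = b_i/2r_i$. All function names and input values below are illustrative, not from the paper:

```python
import numpy as np

C_KMS = 2.998e5  # speed of light in km/s

def sis_strength(sigma_kms, dls_over_ds):
    """SIS lens strength b = 4*pi*(sigma/c)^2 * D_ls/D_s (in radians)."""
    return 4.0 * np.pi * (sigma_kms / C_KMS) ** 2 * dls_over_ds

def los_contributions(galaxies):
    """Sum kappa (scalar) and gamma (spin-2 'headless vector') over
    perturbing galaxies. Each entry: (sigma_kms, D_ls/D_s, r, phi),
    with r the projected angular separation (same units as b) and
    phi the position angle in radians."""
    kappa = g1 = g2 = 0.0
    for sigma, dratio, r, phi in galaxies:
        b = sis_strength(sigma, dratio)
        k = b / (2.0 * r)            # kappa_i = b_i / (2 r_i) = |gamma_i| for SIS
        kappa += k
        g1 += k * np.cos(2.0 * phi)  # spin-2 components: angle doubled
        g2 += k * np.sin(2.0 * phi)
    return kappa, np.hypot(g1, g2)
```

The doubled angle captures the headless-vector behavior: two equal shears at position angles 90° apart cancel, while their convergences still add.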
It is worth noting that if the profiles are steeper than SIS at large radii (as for NFW), the overall convergence contribution will be smaller relative to the shear. These measurements are given in Table \[tab:lenses\] and discussed in the last section.

Discussion & Conclusions {#sec:discuss}
========================

The number of definite lenses reported here is consistent with other surveys. For example, @bolton:06 find that $\sim$0.1% of luminous red galaxies are very likely to be strong galaxy-galaxy lenses, although special lines of sight can have much higher lensing rates [e.g. @fassnacht:06b]. This rate suggests that there should be $\sim$4 strong lenses in this survey, a good match to our three. The main conclusions of this work can be drawn from an examination of Fig. \[fig:drho\]. The Cross is in a fairly overdense local environment, which is consistent with this lens being associated with the $z\approx0.8$ sheet described in @koo:96 and @im:02. Given this, the shear of $\sim$10% required by the model (see also @treu:04) seems plausible. What seems surprising is that even though both the Dewdrop and Anchor lenses are in locally *under*-dense environments, they still require relatively large shear contributions to produce good fits. In all three cases, we also note the large discrepancy between the modeled and LOS-predicted shear values. To explore this, we ran lens models with the external shear amplitude and orientation *restricted to the predicted values*, and then examined the resulting models, particularly the fit $\chi_{\nu}^2$. All three new models require lenses with much higher ellipticity than the light suggests, though for the Dewdrop and the Anchor the formal $\chi_{\nu}^2$ remains plausible given the constraints, $\chi_{\nu}^2=1.02$ and $1.04$ (underfit by $\sim$1$\sigma$ and $\sim$2$\sigma$), respectively. The new Cross fit, however, is strongly ruled out, with $\chi_{\nu}^2=2.00$ ($\sim$75$\sigma$).
This suggests that, at least in this case, the inferred LOS influence of SIS dark matter halos is insufficient, and that the large scale structure “sheet” must have an important additional effect. Our conclusions may be summarized as follows:

1. We have discovered two new strong galaxy-galaxy lenses by visual inspection, with reasonable lens models and source reconstructions.

2. These lenses are drawn from a range of local-density environments, which do not necessarily reflect the influence of unassociated large scale structure.

3. In at least the case of the Cross, the known large scale structure sheet at the redshift of the lens, which is not formally accounted for in the LOS calculation, has a demonstrable effect on the lens model.

We thank Maruša Bradač for discussions. LAM thanks Russell Mirabelli for expert assistance with a GIMP script facilitating the inspection of the ACS data, and UC Berkeley and UC Santa Cruz for their frequent hospitality during the course of this work. The work of LAM was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. The work of PJM was supported in part by the U.S. Department of Energy under contract number DE-AC02-76SF00515. JAN and ALC are supported by NASA through the *Hubble* Fellowship grants HF-011065.01-A and HF-01182.01-A, respectively.

Auger, M. W., Fassnacht, C. D., Abrahamse, A. L., Lubin, L. M., & Squires, G. K. 2006, astro-ph/0603448

Barkana, R. 1996, ApJ, 468, 17

Bolton, A. S., Burles, S., Koopmans, L. V. E., Treu, T., & Moustakas, L. A. 2005, ApJL, 624, 21

—. 2006, ApJ, 0, 0

Bolton, A. S., Burles, S., Schlegel, D. J., Eisenstein, D. J., & Brinkmann, J. 2004, AJ, 127, 1860

Coil, A. L., Newman, J. A., Kaiser, N., Davis, M., Ma, C.-P., Kocevski, D. D., & Koo, D. C. 2004, ApJ, 617, 765

Coil, A. L., et al. 2004, ApJ, 609, 525

Cooper, M. C., Newman, J. A., Madgwick, D. S., Gerke, B. F., Yan, R., & Davis, M. 2005, ApJ, 634, 833

Cooper, M. C., et al. 2005, ArXiv Astrophysics e-prints

Crampton, D., Le Fevre, O., Hammer, F., & Lilly, S. J. 1996, A&A, 307, L53

Dye, S., & Warren, S. J. 2005, ApJ, 623, 31

Fassnacht, C. D., Gal, R. R., Lubin, L. M., McKean, J. P., Squires, G. K., & Readhead, A. C. S. 2006, ApJ, 642, 30

Fassnacht, C. D., & Lubin, L. M. 2002, AJ, 123, 627

Fassnacht, C. D., Moustakas, L. A., Casertano, S., Ferguson, H. C., Lucas, R. A., & Park, Y. 2004, ApJL, 600, L155

Fassnacht, C. D., et al. 2006, ArXiv Astrophysics e-prints

Im, M., et al. 2002, ApJ, 571, 136

Jönsson, J., Dahlén, T., Goobar, A., Gunnarsson, C., Mörtsell, E., & Lee, K. 2006, ApJ, 639, 991

Keeton, C. R., Falco, E. E., Impey, C. D., Kochanek, C. S., Lehár, J., McLeod, B. A., Rix, H.-W., Muñoz, J. A., & Peng, C. Y. 2000, ApJ, 542, 74

Keeton, C. R., & Zabludoff, A. I. 2004, ApJ, 612, 660

Kochanek, C. S. 2004, astro-ph/0407232

Koo, D. C., et al. 1996, ApJ, 469, 535

Koopmans, L. V. E., Treu, T., Bolton, A. S., Burles, S., & Moustakas, L. A. 2006, ApJ, 0, 0

Kormann, R., Schneider, P., & Bartelmann, M. 1994, A&A, 284, 285

Kundić, T., Cohen, J. G., Blandford, R. D., & Lubin, L. M. 1997, AJ, 114, 507

Kundić, T., Hogg, D. W., Blandford, R. D., Cohen, J. G., Lubin, L. M., & Larkin, J. E. 1997, AJ, 114, 2276

Lupton, R., Blanton, M. R., Fekete, G., Hogg, D. W., O'Mullane, W., Szalay, A., & Wherry, N. 2004, PASP, 116, 133

Metcalf, R. B. 2005, ApJ, 629, 673

Mitchell, J. L., Keeton, C. R., Frieman, J. A., & Sheth, R. K. 2005, ApJ, 622, 81

Momcheva, I., Williams, K., Keeton, C., & Zabludoff, A. 2006, ApJ, 641, 169

Morgan, N. D., Kochanek, C. S., Pevunova, O., & Schechter, P. L. 2005, AJ, 129, 2531

Ratnatunga, K. U., Ostrander, E. J., Griffiths, R. E., & Im, M. 1995, ApJL, 453, 5

Tonry, J. L. 1998, AJ, 115, 1

Tonry, J. L., & Kochanek, C. S. 1999, AJ, 117, 2034

Treu, T., Koopmans, L. V., Bolton, A. S., Burles, S., & Moustakas, L. A. 2006, ApJ, 640, 662

Treu, T., & Koopmans, L. V. E. 2004, ApJ, 611, 739

Walsh, D., Carswell, R. F., & Weymann, R. J. 1979, Nature, 279, 381

Wambsganss, J., Bode, P., & Ostriker, J. P. 2005, ApJL, 635, L1

Warren, S. J., Hewett, P. C., Lewis, G. F., Moller, P., Iovino, A., & Shaver, P. A. 1996, MNRAS, 278, 139

Wayth, R. B., Warren, S. J., Lewis, G. F., & Hewett, P. C. 2005, MNRAS, 360, 1333

Williams, K. A., Momcheva, I., Keeton, C. R., Zabludoff, A. I., & Lehar, J. 2005, astro-ph/0511593

Willis, J. P., Hewett, P. C., & Warren, S. J. 2005, MNRAS, 363, 1369

Young, P., Gunn, J. E., Oke, J. B., Westphal, J. A., & Kristian, J. 1980, ApJ, 241, 507

Zepf, S. E., Moustakas, L. A., & Davis, M. 1997, ApJL, 474, L1

[^1]: 

[^2]: The Moffat function is a modified Lorentzian with a variable power-law index. The fit is done with the [MPFIT]{} IDL suite of C. Markwardt.
--- author: - | [^1]\ Ottawa-Carleton Institute for Physics\ Department of Physics\ Carleton University\ 1125 Colonel By Drive, Ottawa, K1S 5B6, Canada\ E-mail: title: New Results on Solar Neutrinos ---

Introduction
============

The deficit of neutrinos detected coming from the Sun compared with expectations based on laboratory measurements, known as the Solar Neutrino Problem, remained one of the outstanding problems in basic physics for over thirty years. It appeared inescapable that either our understanding of the energy-producing processes in the Sun was seriously defective, or neutrinos, among the fundamental particles of the Standard Model, had important properties that were yet to be identified. Some indeed argued that we needed to change our ideas about how energy is produced in fusion reactions inside the Sun; others suggested that the problem arose from peculiar characteristics of neutrinos, such as oscillations and matter effects. It is therefore useful to review the evolution of our understanding through the data collected by the various solar neutrino experiments. For completeness, near-term prospects are included in the discussion presented here.

Solar Neutrinos
===============

The detailed prediction of the electron neutrino flux created by the thermonuclear reactions in the interior of the Sun was performed by John Bahcall and his collaborators (see, for example, [@bib:BP]). Their calculations are referred to as the Standard Solar Model (SSM). In this paper, the SSM calculations are used to compare experimental results with theoretical predictions. The relevant point for the discussion that follows is that the SSM theoretical uncertainties on many solar fluxes have been reduced [@bib:BPS08]. As a result, the errors on the $^7Be$, $pep$, $^8B$ and $hep$ fluxes are now $\pm 6\%$, $\pm 1.1\%$, $\pm 11\%$ and $\pm 16\%$, respectively.
Chlorine Experiment
===================

The exploration of solar neutrinos started in the mid-1960s with Ray Davis [@bib:cl], whose team carried out the first experiment to successfully detect neutrinos coming from the Sun, deep underground in the Homestake mine in the US. The detector was based on a concept first proposed by Bruno Pontecorvo at Chalk River in 1946, in which neutrino reactions on chlorine are measured. Neutrinos striking chlorine can produce an isotope of argon through the reaction $$\nu_e \,+\, ^{37}Cl \to e^- \,+\, ^{37}Ar \, ,$$ with an energy threshold of 0.814 MeV. The first results were announced in 1968. The chlorine experiment took data until 1995 and clearly showed argon atoms produced by neutrinos, but the predicted production rate was about three times the measured value [@bib:cleveland]: $$\Phi_{\rm{Cl}} = 2.56 \pm 0.23 \mbox{~SNU} \, ,$$ while the SSM predicted rate was about $\Phi_{\rm{Cl}}({\rm{SSM}}) = 7.6$ SNU. A SNU (Solar Neutrino Unit) corresponds to one capture per second per $10^{36}$ target atoms; a rate in SNU is the product of the solar neutrino fluxes (measured or calculated) and the calculated cross sections.

Gallium Experiments
===================

While the chlorine detector was mainly sensitive to the highest energy neutrinos, two gallium experiments, one at the Baksan laboratory [@bib:sage] in Russia and one at the Gran Sasso laboratory [@bib:gno] in Italy, were set up to test the oscillation hypothesis at lower energy. Like the $^{37}Cl$ detector, the gallium detectors could only detect electron-type neutrinos, because they looked for the reaction $$\nu_e \,+\, ^{71}Ga \to e^- \,+\, ^{71}Ge \, .$$ The energy threshold of the $^{71}Ga$ detectors is 0.233 MeV, which allows the interaction of $pp$, $^7Be$, $^8B$, and $pep$ neutrinos.
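As a rough illustration of the SNU definition above, a rate in SNU converts directly into an expected capture rate; note that the $\sim$2.2$\times$10$^{30}$ $^{37}Cl$ target-atom count used below for the Homestake tank is an assumed round figure for illustration, not a value from the text:

```python
SECONDS_PER_DAY = 86400.0

def snu_to_captures_per_day(rate_snu, n_target_atoms):
    """Convert a rate in SNU (captures per second per 1e36 target atoms)
    into captures per day for a given number of target atoms."""
    return rate_snu * 1e-36 * n_target_atoms * SECONDS_PER_DAY

# Assumed ~2.2e30 Cl-37 atoms in the Homestake tank (illustrative only).
n_cl37 = 2.2e30
measured = snu_to_captures_per_day(2.56, n_cl37)  # observed rate from the text
predicted = snu_to_captures_per_day(7.6, n_cl37)  # SSM rate from the text
```

With these inputs the measured rate comes out to roughly half a capture per day, which conveys why the radiochemical experiments required month-long exposures and single-atom extraction chemistry.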
The Russian-American group (SAGE) used a liquid metal target containing 50 tons of gallium, while the European group (GALLEX/GNO) used 30 tons of natural gallium in an aqueous acid solution. Small proportional counters were used to count the germanium extracted from the radiochemical target. The $^{71}Ge$ electron-capture decay occurs with a mean-life of 16.5 days. The Auger electrons and X-rays produce the typical L-peak and K-peak energy distribution; as a cross-check, both peaks are counted separately. Both experiments found about half of the expected rate. The most recent results of SAGE and GALLEX/GNO yield [@bib:ga] $$\Phi_{\rm{Ga}} = 66.1 \pm 3.1 \mbox{~SNU} \, .$$ The data are incompatible with the SSM, since the expected rate is about $\Phi_{\rm{Ga}}({\rm{SSM}}) = 129$ SNU.

Kamiokande and SuperKamiokande
==============================

Following the first observations from the Cl experiment, the obvious first priority was an experimental confirmation of the solar-neutrino deficit. This was provided in 1987 by the Kamiokande water Čerenkov detector [@bib:kamioka] in Japan, which also saw a significant (but, interestingly, not identical) suppression of the measured rate of neutrinos from the Sun. The Kamiokande Collaboration demonstrated that the neutrinos are actually coming from the direction of the Sun by reconstructing the direction of flight of the incident neutrinos from the neutrino-electron scattering (ES) reaction $\nu_x \,+\, e^- \to \nu_x \,+\, e^-$. Light water detectors are mainly sensitive to $\nu_e$, but also to $\nu_{\mu}$ and $\nu_{\tau}$, with $\sigma(\nu_{\mu\tau} \, e^- \to \nu_{\mu\tau} \, e^-) \simeq 0.15 \times \sigma(\nu_e \, e^- \to \nu_e \, e^-)$. The follow-up of the Kamiokande project is the SuperKamiokande (SK) experiment [@bib:SK1], built to investigate in more detail the nature of atmospheric and solar neutrino oscillations.
The SK detector is a huge cylindrical tank, 40 m in diameter and 40 m high, filled with 50,000 tons of ultra-pure light water. It operates at an energy threshold that permits the study of the $^8B$ neutrinos. The detector is divided into an outer detector, which vetoes incoming cosmic-ray muons and shields against external low-energy background, and an inner detector (32,000 tons, of which 22,500 tons is the active fiducial volume) viewed by 11,146 PMTs. As in Kamiokande, solar neutrinos are observed by detecting Čerenkov photons emitted by the electrons resulting from ES events. The event rate was about 15 events per day, substantially larger than the rate in the radiochemical experiments. The SK-I data allowed measurements of the time dependence of the ES rate, leading to the measurement of the day/night rate asymmetry [@bib:SK1] $$A_{\rm{DN}} = 2 \frac{\Phi_{\rm{D}}-\Phi_{\rm{N}}}{\Phi_{\rm{D}}+\Phi_{\rm{N}}} = -0.021 \pm 0.020 \, ^{+0.013}_{-0.012} \, ,$$ and to a precise determination of the ES neutrino rate $\Phi_{\rm{ES}} = (2.35 \pm 0.02 \pm 0.08) \times 10^6~{\rm{cm}}^{-2} {\rm{s^{-1}}}$ [@bib:SK1]. The energy spectrum of the recoil electrons agrees well, within experimental errors, with that predicted from the neutrino spectrum of $^8B$ beta decay. The measured absolute flux, however, is about 41% of that predicted by the SSM. The results of the second phase of SK (SK-II) are consistent with those of SK-I: the measured ES neutrino rate is $\Phi_{\rm{ES}}= (2.38 \pm 0.05 \, _{-0.15}^{+0.16}) \times 10^6~{\rm{cm}}^{-2} {\rm{s^{-1}}}$ [@bib:SK2], while the day/night difference is $A_{\rm{DN}} = -0.063 \pm 0.042 \pm 0.037$ [@bib:SK2].

Borexino
========

The Borexino detector [@bib:bxdetectorpaper] is located in Hall C of the Gran Sasso Laboratory.
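The day/night asymmetry defined above is simply the day-minus-night rate difference normalized to the average rate. A minimal sketch (the example fluxes below are illustrative round numbers, not SK fit values):

```python
def day_night_asymmetry(phi_day, phi_night):
    """A_DN = 2 (Phi_D - Phi_N) / (Phi_D + Phi_N).

    Negative values mean the night-time rate exceeds the day-time rate,
    as expected if electron neutrinos passing through the Earth are
    partly regenerated by matter effects."""
    return 2.0 * (phi_day - phi_night) / (phi_day + phi_night)

# Illustrative fluxes in units of 1e6 cm^-2 s^-1 (not actual SK values):
a_dn = day_night_asymmetry(2.33, 2.38)
```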
It is a 300-ton liquid-scintillator detector with 100 tons of active fiducial mass in an 8.3 m diameter spherical nylon bag, surrounded by a 2.6 m thick spherical shell filled with buffer oil. The liquid scintillator and buffer liquid are viewed by 2,240 PMTs mounted inside a 13.5 m diameter stainless steel tank, which is in turn surrounded by an 18 m spherical tank filled with ultra-pure light water acting as a radiation shield. Solar neutrinos are detected in Borexino through their elastic scattering on electrons in the scintillator. Electron neutrinos ($\nu_e$) interact through charged and neutral currents, and in the energy range of interest have a cross section $\sim$5 times larger than $\nu_\mu$ and $\nu_\tau$, which interact only via the neutral current. The electrons scattered by neutrinos are detected by means of the scintillation light. Borexino is the first solar neutrino experiment to report a real-time observation of low-energy $^7Be$ neutrinos. The signature of the mono-energetic $^7Be$ neutrinos in Borexino is the Compton-like edge of the recoil electrons at 665 keV. Borexino reported a direct measurement of the 0.862 MeV $^7Be$ solar neutrinos from an analysis of 192 live days in the period from May 16, 2007 to April 12, 2008, totaling a 41.3 ton$\cdot$yr fiducial exposure to solar neutrinos. It yields an interaction rate of 49$\pm$3$_{\rm stat}$$\pm$4$_{\rm syst}$ counts/(day$\cdot$100 ton) [@bib:bxBe7].

Sudbury Neutrino Observatory
============================

The Sudbury Neutrino Observatory (SNO) was a 1,000-ton heavy-water Čerenkov detector [@bib:snonim] situated 2 km underground in INCO’s Creighton mine in Canada. Another 7,000 tons of ultra-pure light water was used for support and shielding.
The heavy water was held in an acrylic vessel (12 m in diameter and 5 cm thick) viewed by 9,456 PMTs mounted on a geodesic structure 18 m in diameter, all contained within a polyurethane-coated barrel-shaped cavity (22 m in diameter by 34 m high). The SNO detector was filled with water in May 1999, and data taking ended in 2006. The solar-neutrino detectors in operation prior to SNO were mainly sensitive to the electron neutrino type, while the use of heavy water by SNO allowed the flux of all three neutrino types to be measured. Neutrinos from $^8B$ decay in the Sun were observed in SNO through Čerenkov processes following three reactions: (i) the charged-current (CC) reaction, specific to electron neutrinos, $d \,+\, \nu_e \to p \,+\, p \,+\, e^-$. This reaction has a Q value of 1.4 MeV, and the electron energy is strongly correlated with the neutrino energy, providing potential sensitivity to spectral distortions; (ii) the neutral-current (NC) reaction, equally sensitive to all non-sterile neutrino types ($x=e, \mu, \tau$), $\nu_x \,+\, d \to n \,+\, p \,+\, \nu_x$. This reaction has a threshold of 2.2 MeV and is observed through the detection of neutrons, by three different techniques in the separate phases of the experiment; and (iii) the elastic-scattering (ES) reaction $\nu_x \,+\, e^- \to \nu_x \,+\, e^-$. This reaction has a substantially lower cross section than the other two and, as mentioned before, is predominantly sensitive to electron neutrinos. The relations $\Phi_{\rm{CC}}= \phi_e$, $\Phi_{\rm{ES}}= \phi_e + 0.15 \phi_{\mu\tau}$ and $\Phi_{\rm{NC}}= \phi_e + \phi_{\mu\tau}$ gave SNO the status of an appearance experiment. The SNO experimental plan called for three phases, with different techniques employed for the detection of neutrons from the NC reaction. During the first phase, with pure heavy water, neutrons were observed through the Čerenkov light produced when they were captured on deuterium, producing 6.25 MeV gammas.
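The flux relations above can be inverted to separate the flavor components: $\phi_e = \Phi_{\rm CC}$ and $\phi_{\mu\tau} = \Phi_{\rm NC} - \Phi_{\rm CC}$, with the ES rate serving as a consistency check. A sketch with illustrative inputs (the CC and NC values below are round numbers chosen for the example, not fit results quoted in the text):

```python
def decompose_fluxes(phi_cc, phi_nc):
    """Invert Phi_CC = phi_e and Phi_NC = phi_e + phi_mutau."""
    phi_e = phi_cc
    phi_mutau = phi_nc - phi_cc
    return phi_e, phi_mutau

def predicted_es(phi_e, phi_mutau):
    # The ES cross section for mu/tau flavors is ~0.15 of the electron one.
    return phi_e + 0.15 * phi_mutau

# Illustrative CC and NC fluxes in units of 1e6 cm^-2 s^-1:
phi_e, phi_mutau = decompose_fluxes(1.76, 5.09)
es = predicted_es(phi_e, phi_mutau)  # to be compared with a measured ES rate
```

A significantly nonzero $\phi_{\mu\tau}$ is precisely what gives SNO its appearance-experiment status.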
For the second phase, about 2 tons of $NaCl$ were added to the heavy water, and neutron detection was enhanced through capture on $Cl$, with about 8.6 MeV of gamma energy released. For the third phase, the salt was removed and an array of $^3He$-filled proportional counters was installed to provide direct detection of neutrons. SNO reported results from a joint analysis of Phase I and Phase II data [@bib:snoleta]. The effective electron kinetic energy threshold used is 3.5 MeV, the lowest analysis threshold yet achieved with water Čerenkov detector data. Overall, the low threshold increased the statistics of the CC and ES events by roughly 30%, and of NC events by $\sim$70%. In units of $10^6$ cm$^{-2}$ s$^{-1}$, a fit in which the free parameters directly describe the total $^8B$ neutrino flux and the energy-dependent $\nu_e$ survival probability gives a total $^8B$ neutrino flux $\Phi_{^8B} = 5.046 \, ^{+0.159}_{-0.152} \, ^{+0.107}_{-0.123}$. The uncertainty on $\Phi_{^8B}$ has been significantly reduced compared to previously published SNO results. The fit for the survival probability assumes unitarity of the neutrino mixing matrix, and that the underlying neutrino spectrum follows a smoothly-distorted $^8B$ shape. The day survival probability extracted by SNO is parameterized as a second-order polynomial $P_{ee}^{\rm day}(E_\nu) = c_0 + c_1 (E_\nu - 10\;{\rm MeV}) + c_2 (E_\nu - 10\;{\rm MeV})^2$, while allowing for a linear energy-dependent asymmetry between day and night spectra, $A(E_\nu) = a_0 + a_1(E_\nu - 10\;{\rm MeV})$. The clear deviation from unity of the constant term of the day survival probability, $c_0 = 0.3435 \, ^{+0.0205}_{-0.0197} \, ^{+0.0111}_{-0.0066} \, ^{+0.0050}_{-0.0059}$, provides an unambiguous signature of solar neutrino oscillations [@bib:snoleta]. On the other hand, no evidence for either a significant spectral distortion or a day/night asymmetry was found.
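The parameterization above is straightforward to evaluate. In the sketch below, only the central value of $c_0$ is taken from the fit quoted in the text; $c_1$, $c_2$, $a_0$ and $a_1$ are left as free inputs and set to zero purely for illustration (i.e. no spectral distortion and no day/night asymmetry):

```python
def survival_day(e_nu, c0, c1=0.0, c2=0.0):
    """SNO day survival probability, quadratic in (E_nu - 10 MeV)."""
    x = e_nu - 10.0
    return c0 + c1 * x + c2 * x**2

def day_night(e_nu, a0=0.0, a1=0.0):
    """Linear energy-dependent day/night asymmetry A(E_nu)."""
    return a0 + a1 * (e_nu - 10.0)

# Central value of c0 from the joint Phase I+II fit quoted above;
# at the 10 MeV pivot the polynomial reduces to c0 itself.
p_10mev = survival_day(10.0, c0=0.3435)
```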
SNO also used the Phase III data to measure the rate of NC interactions in heavy water and precisely determine the total active ($\nu_{x}$) $^8B$ solar neutrino flux [@bib:snoncd]. The total solar neutrino flux is found to be $5.54 \, ^{+0.33}_{-0.31} \, ^{+0.36}_{-0.34} \times10^{6}$ cm$^{-2}$ s$^{-1}$, in agreement with the Phase I and II measurements and the SSM.

Global Fits
===========

Constraints on neutrino mixing parameters can be derived by comparing SSM-based neutrino oscillation predictions with experimental data. A three-flavor, active neutrino oscillation model has four parameters: $\theta_{12}$ and $\theta_{13}$, which quantify the strength of the mixing between flavor and mass eigenstates, and $\Delta m^2_{21}$ and $\Delta m^2_{31}$, the differences between the squares of the masses of the neutrino propagation eigenstates. Note that the remaining mixing angle, $\theta_{23}$, and the CP-violating phase, $\delta$, are irrelevant for the oscillation analysis of solar neutrino data. The value of $\Delta m^2_{31}$ was fixed to $+2.3\times 10^{-3}\ \mathrm{eV^2}$. For each set of parameters, the oscillation model was used to predict the rates in the chlorine [@bib:cleveland], gallium [@bib:ga] and Borexino [@bib:bxBe7] experiments, the Super-Kamiokande Phase I zenith spectra [@bib:SK1] and Phase II day/night spectra [@bib:SK2], the SNO Phase I and II survival probability day/night curves [@bib:snoleta], and the SNO Phase III rates [@bib:snoncd]. The expected rates and spectra were divided by the respective predictions, calculated without oscillations, to remove the effects of the model scaling factors. The resulting unitless rates were then used in a global $\chi^2$ calculation. For completeness, the rates and spectrum measured by the antineutrino reactor experiment KamLAND [@bib:kam] are added to the global fit to further constrain the solar neutrino mixing parameters. The KamLAND rates and spectrum were predicted using three-flavor vacuum oscillations. Fig.
\[f:contour-3nu\](a) shows an overlay of the global solar and the KamLAND allowed regions in $\tan^2\theta_{12}$ and $\Delta m^2_{21}$ parameter space, under a two-flavor hypothesis. Fig. \[f:contour-3nu\](b) shows the same overlay for the three-flavor hypothesis that allows the value of $\sin^2\theta_{13}$ to be non-zero. The three-flavor contours show the effect of marginalizing both $\Phi_{^8{\rm B}}$ and $\sin^2\theta_{13}$ at each point in space. ![Solar and KamLAND oscillation parameter analysis for a) a two-flavor oscillation hypothesis and b) a three-flavor hypothesis. The solar data includes Cl, SAGE, Gallex/GNO, Borexino, SK-I zenith and SK-II day/night spectra, SNO Phase I+II survival probability day/night curves and SNO Phase III integral rates. The $\chi^2$ is minimized with respect to all undisplayed parameters, including $\sin^2\theta_{13}$ and $\Phi_{^8{\rm B}}$ [@bib:snoleta].\[f:contour-3nu\] ](contour_2nuoverlay4.eps "fig:"){width="49.00000%"} ![Solar and KamLAND oscillation parameter analysis for a) a two-flavor oscillation hypothesis and b) a three-flavor hypothesis. The solar data includes Cl, SAGE, Gallex/GNO, Borexino, SK-I zenith and SK-II day/night spectra, SNO Phase I+II survival probability day/night curves and SNO Phase III integral rates. The $\chi^2$ is minimized with respect to all undisplayed parameters, including $\sin^2\theta_{13}$ and $\Phi_{^8{\rm B}}$ [@bib:snoleta].\[f:contour-3nu\] ](contour_3nuoverlay4.eps "fig:"){width="49.00000%"} Table \[t:oscpars\] summarizes the oscillation parameters from a two-flavor oscillation analysis, while Table \[t:oscpars3\] summarizes the results from a three-flavor oscillation analysis, performed in the context of a global fit [@bib:snoleta]. 
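The grid-scan procedure described above, in which the $\chi^2$ is minimized with respect to undisplayed parameters at each point in the $(\tan^2\theta_{12}, \Delta m^2_{21})$ plane, can be sketched with a fully synthetic toy. Everything here, including the mock constraints and their central values, is invented for illustration and bears no relation to the actual fit inputs:

```python
def chi2(tan2, dm2, f):
    """Toy global chi^2 with a flux-normalization nuisance parameter f.
    All target values and errors below are synthetic."""
    r1 = (f * tan2 - 2.25) / 0.10        # mock CC-like rate, degenerate in f
    r2 = (f - 5.0) / 0.25                # mock NC-like rate (flux itself)
    r3 = (dm2 / 1e-5 - 7.6) / 0.3        # mock KamLAND-like constraint
    return r1 * r1 + r2 * r2 + r3 * r3

def profiled_chi2(tan2, dm2):
    # Crude profiling: minimize over the nuisance f on a fine grid,
    # mimicking "minimized with respect to all undisplayed parameters".
    return min(chi2(tan2, dm2, 4.0 + 0.01 * k) for k in range(201))

best = min((profiled_chi2(0.40 + 0.005 * i, (7.0 + 0.05 * j) * 1e-5), i, j)
           for i in range(25) for j in range(25))
```

The minimum lands at the synthetic truth ($\tan^2\theta_{12} = 0.45$, $\Delta m^2_{21} = 7.6\times10^{-5}$); confidence contours follow from the $\Delta\chi^2$ surface around it.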
  Oscillation analysis   $\tan^2\theta_{12}$           $\Delta m^2_{21}\,(\mathrm{eV}^2)$
  ---------------------- ----------------------------- ------------------------------------------
  Solar                  $0.457\,^{+0.038}_{-0.041}$   $5.89\,^{+2.13}_{-2.16}\times 10^{-5}$
  Solar+KamLAND          $0.457\,^{+0.040}_{-0.029}$   $7.59\,^{+0.20}_{-0.21}\times 10^{-5}$

  Oscillation analysis   $\chi^2_{\mathrm{min}}/\mathrm{ndf}$   $\Phi_{^8{\rm B}}$ ($\times 10^6\,\rm cm^{-2}\,s^{-1}$)
  ---------------------- -------------------------------------- ---------------------------------------------------------
  Solar                  $67.5/89$                              $5.104\,^{+0.199}_{-0.148}$
  Solar+KamLAND          $82.8/106$                             $5.013\,^{+0.119}_{-0.148}$

  : Best-fit neutrino oscillation parameters and extracted $^8$B flux from a two-flavor oscillation analysis. Uncertainties listed are $\pm 1\sigma$ after the $\chi^2$ was minimized with respect to all other parameters.[]{data-label="t:oscpars"}

  Oscillation analysis   $\tan^2\theta_{12}$           $\Delta m^2_{21}\,(\mathrm{eV}^2)$
  ---------------------- ----------------------------- ------------------------------------------
  Solar                  $0.468\,^{+0.052}_{-0.050}$   $6.31\,^{+2.49}_{-2.58}\times 10^{-5}$
  Solar+KamLAND          $0.468\,^{+0.042}_{-0.033}$   $7.59\,^{+0.21}_{-0.21}\times 10^{-5}$

  Oscillation analysis   $\chi^2_{\mathrm{min}}/\mathrm{ndf}$   $\Phi_{^8{\rm B}}$ ($\times 10^6\,\rm cm^{-2}\,s^{-1}$)
  ---------------------- -------------------------------------- ---------------------------------------------------------
  Solar                  $67.4/89$                              $5.115\,^{+0.159}_{-0.193}$
  Solar+KamLAND          $81.4/106$                             $5.087\,^{+0.171}_{-0.159}$

  : Best-fit neutrino oscillation parameters and extracted $^8$B flux from a three-flavor oscillation analysis.
Uncertainties listed are $\pm 1\sigma$ after the $\chi^2$ was minimized with respect to all other parameters.[]{data-label="t:oscpars3"}

News and Near Future
====================

The new Borexino measurement [@bib:bxB8] of $\nu$-$e$ elastic scattering from $^8B$ solar neutrinos with a 3 MeV energy threshold is not included in the global fit of the previous section. This result is limited by statistics, but it is nevertheless relevant because it probes low-energy $^8B$ neutrinos. The rate of solar neutrino-induced electron scattering events above this energy is $0.217\pm 0.038 \pm 0.008$ counts/(day$\cdot$100 ton), which corresponds to $\Phi^{\rm ES}_{\rm ^8B}$ = [2.4 $\pm$ 0.4 $\pm$ 0.1]{}$\times$10$^6$ cm$^{-2}$ s$^{-1}$, in good agreement with the measurements from SNO and SuperKamiokande. Preliminary solar neutrino results of the third phase of SuperKamiokande, presented at this conference [@bib:skposter], have already been submitted for publication [@bib:SK3]. With improved detector calibrations, a full detector simulation, and improved analysis methods, the systematic uncertainty on the total neutrino flux has been reduced to $\pm 2.1\%$, a significant contraction compared to the first phase. The energy threshold for $^8B$ neutrino events in SK-III was also pushed lower than in SK-I. The analysis yields an ES rate of $(2.32 \pm 0.04 \pm 0.05) \times 10^{6}$ cm$^{-2}$ s$^{-1}$, in agreement with previous measurements. It is clear that the uncertainty on the solar mixing angle $\theta_{12}$ has been noticeably reduced by data analyses that use lower energy thresholds, since these enhance the experimental ability to probe the SSM prediction of energy-dependent matter-induced neutrino oscillation. Very soon, Borexino and SNO will release new results on $^7Be$ and $^8B$ neutrinos, respectively. Borexino reported at ICHEP2010 a precise day/night asymmetry measurement, $A_{\rm{DN}} = 0.007 \pm 0.073$ [@bib:bxADN].
It is foreseen that the statistical precision of Borexino will allow a detailed investigation of the mixing solutions. Further improvement is expected from SNO's final joint model-independent fit of the solar survival probability with data from Phases I, II and III.

Summary
=======

Solar neutrino oscillation is clearly established by the combination of the results from the chlorine, gallium, SK, Borexino and SNO experiments. The real-time data of SK, Borexino and SNO show neither a large spectral distortion nor a significant day/night asymmetry. SNO provided the first direct evidence of flavor conversion of solar electron neutrinos by comparing the CC and NC rates. Matter effects appear to explain the energy dependence of solar oscillation, and the Large Mixing Angle (LMA) solutions are favored. After more than thirty years of hard labor by the nuclear and particle physics community, the Solar Neutrino Problem has given way to precision measurements of neutrino oscillation parameters, with the next generation of solar neutrino and long-baseline neutrino experiments on the horizon.

Acknowledgments {#acknowledgments .unnumbered}
===============

This article builds upon the careful and detailed work of many people. The author has been financially supported in Canada by the Natural Sciences and Engineering Research Council (NSERC), the Canada Research Chair (CRC) Program, and the Canadian Foundation for Innovation (CFI). [99]{} J.N. Bahcall, M.H. Pinsonneault, and S. Basu, [*[Ap. J.]{}*]{} [**[555]{}**]{}, 990 (2001). C. Pena-Garay and A. Serenelli, arXiv:0811.2424 (2008); A. Serenelli, S. Basu, J. Ferguson, and M. Asplund, [*[Ap. J. L.]{}*]{}, [**[705]{}**]{}, L123 (2009). R. Davis, Jr., [*[Phys. Rev. Lett.]{}*]{}, 302 (1964). B.T. Cleveland [*[et al.]{}*]{}, [*[Ap. J.]{}*]{} [**[496]{}**]{}, 505 (1998). J.N. Abdurashitov [*[et al.]{}*]{}, [*[Phys. Rev.]{}*]{} C [**[60]{}**]{}, 055801 (1999). W. Hampel [*[et al.]{}*]{}, [*[Phys. Lett.]{}*]{} B [**[447]{}**]{}, 127 (1999); M.
Altmann [*[et al.]{}*]{}, [*[Phys. Lett.]{}*]{} B [**[490]{}**]{}, 16 (2000). J.N. Abdurashitov [*[et al.]{}*]{}, [*[Phys. Rev.]{}*]{} C [**[80]{}**]{} 015807 (2009). Y. Suzuki [*[et al.]{}*]{}, [*[Nucl. Phys.]{}*]{} (Proc. Suppl.) [**[34]{}**]{}, 54 (1995). J. Hosaka [*[et al.]{}*]{}, [*[Phys. Rev.]{}*]{} D [**[73]{}**]{}, 112001 (2006). J. P. Cravens [*[et al.]{}*]{}, [*[Phys. Rev.]{}*]{} D [**[78]{}**]{} 032002 (2008). G. Alimonti [*[et al.]{}*]{}, [*[Nucl. Inst. Meth.]{}*]{} A [**[600]{}**]{}, 568 (2009). C. Arpesella [*[et al.]{}*]{}, [*[Phys. Rev. Lett.]{}*]{} [**[101]{}**]{}, 091302, (2008). J. Boger [*[et al.]{}*]{}, [*[Nucl. Inst. Meth.]{}*]{} A [**[449]{}**]{}, 172 (2000). B. Aharmim [*[et al.]{}*]{}, [*[Phys. Rev.]{}*]{} C [**[81]{}**]{}, 055504 (2010). B. Aharmim [*[et al.]{}*]{}, [*[Phys. Rev. Lett.]{}*]{} [**[101]{}**]{}, 111301 (2008). S. Abe [*[et al.]{}*]{}, [*[Phys. Rev. Lett.]{}*]{} [**[100]{}**]{}, 221803 (2008). G. Bellini [*[et al.]{}*]{}, [*[Phys. Rev.]{}*]{} D [**[82]{}**]{}, 033006 (2010). Hiroyuki Sekiya in these proceedings. S. Abe [*[et al.]{}*]{}, submitted to PRD, arXiv:1010.0118 (2010). Sandra Zavatarelli in these proceedings. [^1]: On behalf of the SNO Collaboration.
--- abstract: 'In this work, we consider expressions for the masses and decay constants of the pseudoscalar mesons in $SU(3)$ chiral perturbation theory. These involve sunset diagrams and their derivatives evaluated at $p^2=m_P^2$ ($P=\pi, K, \eta$). Recalling that there are three mass scales in this theory, $m_\pi$, $m_K$ and $m_\eta$, there are instances when the finite parts of the sunset diagrams do not admit expressions in terms of elementary functions, and have therefore been evaluated numerically in the past. In a recent publication, an expansion in the external momentum was performed to obtain approximate analytic expressions for $m_\pi$ and $F_\pi$, the pion mass and decay constant. We provide fully analytic exact expressions for $m_K$ and $m_\eta$, the kaon and eta masses, and $F_K$ and $F_\eta$, the kaon and eta decay constants. These expressions, calculated using Mellin-Barnes methods, are in the form of double series in terms of two mass ratios. A numerical analysis of the results, evaluating the relative size of contributions coming from loops, chiral logarithms and phenomenological low-energy constants, is presented. We also present a set of approximate analytic expressions for $m_K$, $F_K$, $m_\eta$ and $F_\eta$ that facilitate comparisons with lattice results. Finally, we show how exact analytic expressions for $m_\pi$ and $F_\pi$ may be obtained, the latter having been used in conjunction with the results for $F_K$ to produce a recently published analytic representation of $F_K/F_\pi$.'
--- LU TP 18-04\ 17th April 2018 [**Analytic representations of $m_K$, $F_K$, $m_\eta$ and $F_\eta$ in two loop $SU(3)$ chiral perturbation theory**]{} \ [$^a$ Centre for High Energy Physics\ Indian Institute of Science, Bangalore-560012, Karnataka, India]{}\ [$^b$ Department of Astronomy and Theoretical Physics\ Lund University, Sölvegatan 14A, SE 223-62 Lund, Sweden]{}\ [$^c$ Institut de Physique Nucléaire d’Orsay\ Université Paris-Sud 11, IN2P3-CNRS, F-91405 Orsay Cedex, France]{}\ [$^d$ Institut de Physique Nucléaire de Lyon\ Université Lyon 1, IN2P3-CNRS, F-69622 Villeurbanne Cedex, France]{} Introduction ============ In a recent publication, the important ratio $F_K/F_\pi$ was evaluated in a scheme that allows for the derivation of compact analytic approximations in two loop chiral perturbation theory (ChPT) [@Ananthanarayan:2017qmx], based on the Mellin-Barnes (MB) approach detailed in [@Ananthanarayan:2016pos]. In a prior work, a different scheme was employed to obtain analytic approximations of $m_\pi$ and $F_\pi$ in $SU(3)$ ChPT at two-loops [@Ananthanarayan:2017yhz]. Recall that ChPT is an effective field theory for the pseudo-scalar octet degrees of freedom, namely the pions, kaons and eta. At one-loop order, this theory was elucidated in [@Gasser:1983yg; @Gasser:1984gg]. At two-loop order, the $SU(2)$ theory with just the pion degrees of freedom was worked out in [@Bijnens:1997vq], while the significantly more complicated $SU(3)$ theory has been described in [@Bijnens:2006zp]. For many observables and processes of interest in the $SU(2)$ theory, there is a single mass scale in the problem when isospin violation and electromagnetic corrections are neglected, namely the pion mass. At the two-loop order, integrals that arise in this context have been discussed in [@Gasser:1998qt]. In the $SU(3)$ theory, all three masses of the pseudoscalar mesons may appear in quantities of interest. 
Of the relevant integrals in the latter case, the self-energy diagram, which is known as the sunset, may be represented as: $$\begin{aligned} H_{\{\alpha,\beta,\gamma\}}^d \{m_1,m_2,m_3; p^2\} = \frac{1}{i^2} \int \frac{d^dq}{(2\pi)^d} \frac{d^dr}{(2\pi)^d} \frac{1}{[q^2-m_1^2]^{\alpha} [r^2-m_2^2]^{\beta} [(q+r-p)^2-m_3^2]^{\gamma}} \label{Eq:SunsetDef}\end{aligned}$$ In our conventions for dimensional regularisation, $d=4-2\epsilon$. ![Sunset diagram[]{data-label="sunset"}](figsunset.eps) Tarasov established that integration by parts allows one to express all sunset integrals using a minimal set of four master integrals (MI) [@Tarasov:1997kx]. For some configurations, such as the one mass scale case, the representation of the sunset in terms of MI may require fewer than the full set of four. For quantities of interest in $SU(3)$ ChPT, such as the masses and decay constants of the pion, kaon and eta (denoted by $m_\pi$, $F_\pi$, $m_K$, $F_K$, $m_\eta$ and $F_\eta$, respectively), several variations of the basic sunset diagram of Eq.(\[Eq:SunsetDef\]) appear in their expressions. These include $H_1$, $H_{21}$ and $H_{22}$, which are the scalar integrals appearing from the Passarino-Veltman decomposition of the tensor sunset integrals: $$\begin{aligned} & H_{\mu}^d = p_{\mu} H_{1} \nonumber \\ & H_{\mu \nu}^d = p_{\mu} p_{\nu} H_{21} + g_{\mu \nu} H_{22}\end{aligned}$$ where $$\begin{aligned} H_{\mu}^d \{m_1,m_2,m_3; p^2\} = \frac{1}{i^2} \int \frac{d^dq}{(2\pi)^d} \frac{d^dr}{(2\pi)^d} \frac{q_{\mu}}{[q^2-m_1^2] [r^2-m_2^2] [(q+r-p)^2-m_3^2] } \nonumber \\ \nonumber \\ H_{\mu \nu}^d \{m_1,m_2,m_3; p^2\} = \frac{1}{i^2} \int \frac{d^dq}{(2\pi)^d} \frac{d^dr}{(2\pi)^d} \frac{q_{\mu} q_{\nu}}{[q^2-m_1^2] [r^2-m_2^2] [(q+r-p)^2-m_3^2] }\end{aligned}$$ These may be expressed in terms of the MI. 
Similarly, while the meson masses require the evaluation of only basic sunset integrals, the decay constants also require the calculation of the derivatives of the sunsets with respect to the square of the external momentum. However, it is possible to discuss both the mass and decay constant on an equal footing by reducing the task to evaluating just the MI. It is of interest to obtain representations of $m_P$ and $F_P$ that do not require numerical evaluation of the higher order loop integrals. Such analytic approaches allow for more widespread use of these expressions, and facilitate cross-disciplinary studies. An example of the latter would be comparisons with lattice simulations as the quark masses are varied in order to obtain insights into the behaviour of these quantities. In [@Ecker:2010nc; @Ecker:2013pba], for example, an approximation for $F_K/F_\pi$ was obtained by means of a large-$N$ approach at the Lagrangian level, and the resulting expression was used to fit lattice data to extract values of several ChPT parameters. In [@Kaiser:2007kf], $m_\pi$ was treated by taking an approximation at the level of the loop integral. In [@Ananthanarayan:2017yhz], some of the present authors extended the work of [@Kaiser:2007kf] to the case of $F_\pi$, and were able to integrate out the s-quark from the expressions of the pion mass and decay constant, in the same way as the $SU(3)$ ChPT reduces to the $SU(2)$ version, reproducing known results in the chiral limit, as well as evaluating the departure from the chiral limit to leading order in the light quark mass. The present paper continues in the same spirit, furthering studies in the area of analytic approaches to observables and other quantities of interest.
In particular, the aims of the current work are: - To extend the work of [@Kaiser:2007kf; @Ananthanarayan:2017yhz] to the case of the kaon and eta, and provide approximate analytic expressions for $m_K$, $m_\eta$, $F_K$ and $F_\eta$ that are easily amenable to fitting with lattice data. - To provide exact (non-approximate) two loop analytic expressions for all the pseudoscalar meson masses and decay constants. - To perform a first order study of the numerics of $m_K$, $m_\eta$, $F_K$ and $F_\eta$ to determine the relative contributions to them of their different components, as well as their dependence on the values of parameters such as the low energy constants of ChPT. Although this work is a sequel to [@Ananthanarayan:2017yhz], the approach taken here is completely different and novel. In the aforementioned work, as well as in [@Kaiser:2007kf], the three mass scale sunset integrals were approximated by taking an expansion in the external momentum $p^2=m_\pi^2$ around zero. In the case of the kaon and eta, however, such an expansion around $p^2=m_K^2$ or $p^2=m_\eta^2$ results in a poorly converging series due to the presence of the much smaller $m_\pi^2$ in the propagator. An expansion about the propagator mass $m_\pi^2 = 0$ is also not feasible as this gives rise to an infrared divergence. In this work, therefore, we turn to the MB approach to evaluate the three-mass scale sunset integrals. The analytic evaluation of a sunset integral depends on its mass configuration. For special cases, with up to two distinct mass scales, closed form results are available [@Berends:1997vk; @Czyz:2002re; @Davydychev:1992mt] [^1]. With two mass scales not falling into the threshold or pseudo-threshold configurations [@Ananthanarayan:2016pos], or with three distinct mass scales, the sunsets cannot be written in terms of elementary functions.
In fact, the sunset diagram with three different masses and arbitrary $p^2$ cannot even be expressed entirely in terms of multiple polylogarithms. In [@Berends:1993ee], for non-zero $\epsilon$, expressions in terms of Lauricella functions are given but, as emphasized in [@Adams:2016sob], none of the present methods allows for an expansion of the Lauricella functions in terms of multiple polylogarithms. Indeed, it seems that, as shown in [@Adams:2015gva], the sunset diagram requires the introduction of yet another generalisation of the polylogarithms, the so-called elliptic polylogarithms (see also [@Ablinger:2017bjx] for more details on elliptic integrals in the context of sunset integrals). In this work, we adopt a more utilitarian approach to get the analytical expressions needed for our analysis. We keep to the spirit of [@Berends:1993ee] where, once the $\epsilon\rightarrow 0$ limit of the Lauricella functions is taken, one is left with triple series in powers and logarithms of the mass ratios that one may truncate to achieve a desired level of accuracy. Note, however, that the expressions given in [@Berends:1993ee] cannot be used to obtain an analytic expression of the sunset valid in the context of ChPT, since the triple series given there do not converge for the physical values of the pseudo-scalar meson masses. In this work, we therefore present the full analytic expressions valid for the kaon and eta masses and decay constants in terms of double infinite series involving two mass ratios, thus completing the programme first initiated in [@Post:1996gg] of evaluating the three mass scale sunset diagrams in $SU(3)$ ChPT [^2]. Here, we present only the results, and the complete derivation will be given in an upcoming paper [@Ananthanarayan:2018]. An overview of the method used is given in [@Ananthanarayan:2016pos], and detailed descriptions can be found in [@Friot:2011ic; @Aguilar:2008qj].
One possible application of the fully analytic representations of the quantities considered here is to obtain analytic approximations that may easily be fitted with data coming from various lattice simulations. In addition to the full results, we therefore also present such a set of analytic approximations. These approximations are obtained by suitably truncating the infinite series so that their omitted tails are numerically smaller than a chosen percentage of the central value obtained from the exact expression[^3] for the inputs being considered. As these inputs will depend on the precise lattice set being used, the approximations will need to be changed accordingly. Therefore, we present, along with this paper, a set of supplementary `Mathematica` files that automates the task of finding a suitable truncation for the sunsets, given a set of (lattice) input masses and a permissible error threshold value. The lattice expressions given in Section \[Sec:LatticeFits\] are suitable for fits with the data given in [@Durr:2016ulb], and an illustrative fit with these expressions is presented in [@Ananthanarayan:2017qmx]. The structure of this paper is as follows. In Section \[Sec:MI\], we present our notation and a short discussion on convergence properties of our series representations of the sunset integrals. In Section \[Sec:KaonMass\], the expression for the kaon mass is given, in Section \[Sec:KaonDecay\] the same is given for the kaon decay constant, in Section \[Sec:EtaMass\] we give the expression for the eta mass, and in Section \[Sec:EtaDecay\] we give the expression for the eta decay constant. In these sections, the expressions presented are simplified using the Gell-Mann-Okubo (GMO) formula. The same expressions, in which the eta mass has been retained, are presented in Appendix \[Sec:NonGMOExpr\].
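The truncation criterion can be illustrated on a toy double series (a geometric one, standing in for the actual sunset series, whose automated truncation lives in the supplementary `Mathematica` files):

```python
# Toy illustration of the truncation strategy: sum a convergent double series
# in two mass-ratio-sized variables until the estimated tail drops below a
# relative error threshold. (Illustrative only; not the actual sunset series.)

def truncate_double_series(term, x, y, rel_tol=1e-4, max_n=200):
    """Sum term(m, n) * x**m * y**n over m, n >= 0, truncating when a full
    'shell' m + n = N changes the partial sum by less than rel_tol."""
    total = 0.0
    for N in range(max_n):
        shell = sum(term(m, N - m) * x**m * y**(N - m) for m in range(N + 1))
        total += shell
        if N > 0 and abs(shell) < rel_tol * abs(total):
            return total, N
    raise RuntimeError("series did not converge within max_n shells")

# Geometric toy series: sum x^m y^n = 1/((1-x)(1-y)).
x, y = 0.08, 0.25   # arguments of roughly m_pi^2/m_K^2 size
val, shells = truncate_double_series(lambda m, n: 1.0, x, y)
exact = 1.0 / ((1 - x) * (1 - y))
print(val, exact, shells)
```

For smaller mass ratios fewer shells are needed, which is why the depth of truncation must be re-tuned for each lattice input set.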
Simplified analytic results for each of the two sets of four master integrals appearing in the expressions for the kaon and eta masses and decay constants, obtained from the ancillary `Mathematica` code, are discussed in Section \[Sec:NumAnalysis\], while full results for these master integrals are given in Appendix \[Sec:SunsetResults\]. Numerical implications of the expressions presented in the paper are also shown in Section \[Sec:NumAnalysis\]. This work closes by presenting a set of compact expressions for each of $m_K^2$, $m_\eta^2$, $F_K$ and $F_\eta$, which may be used for easy fitting with lattice data, in Section \[Sec:LatticeFits\], which is followed by a detailed summary and conclusion section. In Appendix \[Sec:PionSunsets\], we explain how exact expressions may be obtained for $m_\pi$ and $F_\pi$, and also provide a set of truncated three mass scale sunset results that may be used to produce approximate analytic expressions for $m_\pi$, $F_\pi$ and $F_K/F_\pi$. Sunset Master Integrals: notation and convergence properties of the series representations \[Sec:MI\] ===================================================================================================== The four three-mass-scale sunset master integrals that arise in the kaon mass and decay constant expressions are: $$\begin{aligned} \nonumber H_{\{1,1,1\}}^d\{ m_K, m_{\pi},m_{\eta}; p^2=m_K^2\},\\ \nonumber H_{\{2,1,1\}}^d\{ m_K, m_{\pi},m_{\eta}; p^2=m_K^2\},\\ \nonumber H_{\{1,2,1\}}^d\{ m_K, m_{\pi},m_{\eta}; p^2=m_K^2\}, \\ H_{\{1,1,2\}}^d\{ m_K, m_{\pi},m_{\eta}; p^2=m_K^2\}.\end{aligned}$$ and the three independent three-mass-scale sunset master integrals that arise in the eta mass and decay constant expressions are: $$\begin{aligned} \nonumber H_{\{1,1,1\}}^d\{ m_\pi, m_K, m_K; p^2=m_\eta^2 \},\\ \nonumber H_{\{2,1,1\}}^d\{ m_\pi, m_K, m_K; p^2=m_\eta^2\},\\ H_{\{1,2,1\}}^d\{ m_\pi, m_K, m_K; p^2=m_\eta^2\}.\end{aligned}$$ In ChPT, the renormalisation is normally done using a modified 
form of the $\overline{MS}$ scheme, and involves multiplying the sunset integral with the factor $(\mu^{2}_{\chi})^{4-d}$, where: $$\begin{aligned} \mu^2_{\chi} \equiv \mu^2 \frac{e^{\gamma_E - 1}}{4\pi}\end{aligned}$$ We therefore define: $$\begin{aligned} H^{\chi} \equiv (\mu^2_{\chi})^{4-d} H^d\end{aligned}$$ which is the sunset integral suitably renormalised. In this paper, we denote the sunset integrals normalised using the $\overline{MS}_{\chi}$ scheme by $H^{\chi}$, to differentiate it from the unrenormalised sunset integral $H^d$ defined in Eq.(\[Eq:SunsetDef\]). This renormalisation introduces in each of these integrals terms containing chiral logarithms: $$\begin{aligned} l_{P}^{r} = \frac{1}{2(4\pi)^2}\log \left[ \frac{m_P^2}{\mu^2} \right], \; \; \; \; \; \; \; \; \; \; P = \pi, K, \eta .\end{aligned}$$ We denote the chiral log terms of a sunset integral using a $\log$ superscript, i.e. $H^{\log}$, and the rest of the integral by a bar, i.e. $\overline{H}$. Therefore we have, for instance: $$\begin{aligned} H^{\chi} \equiv \overline{H}^{\chi} + H^{\chi,\log}\end{aligned}$$ We also adopt the notation of [@Kaiser:2007kf] and denote the sunset integrals in this paper by means of the abbreviation: $$\begin{aligned} H_{aP \, bQ \, cR} \equiv H_{\{a,b,c\}} \{ m_P, m_Q, m_R; p^2 = m_{K}^2 \}\end{aligned}$$ where we normally omit the numerical index $a,b,c$ on the LHS of the above when their values are unity, and with either the $\log$ superscript or bar over the $H$, as well as either a $\chi$ or a $d$ superscript on it, as appropriate. 
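For orientation, both the chiral logarithms and the $\overline{MS}_{\chi}$ scale factor are easily evaluated numerically; here we take the conventional ChPT scale $\mu = 770\:$MeV and PDG-sized meson masses (our illustrative inputs, not fixed by the text):

```python
import math

# Numerical size of the chiral logarithms l_P^r = log(m_P^2/mu^2)/(2 (4 pi)^2)
# at mu = 770 MeV (the rho mass, a conventional ChPT choice; our assumption).
# Meson masses in MeV.
mu = 770.0
masses = {"pi": 139.57, "K": 495.65, "eta": 547.86}
l_r = {P: math.log(m**2 / mu**2) / (2.0 * (4.0 * math.pi)**2)
       for P, m in masses.items()}
for P, v in sorted(l_r.items()):
    print(f"l_{P}^r = {v:+.5f}")

# The MS-bar-chi factor multiplying the sunsets:
# mu_chi^2 = mu^2 e^(gamma_E - 1)/(4 pi)
gamma_E = 0.5772156649015329
print("mu_chi^2/mu^2 =", math.exp(gamma_E - 1.0) / (4.0 * math.pi))
```

All three logarithms are negative at this scale, with $l^r_\pi$ roughly four times larger in magnitude than $l^r_K$ and $l^r_\eta$.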
The $H^{\chi,\log}$ for the master integrals considered in this paper are given by: $$\begin{aligned} & H^{\chi,log}_{P \, Q \, R} = 4 m_P^2 (l^r_P)^2 + 4 m_Q^2 (l^r_Q)^2 + 4 m_R^2 (l^r_R)^2 - \frac{m_P^2}{8\pi^2} l^r_P - \frac{m_Q^2}{8\pi^2} l^r_Q -\frac{m_R^2}{8\pi^2} l^r_R + \frac{s}{16\pi^2} l^r_{s} \nonumber \\ & H^{\chi,log}_{2P \, Q \, R} = 4 (l^r_P)^2 + \frac{1}{8\pi^2} l^r_P \nonumber \\ & H^{\chi,log}_{P \, 2Q \, R} = 4 (l^r_Q)^2 + \frac{1}{8\pi^2} l^r_Q \nonumber \\ & H^{\chi,log}_{P \, Q \, 2R} = 4 (l^r_R)^2 + \frac{1}{8\pi^2} l^r_R\end{aligned}$$ where $s=p^2$, and $l^r_s = l^r_K$ or $l^r_\eta$ as the case may be. The full expressions for these master integrals are given in Appendix \[Sec:SunsetResults\] in the form of linear combinations of independent terms, single infinite series, and double infinite series. These series do not converge for all values of the masses. Rather, they converge for values of the masses that satisfy the following set of inequalities: $(m_{\pi} < m_{\eta}) \bigwedge (m_{\pi} + m_{\eta} < 2m_{K})$, which is graphically described by the blue areas in Figure \[Fig:RegOfConv\], plotted with two different sets of mass ratio axes. The black line in the left panel denotes mass values that obey the GMO formula, while the red dot in both panels marks the physical values of the meson masses. 
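The physical point comfortably satisfies these inequalities, as a quick numerical check (with PDG-sized isospin-averaged masses, in MeV; our illustrative inputs) confirms:

```python
# Check that physical meson masses satisfy the convergence conditions
# (m_pi < m_eta) and (m_pi + m_eta < 2 m_K). Masses in MeV.
m_pi, m_K, m_eta = 139.57, 495.65, 547.86

in_region = (m_pi < m_eta) and (m_pi + m_eta < 2.0 * m_K)
print("inside region of convergence:", in_region)
print("margins (MeV):", m_eta - m_pi, 2.0 * m_K - (m_pi + m_eta))
```

The second margin is above 300 MeV, so the series remain convergent for the moderate mass variations typical of lattice data sets.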
![Region of convergence for results presented in Appendix \[Sec:SunsetResults\] (blue area).[]{data-label="Fig:RegOfConv"}](convergence.eps)    ![Region of convergence for results presented in Appendix \[Sec:SunsetResults\] (blue area).[]{data-label="Fig:RegOfConv"}](convergence2.eps) The pseudoscalar meson masses and decay constants {#Sec:MainResults} ================================================= The expressions for the pseudo-scalar meson masses are given up to two loops in [@Amoros:1999dp] as: $$\begin{aligned} m^2_{P} = m^{2}_{P0} + \left( m^{2}_{P} \right)^{(4)} + \left( m^{2}_{P} \right)^{(6)}_{CT} + \left( m^{2}_{P} \right)^{(6)}_{loop} + \mathcal{O}(p^8) \label{NNLOMass}\end{aligned}$$ where $P$ is the particle in question. The model independent $\mathcal{O}(p^6)$ contribution can be subdivided as: $$\begin{aligned} F_{\pi}^4 \left( m^{2}_{P} \right)^{(6)}_{loop} = c_{sunset}^{P} + c_{log \times log}^{P} + c_{log}^{P} + c_{log \times L_i}^{P} + c_{L_i}^{P} + c_{L_i \times L_j}^{P}\end{aligned}$$ where $c_{log}^{P}$ represents the terms containing the chiral logarithms, $c_{log \times log}^{P}$ are the bilinear chiral log terms, $c_{L_i}^{P}$ are those terms proportional to the low energy constants $L_i$, $c_{L_i \times L_j}^{P}$ are terms bilinear in the LECs, and $c_{log \times L_i}^{P}$ are those terms that contain a product of the low energy constants and a chiral logarithm. The expressions and breakup for the decay constants are similar: $$\begin{aligned} \frac{F_P}{F_0} = 1 + F_P^{(4)} + \left( F_P \right)^{(6)}_{CT} + \left( F_P \right)^{(6)}_{loop} + \mathcal{O}(p^8) \label{NNLODecay}\end{aligned}$$ where: $$\begin{aligned} F_{\pi}^4 \left( F_P \right)^{(6)}_{loop} = d_{sunset}^{P} + d_{log \times log}^{P} + d_{log}^{P} + d_{log \times L_i}^{P} + d_{L_i}^{P} + d_{L_i \times L_j}^{P}\end{aligned}$$ In the following sections, we give explicit expressions for each of the terms above for the kaon mass and decay constant.
The expressions have been simplified by the use of the GMO relation to rewrite the eta mass in terms of the pion and kaon masses, except in the chiral logarithms, in which the eta mass is understood to mean $ m_{\eta0}^2 = (4 m_{K0}^2-m_{\pi0}^2)/3$. The full expressions, not simplified by use of the GMO relation, are given in Appendix \[Sec:NonGMOExpr\]. The kaon mass \[Sec:KaonMass\] ------------------------------ In this section, we use the expression for the kaon mass to two-loops given in Eqs.(A.5)-(A.7) of [@Amoros:1999dp], and rewrite the linear log terms, the bilinear log terms, and the terms involving the sunset integrals (e.g. $H^F$, $H_1^F$) in Eq.(A.7) of [@Amoros:1999dp] by applying Tarasov’s relations [@Tarasov:1997kx]. The kaon mass is given in [@Amoros:1999dp] as: $$\begin{aligned} m^2_{K} = m^{2}_{K0} + \left( m^{2}_{K} \right)^{(4)} + \left( m^{2}_{K} \right)^{(6)}_{CT} + \left( m^{2}_{K} \right)^{(6)}_{loop} + \mathcal{O}(p^8) \end{aligned}$$ The expressions given here have been simplified using the GMO relation. See Appendix \[Sec:NonGMOExpr\] for the full results.
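For reference, the GMO value of the eta mass used inside the chiral logarithms lies a few percent above the observed value (a quick check with PDG-sized masses, in MeV; our illustration):

```python
import math

# GMO eta mass m_eta0^2 = (4 m_K0^2 - m_pi0^2)/3, evaluated with physical
# (PDG-sized) pion and kaon masses in MeV; it overshoots the observed
# m_eta = 547.86 MeV by roughly 3%.
m_pi, m_K, m_eta_obs = 139.57, 495.65, 547.86
m_eta_gmo = math.sqrt((4.0 * m_K**2 - m_pi**2) / 3.0)
print("GMO eta mass:", m_eta_gmo)
print("relative deviation:", m_eta_gmo / m_eta_obs - 1.0)
```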
The full forms of the components above are given by: $$\begin{aligned} m^{2}_{K0} = B_0 \left( m_s + \hat{m} \right) \label{cTreeMass}\end{aligned}$$ $$\begin{aligned} \frac{F_{\pi}^2}{m_K^2} \left( m^{2}_{K} \right)^{(4)} = 8(m_{\pi}^2 + 2 m_K^2)(2 L_6^r - L_4^r) + 8 m_K^2 (2L_8^r - L_5^r) + \frac{2}{9} \left(4 m_{K}^2-m_{\pi}^2\right) l^r_{\eta} \label{KaonMassNLOContrib}\end{aligned}$$ $$\begin{aligned} \frac{F_{\pi}^2}{m_K^2} \left( m^{2}_{K} \right)^{(6)}_{CT} =& -32 m_{K}^4 C^r_{12} - 32 m_{K}^2 \left(2 m_{K}^2+m_{\pi}^2\right) C^r_{13} - 16 \left(2 m_{K}^4-2 m_{K}^2 m_{\pi}^2+m_{\pi}^4\right) C^r_{14} \nonumber \\ & -16 m_{K}^2 \left(2 m_{K}^2+m_{\pi}^2\right) C^r_{15} -16 \left(4 m_{K}^4-4 m_{K}^2 m_{\pi}^2+3 m_{\pi}^4\right) C^r_{16} \nonumber \\ & +16 m_{\pi}^2 \left(m_{\pi}^2-2 m_{K}^2\right) C^r_{17} + 48 \left(2 m_{K}^4-2 m_{K}^2 m_{\pi}^2+m_{\pi}^4\right) C^r_{19} \nonumber \\ & +16 \left(8 m_{K}^4-2 m_{K}^2 m_{\pi}^2+3 m_{\pi}^4\right) C^r_{20} + 48 \left(2 m_{K}^2+m_{\pi}^2\right)^2 C^r_{21} \nonumber \\ & +32 m_{K}^4 C^r_{31} + 32 m_{K}^2 \left(2 m_{K}^2+m_{\pi}^2\right) C^r_{32} \label{cCT}\end{aligned}$$ and $$\begin{aligned} F_{\pi}^4 \left( m^{2}_{K} \right)^{(6)}_{loop} = c^{K}_{L_i} + c^{K}_{L_i \times L_j} + c^{K}_{log \times L_i} + c^{K}_{log} + c^{K}_{log \times log} + c^{K}_{sunset}\end{aligned}$$ where $$\begin{aligned} 27 (16 \pi^2) c^{K}_{L_i} =& 108 m_K^6 L_1^r + 6 m^2_K \left(61 m_K^4-8 m_K^2 m_{\pi}^2+28 m_{\pi}^4\right) L^r_2 + m^2_K \left(89 m_K^4 - 4 m_K^2 m_{\pi}^2 + 41 m_{\pi}^4 \right) L_3^r \nonumber \\ & -32 m^2_K \left( m_K^2 - m_{\pi}^2\right)^2 \left( L^r_5 -12 L^r_7 -6 L^r_8 \right)\end{aligned}$$ $$\begin{aligned} c^{K}_{L_i \times L_j} =& -128 \left(4 m_K^6 + 4 m_K^4 m_{\pi}^2 + m_K^2 m_{\pi}^4\right) (L_4^r)^2 - 128 \left(3 m_K^6 + 2 m_K^4 m_{\pi}^2 + m_K^2 m_{\pi}^4\right) L_4^r L_5^r \nonumber \\ & + 512 \left(4 m_K^6 + 4 m_K^4 m_{\pi}^2 + m_K^2 m_{\pi}^4\right) L_4^r L_6^r + 128 \left(8 m_K^6 + 3 m_K^4 m_{\pi}^2 +
m_K^2 m_{\pi}^4\right) L_4^r L_8^r \nonumber \\ & - 64 \left(m_K^6 + m_K^4 m_{\pi}^2\right)(L_5^r)^2 + 256 \left(3 m_K^6 + 2 m_K^4 m_{\pi}^2 + m_K^2 m_{\pi}^4\right) L_5^r L_6^r \nonumber \\ & + 128 \left(3 m_K^6 + m_K^4 m_{\pi}^2\right) L_5^r L_8^r -512 \left(4 m_K^6 + 4 m_K^4 m_{\pi}^2 + m_K^2 m_{\pi}^4\right) (L_6^r)^2 \nonumber \\ & - 256 \left(8 m_K^6 + 3 m_K^4 m_{\pi}^2 + m_K^2 m_{\pi}^4\right) L_6^r L_8^r - 512 m_K^6 (L_8^r)^2 \label{cLiLj}\end{aligned}$$ $$\begin{aligned} c^{K}_{log \times L_i} =& 2 m_K^2 m_{\pi}^2 \bigg\{ -3 m_{\pi}^2 (16 L^r_{1}+4 L^r_{2}+5 L^r_{3})+4 \left(8 m_K^2+17 m_{\pi}^2\right) L^r_{4} + 4 \left(4 m_K^2+3 m_{\pi}^2\right) (L^r_{5}-2 L^r_{8}) \nonumber \\ & \quad -8 \left(8 m_K^2+11 m_{\pi}^2\right) L^r_{6} \bigg\} l_{\pi}^r \nonumber \\ & - 4 m_K^4 \bigg\{ m_{K}^2 (36 L^r_{1}+18 L^r_{2}+15 L^r_{3}-16 L^r_{5}+32 L^r_{8})-4 \left(10 m_{K}^2+m_{\pi}^2\right) L^r_{4} + 8 \left(8 m_{K}^2+m_{\pi}^2\right) L^r_{6} \bigg\} l_K^r \nonumber \\ & - \frac{2}{9} m_K^2 \bigg\{ \left(4 m_K^2-m_{\pi}^2\right)^2 (16 L^r_{1}+4 L^r_{2}+7 L^r_{3})-12 L^r_{4} \left(32 m_K^4-12 m_K^2 m_{\pi}^2+m_{\pi}^4\right) \nonumber \\ & \quad -4 \left(32 m_K^4-2 m_K^2 m_{\pi}^2-3 m_{\pi}^4\right) L^r_{5} + 8 \left(64 m_K^4-20 m_K^2 m_{\pi}^2+m_{\pi}^4\right) L^r_{6} + 96 m_{\pi}^2 \left(m_K^2-m_{\pi}^2\right) L^r_{7} \nonumber \\ & \quad +8 \left(32 m_K^4-6 m_K^2 m_{\pi}^2-5 m_{\pi}^4\right) L^r_{8} \bigg\} l_{\eta}^r\end{aligned}$$ $$\begin{aligned} \left(-16\pi^2\right) c^{K}_{log} =& \left( \frac{11}{4} m_{K}^4 m_{\pi}^2 + \frac{455}{144} m_{K}^2 m_{\pi}^4 \right) l^r_{\pi} + \left( \frac{148}{27} m_{K}^6 - \frac{5}{4} m_{K}^4 m_{\pi}^2 - \frac{13}{432} m_{K}^2 m_{\pi}^4 \right) l^r_{\eta} \nonumber \\ & + \left( \frac{41}{18} m_{K}^4 m_{\pi}^2 + \frac{487}{72} m_{K}^6 \right) l^r_{K} \label{cBarlog}\end{aligned}$$ $$\begin{aligned} c^{K}_{log \times log} =& \left(-\frac{11}{81} m_{K}^6 - \frac{47}{81} m_{K}^4 m_{\pi}^2 + \frac{1279}{1296} m_{K}^2 m_{\pi}^4 - 
\frac{5}{24} m_{\pi}^6 \right) (l^r_{\eta})^2 \nonumber \\ & + \left( \frac{14}{9} m_{K}^6 + \frac{19}{18} m_{K}^4 m_{\pi}^2 -\frac{1}{4} m_{K}^2 m_{\pi}^4 \right) l^r_{\eta} l^r_{K} + \left(\frac{4}{9} m_{K}^4 m_{\pi}^2 + \frac{43}{6} m_{K}^6 \right) (l^r_{K})^2 \nonumber \\ & - \left(\frac{3}{2} m_{K}^4 m_{\pi}^2 - \frac{1}{4} m_{K}^2 m_{\pi}^4 \right) l^r_{\pi} l^r_{K} + \left(\frac{1}{2} m_{K}^4 m_{\pi}^2 + \frac{169}{48} m_{K}^2 m_{\pi}^4 - \frac{5}{24} m_{\pi}^6 \right) (l^r_{\pi})^2 \nonumber \\ & - \left(\frac{55}{18} m_{K}^4 m_{\pi}^2 + \frac{97}{72} m_{K}^2 m_{\pi}^4 - \frac{5}{12} m_{\pi}^6 \right) l^r_{\pi}l^r_{\eta} \label{cBarloglog} \end{aligned}$$ The expressions of Eqs.(\[cBarlog\])-(\[cBarloglog\]) above are a combination of the linear and bilinear chiral logarithms arising from the evaluation of the sunset integrals, those stemming directly from the $\mathcal{O}(p^6)$ kaon mass expression, as well as contributions arising from the $\mathcal{O}(p^4)$ term due to application of the GMO relation. Similarly, the $c^{K}_{L_i}$ and $c^{K}_{log \times L_i}$ components are also made up of terms taken directly from the $\mathcal{O}(p^6)$ kaon mass expression, and contributions arising from the $\mathcal{O}(p^4)$ term due to application of the GMO relation.
The $c^{K}_{sunset}$ term itself has the following contributions to it, where the terms of the first line are a combination of terms from the kaon mass expression as well as from the single mass sunset integrals: $$\begin{aligned} c^{K}_{sunset} = \frac{1}{\left(16 \pi ^2\right)^2} & \bigg\{ \left(\frac{767}{108}+\frac{427 \pi^2}{1296}\right) m_{K}^4 m_{\pi}^2 - \left(\frac{12307}{3456}+\frac{275 \pi ^2}{648}\right) m_{K}^6 - \left(\frac{571}{288}+\frac{59 \pi ^2}{216}\right) m_{K}^2 m_{\pi}^4 \nonumber \\ & - \left(\frac{49}{72}+\frac{\pi ^2}{48}\right) m_{\pi}^6 \bigg\} + c^{K}_{K \pi \pi} + c^{K}_{K \eta \eta} + c^{K}_{K \pi \eta} \end{aligned}$$ where $$\begin{aligned} c^{K}_{K \pi \pi} &= \left(-\frac{3}{32} m_{K}^4 + \frac{9}{16} m_{K}^2 m_{\pi}^2 + \frac{9}{32} m_{\pi}^4 \right) \overline{H}^{\chi}_{K \pi \pi} + \left(\frac{3}{8} m_{K}^6 - \frac{3}{8} m_{K}^2 m_{\pi}^4 \right) \overline{H}^{\chi}_{2K \pi \pi} \label{cBarkpp}\end{aligned}$$ $$\begin{aligned} c^{K}_{K \eta \eta} &= \left(\frac{289}{288} m_{K}^4 -\frac{41}{48} m_{K}^2 m_{\pi}^2 + \frac{5}{32} m_{\pi}^4 \right) \overline{H}^{\chi}_{K \eta \eta} + \left(-\frac{73}{72} m_{K}^6 + \frac{11}{9} m_{K}^4 m_{\pi}^2 - \frac{5}{24} m_{K}^2 m_{\pi}^4 \right) \overline{H}^{\chi}_{2K \eta \eta}\end{aligned}$$ $$\begin{aligned} c^{K}_{K \pi \eta} &= \left( \frac{17}{16} m_{K}^4 - \frac{17}{24} m_{K}^2 m_{\pi}^2 + \frac{7}{48} m_{\pi}^4 \right) \overline{H}^{\chi}_{K \pi \eta} - \left( m_{K}^4 m_{\pi}^2 - \frac{5}{4} m_{K}^2 m_{\pi}^4 + \frac{1}{4} m_{\pi}^6 \right) \overline{H}^{\chi}_{K 2\pi \eta} \nonumber \\ & - \left( \frac{1}{3} m_{K}^6 - \frac{7}{36} m_{K}^4 m_{\pi}^2 - \frac{7}{36} m_{K}^2 m_{\pi}^4 + \frac{1}{18} m_{\pi}^6 \right) \overline{H}^{\chi}_{K \pi 2\eta}\end{aligned}$$ The terms $c^{K}_{K \pi \pi}, c^{K}_{K \eta \eta}, c^{K}_{K \pi \eta}$ are the result of applying Tarasov’s relations to the variety of sunset integrals appearing in Eq.(A.7) of [@Amoros:1999dp] and rewriting them in terms of
the master integrals given in Appendix \[Sec:SunsetResults\]. The kaon decay constant \[Sec:KaonDecay\] ----------------------------------------- The treatment of the kaon decay constant is similar to that of the kaon mass in the previous section, except that the expression for the kaon decay constant also involves derivatives of the sunsets with respect to the external momentum. The kaon decay constant to two-loops is given in Eqs.(A.15)-(A.17) of [@Amoros:1999dp] as: $$\begin{aligned} \frac{F_K}{F_0} = 1 + F_K^{(4)} + \left( F_K \right)^{(6)}_{CT} + \left( F_K \right)^{(6)}_{loop} + \mathcal{O}(p^8) \end{aligned}$$ where: $$\begin{aligned} F_{\pi}^2 F_K^{(4)} = 4\left(2 m_{K}^2+m_{\pi}^2\right) L_4^r + 4 m_{K}^2 L_5^r - \frac{3}{4} m_{\pi}^2 l_{\pi}^r - \frac{3}{2} m_{K}^2 l^r_{K} -\frac{1}{4} \left(4 m_{K}^2-m_{\pi}^2\right) l^r_{\eta} \label{KaonDecayNLOContrib}\end{aligned}$$ $$\begin{aligned} F_{\pi}^4 \left( F_K \right)^{(6)}_{CT} &= 8 \left(2 m_{K}^4-2 m_{K}^2 m_{\pi}^2+m_{\pi}^4\right) C^r_{14} + 8 m_{K}^2 \left(2 m_{K}^2+m_{\pi}^2\right) C^r_{15} \nonumber \\ & + 8 \left(4 m_{K}^4-4 m_{K}^2 m_{\pi}^2+3 m_{\pi}^4\right) C^r_{16} + 8 m_{\pi}^2 \left(2 m_{K}^2-m_{\pi}^2\right) C^r_{17} \label{dCT}\end{aligned}$$ and $$\begin{aligned} F_{\pi}^4 \left( F_K \right)^{(6)}_{loop} = d^K_{L_i} + d^K_{L_i \times L_j} + d^K_{log \times L_i} + d^K_{log} + d^K_{log \times log} + d^K_{sunset}\end{aligned}$$ where: $$\begin{aligned} - 54(16\pi^2) d^K_{L_i} =& 108 m_{K}^4 L_1^r + 6 \left(61 m_{K}^4 - 8 m_{K}^2 m_{\pi}^2 + 28 m_{\pi}^4 \right) L_2^r + \left(89 m_{K}^4 - 4 m_{K}^2 m_{\pi}^2 + 41 m_{\pi}^4 \right) L_3^r \nonumber \\ & - 72 \left(m_{K}^2-m_{\pi}^2\right)^2 \left( L^r_{5} - 12 L^r_{7} - 6 L^r_{8} \right)\end{aligned}$$ $$\begin{aligned} d^K_{L_i \times L_j} =& 56 \left(4 m_{K}^4 + 4 m_{K}^2 m_{\pi}^2 + m_{\pi}^4 \right) (L_{4}^r)^2 + 16 \left(10 m_{K}^4 + 7 m_{K}^2 m_{\pi}^2 + 4 m_{\pi}^4\right) L_4^r L_5^r \nonumber \\ & -64 \left(4 m_{K}^4 + 4 m_{K}^2 
m_{\pi}^2 + m_{\pi}^4\right) L_4^r L_6^r - 64 \left(2 m_{K}^4 + m_{\pi}^4\right) L_4^r L_8^r \nonumber \\ & + 8 \left(3 m_{K}^4 + 4 m_{K}^2 m_{\pi}^2\right) (L_{5}^r)^2 -64 \left(2 m_{K}^4 + m_{K}^2 m_{\pi}^2\right) L_5^r L_6^r -64 m_{K}^4 L_5^r L_8^r \label{dBarLiLj}\end{aligned}$$ $$\begin{aligned} d^K_{log \times L_i} =& \left\{ 48 m_{\pi}^4 L^r_{1} + 12 m_{\pi}^4 L^r_{2} + 15 m_{\pi}^4 L^r_{3} - \left(38 m_{K}^2 m_{\pi}^2+47 m_{\pi}^4\right) L^r_{4} - \left(19 m_{K}^2 m_{\pi}^2+6 m_{\pi}^4 \right) L^r_{5} \right\} l^r_{\pi} \nonumber \\ & + \left\{ 72 m_{K}^4 L^r_{1} + 36 m_{K}^4 L^r_{2} + 30 m_{K}^4 L^r_{3} - 2 m_{K}^2 \left(30 m_{K}^2+7 m_{\pi}^2\right) L^r_{4} - 2 m_{K}^2 \left(7 m_{K}^2+6 m_{\pi}^2\right) L^r_{5} \right\} l^r_{K} \nonumber \\ & + \bigg\{ \frac{1}{9} \left(4 m_K^2-m_{\pi}^2\right)^2 \left( 16 L^r_{1} + 4 L^r_{2} +7 L^r_{3} \right) - \frac{1}{3} \left(4 m_K^2-m_{\pi}^2\right) \left(22 m_K^2 - m_{\pi}^2\right) L^r_{4} \nonumber \\ & \quad - \frac{1}{3} \left(4 m_K^4 + 37 m_K^2 m_{\pi}^2 - 14 m_{\pi}^4\right) L^r_{5} - 16 \left(m_K^2-m_{\pi}^2\right)^2 \left( 2 L^r_{7} + L^r_{8} \right) \bigg\} l^r_{\eta}\end{aligned}$$ $$\begin{aligned} \left(16\pi^2\right) d^K_{log} =& \left(\frac{3}{8} m_{K}^2 m_{\pi}^2 + \frac{53}{32} m_{\pi}^4 \right) l^r_{\pi} + \left( \frac{19}{9} m_{K}^4 - \frac{65}{72} m_{K}^2 m_{\pi}^2 + \frac{3}{32} m_{\pi}^4 \right) l^r_{\eta} \nonumber \\ & + \left(\frac{245}{48} m_{K}^4 + \frac{173}{72} m_{K}^2 m_{\pi}^2 \right) l^r_{K} \label{dBarlog}\end{aligned}$$ $$\begin{aligned} d^K_{log \times log} =& \left(\frac{5}{16} \frac{m_{\pi}^6}{m_{K}^2} +\frac{2}{3} m_{K}^2 m_{\pi}^2 - \frac{5}{48} m_{\pi}^4 \right) (l^r_{\pi})^2 - \left(\frac{5}{8} \frac{m_{\pi}^6}{m_{K}^2} - \frac{25}{6} m_{K}^2 m_{\pi}^2 - \frac{47}{24} m_{\pi}^4 \right) l^r_{\eta} l^r_{\pi} \nonumber \\ & + \left(\frac{31}{9} m_{K}^4 + \frac{5}{16} \frac{m_{\pi}^6}{m_{K}^2} - \frac{11}{18} m_{K}^2 m_{\pi}^2 - \frac{21}{16} m_{\pi}^4 \right) (l^r_{\eta})^2 + 
\left(\frac{155}{72} m_{K}^4 + \frac{11}{36} m_{K}^2 m_{\pi}^2 \right) (l^r_{K})^2 \nonumber \\ & - \left(\frac{91}{18} m_{K}^4 + \frac{53}{72} m_{K}^2 m_{\pi}^2 - \frac{3}{8} m_{\pi}^4 \right) l^r_{\eta} l^r_{K} + \left(\frac{51}{8} m_{K}^2 m_{\pi}^2 - \frac{3}{8} m_{\pi}^4 \right) l^r_{K} l^r_{\pi} \label{dBarloglog}\end{aligned}$$ The linear and bilinear chiral log terms given in Eq.(\[dBarlog\]) and Eq.(\[dBarloglog\]) are a combination of the terms coming directly from the $\mathcal{O}(p^6)$ kaon decay constant expression, the chiral logs arising from the sunset integrals, and contributions stemming from the $\mathcal{O}(p^4)$ term due to application of the GMO relation. $d^K_{L_i}$ and $d^K_{log \times L_i}$ are similarly made up of terms taken directly from the $\mathcal{O}(p^6)$ expression, and contributions arising from the $\mathcal{O}(p^4)$ term due to application of the GMO relation. As in the case of the kaon mass, we break up the sunset contribution as follows, in which the first line contains contributions from the single mass sunsets, as well as terms arising from the free terms (i.e. 
not containing a chiral logarithm or a low energy constant) from the expression for the $\mathcal{O}(p^6)$ contribution to the kaon decay constant: $$\begin{aligned} d^K_{sunset} = \frac{1}{\left( 16 \pi ^2\right)^2} & \bigg\{ \left(\frac{17671}{2304}+\frac{1195 \pi ^2}{2592}\right) m_{K}^4 + \left(\frac{49}{48}+\frac{\pi ^2}{32}\right) \frac{m_{\pi}^6}{m_{K}^2}-\left(\frac{1625}{144}+\frac{689 \pi ^2}{1296}\right) m_{K}^2 m_{\pi}^2 \nonumber \\ & + \left(\frac{2153}{576}+\frac{151 \pi ^2}{432}\right) m_{\pi}^4 \bigg\} + d^K_{K \pi \pi} + d^K_{K \eta \eta} + d^K_{K \pi \eta} \end{aligned}$$ where $$\begin{aligned} d^K_{K \pi \pi} &= -\left(\frac{27 m_{\pi}^4}{64 m_{K}^2}+\frac{m_{K}^2}{64}+\frac{9 m_{\pi}^2}{16}\right) \overline{H}^{\chi}_{K \pi \pi} + \left(\frac{m_{K}^4}{16}+\frac{m_{K}^2 m_{\pi}^2}{8}+\frac{9 m_{\pi}^4}{16}\right) \overline{H}^{\chi}_{2K \pi \pi} \label{dBarkpp}\end{aligned}$$ $$\begin{aligned} d^K_{K \eta \eta} &= - \left(\frac{15 m_{\pi}^4}{64 m_{K}^2}+\frac{1189 m_{K}^2}{576}-\frac{65 m_{\pi}^2}{48}\right) \overline{H}^{\chi}_{K \eta \eta} + \left(\frac{143 m_{K}^4}{48}-\frac{139 m_{K}^2 m_{\pi}^2}{72}+\frac{5 m_{\pi}^4}{16}\right) \overline{H}^{\chi}_{2K \eta \eta}\end{aligned}$$ $$\begin{aligned} d^K_{K \pi \eta} &= \left( - \frac{7}{32} \frac{m_{\pi}^4}{m_{K}^2} + \frac{5}{96} m_{K}^2 + \frac{7}{6} m_{\pi}^2 \right) \overline{H}^{\chi}_{K \pi \eta} + \left( \frac{3}{8} \frac{m_{\pi}^6}{m_{K}^2} + \frac{1}{4} m_{K}^2 m_{\pi}^2 - \frac{15}{8} m_{\pi}^4 \right) \overline{H}^{\chi}_{K 2\pi \eta} \nonumber \\ & - \left( \frac{11}{18} m_{K}^4 - \frac{1}{12} \frac{m_{\pi}^6}{m_{K}^2} + \frac{41}{72} m_{K}^2 m_{\pi}^2 + \frac{11}{72}m_{\pi}^4 \right) \overline{H}^{\chi}_{K \pi 2\eta} - \left( \frac{1}{2} m_{K}^4 \right) \overline{H}^{\chi}_{2K \pi \eta}\end{aligned}$$ The terms $d^{K}_{K \pi \pi}$, $d^K_{K \eta \eta}$ and $d^K_{K \pi \eta}$ are the result of applying Tarasov’s relations to the sunset integrals appearing in Eq.(A.17) of
[@Amoros:1999dp] and rewriting them in terms of the sunset diagram master integrals given in Appendix \[Sec:SunsetResults\]. The eta mass \[Sec:EtaMass\] ---------------------------- The eta mass is given in [@Amoros:1999dp] as: $$\begin{aligned} m^2_{\eta} = m^{2}_{\eta 0} + \left( m^{2}_{\eta} \right)^{(4)} + \left( m^{2}_{\eta} \right)^{(6)}_{CT} + \left( m^{2}_{\eta} \right)^{(6)}_{loop} + \mathcal{O}(p^8)\end{aligned}$$ where $$\begin{aligned} m^{2}_{\eta 0} = \frac{2}{3} B_0 \left(2 m_s + \hat{m} \right)\end{aligned}$$ and $$\begin{aligned} F_{\pi}^2 \left( m_{\eta}^{2} \right)^{(4)} =& \frac{8}{9}(3 L^r_{4}-L^r_{5}-6 L^r_{6}+48 L^r_{7}+18 L^r_{8}) m_{\pi}^4 -\frac{16}{9} (3 L^r_{4}-4 L^r_{5}-6 L^r_{6}+48 L^r_{7}+24 L^r_{8}) m_K^2 m_{\pi}^2 \nonumber \\ & -\frac{64}{9} (3 L^r_{4}+2 L^r_{5}-6 L^r_{6}-6 L^r_{7}-6 L^r_{8}) m_K^4 + \left(\frac{8}{3} l^r_{K} - \frac{64}{27} l^r_{\eta} \right) m_K^4 - \left(\frac{7}{27} l^r_{\eta} + l^r_{\pi}\right) m_{\pi}^4 \nonumber \\ & + \frac{44}{27} l^r_{\eta} m_K^2 m_{\pi}^2 \label{EqMeP4}\end{aligned}$$ The $\mathcal{O}(p^6)$ counter-term contribution is given by: $$\begin{aligned} F_{\pi}^4 \left( m_{\eta}^{2} \right)^{(6)}_{CT} =& -\frac{256}{27} m_{K}^6 ( 8 C_{12}^r + 12 C_{13}^r + 6 C_{14}^r + 6 C_{15}^r + 9 C_{16}^r + 6 C_{17}^r + 6 C_{18}^r - 27 C_{19}^r - 27 C_{20}^r \nonumber \\ & - 27 C_{21}^r - 18 C_{31}^r - 18 C_{32}^r - 18 C_{33}^r ) + \frac{16}{27} m_{\pi}^6 ( 2 C_{12}^r - 6 C_{13}^r + 9 C_{14}^r - 3 C_{15}^r + 27 C_{16}^r \nonumber \\ & + 9 C_{17}^r + 24 C_{18}^r - 27 C_{19}^r + 27 C_{20}^r - 27 C_{21}^r - 18 C_{31}^r + 54 C_{32}^r) -\frac{32}{9} m_{K}^2 m_{\pi}^4 ( 4 C_{12}^r \nonumber \\ & - 6 C_{13}^r + 10 C_{14}^r - 3 C_{15}^r + 24 C_{16}^r +10 C_{17}^r + 24 C_{18}^r - 54 C_{19}^r - 18 C_{20}^r -36 C_{31}^r + 6 C_{32}^r \nonumber \\ & - 48 C_{33}^r)+\frac{64}{9} m_{K}^4 m_{\pi}^2 (8 C_{12}^r + 10 C_{14}^r + 15 C_{16}^r + 10 C_{17}^r + 18 C_{18}^r - 54 C_{19}^r - 27 C_{20}^r \nonumber \\ & + 27 C_{21}^r - 36
C_{31}^r - 12 C_{32}^r - 48 C_{33}^r )\end{aligned}$$ and the model independent $\mathcal{O}(p^6)$ contribution can be subdivided as: $$\begin{aligned} F_\pi^4 \left( m^{2}_{\eta} \right)^{(6)}_{loop} = c_{sunset}^{\eta} + c_{log \times log}^{\eta} + c_{log}^{\eta} + c_{log \times L_i}^{\eta} + c_{L_i}^{\eta} + c_{L_i \times L_j}^{\eta}\end{aligned}$$ where $c_{log}^{\eta}$ represents the terms containing the chiral logarithms: $$\begin{aligned} (16 \pi^2) c_{log}^{\eta} =& \left(\frac{41}{324} l^r_{\eta} + \frac{961}{108} l^r_K - 3 l^r_{\pi} \right) m_K^4 m_{\pi}^2 + \left(\frac{371}{486} l^r_{\eta} - 3 l^r_K +\frac{61}{27} l^r_{\pi} \right) m_K^2 m_{\pi}^4 -\left(\frac{1093}{729} l^r_{\eta} + \frac{577}{27} l^r_K \right) m_K^6 \nonumber \\ & - \left(\frac{2045}{11664} l^r_{\eta} + \frac{931}{432} l^r_{\pi} \right) m_{\pi}^6\end{aligned}$$ The $c_{log \times log}^{\eta}$ term refers to the collection of bilinear chiral log terms: $$\begin{aligned} c_{log \times log}^{\eta} &= \left(-\frac{2713}{108} (l^r_{\eta})^2 +\frac{473}{54} l^r_{\eta} l^r_{K} + \frac{256}{27} l^r_{\eta} l^r_{\pi} - \frac{133}{18} (l^r_{K})^2 - \frac{55}{6} l^r_{K} l^r_{\pi} - \frac{3}{4}(l^r_{\pi})^2 \right) m_K^4 m_{\pi}^2 \nonumber \\ & + \left(\frac{1367}{162} (l^r_{\eta})^2 - \frac{31}{27} l^r_{\eta} l^r_{K} - \frac{172}{27} l^r_{\eta} l^r_{\pi} + \frac{10}{3} (l^r_{K})^2 - 3 l^r_{K} l^r_{\pi} + \frac{5}{2} (l^r_{\pi})^2 \right) m_K^2 m_{\pi}^4 \nonumber \\ & + \left(\frac{6185}{243}(l^r_{\eta})^2 - \frac{118}{9} l^r_{\eta} l^r_{K} + \frac{103}{9} (l^r_{K})^2 \right) m_K^6 + \left(-\frac{911}{972} (l^r_{\eta})^2 + \frac{7}{9} l^r_{\eta} l^r_{\pi} + \frac{65}{12} (l^r_{\pi})^2 \right) m_{\pi}^6 \end{aligned}$$ and $c_{L_i}^{\eta}$ are those terms proportional to the low energy constants $L_i$: $$\begin{aligned} \left(16 \pi ^2\right) c_{L_i}^{\eta} &= \frac{1}{27} \left(256 L^r_{1}+544 L^r_{2}+152 L^r_{3}+\frac{256}{3} L^r_{5} - 1024 L^r_{7}-512 L^r_{8}\right) m_K^6 \nonumber \\ & + 
\frac{1}{9} \left(-64 L^r_{1}-88 L^r_{2}-34 L^r_{3}-\frac{208}{3}L^r_{5} + 832 L^r_{7}+416 L^r_{8}\right) m_K^4 m_{\pi}^2 \nonumber \\ & +\frac{1}{9} \left(16 L^r_{1}+88 L^r_{2}+32 L^r_{3}+\frac{160}{3}L^r_{5} - 640 L^r_{7}-320 L^r_{8}\right) m_K^2 m_{\pi}^4 \nonumber \\ & + \frac{1}{27} \left(-4 L^r_{1}-58 L^r_{2}-20 L^r_{3}-\frac{112}{3} L^r_{5} + 448 L^r_{7}+224 L^r_{8}\right) m_{\pi}^6 \label{cLi}\end{aligned}$$ while bilinears in the LECs are given by $c_{L_i \times L_j}^{\eta}$: $$\begin{aligned} c_{L_i \times L_j}^{\eta} =& -\frac{128}{9} \bigg(36 (L^r_{4})^2+15 L^r_{4} L^r_{5}-144 L^r_{4} L^r_{6}+144 L^r_{4} L^r_{7}+42 L^r_{4} L^r_{8}+12 (L^r_{5})^2-30 L^r_{5} L^r_{6} \nonumber \\ & \qquad -48 L^r_{5} L^r_{7} -32 L^r_{5} L^r_{8}+144 (L^r_{6})^2-288 L^r_{6} L^r_{7}-84 L^r_{6} L^r_{8}-96 L^r_{7} L^r_{8}-48 (L^r_{8})^2 \bigg) m_K^4 m_{\pi}^2 \nonumber \\ & -\frac{128}{9} \bigg( 3 L^r_{4} L^r_{5}+6 L^r_{4} L^r_{8}-10 (L^r_{5})^2-6 L^r_{5} L^r_{6}+144 L^r_{5} L^r_{7}+76 L^r_{5} L^r_{8}-12 L^r_{6} L^r_{8} \nonumber \\ & \qquad -96 L^r_{7} L^r_{8}-48 (L^r_{8})^2 \bigg) m_K^2 m_{\pi}^4 \nonumber \\ & -\frac{1024}{27} \bigg( 6 L^r_{4}+L^r_{5}-12 L^r_{6}-6 L^r_{8}) (3 L^r_{4}+2 L^r_{5}-6 L^r_{6}-6 L^r_{7}-6 L^r_{8} \bigg) m_K^6 \nonumber \\ & +\frac{128}{27} \bigg( 3 L^r_{4}+5 L^r_{5}-6 L^r_{6}-6 L^r_{8}) (3 L^r_{4}-L^r_{5}-6 L^r_{6}+48 L^r_{7}+18 L^r_{8} \bigg) m_{\pi}^6 \end{aligned}$$ $c_{log \times L_i}^{\eta}$ are those terms that contain a product of the low energy constants and a chiral log: $$\begin{aligned} c_{log \times L_i}^{\eta} &= \bigg\{ \frac{32}{27}(72 L^r_{1}+72 L^r_{2}+36 L^r_{3}-54 L^r_{4}-113 L^r_{5}+156 L^r_{6}+684 L^r_{7}+422 L^r_{8}) l^r_{\eta} \nonumber \\ & \quad +\frac{8}{3} (16 L^r_{1}+4 L^r_{2}+7 L^r_{3}-12 L^r_{4}-4 L^r_{5}+8 L^r_{6}+96 L^r_{7}+56 L^r_{8}) l^r_K \nonumber \\ & \quad +\frac{256}{9} (3 L^r_{4}+2 L^r_{5}-6 L^r_{6} -6 L^r_{7} - 6 L^r_{8}) l^r_{\pi} \bigg\} m_K^4 m_{\pi}^2 \nonumber \\ & + \bigg\{ -\frac{16}{27} (36 
L^r_{1}+36 L^r_{2}+18 L^r_{3}-27 L^r_{4}-104 L^r_{5}+78 L^r_{6}+720 L^r_{7}+404 L^r_{8}) l^r_{\eta} \nonumber \\ & \quad -\frac{16}{9} (72 L^r_{1}+18 L^r_{2}+18 L^r_{3}-87 L^r_{4}+8 L^r_{5}+102 L^r_{6}-312 L^r_{7}-120 L^r_{8}) l^r_{\pi} \nonumber \\ & \quad -\frac{16}{9} (3 L^r_{4}-L^r_{5}-6 L^r_{6}+48 L^r_{7}+18 L^r_{8}) l^r_K \bigg\} m_K^2 m_{\pi}^4 \nonumber \\ & + \bigg\{ -\frac{512}{27} (6 L^r_{1}+6 L^r_{2}+3 L^r_{3}-6 L^r_{4}-6 L^r_{5}+16 L^r_{6}+24 L^r_{7}+20 L^r_{8}) l^r_{\eta} \nonumber \\ & \quad -\frac{32}{9} (48 L^r_{1}+12 L^r_{2}+21 L^r_{3}-60 L^r_{4}-22 L^r_{5}+72 L^r_{6}+48 L^r_{7}+60 L^r_{8}) l^r_K \bigg\} m_K^6 \nonumber \\ & + \bigg\{ \frac{8}{27} (6 L^r_{1}+6 L^r_{2}+3 L^r_{3}-6 L^r_{4}-32 L^r_{5}+16 L^r_{6}+240 L^r_{7}+130 L^r_{8}) l^r_{\eta} \nonumber \\ & \quad + 8 (4 L^r_{1}+L^r_{2}+L^r_{3}-6 L^r_{4}+8 L^r_{6}-48 L^r_{7}-18 L^r_{8}) l^r_{\pi} \bigg\} m_{\pi}^6 \end{aligned}$$ The contributions from the sunset integrals $c_{sunset}^{\eta}$ can in turn be expressed as: $$\begin{aligned} c^{\eta}_{sunset} = \frac{1}{\left(16 \pi ^2\right)^2} &\Bigg\{ \left(\frac{8783}{1944}-\frac{115 \pi ^2}{162}\right) m_K^6+\left(\frac{629 \pi ^2}{1296}-\frac{3515}{864}\right) m_K^4 m_{\pi}^2-\left(\frac{1259}{2592}+\frac{77 \pi ^2}{216}\right) m_K^2 m_{\pi}^4 \nonumber \\ & \quad -\left(\frac{20183}{31104}+\frac{7 \pi ^2}{432}\right) m_{\pi}^6 \Bigg\} + c_{\eta \pi \pi}^{\eta} + c_{\eta K K}^{\eta} + c_{\pi K K}^{\eta}\end{aligned}$$ where the contributions in the square brackets come from a combination of the single mass scale sunsets and the free terms (i.e.
those not involving a chiral log, a low energy constant, or arising from a sunset diagram) of $\mathcal{O}(p^6)$, and where: $$\begin{aligned} c_{\eta \pi \pi}^{\eta} &= \frac{1}{6} m_{\pi}^4 \overline{H}^\chi_{\eta \pi \pi}\end{aligned}$$ $$\begin{aligned} c_{\eta K K}^{\eta} = \left(\frac{53}{36} m_K^2 m_{\pi}^2 -\frac{1}{24} m_K^4 - \frac{5}{24} m_{\pi}^4 \right) \overline{H}^\chi_{\eta K K} + \left(\frac{146}{27} m_K^6 - \frac{425}{54} m_K^4 m_{\pi}^2 + \frac{74}{27} m_K^2 m_{\pi}^4 - \frac{5}{18} m_{\pi}^6 \right) \overline{H}^\chi_{2\eta K K}\end{aligned}$$ $$\begin{aligned} c_{\pi K K}^{\eta} =& \left(\frac{9}{8} m_{K}^4 - \frac{13}{12} m_{K}^2 m_{\pi}^2 + \frac{23}{24} m_{\pi}^4 \right) \overline{H}^\chi_{\pi K K} + \left(m_{K}^6 - \frac{1}{3} m_{K}^4 m_{\pi}^2 - \frac{2}{3} m_{K}^2 m_{\pi}^4 \right) \overline{H}^\chi_{\pi 2K K} \nonumber \\ & + \left(-\frac{3}{2} m_{K}^4 m_{\pi}^2 + \frac{7}{3} m_{K}^2 m_{\pi}^4 - \frac{5}{6} m_{\pi}^6 \right) \overline{H}^\chi_{2\pi K K}\end{aligned}$$ The eta decay constant \[Sec:EtaDecay\] --------------------------------------- The eta decay constant is given in [@Amoros:1999dp] as: $$\begin{aligned} F_{\eta} = F^0 \left( \overline{F}_{\eta}^{(4)} + ( \overline{F}_{\eta}^{(6)} )_{CT} + ( \overline{F}_{\eta}^{(6)} )_{loop} \right) + \mathcal{O}(p^8)\end{aligned}$$ where the $\mathcal{O}(p^4)$ term is: $$\begin{aligned} F_{\pi}^2 \overline{F}_{\eta}^{(4)} =& 8\left(L^r_{4}+\frac{2}{3} L^r_{5}\right) m_K^2 + 4 \left(L^r_{4}-\frac{1}{3}L^r_{5}\right) m_{\pi}^2 - 3 l^r_K m_K^2 \label{EqFeP4}\end{aligned}$$ and the $\mathcal{O}(p^6)$ counter-term contribution is given by: $$\begin{aligned} F_{\pi}^4 \left( F_{\eta}^{2} \right)^{(6)}_{CT} =& \left(\frac{64}{3} C^r_{14}+\frac{64}{3} C^r_{15}+32 C^r_{16}+\frac{64}{3} C^r_{17}+\frac{64}{3} C^r_{18}\right) m_K^4 \nonumber \\ & + \left(-\frac{64}{3} C^r_{14}+\frac{16}{3} C^r_{15}-32 C^r_{16}-\frac{64}{3} C^r_{17}-\frac{128}{3} C^r_{18}\right) m_K^2 m_{\pi}^2 \nonumber
\\ & + \left(8 C^r_{14}-\frac{8}{3} C^r_{15}+24 C^r_{16}+8 C^r_{17}+\frac{64}{3} C^r_{18}\right) m_{\pi}^4 \end{aligned}$$ and the model independent $\mathcal{O}(p^6)$ contribution can be subdivided as: $$\begin{aligned} \left( F_{\eta} \right)^{(6)}_{loop} = d_{sunset}^{\eta} + d_{log \times log}^{\eta} + d_{log}^{\eta} + d_{log \times L_i}^{\eta} + d_{L_i}^{\eta} + d_{L_i \times L_j}^{\eta}\end{aligned}$$ where $d_{log}^{\eta}$ represents the terms containing the chiral logarithms: $$\begin{aligned} (16 \pi^2) d_{log}^{\eta} =& \left( \frac{3}{8} l^r_{\pi} + \frac{3}{2} l^r_K - \frac{4363}{1944} l^r_{\eta} \right) m_K^2 m_{\pi}^2 + \left(\frac{16631}{1944} l^r_{\eta} + \frac{17}{24} l^r_K \right) m_K^4 + \left(\frac{3713}{7776} l^r_{\eta} + \frac{47}{32} l^r_{\pi} \right) m_{\pi}^4\end{aligned}$$ The $d_{log \times log}^{\eta}$ term refers to the collection of bilinear chiral log terms: $$\begin{aligned} (4 m_K^2-m_{\pi}^2) d_{log \times log}^{\eta} =& -\frac{1}{4} \left(\frac{23}{6} (l^r_{\eta})^2 - \frac{167}{3} l^r_{\eta} l^r_K + \frac{43}{3} (l^r_K)^2 - 93 l^r_K l^r_{\pi} - \frac{99}{2} (l^r_{\pi})^2 \right) m_K^4 m_{\pi}^2 \nonumber \\ & + \frac{1}{3} \left(\frac{71}{2} (l^r_{\eta})^2 - 119 l^r_{\eta} l^r_K + \frac{191}{2} (l^r_K)^2 \right) m_K^6 + \frac{1}{8} \bigg( (l^r_{\eta})^2+9 (l^r_{\pi})^2 \bigg) m_{\pi}^6 \nonumber \\ & - \left( (l^r_{\eta})^2 + l^r_{\eta} l^r_K + (l^r_K)^2+6 l^r_K l^r_{\pi}+\frac{15}{2} (l^r_{\pi})^2 \right) m_K^2 m_{\pi}^4 \end{aligned}$$ and $d_{L_i}^{\eta}$ are those terms proportional to the low energy constants $L_i$: $$\begin{aligned} 9 \left(16 \pi ^2\right) d_{L_i}^{\eta} &= 8(2 L^r_{1}+2 L^r_{2}+L^r_{3}) m_K^2 m_{\pi}^2 - (2 L^r_{1}+29 L^r_{2}+10 L^r_{3}) m_{\pi}^4 - (32 L^r_{1}+68 L^r_{2}+19 L^r_{3}) m_K^4 \label{dLi}\end{aligned}$$ while bilinears in the LECs are given by $d_{L_i \times L_j}^{\eta}$: $$\begin{aligned} d_{L_i \times L_j}^{\eta} =& \left(224 (L^r_{4})^2+192 L^r_{4} L^r_{5}-256 L^r_{4} L^r_{6}-128 L^r_{4} 
L^r_{8}+\frac{256}{9} (L^r_{5})^2-\frac{512}{3} L^r_{5} L^r_{6}-\frac{256}{3} L^r_{5} L^r_{8}\right) m_K^4 \nonumber \\ & + \left(56 (L^r_{4})^2 + 48 L^r_{4} L^r_{5}-64 L^r_{4} L^r_{6}-64 L^r_{4} L^r_{8} - \frac{200}{9} (L^r_{5})^2 + \frac{64}{3} L^r_{5} L^r_{6} + \frac{64}{3} L^r_{5} L^r_{8} \right) m_{\pi}^4 \nonumber \\ & + \left(224 (L^r_{4})^2+96 L^r_{4} L^r_{5}-256 L^r_{4} L^r_{6}+\frac{448}{9} (L^r_{5})^2 - \frac{128}{3} L^r_{5} L^r_{6} \right) m_K^2 m_{\pi}^2\end{aligned}$$ $d_{log \times L_i}^{\eta}$ are those terms that contain a product of the low energy constants and a chiral log: $$\begin{aligned} d_{log \times L_i}^{\eta} &= \left\{ \frac{4}{3} \left(2 L^r_{1}+2 L^r_{2}+L^r_{3}-L^r_{4}-\frac{2}{3} L^r_{5} \right) l^r_{\eta} + 4 \left(12 L^r_{1}+3 L^r_{2}+3 L^r_{3}-11 L^r_{4}+\frac{2}{3} L^r_{5} \right) l^r_{\pi} \right\} m_{\pi}^4 \nonumber \\ & - \left\{ \frac{32}{3} \left(2 L^r_{1}+2 L^r_{2}+L^r_{3}-L^r_{4}-\frac{2}{3} L^r_{5}\right) l^r_{\eta} + 4 \left(5 L^r_{4}+\frac{13}{3} L^r_{5}\right) l^r_K + \frac{32}{3} (3 L^r_{4}+2 L^r_{5}) l^r_{\pi} \right\} m_K^2 m_{\pi}^2 \nonumber \\ & + \left\{ \frac{64}{3} \left(2 L^r_{1}+2 L^r_{2}+L^r_{3}-L^r_{4}-\frac{2}{3} L^r_{5} \right) l^r_{\eta} + 4 (16 L^r_{1}+4 L^r_{2}+7 L^r_{3}-18 L^r_{4}-4 L^r_{5}) l^r_K \right\} m_K^4\end{aligned}$$ The contributions from the sunset integrals $d_{sunset}^{\eta}$ can in turn be expressed as: $$\begin{aligned} \left(4 m_K^2-m_{\pi}^2\right) d^{\eta}_{sunset} &= \frac{1}{\left(16 \pi ^2\right)^2} \bigg\{ \left(\frac{65765}{3888}+\frac{59 \pi ^2}{36}\right) m_K^6-\left(\frac{13465}{1728}+\frac{47 \pi ^2}{96}\right) m_K^4 m_{\pi}^2 \nonumber \\ & -\left(\frac{3377}{5184}-\frac{3 \pi ^2}{8}\right) m_K^2 m_{\pi}^4+\left(\frac{46099}{62208}-\frac{\pi ^2}{96}\right) m_{\pi}^6 \bigg\} + d_{\eta \pi \pi}^{\eta} + d_{\eta K K}^{\eta} + d_{\pi K K}^{\eta}\end{aligned}$$ where the contributions in the square brackets come from a combination of the single mass scale sunsets and the free 
terms (i.e. those not involving a chiral log, a low energy constant, or arising from a sunset diagram) of $\mathcal{O}(p^6)$, and where: $$\begin{aligned} d_{\eta \pi \pi}^{\eta} &= \left(\frac{1}{3}m_K^2 m_{\pi}^4 - \frac{1}{12} m_{\pi}^6 \right) \overline{H}^\chi_{2\eta \pi \pi} - \frac{1}{4} m_{\pi}^4 \overline{H}^\chi_{\eta \pi \pi}\end{aligned}$$ $$\begin{aligned} d_{\eta K K}^{\eta} = \left(-\frac{479}{48} m_K^4 - \frac{17}{12} m_K^2 m_{\pi}^2 + \frac{1}{16} m_{\pi}^4 \right) \overline{H}^\chi_{\eta K K} + \left(\frac{173}{9} m_K^6 - \frac{23}{12} m_K^4 m_{\pi}^2 - \frac{19}{18} m_K^2 m_{\pi}^4 + \frac{1}{12}m_{\pi}^6 \right) \overline{H}^\chi_{2\eta K K}\end{aligned}$$ $$\begin{aligned} d_{\pi K K}^{\eta} &= \left(\frac{87}{16} m_K^4 + \frac{1}{4} m_K^2 m_{\pi}^2 + \frac{5}{16} m_{\pi}^4 \right) \overline{H}^\chi_{\pi K K} + \left(\frac{3}{4} m_K^4 m_{\pi}^2 - 4 m_K^2 m_{\pi}^4 + \frac{1}{4} m_{\pi}^6 \right) \overline{H}^\chi_{2\pi K K} \nonumber \\ & + \left(-\frac{33}{2} m_K^6 + \frac{5}{2} m_K^4 m_{\pi}^2 - m_K^2 m_{\pi}^4\right) \overline{H}^\chi_{\pi 2K K} \end{aligned}$$ Approximate Results for the Three Mass Sunsets \[Sec:ApproxSunsets\] ==================================================================== We now present truncated results that numerically agree to within 1% of the full results of the master integrals given in Appendix \[Sec:SunsetResults\] for much of the range of masses we are interested in. These partial sums have been obtained with the help of an ancillary `Mathematica` file, called `truncation.nb`, provided with this paper. In this file, one chooses as inputs the numerical values of the meson masses and the maximum acceptable truncation error, and the file outputs the corresponding partial sums for each of the master integrals. The truncation procedure that we use does not follow from a rigorous asymptotic analysis.
Our aim here is rather to give simplified formulas that may be used in numerical simulations to save CPU time, mainly for interested lattice practitioners. To obtain the simplified expressions, we use a simple criterion: for a given set of numerical values of the pseudo-scalar masses, in each of the different contributions of Eqs.(\[Eq:Hkpe\])-(\[Eq:Hp2kk\]) we keep only the terms that are larger than $10^{-p}$, with $p\geq1$ incremented until we achieve the precision goal, defined by the numerical difference between the corresponding partial sum and the sum of the first hundreds[^4] of terms, the latter being taken as the ‘exact’ value (note that the very small uncertainties on the pseudo-scalar masses are neglected in this procedure). This way of obtaining truncations implies, of course, that for sufficiently different sets of pseudo-scalar masses one gets non-identical simplified expressions for the master integrals. This, however, does not detract from their numerical utility. The truncated results presented below have been tested for all the sets of meson masses presented in the lattice study of [@Durr:2016ulb], and for the majority of these mass values the truncated expressions give results that are accurate to within $1\%$ of the exact value. The numerical implications and accuracy of these approximate results are studied in more detail in Section \[Sec:NumAnalysis\].
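The truncation criterion described above can be sketched schematically as follows. This is a minimal Python illustration of the $10^{-p}$ criterion applied to a generic convergent series; the function name `truncate_series` and the toy exponential series are ours, and are not part of the ancillary `truncation.nb` file:

```python
import math

def truncate_series(terms, exact, rel_goal=0.01, p_max=16):
    """Keep only terms with |term| > 10**-p, incrementing p until the
    partial sum matches the reference ('exact') value to within rel_goal.
    `terms` holds the individual series terms, already evaluated at the
    chosen meson masses; `exact` is the reference sum."""
    for p in range(1, p_max + 1):
        kept = [t for t in terms if abs(t) > 10.0 ** (-p)]
        partial = sum(kept)
        if abs(partial - exact) <= rel_goal * abs(exact):
            return kept, partial, p
    # precision goal not reached: fall back to the full sum
    return terms, sum(terms), p_max

# Toy example: sum_{n>=0} x^n / n! = e^x, a rapidly converging series.
x = 0.3
terms = [x**n / math.factorial(n) for n in range(100)]
exact = sum(terms)  # 'exact' value: sum of the first hundred terms
kept, partial, p = truncate_series(terms, exact, rel_goal=0.01)
assert abs(partial - exact) <= 0.01 * abs(exact)
assert len(kept) < len(terms)  # only a handful of terms survive
```

As in the procedure of the text, different input masses (here, different `x`) generally lead to different sets of retained terms.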
Truncated kaon sunsets ---------------------- $$\begin{aligned} & \overline{H}^{\chi}_{K \pi \eta} \approx \frac{m_{K}^2}{512\pi ^4} \Bigg\{ \frac{5 \pi^2}{6} -\frac{1}{4} - \frac{7}{4}\left(\frac{m_{\eta}^4}{m_{K}^4}+\frac{m_{\pi}^4}{m_{K}^4}\right) + \left(1-\frac{\pi^2}{2}\right)\left(\frac{m_{\eta}^2}{m_{K}^2}+\frac{m_{\pi}^2}{m_{K}^2}\right) + \frac{1}{2} \frac{m_{\eta}^4}{m_{K}^4} \log\left[\frac{m_{\eta}^2}{m_{K}^2}\right] \nonumber \\ & \quad +\frac{m_{\pi}^2 m_{\eta}^2}{m_{K}^4} \left(7+\frac{2 \pi^2}{3}-2 \log\left[\frac{m_{\eta}^2}{m_{K}^2}\right]-2 \log\left[\frac{m_{\pi}^2}{m_{K}^2}\right]+\log\left[\frac{m_{\eta}^2}{m_{K}^2}\right] \log\left[\frac{m_{\pi}^2}{m_{K}^2}\right]\right) + \frac{1}{2} \frac{m_{\pi}^4}{m_{K}^4} \log\left[\frac{m_{\pi}^2}{m_{K}^2}\right] \nonumber \\ & \quad -\frac{m_{\pi}^2}{m_{K}^2} \log\left[\frac{m_{\pi}^2}{m_{K}^2}\right]^2-\frac{m_{\eta}^2}{m_{K}^2} \log\left[\frac{m_{\eta}^2}{m_{K}^2}\right]^2 + \frac{8 \pi }{3} \frac{m_{\eta}^3}{m_{K}^3} {}_2F_1 \bigg[ \begin{array}{c} -\frac{1}{2},\frac{1}{2} \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_\eta^2}{4 m_{K}^2} \bigg] +\frac{1}{36}\frac{m_{\eta}^6}{m_{K}^6} {}_3F_2 \bigg[ \begin{array}{c} 1,1,2 \\ \frac{5}{2},4 \\ \end{array} \bigg| \frac{m_\eta^2}{4 m_K^2} \bigg] \nonumber \\ & \quad + \frac{1}{6} \frac{m_{\pi}^2 m_{\eta}^2}{m_K^4} \left( \log \left[\frac{m_{\eta}^2}{4 m_K^2}\right] + \log \left[\frac{m_{\pi}^2}{4 m_K^2}\right] \right) \left( \frac{m_{\eta}^2}{m_K^2} {}_2F_1 \bigg[ \begin{array}{c} 1,1 \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] + \frac{m_{\pi}^2}{m_K^2} {}_2F_1 \bigg[ \begin{array}{c} 1,1 \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_\pi^2}{4m_K^2} \bigg] \right) \nonumber \\ & \quad - \frac{15\pi}{512} \frac{m_{\pi}^4 m_{\eta}^3}{m_{K}^7} \left(\log \left[\frac{m_{\pi}^2}{16 m_{\eta}^2}\right]+\frac{13}{6}\right) - \frac{1}{20} \frac{m_{\pi}^4 m_{\eta}^4}{m_{K}^8} \left(\frac{37}{15}-\log 
\left[\frac{m_{\eta}^2}{m_{K}^2}\right]-\log \left[\frac{m_{\pi}^2}{m_{K}^2}\right] \right) \nonumber \\ & \quad - \frac{\pi}{4} \frac{m_{\pi}^2 m_{\eta}^3}{m_{K}^5} \left(\log \left[\frac{m_{\pi}^2}{16 m_{\eta}^2}\right] + \frac{11}{3}\right) + \frac{1}{12} \frac{m_{\pi}^2 m_{\eta}^2}{m_{K}^4} \left(\frac{m_{\eta}^2}{m_{K}^2}+\frac{m_{\pi}^2}{m_{K}^2}\right) \left(5-8 \gamma-4\psi \left[\frac{5}{2}\right] \right) \nonumber \\ & \quad + 2 \pi \frac{m_{\pi}^2 m_{\eta}}{m_{K}^3} \left(\log \left[\frac{m_{\pi}^2}{16 m_{\eta}^2}\right]+1\right) + \frac{\pi}{32} \frac{ m_{\pi}^4}{m_{K}^4} \left(8 \frac{m_{K}}{m_{\eta}} + 3 \frac{m_{\eta}}{m_{K}}\right) \left(\frac{1}{2}-\log \left[\frac{m_{\pi}^2}{16 m_{\eta}^2}\right] \right) \Bigg\} \label{HkpeApprox}\end{aligned}$$ $$\begin{aligned} & \overline{H}^{\chi}_{2K\pi\eta} \approx \frac{1}{512\pi^4} \Bigg\{ \frac{5\pi^2}{6} -1 - \frac{m_{\eta}^2}{m_{K}^2} \left( 1+\frac{\pi^2}{3}+ \frac{1}{2} \log^2 \left[\frac{m_{K}^2}{m_{\eta}^2}\right] + \log \left[\frac{m_{K}^2}{m_{\eta}^2}\right] + \text{Li}_2 \left[1-\frac{m_{\pi}^2}{m_{\eta}^2}\right] \right) \nonumber \\ & \quad - \frac{m_{\pi}^2}{m_{K}^2} \left( 1+\frac{\pi^2}{3} - \log \left[\frac{m_{\pi}^2}{m_{K}^2}\right] - \log \left[\frac{m_{K}^2}{m_{\eta}^2}\right]\log \left[\frac{m_{\pi}^2}{m_{K}^2}\right] - \text{Li}_2 \left[1-\frac{m_{\pi}^2}{m_{\eta}^2}\right] \right) + \frac{2\pi}{3} \frac{m_{\eta}^3}{m_K^3} {}_2F_1 \bigg[ \begin{array}{c} \frac{1}{2},\frac{1}{2} \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] \nonumber \\ & \quad - \frac{1}{4} \frac{m_{\eta }^4}{m_K^4} {}_3F_2 \bigg[ \begin{array}{c} 1,1,1 \\ \frac{3}{2},3 \\ \end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] + \frac{1}{2} \frac{m_{\pi}^2 m_{\eta}^2}{m_{K}^4} \left( 4 - \log\left[ \frac{m_{\pi}^2}{m_{K}^2} \right] + \log\left[ \frac{m_{K}^2}{m_{\eta}^2} \right] \right) + \frac{\pi}{2} \frac{m_{\pi}^2 m_{\eta}}{m_{K}^3} \left( 1 + \log \left[ \frac{m_{\pi}^2}{16 m_{\eta}^2} \right] 
\right) \nonumber \\ & \quad + \frac{1}{60} \frac{m_{\pi}^2 m_{\eta}^6}{m_{K}^8} \left( \frac{4}{5} - \log\left[ \frac{m_{\pi}^2}{m_{K}^2} \right] + \log\left[ \frac{m_{K}^2}{m_{\eta}^2} \right] \right) + \frac{1}{12} \frac{m_{\pi}^2 m_{\eta}^4}{m_{K}^6} \left( \frac{11}{6} - \log\left[ \frac{m_{\pi}^2}{m_{K}^2} \right] + \log \left[ \frac{m_{K}^2}{m_{\eta}^2} \right] \right) \nonumber \\ & \quad + \frac{\pi}{16} \frac{m_{\pi}^2 m_{\eta}^3}{m_{K}^5} \left( \frac{11}{3} + \log \left[ \frac{m_{\pi}^2}{16 m_{\eta}^2} \right] \right) + \frac{\pi}{16} \frac{m_{\pi}^4}{m_{K}^3 m_{\eta}} \left( \frac{1}{2} - \log \left[ \frac{m_{\pi}^2}{16 m_{\eta}^2} \right] \right) \Bigg\} \label{H2kpeApprox}\end{aligned}$$ $$\begin{aligned} & \overline{H}^{\chi}_{K 2\pi \eta} \approx \frac{1}{512\pi ^4} \Bigg\{ - 1 - \frac{\pi^2}{2} - 2\log \left[ \frac{m_{\pi}^2}{m_{K}^2} \right] -\log^2 \left[\frac{m_{\pi}^2}{m_{K}^2} \right] -\frac{m_{\pi}^2}{m_{K}^2} \left(3-\log \left[ \frac{m_{\pi}^2}{m_{K}^2}\right] \right) \nonumber \\ & \quad + \frac{m_{\eta}^2}{m_{K}^2} \left( 5 + \frac{2 \pi^2}{3} - 2 \log \left[ \frac{m_{\pi}^2}{m_{K}^2} \right] + \log \left[ \frac{m_{K}^2}{m_{\eta}^2}\right] - \log \left[ \frac{m_{K}^2}{m_{\eta}^2}\right] \log \left[ \frac{m_{\pi}^2}{m_{K}^2}\right] \right) + \frac{1}{12} \frac{m_{\pi}^4}{m_{K}^4} {}_3F_2 \bigg[ \begin{array}{c} 1,1,2 \\ \frac{5}{2},3 \\ \end{array} \bigg| \frac{m_\pi^2}{4m_K^2} \bigg] \nonumber \\ & \quad + \frac{1}{6} \frac{m_{\eta}^4}{m_{K}^4} \left( 2\gamma_E + \log \left[ \frac{m_{\eta}^2}{4 m_{K}^2} \right] + \log \left[ \frac{m_{\pi}^2}{4 m_{K}^2} \right] \right) {}_2F_1 \bigg[ \begin{array}{c} 1,1 \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] + \frac{1}{3} \frac{m_{\eta}^4}{m_{K}^4} \left(\frac{5}{4} - 2 \gamma -\psi\left[\frac{5}{2}\right]\right) \nonumber \\ & \quad + \frac{1}{3} \frac{m_{\pi}^2 m_{\eta}^2}{m_{K}^4} \left( \log \left[ \frac{m_{\eta}^2}{4 m_{K}^2} \right] + \log \left[ \frac{m_{\pi}^2}{4 
m_{K}^2} \right] \right) {}_3F_2 \bigg[ \begin{array}{c} 1,1,3 \\ 2,\frac{5}{2} \\ \end{array} \bigg| \frac{m_\pi^2}{4m_K^2} \bigg] - \frac{2}{3} \frac{m_{\pi}^2 m_{\eta}^2}{m_{K}^4} \left(\frac{7}{6}+\gamma -\log [4] \right) \nonumber \\ & \quad - \frac{15 \pi}{256} \frac{m_{\pi}^2 m_{\eta}^3}{m_{K}^5}\left(\log \left[ \frac{m_{\pi}^2}{16 m_{\eta}^2}\right] + \frac{8}{3}\right) - \frac{\pi}{4} \frac{m_{\eta}^3}{m_{K}^3} \left(\log \left[ \frac{m_{\pi}^2}{16 m_{\eta}^2}\right] + \frac{14}{3}\right) - \frac{3 \pi}{32} \frac{m_{\pi}^4 }{m_{K} m_{\eta}^3} \left(\log \left[ \frac{m_{\pi}^2}{16 m_{\eta}^2} \right] + \frac{5}{3}\right) \nonumber \\ & \quad - \frac{\pi}{2} \frac{m_{\eta}}{m_{K}} \left(\frac{m_{\pi}^2}{m_{\eta}^2} + \frac{3}{8} \frac{m_{\pi}^2}{m_{K}^2}\right) \log \left[\frac{m_{\pi}^2}{16 m_{\eta}^2}\right] - \frac{1}{10} \frac{m_{\pi}^2 m_{\eta}^4}{m_{K}^6} \left( \frac{59}{30}-\log \left[ \frac{m_{\eta}^2}{m_{K}^2} \right] - \log \left[ \frac{m_{\pi}^2}{m_{K}^2} \right] \right) \nonumber \\ & \quad - \frac{\pi}{64} \frac{m_{\eta}^5}{m_{K}^5} \left(\log \left[ \frac{m_{\pi}^2}{16 m_{\eta}^2} \right] + \frac{86}{15}\right) + 2 \pi \frac{m_{\eta}}{m_{K}} \left(\log \left[ \frac{m_{\pi}^2}{16 m_{\eta}^2}\right]+2 \right) \Bigg\} \label{Hk2peApprox}\end{aligned}$$ $$\begin{aligned} & \overline{H}^{\chi}_{K \pi 2\eta} \approx \frac{1}{512\pi^4} \Bigg\{ - 1 -\frac{\pi^2}{2} + 2 \log \left[ \frac{m_{K}^2}{m_{\eta}^2} \right] - \log^2 \left[\frac{m_{K}^2}{m_{\eta}^2}\right] + \pi \left(\frac{m_{\eta}^2}{m_{K}^2}\right)^{1/2} \left(4-\frac{m_{\eta}^2}{m_{K}^2}\right)^{1/2} - \frac{\pi m_{\pi}^2}{m_{\eta} m_{K}} \nonumber \\ & \quad + \frac{m_{\pi}^2}{m_{K}^2} \left( 5 + \frac{2 \pi ^2}{3} + 2 \log \left[\frac{m_{K}^2}{m_{\eta}^2}\right] - \log \left[\frac{m_{\pi}^2}{m_{K}^2}\right] - \log \left[\frac{m_{\pi}^2}{m_{K}^2}\right] \log \left[ \frac{m_{K}^2}{m_{\eta}^2} \right] \right) - \frac{m_{\eta}^2}{m_{K}^2} \left(3 + \log \left[\frac{m_{K}^2}{m_{\eta}^2}\right]
\right) \nonumber \\ & \quad + 2 \pi \frac{m_{\eta}}{m_K} {}_2F_1 \bigg[ \begin{array}{c} \frac{1}{2},\frac{1}{2} \\ \frac{3}{2} \\ \end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] + \frac{m_{\eta}^4}{12 m_{K}^4} {}_3F_2 \bigg[ \begin{array}{c} 1,1,2 \\ \frac{5}{2},3 \\ \end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] - \frac{1}{10} \frac{m_{\eta}^4 m_{\pi}^2}{m_{K}^6} \left(\frac{7}{30}+\gamma -\log (4)\right) \nonumber \\ & \quad -\frac{2}{3} \frac{m_{\pi}^2 m_{\eta}^2}{m_{K}^4} \left(\frac{7}{6}+\gamma -\log (4)\right) - \frac{3 \pi}{8} \frac{m_{\pi}^2 m_{\eta}}{m_{K}^3} \left(\log \left[\frac{m_{\pi}^2}{16 m_{\eta}^2}\right]+3\right) + \frac{\pi m_{\pi}^2}{m_{\eta} m_{K}} \log \left[\frac{m_{\pi}^2}{16 m_{\eta}^2}\right] \nonumber \\ & \quad - \frac{5\pi}{128} \frac{m_{\pi}^2 m_{\eta}^3}{m_{K}^5} \left(\log \left[\frac{m_{\pi}^2}{16 m_{\eta}^2}\right] + \frac{13}{3}\right) + \frac{\pi}{16} \frac{m_{\pi}^4}{m_{\eta}^3 m_{K}} \left(2 \log \left[\frac{m_{\pi}^2}{16 m_{\eta}^2}\right]+3\right) \nonumber \\ & \quad + \frac{1}{3} \frac{m_\pi^2}{m_K^2} \frac{m_\eta^2}{m_K^2} {}_3F_2 \bigg[ \begin{array}{c} 1,1,3 \\ \frac{5}{2},2 \\ \end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] \left( 2\gamma - 1 + \log \left[ \frac{m_\eta^2}{4 m_K^2} \right] + \log \left[ \frac{m_\pi^2}{4 m_K^2} \right] \right) \Bigg\} \label{Hkp2eApprox}\end{aligned}$$ Truncated eta sunsets --------------------- $$\begin{aligned} & \overline{H}^\chi_{\pi K K} \approx \frac{m_\pi^2}{512\pi^4} \Bigg\{ \frac{\pi ^2}{6}-5-\log^2 \left[ \frac{m_\pi^2}{m_K^2} \right] + 4 \log \left[\frac{m_\pi^2}{m_K^2}\right] + \frac{m_\eta^2}{m_\pi^2} \left(\log \left[ \frac{m_K^2}{m_\eta^2}\right] + \frac{5}{4}\right) + \frac{m_K^2}{m_\pi^2} \left(6+\frac{\pi ^2}{3}\right) \nonumber \\ & + \frac{1}{3} \frac{m_\eta^2}{m_K^2} \left(\frac{7}{6}-\log \left[\frac{m_\pi^2}{m_K^2}\right] \right) + \frac{1}{10} \frac{m_\pi^2}{m_K^2} \frac{m_\eta^2}{m_K^2} \left(\frac{37}{30}-\log \left[ \frac{m_\pi^2}{m_K^2}\right] 
\right) - \frac{1}{18} \frac{m_\eta^2}{m_\pi^2} \frac{m_\eta^2}{m_K^2} {}_3F_2 \bigg[ \begin{array}{c} 1,1,2 \\ \frac{5}{2},4 \\ \end{array} \bigg| \frac{m_\eta^2}{4 m_K^2} \bigg] \nonumber \\ & + \frac{1}{3} \frac{m_\pi^2}{m_K^2} \left( \frac{8}{3}-\log [4] -{}_2F_1 \bigg[ \begin{array}{c} 1,1 \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_\pi^2}{4 m_K^2} \bigg] \log \left[ \frac{m_\pi^2}{4 m_K^2} \right] \right) \Bigg\} \label{HpkkApprox} \end{aligned}$$ $$\begin{aligned} & \overline{H}^\chi_{2\pi K K} \approx \frac{1}{512\pi^4} \Bigg\{2 \log \left[\frac{m_\pi^2}{m_K^2}\right]-\log ^2\left[\frac{m_\pi^2}{m_K^2}\right] + \frac{1}{3} \frac{m_\eta^2}{m_K^2} \left(\frac{1}{6}-\log \left[\frac{m_\pi^2}{m_K^2} \right] \right) - \frac{1}{30} \frac{m_\eta^4}{m_K^4} \left(\frac{19}{15} + \log \left[\frac{m_\pi^2}{m_K^2}\right] \right) \nonumber \\ & \quad + \frac{2}{3} \frac{m_\pi^2}{m_K^2} \Bigg( \frac{8}{3}-\log [4] -\, {}_2F_1 \bigg[ \begin{array}{c} 1,1 \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_\pi^2}{4 m_K^2} \bigg] \left(\frac{1}{2} + \log \left[\frac{m_\pi^2}{4 m_K^2}\right] \right) \Bigg) + \frac{1}{5} \frac{m_\pi^2}{m_K^2} \frac{m_\eta^2}{m_K^2} \left(\frac{11}{15}-\log \left[\frac{m_\pi^2}{m_K^2}\right] \right) \nonumber \\ & \quad + \frac{1}{10} \frac{m_\pi^4}{m_K^4} \left( \frac{31}{15} - \log [4] -\frac{1}{3} {}_2F_1 \bigg[ \begin{array}{c} 2,2 \\ \frac{7}{2} \\ \end{array} \bigg| \frac{m_\pi^2}{4 m_K^2} \bigg] \log \left[\frac{m_\pi^2}{4 m_K^2}\right] \right) + \frac{5}{231} \frac{m_\pi^4}{m_K^4} \frac{m_\eta^6}{m_K^{6}} \left(\frac{757}{2772} - \log \left[ \frac{m_\pi^2}{m_K^2}\right] \right) \nonumber \\ & \quad - \frac{1}{210} \frac{m_\eta^6}{m_K^6} - \frac{1}{63} \frac{m_\pi^2}{m_K^2} \frac{m_\eta^6}{m_K^6} \left(\frac{79}{630} + \log \left[\frac{m_\pi^2}{m_K^2}\right] \right) + \frac{1}{21} \frac{m_\pi^4}{m_K^4} \frac{m_\eta^4}{m_K^4} \left(\frac{223}{315}-\log \left[\frac{m_\pi^2}{m_K^2}\right] \right) \nonumber \\ & \quad - \frac{2}{35} 
\frac{m_\pi^2}{m_K^2} \frac{m_\eta^4}{m_K^4} \left(\frac{9}{140} + \log \left[\frac{m_\pi^2}{m_K^2}\right] \right) + \frac{3}{35} \frac{m_\eta^2}{m_K^2} \frac{m_\pi^4}{m_K^4} \left(\frac{533}{420}-\log \left[\frac{m_\pi^2}{m_K^2}\right] \right) + \frac{\pi ^2}{6} - 3 \Bigg\} \label{H2pkkApprox}\end{aligned}$$ $$\begin{aligned} & \overline{H}^\chi_{\pi 2K K} \approx \frac{1}{512\pi^4} \Bigg\{ 1 + \frac{\pi ^2}{6} - \frac{1}{10} \frac{m_\eta^2}{m_K^2} \frac{m_\pi^4}{m_K^4} \left(\frac{11}{15}-\log \left[ \frac{m_\pi^2}{m_K^2} \right]\right) + \frac{1}{60} \frac{m_\pi^6}{m_K^6} {}_2F_1 \bigg[ \begin{array}{c} 2,2 \\ \frac{7}{2} \\ \end{array} \bigg| \frac{m_\pi^2}{4 m_K^2} \bigg] \log \left[\frac{m_\pi^2}{4 m_K^2}\right] \nonumber \\ & - \frac{3}{70} \frac{m_\pi^4}{m_K^8} \frac{m_\eta^4}{m_K^4} \left(\frac{43}{420}-\log \left[ \frac{m_\pi^2}{m_K^2}\right] \right) + \frac{1}{30} \frac{m_\pi^2}{m_K^2} \frac{m_\eta^4}{m_K^4} \left(\frac{23}{30} + \log \left[ \frac{m_\pi^2}{m_K^2}\right] \right) - \frac{2}{63} \frac{m_\eta^4}{m_K^4} \frac{m_\pi^6}{m_K^6} \left(\frac{223}{315}-\log \left[ \frac{m_\pi^2}{m_K^2} \right] \right) \nonumber \\ & - \frac{3}{70} \frac{m_\eta^2}{m_K^2} \frac{m_\pi^6}{m_K^6} \left(\frac{533}{420}-\log \left[ \frac{m_\pi^2}{m_K^2} \right] \right) - \frac{1}{6} \frac{m_\pi^2}{m_K^2}\frac{m_\eta^2}{m_K^2} \left(\frac{1}{6} - \log \left[ \frac{m_\pi^2}{m_K^2} \right] \right) + \frac{1}{2} \frac{m_\eta^2}{m_K^2} {}_3F_2 \bigg[ \begin{array}{c} 1,1,1 \\ \frac{3}{2},3 \\ \end{array} \bigg| \frac{m_\eta^2}{4 m_K^2} \bigg] \nonumber \\ & - \frac{1}{6} \frac{m_\pi^4}{m_K^4} \Bigg( \frac{8}{3} - \log [4] - \left( 1 + \log \left[\frac{m_\pi^2}{4 m_K^2}\right] \right) {}_2F_1 \bigg[ \begin{array}{c} 1,1 \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_\pi^2}{4 m_K^2} \bigg] \Bigg) - \frac{m_\pi^2}{m_K^2} \left(2-\log \left[ \frac{m_\pi^2}{m_K^2} \right] \right) \Bigg\} \label{Hp2kkApprox} \end{aligned}$$ Numerical analysis \[Sec:NumAnalysis\] 
====================================== Several numerical analyses are performed in this section. We first determine the relative contribution of the various classes of terms making up the NNLO piece of $m_K$, $F_K$, $m_\eta$ and $F_\eta$, while also examining the difference that arises from using the GMO-simplified expressions as opposed to the physical ones. Next, by means of numerical tests, we justify the use of the truncated sunset expressions of Section \[Sec:ApproxSunsets\] instead of the exact expressions of Appendix \[Sec:SunsetResults\] in potential studies involving fits with lattice data. In both these studies, we do not provide uncertainties, as the numerics are comparative rather than absolute in nature. In the last part of this section, we compute values for $m_K$, $F_K$, $m_\eta$ and $F_\eta$ using our expressions, with physical meson mass values as inputs. Since our aim in this last part is to provide numbers that can be used to check our expressions, rather than to present new and carefully recalculated values of the $m_P$ and $F_P$, and in keeping with the convention used in [@Bijnens:2014lea], with whose values our own are compared, we give only central values for the calculated quantities. Breakup of the contributions ---------------------------- We begin by giving a numerical breakup of the various terms that make up the masses and decay constants to show their relative contributions. As the expressions given earlier in this paper are ‘renormalized’ ones, we can directly substitute physical values for the meson masses and the pion decay constant into them; the resulting error is of $\mathcal{O}(p^8)$. Table \[TableContrib\] gives numerical values for the various components of the two-loop contributions to the kaon and eta masses and decay constants for two different sets of values of the LECs, i.e.
the free fit and the BE14 fit, obtained from continuum fits at NNLO, the results of which are summarized in Ref.[@Bijnens:2014lea]. These numbers have been obtained by using the full $m_{\eta}^2$ dependent expressions (i.e. those that have not been simplified by use of the GMO relation), and by summing the first 1000 terms of the single series, and the first 10000 terms of the double series, of the expressions given in Appendix \[Sec:SunsetResults\] for the three mass scale sunsets. --------- ---------- -- -- -- ------------ ----------- ----------- ----------- \[2ex\] Free Fit $-2.8763$ $0.1178$ $-0.3124$ $3.3396$ BE14 $-4.3794$ $0.2768$ $0.0665$ $2.3745$ Free Fit $18.3342$ $-0.2398$ $3.1301$ $14.4631$ BE14 $15.0591$ $-0.5637$ $1.2018$ $8.9358$ Free Fit $-7.1642$ $0.2018$ $-1.1207$ $3.5228$ BE14 $-10.2093$ $0.3845$ $-0.6144$ $1.1668$ Free Fit $18.3342$ $-0.2398$ $3.1301$ $14.4631$ BE14 $15.0591$ $-0.5637$ $1.2018$ $8.9358$ --------- ---------- -- -- -- ------------ ----------- ----------- ----------- : Contribution (in units of $10^{-6}$) of NNLO component terms to $m^2_P$ and $F_P$. The inputs are $m_{\pi} = m_{\pi^0} = 0.1350$, $m_K = m_K^{\text{avg}} = 0.4955$, $m_{\eta} = 0.5479$ and $F_{\pi} = F_{\pi\text{ phys}} = 0.0922$, all in GeV. The renormalization scale $\mu = 0.77$ GeV.[]{data-label="TableContrib"} For the kaon mass, we see that the largest contributions arise from the pure log term and the pure sunset contributions. The contribution from the terms involving both the chiral logs and the low energy constants is also large, but its negative sign serves to reduce the total rather than augment it. The contribution of the bilinear log terms is also substantial. The large uncertainty on the $L_i$, however, means that the contributions of both the $L_i \times L_j$ and the $\log \times L_i$ terms to the full loop contribution may differ significantly from what their central values suggest.
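The simple partial-summation strategy used for these numbers (1000 terms of the single series, 10000 terms of the double series) converges very quickly because the expansion variables of the sunset series are small. This can be illustrated with a minimal Python sketch; for brevity we use the elementary identity ${}_2F_1(1,1;2;z) = -\log(1-z)/z$ as a stand-in for the series of Appendix \[Sec:SunsetResults\], with an expansion variable of the same type, $z = m_\pi^2/4m_K^2 < 1$ (the function `hyp2f1_partial` is our illustrative helper, not code from the paper):

```python
import math

def hyp2f1_partial(a, b, c, z, n_terms):
    """Partial sum of the Gauss series for 2F1(a,b;c;z):
    sum_n (a)_n (b)_n / ((c)_n n!) z^n, using the term recursion."""
    total, term = 0.0, 1.0
    for n in range(n_terms):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1.0)) * z
    return total

# Expansion variable of the truncated sunsets, with the mass inputs of
# Table TableContrib: z = (m_pi / 2 m_K)^2.
z = (0.1350 / (2 * 0.4955)) ** 2
exact = -math.log(1.0 - z) / z          # closed form of 2F1(1,1;2;z)
approx = hyp2f1_partial(1.0, 1.0, 2.0, z, 1000)  # first 1000 terms
assert abs(approx - exact) < 1e-12      # machine-precision agreement
```

For $z \approx 0.02$, a few terms already suffice; the large term counts quoted above are a safe overkill.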
  --------- ----------- ----------- ----------- ------------ ----------- ----------- -----------
  Physical   $2.4100$    $0.9420$    $3.0586$    $-4.3794$    $0.2768$    $2.3745$
  GMO        $2.5102$    $0.9289$    $3.0225$    $-4.0554$    $0.0587$    $2.5313$
  Physical   $-1.2220$   $1.7648$    $-7.3042$   $15.0591$    $-0.5637$   $8.9358$
  GMO        $-1.2939$   $1.7698$    $-7.2988$   $14.3140$    $0.4358$    $9.1287$
  Physical   $4.1105$    $1.5896$    $5.9059$    $-10.2093$   $0.3845$    $-0.6144$   $1.1668$
  GMO        $4.8962$    $1.3989$    $5.9110$    $-9.7738$    $0.9473$    $0.9980$    $4.3775$
  --------- ----------- ----------- ----------- ------------ ----------- ----------- -----------

  -------------------------------------------- ------------- ------------- -------------
                                                Physical      GMO           Lattice
  -------------------------------------------- ------------- ------------- -------------
  $\overline{H}^{\chi}_{K \pi \eta}$            $50.1058$     $52.3059$     $52.6996$
  $\overline{H}^{\chi}_{2K \pi \eta}$           $47.1145$     $43.9569$     $25.3240$
  $\overline{H}^{\chi}_{K 2\pi \eta}$           $-258.6990$   $-264.8280$   $-37.7974$
  $\overline{H}^{\chi}_{K \pi 2\eta}$           $63.0648$     $65.3259$     $38.1248$
  $c^K_{K \pi \eta}$                            $3.0345$      $3.1439$      $3.7614$
  $d^K_{K \pi \eta}$                            $-2.3367$     $-2.2692$     $6.3472$
  $c^K_{sunsets}$                               $2.4100$      $2.5102$      $4.1692$
  $d^K_{sunsets}$                               $-1.2220$     $-1.2939$     $-1.1516$
  $\overline{H}^{\chi}_{\pi K K}$               $44.7862$     $44.7750$     $49.4563$
  $\overline{H}^{\chi}_{2\pi K K}$              $-236.5110$   $-234.5361$   $-29.5042$
  $\overline{H}^{\chi}_{\pi 2K K}$              $58.2355$     $59.1524$     $32.2094$
  $c^\eta_{\pi K K}$                            $4.0771$      $4.0273$      $4.7771$
  $d^\eta_{\pi K K}$                            $0.1336$      $0.3386$      $11.6803$
  $c^\eta_{sunsets}$                            $4.1105$      $4.8962$      $6.0683$
  $d^\eta_{sunsets}$                            $-1.1654$     $-1.6868$     $-1.8894$
  -------------------------------------------- ------------- ------------- -------------

  : Tables \[TableA\] and \[TableB\]: The inputs for the physical and GMO case are the same as for Table \[TableContrib\]. The inputs for the lattice column are $m_{\pi} = 0.4023$ and $m_{K} = 0.5574$, both in GeV.

  -------------------------------- ------------- ------------- -------------
                                    Physical      GMO           Lattice
  -------------------------------- ------------- ------------- -------------
  $(m_K^2)^{(6)}_{loop}$            $0.0329$      $0.0350$      $0.0656$
  $(F_K^2)^{(6)}_{loop}$            $0.1237$      $0.1263$      $0.3305$
  $(m_K^2)^{(6)}_{CT}$              $-0.0437$     $-0.0437$     $-0.0276$
  $(F_K^2)^{(6)}_{CT}$              $0.0238$      $0.0238$      $-0.0097$
  $(m_\eta^2)^{(6)}_{loop}$         $0.0161$      $0.0606$      $0.0779$
  $(F_\eta^2)^{(6)}_{loop}$         $0.1888$      $0.1856$      $0.3678$
  $(m_\eta^2)^{(6)}_{CT}$           $-0.0115$     $-0.0115$     $0.0035$
  $(F_\eta^2)^{(6)}_{CT}$           $0.0009$      $0.0009$      $-0.0302$
  -------------------------------- ------------- ------------- -------------

  : Tables \[TableA\] and \[TableB\]: The inputs for the physical and GMO case are the same as for Table \[TableContrib\]. The inputs for the lattice column are $m_{\pi} = 0.4023$ and $m_{K} = 0.5574$, both in GeV.

In the case of the kaon decay constant, the largest contribution comes from the $\log \times L_i$ terms, and is an order of magnitude larger than the next-largest positive contributions, which come from the bilinear LEC and bilinear log terms. The linear chiral log terms and the pure sunset terms reduce the two-loop contribution due to their negative sign. As in the case of the kaon mass, the contribution of the $L_i \times L_j$ term may be significantly different due to the large uncertainty of its central value. Similarly, for both the eta mass and decay constant, the largest contribution in absolute terms comes from the $\log \times L_i$ terms. For the eta mass, though, the negative sign of this term reduces the $\log$ and sunset contributions, which have the next-largest values. For the eta decay constant, however, the $\log \times L_i$ term dominates the overall value of the $\mathcal{O}(p^6)$ contribution.
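The 'Physical' and 'GMO' columns above differ in whether the eta mass is taken at its physical value or fixed by the Gell-Mann-Okubo relation, $m_\eta^2 = (4 m_K^2 - m_\pi^2)/3$. As a quick numerical illustration of the size of the GMO approximation itself, the following sketch compares the GMO eta mass against the physical one (the masses used here are illustrative rounded values in GeV, not necessarily the precise inputs of the tables):

```python
import math

def m_eta_gmo(m_pi, m_K):
    """Eta mass from the Gell-Mann-Okubo relation m_eta^2 = (4 m_K^2 - m_pi^2) / 3."""
    return math.sqrt((4.0 * m_K**2 - m_pi**2) / 3.0)

# Illustrative rounded physical masses in GeV.
m_pi, m_K, m_eta_phys = 0.1348, 0.4957, 0.5479

m_eta = m_eta_gmo(m_pi, m_K)
rel_dev = abs(m_eta - m_eta_phys) / m_eta_phys
print(f"GMO eta mass: {m_eta:.4f} GeV, deviation from physical: {100 * rel_dev:.1f}%")
```

The GMO value overshoots the physical eta mass by a few per cent, which is the same order as the few-per-cent differences between the Physical and GMO columns of the tables.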
In Tables \[TablePhysVsGMOContrib\], \[TableA\] and \[TableB\], we justify the use of the GMO relation to obtain simplified expressions for the masses and decay constants. In all three tables, we see that for most calculated quantities the values obtained using the GMO masses differ from those obtained using the physical masses by at most around 4%, the exceptions being $\overline{H}_{2K \pi \eta}$, $c^\eta_{L_i}$, $c^\eta_{L_i \times L_j}$ (and consequently $(m_\eta^2)_{loop}^{(6)}$), and $d^\eta_{\pi K K}$ (and thus also $(d^\eta)_{sunsets}$). However, at the level of the total NNLO contribution, the difference is negligible for the kaon mass and small for the kaon and eta decay constants. The column labelled ‘lattice’ in Tables \[TableA\] and \[TableB\] gives values for the sunset integrals and the various components making up the NNLO contributions to $m^2_K$ and $\overline{F}_K$, using as input a particular set of meson masses from the lattice simulations of [@Durr:2016ulb]. The large divergence between the numbers obtained using the physical and GMO mass inputs on the one hand, and the lattice mass inputs on the other, demonstrates that lattice results must be used carefully when comparing with the expressions presented in this paper. Simplified expressions for three mass scale sunset results \[Sec:NumApproxSunsets\] ----------------------------------------------------------------------------------- We show here that the approximate expressions for the sunset integrals presented in Section \[Sec:ApproxSunsets\], obtained by truncating the infinite series at suitable points, are sufficiently precise for the purpose of fitting against the results of the lattice simulations presented in [@Durr:2016ulb; @Durr:2010hr]. Tables \[TableApprox1\] and \[TableApprox2\] show the results for three sets of mass inputs, all taken from [@Durr:2016ulb].
The ‘Lattice Low’ columns have as inputs $m_{\pi} = 0.1830$ GeV and $m_{K} = 0.4964$ GeV, values representative of the lower end of the range of masses used in [@Durr:2016ulb]. The ‘Lattice Mid’ columns have as inputs $m_{\pi} = 0.3010$ GeV and $m_{K} = 0.5625$ GeV, and the ‘Lattice High’ columns $m_{\pi} = 0.4023$ GeV and $m_{K} = 0.5574$ GeV, values representative of the middle and upper end, respectively, of that range. For each of these three sets of masses, the various quantities are calculated in two ways: using the exact values of the sunsets (as given by the results of Appendix \[Sec:SunsetResults\]), and using the approximate expressions for the sunsets (as given by Eqs.(\[HkpeApprox\])-(\[Hkp2eApprox\])). The results in these tables show that the deviation from the exact values is less than $1\%$ in all cases apart from $(m_K^2)^{(6)}_{loop}$ calculated using the ‘Lattice Low’ inputs. Indeed, the truncations were performed on the full expressions of the sunsets in such a manner that the numerical deviation of the approximations from the exact values is less than $1\%$ for the majority of the meson masses used in [@Durr:2016ulb]. More specifically, for $\overline{H}^{\chi}_{K \pi \eta}$, Eq.(\[HkpeApprox\]) differs from Eq.(\[Eq:Hkpe\]) by less than $0.5\%$ for all 47 sets of masses used in [@Durr:2016ulb]. For $\overline{H}^{\chi}_{2K \pi \eta}$, Eq.(\[H2kpeApprox\]) differs from Eq.(\[Eq:H2kpe\]) by more than $1\%$ for seven of these sets of masses, and by less than $0.4\%$ for 38 sets. And for both $\overline{H}^{\chi}_{K 2\pi \eta}$ and $\overline{H}^{\chi}_{K \pi 2\eta}$, the truncated results differ from the exact ones by more than $1\%$ for the same 3 sets of masses.
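The accuracy criterion used here, namely truncating a series so that its relative deviation from the exact value stays below 1% over the sampled mass range, can be checked mechanically. The truncated sunset expressions themselves are lengthy, so the sketch below applies the same validation procedure to a stand-in series (the Taylor expansion of $\log(1+\rho)$), chosen purely for illustration:

```python
import math

def rel_err(exact, approx):
    """Relative deviation of a truncated result from the exact one."""
    return abs(approx - exact) / abs(exact)

def log1p_truncated(rho, order):
    """Taylor series of log(1 + rho) truncated at O(rho^order)."""
    return sum((-1) ** (n + 1) * rho**n / n for n in range(1, order + 1))

# Check the 1% criterion over the range of interest, rho = m_pi^2 / m_K^2 <~ 0.5.
for rho in [0.1, 0.2, 0.3, 0.4, 0.5]:
    err = rel_err(math.log(1.0 + rho), log1p_truncated(rho, order=5))
    assert err < 0.01, (rho, err)
```

Truncating the stand-in series one order too early (at $\mathcal{O}(\rho^2)$) already violates the 1% criterion at $\rho = 0.5$, which mirrors why the truncation points of the sunset expressions had to be chosen with care.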
Similarly, for the eta sunsets, $\overline{H}^{\chi}_{\pi K K}$ differs from Eq.(\[HpkkApprox\]) by less than 1% for all sets of masses, and $\overline{H}^{\chi}_{2\pi K K}$ and $\overline{H}^{\chi}_{\pi 2K K}$ differ from Eq.(\[H2pkkApprox\]) and Eq.(\[Hp2kkApprox\]) by less than 1% for all but (the same) six sets of masses.

  -------------------------------------------- ------------- ------------- ------------- ------------- ------------- -------------
                                                Low approx.   Low exact     Mid approx.   Mid exact     High approx.  High exact
  -------------------------------------------- ------------- ------------- ------------- ------------- ------------- -------------
  $\overline{H}^{\chi}_{K \pi \eta}$            $49.1972$     $49.2763$     $57.3564$     $57.4264$     $52.6594$     $52.6996$
  $\overline{H}^{\chi}_{2K \pi \eta}$           $40.3584$     $40.3898$     $33.4005$     $33.5287$     $25.3936$     $25.3240$
  $\overline{H}^{\chi}_{K 2\pi \eta}$           $-181.192$    $-180.8920$   $-94.4140$    $-94.5730$    $-37.4788$    $-37.7974$
  $\overline{H}^{\chi}_{K \pi 2\eta}$           $60.6167$     $60.8187$     $51.0392$     $51.2868$     $37.9694$     $38.1248$
  $c^K_{K \pi \eta}$                            $2.9267$      $2.9300$      $5.1472$      $5.1522$      $3.7574$      $3.7614$
  $d^K_{K \pi \eta}$                            $-1.2676$     $-1.2730$     $1.6939$      $1.6774$      $6.3404$      $6.3472$
  $c^K_{sunsets}$                               $2.4126$      $2.4158$      $4.6864$      $4.6914$      $4.1651$      $4.1692$
  $d^K_{sunsets}$                               $-1.2508$     $-1.2562$     $-1.6999$     $-1.7164$     $-1.1584$     $-1.1516$
  $\overline{H}^{\chi}_{\pi K K}$               $42.5595$     $42.6486$     $51.1414$     $51.3158$     $49.1902$     $49.4563$
  $\overline{H}^{\chi}_{2\pi K K}$              $-157.3080$   $-157.1500$   $-79.1677$    $-79.2012$    $-29.5237$    $-29.5042$
  $\overline{H}^{\chi}_{\pi 2K K}$              $54.2419$     $54.1775$     $44.4467$     $44.3589$     $32.3957$     $32.2094$
  $c^\eta_{\pi K K}$                            $3.7206$      $3.7247$      $6.4170$      $6.4305$      $4.7598$      $4.7771$
  $d^\eta_{\pi K K}$                            $1.0047$      $1.0522$      $5.3347$      $5.4545$      $11.4664$     $11.6803$
  $c^\eta_{sunsets}$                            $4.5926$      $4.5967$      $8.1678$      $8.1813$      $6.0509$      $6.0683$
  $d^\eta_{sunsets}$                            $-1.8788$     $-1.8313$     $-2.9205$     $-2.8007$     $-2.1033$     $-1.8894$
  -------------------------------------------- ------------- ------------- ------------- ------------- ------------- -------------

  : Contribution (in units of $10^{-6}$) of various components to $m^2_P$ and $F_P$ for three sets of meson mass inputs from lattice simulations. For ‘Lattice Low’, $m_\pi=0.1830$ and $m_K=0.4964$; for ‘Lattice Mid’, $m_\pi=0.3010$ and $m_K=0.5625$; for ‘Lattice High’, $m_\pi=0.4023$ and $m_K=0.5574$; all in GeV.[]{data-label="TableApprox1"}

Figure \[FigHApproxError\] gives a graphical representation of the relative errors of the truncated sunset expressions over a range of values of $\rho$. The points on the curves are the specific mass points used in [@Durr:2016ulb]. It is seen that for values of $\rho \lesssim 0.5$, which covers the majority of the mass values in the simulation of [@Durr:2016ulb], the relative error is less than $1\%$.

![Relative errors of the truncated sunset kaon (left) and eta (right) integrals.[]{data-label="FigHApproxError"}](HkaonApproxError.eps){width="98.00000%"}    ![Relative errors of the truncated sunset kaon (left) and eta (right) integrals.[]{data-label="FigHApproxError"}](HetaApproxError.eps){width="98.00000%"}

  -------------------------------------- ------------- ------------- ------------- ------------- ------------- -------------
                                          Low approx.   Low exact     Mid approx.   Mid exact     High approx.  High exact
  -------------------------------------- ------------- ------------- ------------- ------------- ------------- -------------
  $(m_K^2)^{(6)}_{loop}$                  $0.0353$      $0.0353$      $0.0710$      $0.0711$      $0.0655$      $0.0656$
  $(\overline{F}_K^2)^{(6)}_{loop}$       $0.1536$      $0.1536$      $0.2559$      $0.2557$      $0.3304$      $0.3305$
  $(m_K^2)^{(6)}_{CT}$                    $-0.0384$     $-0.0384$     $-0.0560$     $-0.0560$     $-0.0276$     $-0.0276$
  $(\overline{F}_K^2)^{(6)}_{CT}$         $0.0183$      $0.0183$      $0.0108$      $0.0108$      $-0.0097$     $-0.0097$
  $(m_\eta^2)^{(6)}_{loop}$               $0.0217$      $0.0218$      $0.0544$      $0.0544$      $0.0776$      $0.0779$
  $(\overline{F}_\eta^2)^{(6)}_{loop}$    $0.2102$      $0.2109$      $0.3152$      $0.3169$      $0.3649$      $0.3678$
  $(m_\eta^2)^{(6)}_{CT}$                 $-0.0076$     $-0.0076$     $-0.0023$     $-0.0023$     $0.0035$      $0.0035$
  $(\overline{F}_\eta^2)^{(6)}_{CT}$      $-0.0034$     $-0.0034$     $-0.0197$     $-0.0197$     $-0.0302$     $-0.0302$
  -------------------------------------- ------------- ------------- ------------- ------------- ------------- -------------

  : Contribution (in units of $10^0$) of various components to $m^2_P$ and $F_P$ for three sets of meson mass inputs from lattice simulations. For ‘Lattice Low’, $m_\pi=0.1830$ and $m_K=0.4964$; for ‘Lattice Mid’, $m_\pi=0.3010$ and $m_K=0.5625$; for ‘Lattice High’, $m_\pi=0.4023$ and $m_K=0.5574$; all in GeV.[]{data-label="TableApprox2"}

Comparison with prior determinations ------------------------------------ In this section, we give numerical values for the quantities discussed in this paper in the form LO + NLO + NNLO, for both the BE14 and free fits. These have been calculated with the input parameters given under the tables of the previous section. We give the values calculated using our GMO-simplified expressions as well as those obtained from the full ones. ### $m^2_K$ Using the full expressions of Section \[Sec:KaonMass\] and the BE14 (free fit) LECs, we get the following values: $$\begin{aligned} \frac{m_{K}^2}{m_{K,\text{phys}}^2} & = \frac{m_{K0}^2}{m_{K,\text{phys}}^2} + \frac{\left(m_{K}^2 \right)^{(4)}}{m_{K,\text{phys}}^2} + \frac{\left(m_{K}^2\right)^{(6)}_{\text{loop}}}{m_{K,\text{phys}}^2} + \frac{\left(m_{K}^2\right)^{(6)}_{\text{CT}}}{m_{K, \text{phys}}^2} \nonumber \\ & = 1 -0.0690 (+0.0229) + 0.1338 (0.1882) -0.1779 (-0.2049)\end{aligned}$$ and using the GMO-simplified expressions: $$\begin{aligned} \frac{m_{K}^2}{m_{K,\text{phys}}^2} = 1 -0.0704 (+0.0215) + 0.1427(0.1959) -0.1779 (-0.2049)\end{aligned}$$ These numbers are close to the literature values [@Ecker:2013pba]: $$\begin{aligned} \left( \frac{m_{K}^2}{m_{K,\text{phys}}^2} \right)_{lit} & = 1.112(0.994) -0.069(+0.022) -0.043(-0.016)\end{aligned}$$ for the BE14 case; the free fit numbers agree less well, but are still compatible with them.
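The convergence pattern of such chiral expansions is conveniently exhibited as partial sums. The sketch below simply assembles the order-by-order numbers quoted above for $m_K^2/m_{K,\text{phys}}^2$ (full expressions) for the BE14 and free-fit LECs:

```python
# Order-by-order contributions to m_K^2 / m_K,phys^2 quoted in the text
# (full expressions): LO, NLO, NNLO loop, NNLO counterterm.
be14 = [1.0, -0.0690, 0.1338, -0.1779]
free = [1.0, +0.0229, 0.1882, -0.2049]

def partial_sums(terms):
    """Running totals LO, LO+NLO, ... rounded to 4 decimals."""
    out, total = [], 0.0
    for t in terms:
        total += t
        out.append(round(total, 4))
    return out

print("BE14:", partial_sums(be14))
print("free:", partial_sums(free))
```

The BE14 series shows the sizeable cancellation between the NNLO loop and counterterm pieces noted above.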
### $F_K$ For the kaon decay constant, using the BE14 (free fit) low energy constants and the expressions of Section \[Sec:KaonDecay\], we obtain: $$\begin{aligned} \frac{F_{K}}{F_{0}} &= 1 + \left(F_{K} \right)^{(4)} + \left(F_{K} \right)^{(6)}_{\text{loop}} + \left(F_{K} \right)^{(6)}_{\text{CT}} \nonumber\\ & = 1 + 0.3849(0.4355) + 0.1237(0.2001) + 0.0238(0.0422)\end{aligned}$$ Using the BE14 (free fit) low energy constants and the GMO-simplified expressions, we get: $$\begin{aligned} \frac{F_{K}}{F_{0}} = 1 + 0.3828 (0.4334) + 0.1263 (0.2012) + 0.0238 (0.0423)\end{aligned}$$ To obtain $F_{K}/F_{\pi}$, we use the expansion presented in [@Bijnens:2011tb]: $$\begin{aligned} \frac{F_K}{F_{\pi}} = 1 + \left( \frac{F_K}{F_0} \bigg|_{p^4} - \frac{F_{\pi}}{F_0} \bigg|_{p^4} \right)_{\text{NLO}} + \left( \frac{F_K}{F_0} \bigg|_{p^6} - \frac{F_{\pi}}{F_0} \bigg|_{p^6} - \frac{F_K}{F_0} \bigg|_{p^4} \frac{F_{\pi}}{F_0} \bigg|_{p^4} + \frac{F_{\pi}}{F_0} \bigg|^2_{p^4} \right)_{\text{NNLO}}\end{aligned}$$ and the values for $F_{\pi}/F_0$ calculated in [@Ananthanarayan:2017yhz]. We get: $$\begin{aligned} \frac{F_{K}}{F_{\pi}} = 1 + 0.1764 (0.1208) + 0.0226 (0.0769)\end{aligned}$$ using the full expressions and the BE14 (free fit) LEC values.
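The NNLO combination that converts the order-by-order pieces of $F_K/F_0$ and $F_\pi/F_0$ into $F_K/F_\pi$ can be encoded directly. A minimal sketch of that combination (the numerical arguments in the checks are invented small values, not the paper's results):

```python
def fk_over_fpi(fk4, fpi4, fk6, fpi6):
    """NNLO expansion of F_K / F_pi from the order-by-order pieces of
    F_K/F_0 and F_pi/F_0, following the combination quoted in the text."""
    nlo = fk4 - fpi4
    nnlo = fk6 - fpi6 - fk4 * fpi4 + fpi4**2
    return 1.0 + nlo + nnlo

# Sanity check: identical expansions give a ratio of exactly 1.
assert fk_over_fpi(0.1, 0.1, 0.02, 0.02) == 1.0

# For small illustrative inputs the combined expansion tracks the direct
# ratio (1 + fk4 + fk6) / (1 + fpi4 + fpi6) up to higher-order terms.
direct = (1 + 0.04 + 0.002) / (1 + 0.03 + 0.001)
assert abs(fk_over_fpi(0.04, 0.03, 0.002, 0.001) - direct) < 1e-4
```

The residual difference from the direct ratio is of yet higher chiral order, which is exactly what the re-expansion is designed to discard.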
The $F_K/F_{\pi}$ values above agree well with the numbers presented in [@Ecker:2013pba]: $$\begin{aligned} \left( \frac{F_{K}}{F_{\pi}} \right)_{lit} = 1 + 0.176(0.121) + 0.023(0.077)\end{aligned}$$ ### $m^2_\eta$ Using the full expressions of Section \[Sec:EtaMass\] and the BE14 (free fit) LECs, we get the following values: $$\begin{aligned} \frac{m_\eta^2}{m_{\eta,\text{phys}}^2} & = \frac{m_{\eta 0}^2}{m_{\eta,\text{phys}}^2} + \frac{\left(m_\eta^2 \right)^{(4)}}{m_{\eta,\text{phys}}^2} + \frac{\left(m_\eta^2 \right)^{(6)}_{\text{loop}}}{m_{\eta,\text{phys}}^2} + \frac{\left(m_\eta^2 \right)^{(6)}_{\text{CT}}}{m_{\eta, \text{phys}}^2} \nonumber \\ & = 1 -0.2126(-0.0736) +0.0538(0.1624) -0.0383(-0.1498)\end{aligned}$$ and using the GMO-simplified expressions: $$\begin{aligned} \frac{m_\eta^2}{m_{\eta,\text{phys}}^2} = 1 -0.2595(-0.1250) + 0.2018(0.2919) -0.0383(-0.1498)\end{aligned}$$ As with the kaon, the BE14 numbers are close to the literature values [@Ecker:2013pba], while the free fit numbers agree only mildly: $$\begin{aligned} \left( \frac{m_{\eta}^2}{m_{\eta,\text{phys}}^2} \right)_{lit} & = 1.197(0.938) -0.214(-0.076) +0.017(0.014)\end{aligned}$$ ### $F_\eta$ Using the full expressions of Section \[Sec:EtaDecay\] and the BE14 (free fit) LECs, we get the following values: $$\begin{aligned} \frac{F_\eta}{F_0} & = 1 + \left(F_\eta \right)^{(4)} + \left(F_\eta\right)^{(6)}_{\text{loop}} + \left(F_\eta\right)^{(6)}_{\text{CT}} \nonumber \\ & = 1 + 0.4672(0.4996) + 0.1888(0.2597) + 0.0009(0.0254)\end{aligned}$$ and using the GMO-simplified expressions: $$\begin{aligned} \frac{F_\eta}{F_0} = 1 +0.4672(0.4996) + 0.1797(0.2508) + 0.0009(0.0254)\end{aligned}$$ Lattice Fittings \[Sec:LatticeFits\] ==================================== We present in this section a simplified form of the expressions for $m_K$, $F_K$, $m_\eta$ and $F_\eta$ that may conveniently be used in fits with lattice data.
For this purpose, we used the simplified expressions of the sunset master integrals of Section \[Sec:ApproxSunsets\], and expanded the $c^P_{sunset}$ and $d^P_{sunset}$ terms around the mass ratio $m_\pi^2/m_K^2 = 0$. Though the integrals $\overline{H}^\chi_{K 2\pi \eta}$ and $\overline{H}^\chi_{2\pi K K}$ diverge in the $m_\pi^2 \rightarrow 0$ limit, the fact that they are multiplied by factors of $m_\pi^2$ ensures analyticity of the expressions in this limit. $m^2_K$ ------- The GMO expressions for the kaon mass can be written as: $$\begin{aligned} m_K^2 =& m_{K0}^2 + m_K^2 \left\{ \left(\frac{4}{9}\xi_K-\frac{1}{9}\xi_\pi\right) \lambda_\eta +\xi_K \hat L_{1M}^r + \xi_\pi \hat L_{2M}^r \right\} \nonumber\\ & \qquad + m_K^2\Bigg\{ \hat K_{1M}^r \lambda_\pi^2 + \hat K_{2M}^r \lambda_\pi\lambda_K + \hat K_{3M}^r \lambda_\pi\lambda_\eta + \hat K_{4M}^r \lambda_K^2 + \hat K_{5M}^r \lambda_K\lambda_\eta + \hat K_{6M}^r \lambda_\eta^2 \nonumber\\ & \qquad \qquad \quad + \xi_K^2 F_M \left[\frac{m_\pi^2}{m_K^2}\right] + \hat C_{1M} \lambda_\pi+\hat C_{2M}\lambda_K+\hat C_{3M}\lambda_\eta + \hat C_{4M} \Bigg\}\end{aligned}$$ where $\xi_\pi=m_\pi^2/(16\pi^2 F_\pi^2)$, $\xi_K= m_K^2/(16\pi^2 F_\pi^2)$ and $\lambda_i = \log(m_i^2/\mu^2)$. The coefficients $\hat L^r_{iM}$ are functions of the NLO LECs $L_i^r$. Each of the $\hat K_{iM}^r,\hat C_{iM}^r$ has three terms proportional to $\xi_\pi^2,\xi_\pi\xi_K,\xi_K^2$ respectively. The $\hat K_{iM}$ and $F_M$ are fully determined, the $\hat C_{iM}^r, i=1,2,3$ depend linearly on the NLO LECs, and $\hat C_{4M}$ depends up to quadratically on the NLO LECs and linearly on the NNLO LECs. There is some ambiguity in dividing the terms not depending on LECs between the various terms, since $\log(m_i^2/m_K^2)=\lambda_i-\lambda_K$ for $i=\pi,\eta$.
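The dimensionless building blocks $\xi_i$ and the chiral logarithms $\lambda_i$ entering this fit form are trivial to compute. A minimal sketch (the numerical inputs in the check are illustrative physical-point values in GeV, not the fit inputs used elsewhere in this paper):

```python
import math

def xi(m, f_pi):
    """xi_i = m_i^2 / (16 pi^2 F_pi^2)."""
    return m**2 / (16.0 * math.pi**2 * f_pi**2)

def lam(m, mu):
    """Chiral logarithm lambda_i = log(m_i^2 / mu^2)."""
    return math.log(m**2 / mu**2)

# Illustrative inputs in GeV: physical-point pion and kaon masses,
# F_pi, and a renormalization scale mu = 0.77 GeV.
m_pi, m_K, f_pi, mu = 0.135, 0.4957, 0.0922, 0.77

xi_pi, xi_K = xi(m_pi, f_pi), xi(m_K, f_pi)
lam_pi, lam_K = lam(m_pi, mu), lam(m_K, mu)
```

With these helpers, assembling the bracketed NLO and NNLO structures of the fit form reduces to polynomial arithmetic in $\xi_\pi$, $\xi_K$ and the $\lambda_i$.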
The $F_I$ can be subdivided as: $$\begin{aligned} F_I [ \rho ] =& a_{1I} + \bigg( a_{2I} + a_{3I} \log[\rho] + a_{4I} \log^2[\rho] \bigg) \rho + \bigg( a_{5I} + a_{6I} \log[\rho] + a_{7I} \log^2[\rho] \bigg) \rho^2 \nonumber \\ & + \bigg( a_{8I} + a_{9I} \log[\rho] + a_{10I} \log^2[\rho] \bigg) \rho^3 + \bigg( a_{11I} + a_{12I} \log[\rho] + a_{13I} \log^2[\rho] \bigg) \rho^4 + \mathcal{O} \left( \rho^5 \right) \label{Eq:FI}\end{aligned}$$ where $\rho = m_\pi^2/m_K^2$, and for the kaon mass, $I=M$. For a more detailed discussion of the various ways in which the above expressions may be cast for fitting with lattice data, see [@Ananthanarayan:2017yhz]. Note that unlike in [@Ananthanarayan:2017yhz], where $F_I [ \rho ]$ was truncated after $\mathcal{O} \left( \rho^3 \right)$, here we retain terms up to $\mathcal{O} \left( \rho^4 \right)$. Our justification for doing so is that only at $\mathcal{O} \left( \rho^4 \right)$ does the expansion converge to the desired level of accuracy. This is shown graphically in Figure \[FigLatticeFit\], where the expression $F_I$, which contains the $c_{\text{sunsets}}$ or $d_{\text{sunsets}}$ terms as well as the terms from the bilinear chiral logs that are proportional to powers of $\rho$, is plotted for four different inputs. The blue curve was calculated using the exact values of the sunset integrals, the red using the approximate expressions of Section \[Sec:ApproxSunsets\], and the dotted and dashed curves using the truncated sunset expressions expanded in $\rho$ up to $\mathcal{O}(\rho^3)$ and $\mathcal{O}(\rho^4)$ respectively. It is seen that only at $\mathcal{O}(\rho^4)$ do the expansions converge reasonably well to the exact ones over the entire range of interest of $\rho$, i.e. for $\rho \lesssim 0.5$.
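For fitting purposes, Eq.(\[Eq:FI\]) can be evaluated mechanically once a coefficient set $a_{1I},\ldots,a_{13I}$ is supplied. A minimal evaluator (the coefficients in the check are dummy values chosen so the result is easy to verify by hand; they are not the $a_{iM}$ or $a_{iF}$ of this paper):

```python
import math

def F_I(rho, a):
    """Evaluate F_I[rho] of Eq.(Eq:FI): a[0] plus, for each power
    rho^k (k = 1..4), a term (c0 + c1*log(rho) + c2*log(rho)^2) * rho^k.
    `a` holds the 13 coefficients a_1..a_13 in order."""
    assert len(a) == 13
    L = math.log(rho)
    val = a[0]
    for k in range(1, 5):
        c0, c1, c2 = a[3 * k - 2], a[3 * k - 1], a[3 * k]
        val += (c0 + c1 * L + c2 * L**2) * rho**k
    return val

# At rho = 1 the logarithms vanish, so only a_1, a_2, a_5, a_8, a_11 survive:
# with all coefficients set to 1 the result is exactly 5.
assert F_I(1.0, [1.0] * 13) == 5.0
```

Swapping in the $a_{iM}$ listed below (or the $a_{iF}$ of the next subsection) gives the truncated $F_M$ and $F_F$ shown in Figure \[FigLatticeFit\].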
![$F_M$ (left) and $F_F$ (right) plotted against $\rho$ using exact and truncated sunset integral values, as well as expansions of the latter upto $\mathcal{O}(\rho^3)$ and $\mathcal{O}(\rho^4)$.[]{data-label="FigLatticeFit"}](LatticeFitMK.eps){width="98.00000%"}    ![$F_M$ (left) and $F_F$ (right) plotted against $\rho$ using exact and truncated sunset integral values, as well as expansions of the latter upto $\mathcal{O}(\rho^3)$ and $\mathcal{O}(\rho^4)$.[]{data-label="FigLatticeFit"}](LatticeFitFK.eps){width="98.00000%"} Explicitly, for $m_K$, we have: $$\begin{aligned} & \hat{L}^r_{1M} = -8 (4 \pi )^2 (2 L^r_{4}+L^r_{5}-4 L^r_{6}-2 L^r_{8}), \quad \hat{L}^r_{2M} = -8 (4 \pi )^2 (L^r_{4}-2 L^r_{6})\end{aligned}$$ $$\begin{aligned} & \hat{K}^r_{1M} = \frac{1}{8} \xi _{\pi } \xi _K + \frac{169}{192} \xi _{\pi }^2, \quad \hat{K}^r_{2M} = \frac{1}{16} \xi _{\pi }^2 -\frac{3}{8} \xi _{\pi } \xi _K , \quad \hat{K}^r_{6M} = -\frac{11}{324} \xi_K^2 - \frac{47}{324} \xi _{\pi } \xi _K + \frac{1279}{5184} \xi _{\pi }^2 \nonumber \\ & \hat{K}^r_{4M} = \frac{43}{24} \xi _K^2 + \frac{1}{9} \xi _{\pi } \xi _K, \quad \hat{K}^r_{5M} = \frac{7}{18} \xi _K^2 + \frac{19}{72} \xi _{\pi } \xi _K - \frac{1}{16} \xi _{\pi }^2, \quad \hat{K}^r_{3M} = -\frac{55}{72} \xi _{\pi } \xi _K - \frac{97}{288} \xi _{\pi }^2 \end{aligned}$$ $$\begin{aligned} \hat{C}^r_{1M} =& \left(16 (4 \pi )^2 (2 L^r_{4}+L^r_{5}-4 L^r_{6}-2 L^r_{8})-\frac{11}{8}\right) \xi _{\pi } \xi _K \nonumber \\ & - \left((4 \pi )^2 (48 L^r_{1}+12 L^r_{2}+15 L^r_{3}-68 L^r_{4}-12 L^r_{5}+88 L^r_{6}+24 L^r_{8})+\frac{455}{288}\right) \xi _{\pi }^2 \end{aligned}$$ $$\begin{aligned} \hat{C}^r_{2M} =& \left(8 (4 \pi )^2 (L^r_{4}-2 L^r_{6})-\frac{41}{36}\right) \xi _{\pi } \xi _K \nonumber \\ & - \left(2 (4 \pi )^2 (36 L^r_{1}+18 L^r_{2}+15 L^r_{3}-40 L^r_{4}-16 L^r_{5}+64 L^r_{6}+32 L^r_{8})+\frac{487}{144}\right) \xi _K^2\end{aligned}$$ $$\begin{aligned} \hat{C}^r_{3M} =& \left(\frac{8}{9} (4 \pi )^2 (16 L^r_{1}+4 L^r_{2}+7 
L^r_{3}-18 L^r_{4}-L^r_{5}+20 L^r_{6}-12 L^r_{7}+6 L^r_{8})+\frac{5}{8}\right) \xi _{\pi } \xi _K \nonumber \\ & -\left(\frac{16}{9} (4 \pi )^2 (16 L^r_{1}+4 L^r_{2}+7 L^r_{3}-24 L^r_{4}-8 L^r_{5}+32 L^r_{6}+16 L^r_{8})+\frac{74}{27}\right) \xi _K^2 \nonumber \\ & + \left(\frac{13}{864}-\frac{1}{9} (4 \pi )^2 (16 L^r_{1}+4 L^r_{2}+7 L^r_{3}-12 L^r_{4}+12 L^r_{5}+8 L^r_{6}-96 L^r_{7}-40 L^r_{8})\right) \xi _{\pi }^2 \end{aligned}$$ $$\begin{aligned} \hat{C}^r_{4M} &= \frac{1}{27} (4 \pi )^2 \bigg\{ \bigg( 108 L^r_{1}+366 L^r_{2}+89 L^r_{3}-32 L^r_{5}+384 L^r_{7}+192 L^r_{8} \bigg) \xi _K^2 \nonumber \\ & \quad -\bigg( 48 L^r_{2} + 4 L^r_{3} - 64 L^r_{5} + 768 L^r_{7} + 384 L^r_{8} \bigg) \xi _{\pi } \xi _K + \bigg( 168 L^r_{2}+41 L^r_{3}-32 L^r_{5}+384 L^r_{7}+192 L^r_{8} \bigg) \xi _{\pi }^2 \bigg\} \nonumber \\ & -16 \left( 16 \pi ^2\right)^2 \bigg\{ 2 \bigg (C^r_{12}+2 C^r_{13}+C^r_{14}+C^r_{15}+2 C^r_{16}-3 C^r_{19}-4 C^r_{20}-6 C^r_{21}-C^r_{31}-2 C^r_{32} + 16 (L^r_{4})^2 \nonumber \\ & \qquad + 12 L^r_{4} L^r_{5} - 64 L^r_{4} L^r_{6} - 32 L^r_{4} L^r_{8} + 2 (L^r_{5})^2 - 24 L^r_{5} L^r_{6} - 12 L^r_{5} L^r_{8} + 64 (L^r_{6})^2 + 64 L^r_{6} L^r_{8} + 16 (L^r_{8})^2 \bigg) \xi _K^2 \nonumber \\ & \quad + \bigg( 2 C^r_{13}-2 C^r_{14}+C^r_{15}-4 C^r_{16}+2 C^r_{17}+6 C^r_{19}+2 C^r_{20}-12 C^r_{21}-2 C^r_{32} + 32 (L^r_{4})^2 + 16 L^r_{4} L^r_{5} \nonumber \\ & \qquad - 128 L^r_{4} L^r_{6} - 24 L^r_{4} L^r_{8} + 4 (L^r_{5})^2 - 32 L^r_{5} L^r_{6} - 8 L^r_{5} L^r_{8} + 128 (L^r_{6})^2 + 48 L^r_{6} L^r_{8} \bigg) \xi _{\pi } \xi _K \nonumber \\ & \quad + \bigg( C^r_{14}+3 C^r_{16}-C^r_{17}-3 C^r_{19}-3 C^r_{20}-3 C^r_{21}+8 (L^r_{4}-2 L^r_{6}) (L^r_{4}+L^r_{5}-2 L^r_{6}-L^r_{8}) \bigg) \xi _{\pi }^2 \bigg\}\end{aligned}$$ $$\begin{aligned} a_{1M} =& \frac{1165}{2592} \left(\text{Li}_2\left[ \frac{3}{4} \right] +\log [4] \log \left[ \frac{4}{3}\right] \right) +\frac{25 \pi ^2}{288}+\frac{2665}{3456}+\frac{23 \pi }{12 \sqrt{2}}-\frac{103}{192} \log 
^2\left[\frac{4}{3}\right] - \frac{163}{216} \log \left[ \frac{4}{3} \right] \nonumber \\ & - \frac{1}{24} \text{arccosec}^2\left[\sqrt{3}\right] + \left(\frac{\pi}{24}-\frac{23}{6 \sqrt{2}}\right) \text{arccosec} \left[ \sqrt{3} \right]\end{aligned}$$ $$\begin{aligned} a_{2M} =& -\frac{689}{648} \left(\text{Li}_2\left[\frac{3}{4}\right]+\log [4] \log \left[\frac{4}{3}\right] \right)+\frac{11 \pi ^2}{72}-\frac{386 \gamma }{135}+\frac{71687}{16200}-\frac{221 \pi }{108 \sqrt{2}}-\frac{3277 \pi }{4320 \sqrt{3}}+\frac{5 \sqrt{2} \pi }{27} \nonumber \\ & +\frac{53}{144} \log ^2\left[\frac{4}{3}\right] + \frac{55}{54} \log \left[ \frac{4}{3} \right] -\frac{1}{90}\log [4] - \frac{7 \pi}{288 \sqrt{3}} \log \left[\frac{64}{3}\right] + \frac{19}{24} \text{arccosec}^2 \left[ \sqrt{3} \right] \nonumber \\ & +\left(\frac{43 \sqrt{2}}{27}+\frac{17 \gamma }{3 \sqrt{2}}-\frac{19 \pi }{24}\right) \text{arccosec}\left[ \sqrt{3} \right] - \frac{1}{54} \psi \left[\frac{5}{2}\right]\end{aligned}$$ $$\begin{aligned} a_{3M} = \frac{11}{8}+\frac{7 \pi}{288 \sqrt{3}}-\frac{1}{8} \log \left[ \frac{4}{3} \right], \quad a_{4M} = -\frac{1}{8}, \quad a_{7M} = -\frac{169}{192}, \quad a_{10M} = \frac{3}{16}, \quad a_{13M} = \frac{9}{64}\end{aligned}$$ $$\begin{aligned} a_{5M} =& \frac{1031}{1296} \left(\text{Li}_2 \left[ \frac{3}{4}\right] +\log [4] \log \left[ \frac{4}{3} \right] \right) -\frac{23 \pi ^2}{48}+\frac{479393}{388800}+\frac{65 \pi }{72 \sqrt{2}}+\frac{706841 \pi }{331776 \sqrt{3}}+\frac{21737 \gamma }{6480} \nonumber \\ & -\frac{55}{192} \log ^2\left[\frac{4}{3}\right]-\frac{151}{90} \log [4] - \frac{551}{1728} \log \left[\frac{4}{3}\right] - \frac{62437 \pi}{55296 \sqrt{3}} \log \left[\frac{64}{3}\right] - \frac{23}{24} \text{arccosec}^2 \left[ \sqrt{3} \right] \nonumber \\ & -\frac{251}{648} \psi \left[ \frac{5}{2} \right] + \left(-\frac{173}{96 \sqrt{2}}-\frac{1009 \gamma }{144 \sqrt{2}}+\frac{23 \pi }{24}+\frac{23}{16 \sqrt{2}} \log [12] \right) \text{arccosec} \left[ 
\sqrt{3}\right]\end{aligned}$$ $$\begin{aligned} & a_{6M} = -\frac{79}{48}+\frac{62437 \pi }{55296 \sqrt{3}}+\frac{43}{96} \log \left[ \frac{4}{3}\right] -\frac{23}{16 \sqrt{2}} \text{arccosec} \left[ \sqrt{3} \right]\end{aligned}$$ $$\begin{aligned} a_{8M} &= -\frac{43}{216} \left(\text{Li}_2\left[\frac{3}{4}\right]+\log [4] \log \left[\frac{4}{3}\right] \right)+\frac{11 \pi ^2}{72}-\frac{199933 \gamma }{207360}-\frac{9347509}{6220800}-\frac{563 \pi }{2304 \sqrt{2}}-\frac{8967451 \pi }{13271040 \sqrt{3}} \nonumber \\ & +\frac{30889}{51840} \log [4] + \frac{9653}{34560} \log \left[ \frac{4}{3}\right] + \frac{284179 \pi}{442368 \sqrt{3}} \log \left[ \frac{64}{3}\right] + \frac{5}{24} \text{arccosec}^2 \left[ \sqrt{3} \right] + \frac{47}{144} \psi\left[\frac{5}{2}\right] \nonumber \\ & + \left(-\frac{5 \pi }{24}+\frac{1015}{1024 \sqrt{2}}+\frac{6313 \gamma }{4608 \sqrt{2}}-\frac{175}{768 \sqrt{2}}\log [12] \right) \text{arccosec} \left[ \sqrt{3} \right]\end{aligned}$$ $$\begin{aligned} & a_{9M} = -\frac{5681}{17280}-\frac{284179 \pi }{442368 \sqrt{3}}+\frac{1}{24} \log \left[ \frac{4}{3}\right] + \frac{175}{768 \sqrt{2}} \text{arccosec} \left[ \sqrt{3}\right]\end{aligned}$$ $$\begin{aligned} a_{11M} &= \frac{5}{288} \left(\text{Li}_2 \left[ \frac{3}{4} \right] + \log [4] \log \left[\frac{4}{3}\right] \right)+\frac{25 \pi ^2}{288}+\frac{21213943}{33177600}+\frac{1981 \pi }{110592 \sqrt{2}}+\frac{331627 \pi }{42467328 \sqrt{3}}+\frac{166979 \gamma }{1105920} \nonumber \\ & -\frac{61451}{1548288} \log \left[\frac{4}{3}\right] - \frac{737789}{3870720} \log [4] - \frac{708911 \pi}{7077888 \sqrt{3}} \log \left[\frac{64}{3}\right] -\frac{7}{72} \psi\left[\frac{5}{2}\right] \nonumber \\ & + \left(-\frac{2309 \gamma }{73728 \sqrt{2}}-\frac{8057}{442368 \sqrt{2}}+\frac{527}{24576 \sqrt{2}} \log [12] \right) \text{arccosec} \left[ \sqrt{3}\right]\end{aligned}$$ $$\begin{aligned} & a_{12M} = -\frac{499231}{1548288}+\frac{708911 \pi }{7077888 \sqrt{3}}-\frac{1}{96} \log 
\left[\frac{4}{3}\right]-\frac{527}{24576 \sqrt{2}} \text{arccosec} \left[ \sqrt{3}\right]\end{aligned}$$ $F_K$ ----- We can fit $F_K$ in a similar manner as follows: $$\begin{aligned} \frac{F_K}{F} &= 1 + \left\{ -\frac{3}{8} \xi_{\pi} \lambda_{\pi} + \left(\frac{1}{8}\xi_{\pi}-\frac{1}{2}\xi_K \right) \lambda _{\eta } -\frac{3}{4} \xi_K \lambda_K +\xi_K \hat L_{1F}^r + \xi_\pi \hat L_{2F}^r \right\} \nonumber \\ & \qquad +\Bigg\{ \hat K_{1F}^r \lambda_\pi^2 + \hat K_{2F}^r \lambda_\pi\lambda_K + \hat K_{3F}^r \lambda_\pi\lambda_\eta + \hat K_{4F}^r \lambda_K^2 + \hat K_{5F}^r \lambda_K\lambda_\eta + \hat K_{6F}^r \lambda_\eta^2 \nonumber\\ & \qquad \quad + \xi_K^2 F_F\left[ \frac{m_\pi^2}{m_K^2} \right] + \hat C_{1F}\lambda_\pi+\hat C_{2F}\lambda_K+\hat C_{3F}\lambda_\eta + \hat C_{4F} \Bigg\}\end{aligned}$$ where $$\begin{aligned} \hat{L}^r_{1F} = 4 (4 \pi )^2 (2 L^r_{4}+L^r_{5}), \quad \hat{L}^r_{2F} = 4 (4 \pi )^2 L^r_{4}\end{aligned}$$ $$\begin{aligned} & \hat{K}^r_{1F} = \frac{1}{6} \xi _{\pi } \xi _K - \frac{5}{192} \xi _{\pi }^2, \quad \hat{K}^r_{2F} = \frac{51}{32} \xi _{\pi } \xi _K - \frac{3}{32} \xi _{\pi }^2, \hat{K}^r_{6F} = \frac{31}{36} \xi _K^2 - \frac{11}{72} \xi _{\pi } \xi _K - \frac{21}{64} \xi _{\pi }^2 \nonumber \\ & \hat{K}^r_{4F} = \frac{155}{288} \xi _K^2 + \frac{11}{144} \xi _{\pi } \xi _K, \quad \hat{K}^r_{5F} = -\frac{91}{72} \xi _K^2 - \frac{53}{288} \xi _{\pi } \xi _K + \frac{3}{32} \xi _{\pi }^2, \quad \quad \hat{K}^r_{3F} = \frac{25}{24} \xi _{\pi } \xi _K + \frac{47}{96} \xi _{\pi }^2\end{aligned}$$ $$\begin{aligned} \hat{C}^r_{1F} = \left(\frac{3}{16}-\frac{19}{2} (4 \pi )^2 (2 L^r_{4}+L^r_{5})\right) \xi _{\pi } \xi _K + \left(\frac{1}{2} (4 \pi )^2 (48 L^r_{1}+12 L^r_{2}+15 L^r_{3}-47 L^r_{4}-6 L^r_{5})+\frac{53}{64}\right) \xi _{\pi }^2\end{aligned}$$ $$\begin{aligned} \hat{C}^r_{2F} = \left(\frac{245}{96} + (4 \pi )^2 (36 L^r_{1}+18 L^r_{2}+15 L^r_{3}-30 L^r_{4}-7 L^r_{5}) \right) \xi _K^2 + \left(\frac{173}{144}-(4 \pi )^2 
(7 L^r_{4}+6 L^r_{5})\right) \xi _{\pi } \xi _K \end{aligned}$$ $$\begin{aligned} \hat{C}^r_{3F} =& \left(\frac{19}{18} + \frac{2}{9} (4 \pi )^2 (64 L^r_{1}+16 L^r_{2}+28 L^r_{3}-66 L^r_{4}-3 L^r_{5}-72 L^r_{7}-36 L^r_{8})\right) \xi _K^2 \nonumber \\ & - \left(\frac{65}{144} + \frac{1}{18} (4 \pi )^2 (128 L^r_{1}+32 L^r_{2}+56 L^r_{3}-78 L^r_{4}+111 L^r_{5}-576 L^r_{7}-288 L^r_{8}) \right) \xi _{\pi } \xi _K \nonumber \\ & + \left(\frac{3}{64} + \frac{1}{18} (4 \pi )^2 (16 L^r_{1}+4 L^r_{2}+7 L^r_{3}-3 L^r_{4}+42 L^r_{5}-288 L^r_{7}-144 L^r_{8}) \right) \xi _{\pi }^2\end{aligned}$$ $$\begin{aligned} \hat{C}^r_{4F} &= 8 \left(16 \pi ^2\right)^2 \bigg\{ \bigg( -2 C^r_{14}+C^r_{15}-4 C^r_{16}+2 C^r_{17}+28 (L^r_{4})^2+14 L^r_{4} L^r_{5}-32 L^r_{4} L^r_{6}+4 (L^r_{5})^2-8 L^r_{5} L^r_{6} \bigg) \xi _{\pi } \xi _K \nonumber \\ & \quad + \bigg( 2 C^r_{14}+2 C^r_{15}+4 C^r_{16}+(2 L^r_{4}+L^r_{5}) (14 L^r_{4}+3 L^r_{5}-16 L^r_{6}-8 L^r_{8} \bigg) \xi _K^2 \nonumber \\ & \quad + \bigg( C^r_{14}+3 C^r_{16}-C^r_{17}+7 (L^r_{4})^2+8 L^r_{4} L^r_{5}-8 L^r_{4} L^r_{6}-8 L^r_{4} L^r_{8} \bigg) \xi _{\pi }^2 \bigg\} \nonumber \\ & +\frac{2}{27} (4 \pi )^2 \bigg\{ \bigg( 12 L^r_{2}+L^r_{3}-36 L^r_{5}+432 L^r_{7}+216 L^r_{8} \bigg) \xi _{\pi } \xi _K - \left(42 L^r_{2}+\frac{41}{4} L^r_{3}-18 L^r_{5}+216 L^r_{7}+108 L^r_{8}\right) \xi _{\pi }^2 \nonumber \\ & \quad -\left(27 L^r_{1}+\frac{183 L^r_{2}}{2}+\frac{89 L^r_{3}}{4}-18 L^r_{5}+216 L^r_{7}+108 L^r_{8}\right) \xi _K^2 \bigg\}\end{aligned}$$ We subdivide $F_F$ as in Eq.(\[Eq:FI\]) with $I=F$, and with the $a_{iF}$ given by: $$\begin{aligned} a_{1F} =& -\frac{6337}{5184} \left(\text{Li}_2\left[\frac{3}{4}\right] + \log [4] \log \left[ \frac{4}{3} \right] \right) + \frac{41 \pi^2}{192} - \frac{11 \sqrt{2} \pi}{27} + \frac{10525}{6912} - \frac{119 \pi }{216 \sqrt{2}}-\frac{23}{1152} \log ^2\left[\frac{4}{3}\right] \nonumber \\ & + \frac{127}{48} \log \left[\frac{4}{3}\right] + \frac{41}{48} \text{arccosec}^2 \left[ 
\sqrt{3}\right] + \left(\frac{295}{108 \sqrt{2}}-\frac{41 \pi }{48}\right) \text{arccosec} \left[\sqrt{3}\right]\end{aligned}$$ $$\begin{aligned} a_{2F} =& \frac{5821}{2592} \left(\text{Li}_2\left[\frac{3}{4}\right] + \log [4] \log \left[ \frac{4}{3} \right] \right) -\frac{25 \pi ^2}{96}-\frac{2050019}{388800}+\frac{145 \pi }{72 \sqrt{2}}+\frac{38693 \pi }{25920 \sqrt{3}}+\frac{82 \gamma }{405}-\frac{137}{576} \log ^2 \left[ \frac{4}{3} \right] \nonumber \\ & - \frac{1687}{810} \log \left[ \frac{4}{3} \right] - \frac{281}{540} \log [4] - \frac{13 \pi}{1728 \sqrt{3}} \log \left[\frac{64}{3}\right] +\frac{11}{48} \text{arccosec} \left[ \sqrt{3} \right]^2 - \frac{29}{324} \psi \left[\frac{5}{2}\right] \nonumber \\ & + \left(-\frac{11 \pi }{48}-\frac{13}{3 \sqrt{2}}-\frac{13 \gamma }{18 \sqrt{2}}+\frac{1}{6 \sqrt{2}} \log [1728] \right) \text{arccosec} \left[ \sqrt{3} \right]\end{aligned}$$ $$\begin{aligned} a_{3F} = \frac{169}{6480} + \frac{13 \pi }{1728 \sqrt{3}}+\frac{7}{48} \log \left[ \frac{4}{3} \right] -\frac{1}{2 \sqrt{2}} \text{arccosec} \left[ \sqrt{3} \right]\end{aligned}$$ $$\begin{aligned} a_{4F} = -\frac{1}{6}, \quad a_{7F} = \frac{325}{384}, \quad a_{10F} = -\frac{9}{64}, \quad a_{13F} = - \frac{27}{128}\end{aligned}$$ $$\begin{aligned} a_{5F} =& -\frac{845}{648} \left(\text{Li}_2\left[\frac{3}{4}\right]+\log [4] \log \left[ \frac{4}{3}\right] \right) +\frac{5 \pi ^2}{18} - \frac{1301 \sqrt{3} \pi }{512}-\frac{66191 \gamma }{12960}+\frac{25789}{155520}-\frac{145 \pi }{144 \sqrt{2}}+\frac{3572063 \pi }{663552 \sqrt{3}} \nonumber \\ & + \frac{145}{384} \log^2 \left[ \frac{4}{3} \right] + \frac{15403}{6480} \log [4] + \frac{15941}{17280} \log \left[\frac{4}{3}\right] + \frac{176189 \pi}{110592 \sqrt{3}} \log \left[\frac{64}{3}\right] + \frac{59}{48} \text{arccosec}^2 \left[ \sqrt{3} \right] \nonumber \\ & + \frac{35}{144} \psi \left[ \frac{5}{2} \right] + \left(-\frac{59 \pi }{48}+\frac{323}{192 \sqrt{2}}+\frac{3167 \gamma }{288 \sqrt{2}}-\frac{115}{48 
\sqrt{2}} \log [12] \right) \text{arccosec} \left[ \sqrt{3} \right]\end{aligned}$$ $$\begin{aligned} a_{6F} = \frac{4427}{2160}-\frac{176189 \pi }{110592 \sqrt{3}}-\frac{155}{192} \log \left[ \frac{4}{3} \right] + \frac{115}{48 \sqrt{2}} \text{arccosec} \left[ \sqrt{3} \right] \end{aligned}$$ $$\begin{aligned} a_{8F} =& \frac{265}{864} \left( \text{Li}_2 \left[\frac{3}{4}\right] + \log [4] \log \left[ \frac{4}{3} \right] \right) - \frac{29 \pi ^2}{288} + \frac{11061169}{4147200} + \frac{4753 \pi }{13824 \sqrt{2}}+\frac{20910563 \pi }{26542080 \sqrt{3}}+\frac{199393 \gamma }{138240} \nonumber \\ & - \frac{16337}{23040} \log [4] - \frac{10477}{27648} \log \left[\frac{4}{3}\right] -\frac{804611 \pi}{884736 \sqrt{3}} \log \left[ \frac{64}{3} \right] - \frac{5}{16} \text{arccosec}^2 \left[ \sqrt{3}\right] -\frac{119}{288} \psi \left[\frac{5}{2}\right] \nonumber \\ & + \left(-\frac{19319 \gamma }{9216 \sqrt{2}}-\frac{84251}{55296 \sqrt{2}}+\frac{5 \pi }{16}+\frac{823}{3072 \sqrt{2}} \log [12] \right) \text{arccosec} \left[ \sqrt{3}\right]\end{aligned}$$ $$\begin{aligned} a_{9F} = -\frac{2971}{27648}+\frac{804611 \pi }{884736 \sqrt{3}}-\frac{1}{96} \log \left[\frac{4}{3}\right] - \frac{823}{3072 \sqrt{2}} \text{arccosec} \left[ \sqrt{3} \right]\end{aligned}$$ $$\begin{aligned} a_{11F} =& -\frac{5}{192} \left(\text{Li}_2 \left[ \frac{3}{4} \right] + \log [4] \log \left[ \frac{4}{3} \right] \right) -\frac{25 \pi ^2}{192}-\frac{4582831}{4423680}-\frac{1310311 \gamma }{6635520}-\frac{2135 \pi }{73728 \sqrt{2}}-\frac{13905571 \pi }{84934656 \sqrt{3}} \nonumber \\ & +\frac{4453 \sqrt{3} \pi }{65536}+\frac{532067}{1935360} \log [4] + \frac{312911}{2903040} \log \left[ \frac{4}{3} \right] + \frac{1674775 \pi}{14155776 \sqrt{3}} \log \left[ \frac{64}{3}\right] + \frac{97}{648} \psi \left[ \frac{5}{2} \right] \nonumber \\ & + \left(-\frac{391 \gamma }{49152 \sqrt{2}}+\frac{9421}{294912 \sqrt{2}}-\frac{59}{4096 \sqrt{2}} \log [12] \right) \text{arccosec} 
\left[\sqrt{3}\right]\end{aligned}$$ $$\begin{aligned} a_{12F} = \frac{5174549}{11612160}-\frac{1674775 \pi }{14155776 \sqrt{3}}+\frac{1}{64} \log \left[ \frac{4}{3} \right] + \frac{59}{4096 \sqrt{2}} \text{arccosec} \left[ \sqrt{3}\right]\end{aligned}$$ The divergence of $F_F$ as given above from its exact value is shown in Figure \[FigLatticeFit\]. $m^2_\eta$ ---------- The GMO expressions for the eta mass can similarly be expressed as: $$\begin{aligned} m_\eta^2 =& m_{\eta 0}^2 + \bigg\{ \frac{64 \pi^2}{3} \xi _K^2 \lambda _K - 8 \pi ^2 \xi _{\pi }^2 \lambda_\pi + \left( \frac{352 \pi^2}{27} \xi_\pi \xi_K - \frac{512\pi^2}{27} \xi _K^2 - \frac{56\pi^2}{27} \xi_\pi^2 \right) \lambda_\eta \nonumber \\ & \qquad \qquad - \frac{64}{9} \xi _K^2 \hat L_{1m}^r - \frac{16}{9} \xi_\pi \xi_K \hat L_{2m}^r + \frac{8}{9} \xi_\pi^2 \hat L_{3m}^r \bigg\} \nonumber\\ & \qquad +\bigg\{ \hat K_{1m}^r \lambda_\pi^2 + \hat K_{2m}^r \lambda_\pi\lambda_K + \hat K_{3m}^r \lambda_\pi\lambda_\eta + \hat K_{4m}^r \lambda_K^2 + \hat K_{5m}^r \lambda_K\lambda_\eta + \hat K_{6m}^r \lambda_\eta^2 \nonumber\\ & \hspace*{7ex} + m_K^2 \xi_K^2 F_m \left[\frac{m_\pi^2}{m_K^2}\right] + \hat C_{1m} \lambda_\pi+\hat C_{2m}\lambda_K+\hat C_{3m}\lambda_\eta + \hat C_{4m} \bigg\}\end{aligned}$$ Note that, in contrast to the kaon case, $F_m$ carries an extra prefactor of $m_K^2$ in addition to the $\xi_K^2$. Furthermore, each of the $\hat K_{im}^r$ and $\hat C_{im}^r$ has six terms, proportional to $\xi_\pi^2$, $\xi_\pi\xi_K$ and $\xi_K^2$, each multiplied by either $m_\pi^2$ or $m_K^2$.
![$F_m$ (left) and $F_f$ (right) plotted against $\rho$ using exact and truncated sunset integral values, as well as expansions of the latter up to $\mathcal{O}(\rho^3)$ and $\mathcal{O}(\rho^4)$.[]{data-label="FigLatticeFitEta"}](LatticeFitMeta.eps){width="98.00000%"}    ![$F_m$ (left) and $F_f$ (right) plotted against $\rho$ using exact and truncated sunset integral values, as well as expansions of the latter up to $\mathcal{O}(\rho^3)$ and $\mathcal{O}(\rho^4)$.[]{data-label="FigLatticeFitEta"}](LatticeFitFeta.eps){width="98.00000%"} Explicitly, for $m_\eta$, we have: $$\begin{aligned} & \hat{L}^r_{1M} = (4 \pi )^4 ( 3 L^r_{4}+2 L^r_{5}-6 L^r_{6}-6 L^r_{7}-6 L^r_{8} ) \nonumber \\ & \hat{L}^r_{2M} = (4 \pi )^4 ( 3 L^r_{4}-4 L^r_{5}-6 L^r_{6}+48 L^r_{7}+24 L^r_{8} ) \nonumber \\ & \hat{L}^r_{3M} = (4 \pi )^4 ( 3 L^r_{4}-L^r_{5}-6 L^r_{6}+48 L^r_{7}+18 L^r_{8} )\end{aligned}$$ $$\begin{aligned} & \hat{K}^r_{1M} = \left(\frac{5}{8} \xi _{\pi } \xi _K + \frac{65}{48} \xi _{\pi }^2 \right) m_\pi^2 - \left( \frac{3}{16} \xi_\pi \xi_K \right) m_K^2 \nonumber \\ & \hat{K}^r_{2M} = \left( \frac{3}{4} \xi_\pi \xi_K \right) m_\pi^2 + \left( \frac{55}{24} \xi_\pi \xi_K \right) m_K^2 \nonumber \\ & \hat{K}^r_{3M} = \left(\frac{7}{36} \xi_\pi^2 - \frac{43}{27} \xi_\pi \xi_K \right) m_\pi^2 + \left( \frac{64}{27} \xi_\pi \xi_K \right) m_K^2 \nonumber \\ & \hat{K}^r_{4M} = \left( \frac{5}{6} \xi_\pi \xi_K \right) m_\pi^2 + \left( \frac{103}{36} \xi_K^2 -\frac{133}{72} \xi_\pi \xi_K \right) m_K^2 \nonumber \\ & \hat{K}^r_{5M} = - \left( \frac{31}{108} \xi_\pi \xi_K \right) m_\pi^2 + \left(\frac{473}{216} \xi_\pi \xi_K - \frac{59}{18} \xi_K^2 \right) m_K^2 \nonumber \\ & \hat{K}^r_{6M} = \left(\frac{1367}{648} \xi_\pi \xi_K - \frac{911}{3888} \xi_\pi^2 \right) m_\pi^2 + \left(\frac{6185}{972} \xi_K^2 - \frac{2713}{432} \xi_\pi \xi_K \right) m_K^2\end{aligned}$$ $$\begin{aligned} \hat{C}^r_{1M} =& \left(\frac{61}{54} \xi_\pi \xi_K - \frac{931}{864} \xi_\pi^2 \right) m_\pi^2 - 
\frac{3}{2} \xi_\pi \xi_K m_K^2 \nonumber \\ & + (16 \pi^2 ) \bigg\{ \left(\frac{128}{3} L^r_{4} + \frac{256}{9} L^r_{5} - \frac{256}{3} L^r_{6} - \frac{256}{3} L^r_{7} - \frac{256}{3} L^r_{8} \right) \xi_\pi \xi_K m_K^2 \nonumber \\ & \qquad \qquad - \left( 64 L^r_{1}+16 L^r_{2}+16 L^r_{3}-\frac{232}{3} L^r_{4} + \frac{64}{9} L^r_{5} + \frac{272}{3} L^r_{6} - \frac{832}{3} L^r_{7} - \frac{320}{3} L^r_{8} \right) \xi_\pi \xi_K m_\pi^2 \nonumber \\ & \qquad \qquad + \bigg( 16 L^r_{1}+4 L^r_{2}+4 L^r_{3}-24 L^r_{4}+32 L^r_{6}-192 L^r_{7}-72 L^r_{8} \bigg) \xi_\pi^2 m_\pi^2 \bigg\}\end{aligned}$$ $$\begin{aligned} \hat{C}^r_{2M} =& -\frac{3}{2} \xi_\pi \xi_K m_\pi^2 + \left(\frac{961}{216} \xi_\pi \xi_K -\frac{577}{54} \xi_K^2 \right) m_K^2 \nonumber \\ & + (16\pi^2) \bigg\{ \left(\frac{64}{3} L^r_{1} + \frac{16}{3} L^r_{2} + \frac{28}{3} L^r_{3} - 16 L^r_{4} - \frac{16}{3} L^r_{5} + \frac{32}{3} L^r_{6} + 128 L^r_{7} + \frac{224}{3} L^r_{8} \right) \xi_\pi \xi_K m_K^2 \nonumber \\ & \qquad \qquad -\left(\frac{256}{3} L^r_{1} + \frac{64}{3} L^r_{2} + \frac{112}{3} L^r_{3} - \frac{320}{3} L^r_{4} - \frac{352}{9} L^r_{5} + 128 L^r_{6} + \frac{256}{3} L^r_{7} + \frac{320}{3} L^r_{8} \right) \xi_K^2 m_K^2 \nonumber \\ & \qquad \qquad + \left(-\frac{8}{3} L^r_{4} + \frac{8}{9} L^r_{5} + \frac{16}{3} L^r_{6} - \frac{128}{3} L^r_{7} - 16 L^r_{8} \right) \xi_\pi \xi_K m_\pi^2 \bigg\}\end{aligned}$$ $$\begin{aligned} \hat{C}^r_{3M} =& \left(\frac{371}{972} \xi_\pi \xi_K -\frac{2045}{23328} \xi_\pi^2 \right) m_\pi^2 + \left(\frac{41}{648} \xi_\pi \xi_K - \frac{1093}{1458} \xi_K^2 \right) m_K^2 + \nonumber \\ & + \left(16 \pi ^2\right) \bigg\{ \left(\frac{128}{3} L^r_{1} + \frac{128}{3} L^r_{2} + \frac{64}{3} L^r_{3} - 32 L^r_{4} - \frac{1808}{27} L^r_{5} + \frac{832}{9} L^r_{6} +\frac{1216}{3} L^r_{7} + \frac{6752}{27} L^r_{8} \right) \xi_\pi \xi_K m_K^2 \nonumber \\ & \qquad \qquad -\left(\frac{512}{9} L^r_{1} + \frac{512}{9} L^r_{2} + \frac{256}{9} L^r_{3} - \frac{512}{9} 
L^r_{4} - \frac{512}{9} L^r_{5} + \frac{4096}{27} L^r_{6} + \frac{2048}{9} L^r_{7} + \frac{5120}{27} L^r_{8} \right) m_K^2 \xi _K^2 \nonumber \\ & \qquad \qquad - \left(\frac{32}{3} L^r_{1} + \frac{32}{3} L^r_{2} + \frac{16}{3} L^r_{3} - 8 L^r_{4} - \frac{832}{27} L^r_{5} + \frac{208}{9} L^r_{6} + \frac{640}{3} L^r_{7} + \frac{3232}{27} L^r_{8} \right) \xi_\pi \xi_K m_\pi^2 \nonumber \\ & \qquad \qquad + \left(\frac{8}{9} L^r_{1} + \frac{8}{9} L^r_{2} + \frac{4}{9} L^r_{3} - \frac{8}{9} L^r_{4} - \frac{128}{27} L^r_{5} + \frac{64}{27} L^r_{6} + \frac{320}{9} L^r_{7} + \frac{520}{27} L^r_{8} \right) \xi_\pi^2 m_\pi^2 \bigg\}\end{aligned}$$ $$\begin{aligned} \hat{C}^r_{4M} &= \frac{2}{81} (16 \pi^2) \bigg\{ \bigg(384 L^r_{1}+816 L^r_{2}+228 L^r_{3}+128 L^r_{5}-1536 L^r_{7}-768 L^r_{8} \bigg) \xi _K^2 m_K^2 \nonumber \\ & \qquad \qquad \qquad - \bigg( 288 L^r_{1}+396 L^r_{2}+153 L^r_{3}+312 L^r_{5}-3744 L^r_{7}-1872 L^r_{8} \bigg) \xi_\pi \xi_K m_K^2 \nonumber \\ & \qquad \qquad \qquad + \bigg( 72 L^r_{1}+396 L^r_{2}+144 L^r_{3}+240 L^r_{5}-2880 L^r_{7}-1440 L^r_{8} \bigg) \xi_\pi \xi_K m_\pi^2 \nonumber \\ & \qquad \qquad \qquad + \bigg( -6 L^r_{1}-87 L^r_{2}-30 L^r_{3}-56 L^r_{5}+672 L^r_{7}+336 L^r_{8} \bigg) \xi_\pi^2 m_\pi^2 \bigg\} \nonumber \\ & + (16 \pi^2)^2 \bigg\{ \frac{128}{27} (3 L^r_{4}+5 L^r_{5}-6 L^r_{6}-6 L^r_{8}) \left( 3 L^r_{4}-L^r_{5}-6 L^r_{6}+48 L^r_{7}+18 L^r_{8} \right) \xi_\pi^2 m_\pi^2 \nonumber \\ & \qquad \qquad - \frac{256}{27} \bigg( 8 C^r_{12}+12 C^r_{13}+6 C^r_{14}+6 C^r_{15}+9 C^r_{16}+6 C^r_{17}+6 C^r_{18}-27 C^r_{19}-27 C^r_{20}-27 C^r_{21} \nonumber \\ & \qquad \qquad \qquad -18 C^r_{31}-18 C^r_{32}-18 C^r_{33} \bigg) \xi_K^2 m_K^2 \nonumber \\ & \qquad \qquad - \frac{1024}{27} \bigg( 6 L^r_{4}+L^r_{5}-12 L^r_{6}-6 L^r_{8}) (3 L^r_{4}+2 L^r_{5}-6 L^r_{6}-6 L^r_{7}-6 L^r_{8} \bigg) \xi_K^2 m_K^2 \nonumber \\ & \qquad \qquad + \frac{16}{27} \bigg( 2 C^r_{12}-6 C^r_{13}+9 C^r_{14}-3 C^r_{15}+27 C^r_{16}+9 C^r_{17}+24 C^r_{18}-27 
C^r_{19}+27 C^r_{20}-27 C^r_{21} \nonumber \\ & \qquad \qquad \qquad -18 C^r_{31}+54 C^r_{32} \bigg) \xi_\pi^2 m_\pi^2 \nonumber \\ & \qquad \qquad -\frac{32}{9} \bigg( 4 C^r_{12}-6 C^r_{13}+10 C^r_{14}-3 C^r_{15}+24 C^r_{16}+10 C^r_{17}+24 C^r_{18}-54 C^r_{19}-18 C^r_{20}-36 C^r_{31} \nonumber \\ & \qquad \qquad \qquad + 6 C^r_{32}-48 C^r_{33} \bigg) \xi _\pi \xi_K m_\pi^2 \nonumber \\ & \qquad \qquad + \frac{64}{9} \bigg( 8 C^r_{12}+10 C^r_{14}+15 C^r_{16}+10 C^r_{17}+18 C^r_{18}-54 C^r_{19}-27 C^r_{20}+27 C^r_{21}-36 C^r_{31} \nonumber \\ & \qquad \qquad \qquad -12 C^r_{32}-48 C^r_{33} \bigg) \xi_\pi \xi_K m_K^2 \nonumber \\ & \qquad \qquad - \frac{128}{9} \bigg( 36 (L^r_{4})^2+15 L^r_{4} L^r_{5}-144 L^r_{4} L^r_{6}+144 L^r_{4} L^r_{7}+42 L^r_{4} L^r_{8}+12 (L^r_{5})^2-30 L^r_{5} L^r_{6} -48 L^r_{5} L^r_{7} \nonumber \\ & \qquad \qquad \qquad -32 L^r_{5} L^r_{8}+144 (L^r_{6})^2-288 L^r_{6} L^r_{7}-84 L^r_{6} L^r_{8}-96 L^r_{7} L^r_{8}-48 (L^r_{8})^2 \bigg) \xi_\pi \xi_K m_K^2 \nonumber \\ & \qquad \qquad -\frac{128}{9} \bigg(3 L^r_{4} L^r_{5}+6 L^r_{4} L^r_{8}-10 (L^r_{5})^2-6 L^r_{5} L^r_{6}+144 L^r_{5} L^r_{7}+76 L^r_{5} L^r_{8}-12 L^r_{6} L^r_{8}-96 L^r_{7} L^r_{8} \nonumber \\ & \qquad \qquad \qquad -48 (L^r_{8})^2 \bigg) \xi_\pi \xi_K m_\pi^2 \bigg\}\end{aligned}$$ The $F_m$ can be subdivided as: $$\begin{aligned} F_m^\eta [ \rho ] =& a_{1m} + \bigg( a_{2m} + a_{3m} \log[\rho] + a_{4m} \log^2[\rho] \bigg) \rho + \bigg( a_{5m} + a_{6m} \log[\rho] + a_{7m} \log^2[\rho] \bigg) \rho^2 \nonumber \\ & \quad + \bigg( a_{8m} + a_{9m} \log[\rho] + a_{10m} \log^2[\rho] \bigg) \rho^3 + \bigg( a_{11m} + a_{12m} \log[\rho] + a_{13m} \log^2[\rho] \bigg) \rho^4 + \mathcal{O} \left( \rho^5 \right) \end{aligned}$$ Note that we omit the factor of $1/(16\pi^2)^2$ in this definition in contrast to Eq.(\[Eq:FI\]). 
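Truncated expansions of this form are straightforward to evaluate numerically once the coefficients are supplied. Below is a minimal bookkeeping sketch in Python; the coefficient values used are hypothetical placeholders, not the closed-form values of the $a_{im}$.

```python
import math

def truncated_F(rho, a):
    """Evaluate the O(rho^4) truncation used in the text:
    F(rho) = a1 + sum_{k=1..4} (a_{3k-1} + a_{3k} log(rho)
             + a_{3k+1} log^2(rho)) rho^k,
    where 'a' is the 13-entry list [a1, a2, ..., a13]."""
    L = math.log(rho)
    val = a[0]
    for k in range(1, 5):
        i = 1 + 3 * (k - 1)  # 0-based index of a_{3k-1}
        val += (a[i] + a[i + 1] * L + a[i + 2] * L * L) * rho ** k
    return val

# Hypothetical placeholder coefficients, for illustration only:
a_demo = [1.0] + [0.5, -0.1, 0.02] * 4
print(truncated_F(0.1, a_demo))
```

Substituting the closed-form $a_{im}$ quoted in the text (evaluated numerically) recovers the truncated $F_m^\eta[\rho]$.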
In Figure \[FigLatticeFitEta\], we see that the $\mathcal{O}(\rho^4)$ expansion version of $F_m$ agrees with the exact valued $F_m$ well within our desired range of $\rho$. $$\begin{aligned} a_{1m} &= \frac{1165}{864} \left( \frac{\pi^2}{3} + \log ^2 \left[ 2 \sqrt{3}-3\right] - \log \left[ \frac{4}{3} \right] \log \left[ 3+2\sqrt{3} \right] - 2 \text{Li}_2 \left[ \frac{\sqrt{3}}{2}\right] + 2 \text{Li}_2 \left[ 2\sqrt{3}-3 \right] \right) \nonumber \\ & + \frac{875}{486}-\frac{1157}{384} \log^2\left[ \frac{4}{3} \right] - \frac{19}{24} \log \left[ \frac{4}{3} \right] + \frac{1}{8} \csc ^{-1}\left[ \sqrt{3} \right]^2 + \frac{23}{2\sqrt{2}} \csc ^{-1}\left[ \sqrt{3} \right] \end{aligned}$$ $$\begin{aligned} a_{2m} &= -\frac{9859}{3456} \left( \frac{\pi^2}{3} + \log ^2 \left[ 2 \sqrt{3}-3 \right] - \log \left[ \frac{4}{3}\right] \log \left[ 3+2\sqrt{3} \right] -2 \text{Li}_2 \left[ \frac{\sqrt{3}}{2} \right] +2 \text{Li}_2 \left[ -3+2 \sqrt{3} \right] \right) \nonumber \\ & + \frac{18889}{22680} + \frac{16865}{4608} \log^2 \left[\frac{4}{3}\right] + \frac{683}{288} \log \left[ \frac{4}{3} \right] - \frac{75}{32} \csc^{-1} \left[ \sqrt{3} \right]^2 - \frac{517}{72 \sqrt{2}} \csc ^{-1} \left[ \sqrt{3} \right]\end{aligned}$$ $$\begin{aligned} a_{3m} = \frac{41}{27}, \quad a_{4m} = \frac{3}{16}, \quad a_{6m} = \frac{947}{3780}, \quad a_{7m} = -\frac{5}{8}\end{aligned}$$ $$\begin{aligned} a_{5m} &= \frac{7711}{4608} \left( \log ^2\left[ 2 \sqrt{3}-3\right] - \log \left[ \frac{4}{3} \right] \log \left[ 3 + 2\sqrt{3} \right] -2 \text{Li}_2\left[ \frac{\sqrt{3}}{2}\right] + 2 \text{Li}_2 \left[2 \sqrt{3}-3 \right] \right) +\frac{8735 \pi ^2}{13824} \nonumber \\ & -\frac{206795171}{57153600}-\frac{8629}{6144} \log^2 \left[\frac{4}{3}\right] - \frac{179}{1152} \log \left[\frac{4}{3}\right] + \frac{293}{128} \csc ^{-1} \left[ \sqrt{3} \right]^2 + \frac{1043}{288 \sqrt{2}} \csc ^{-1}\left[\sqrt{3}\right]\end{aligned}$$ $$\begin{aligned} a_{8m} &= -\frac{1099}{6144} \left( 
\log^2\left[2 \sqrt{3}-3\right] -\log \left[ \frac{4}{3} \right] \log \left[ 3+2 \sqrt{3}\right] -2 \text{Li}_2\left[\frac{\sqrt{3}}{2}\right] +2 \text{Li}_2 \left[ 2 \sqrt{3}-3 \right] \right) -\frac{10465 \pi^2}{55296} \nonumber \\ & + \frac{27092374721}{31120135200} - \frac{437}{24576} \log^2 \left[ \frac{4}{3} \right] + \frac{\log [3]}{480} - \frac{13681}{69120} \log \left[\frac{4}{3}\right]-\frac{27}{512} \csc^{-1} \left[\sqrt{3}\right]^2 - \frac{323}{576 \sqrt{2}} \csc^{-1} \left[\sqrt{3}\right]\end{aligned}$$ $$\begin{aligned} a_{9m} = \frac{1}{3} \log \left[\frac{4}{3}\right]-\frac{52837}{561330}, \quad a_{10m} = -\frac{11}{48}, \quad a_{12m} = -\frac{1}{8} \log \left[\frac{4}{3}\right] - \frac{4327283}{8981280}, \quad a_{13m} = \frac{1}{16}\end{aligned}$$ $$\begin{aligned} a_{11m} &= \frac{181}{24576} \left( \log ^2\left[2 \sqrt{3}-3\right]-\log \left[\frac{4}{3}\right] \log \left[3+2 \sqrt{3}\right] -2 \text{Li}_2\left[ \frac{\sqrt{3}}{2} \right] + 2 \text{Li}_2\left[ 2 \sqrt{3}-3 \right] \right) +\frac{356603876663}{569053900800} \nonumber \\ & + \frac{3253\pi^2}{73728} + \frac{5963}{98304} \log^2\left[\frac{4}{3}\right] + \frac{177301}{967680} \log \left[\frac{4}{3}\right] -\frac{31 \log [3]}{1120} - \frac{27}{2048} \csc ^{-1} \left[ \sqrt{3} \right]^2 - \frac{67}{2048 \sqrt{2}} \csc^{-1} \left[\sqrt{3}\right]\end{aligned}$$ $F_\eta$ -------- The expression for $F_\eta$ can be written as: $$\begin{aligned} \frac{F_\eta}{F} &= 1 + \left\{ \frac{8}{3} \xi_K \hat{L}^r_{1f} + \frac{4}{3} \xi_\pi \hat{L}^r_{2f} - \frac{3}{2} \xi_K \lambda_K \right\} \nonumber\\ & \qquad + \Bigg\{ \hat K_{1F}^r \lambda_\pi^2 + \hat K_{2f}^r \lambda_\pi\lambda_K + \hat K_{3f}^r \lambda_\pi\lambda_\eta + \hat K_{4f}^r \lambda_K^2 + \hat K_{5f}^r \lambda_K\lambda_\eta + \hat K_{6f}^r \lambda_\eta^2 \nonumber\\ & \hspace*{7ex} + m^2_K \xi_K^2 F_f\left[ \frac{m_\pi^2}{m_K^2} \right] + \hat C_{1f}\lambda_\pi+\hat C_{2f}\lambda_K+\hat C_{3f}\lambda_\eta + \hat C_{4f} 
\Bigg\}\end{aligned}$$ where $$\begin{aligned} \hat{L}^r_{1f} = (4 \pi)^2 (3 L^r_{4} + 2 L^r_{5}), \quad \hat{L}^r_{2f} = (4 \pi)^2 (3 L^r_{4} - L^r_{5})\end{aligned}$$ $$\begin{aligned} & \hat{K}^r_{1f} = \frac{99}{128} \rho - \frac{141}{512} \rho^2 + \frac{3}{2048} \rho^3 + \frac{3}{8192} \rho^4 + \mathcal{O}(\rho)^5, \quad \hat{K}^r_{3f} = 0 \nonumber \\ & \hat{K}^r_{2f} = \frac{93}{64} \rho - \frac{3}{256} \rho^2 - \frac{3}{1024} \rho^3 - \frac{3}{4096} \rho^4 + \mathcal{O}(\rho)^5, \quad \hat{K}^r_{5f} = -\frac{119}{48} + \frac{1}{4} \rho \nonumber \\ & \hat{K}^r_{4f} = \frac{191}{96} + \frac{35}{128} \rho + \frac{3}{512} \rho^2 + \frac{3}{2048} \rho^3 + \frac{3}{8192} \rho^4 + \mathcal{O}(\rho)^5, \quad \hat{K}^r_{6f} = \frac{71}{96} + \frac{1}{8} \rho - \frac{1}{32} \rho^2\end{aligned}$$ $$\begin{aligned} \hat{C}^r_{1f} =& \left(\frac{3}{16}-\frac{16}{3} (4 \pi)^2 (3 L^r_{4}+2 L^r_{5})\right) \xi_\pi \xi_K + \left(\frac{2}{3} (4 \pi )^2 (36 L^r_{1}+9 L^r_{2}+9 L^r_{3}-33 L^r_{4}+2 L^r_{5})+\frac{47}{64}\right) \xi_\pi^2\end{aligned}$$ $$\begin{aligned} \hat{C}^r_{2F} =& \left(2 (4 \pi)^2 (16 L^r_{1}+4 L^r_{2}+7 L^r_{3}-18 L^r_{4}-4 L^r_{5}) + \frac{17}{48}\right) \xi_K^2 + \left(\frac{3}{4}-\frac{2}{3} (4 \pi )^2 (15 L^r_{4}+13 L^r_{5})\right) \xi_\pi \xi_K \end{aligned}$$ $$\begin{aligned} \hat{C}^r_{3F} =& \left(\frac{32}{9} (4 \pi )^2 (6 L^r_{1}+6 L^r_{2}+3 L^r_{3}-3 L^r_{4}-2 L^r_{5})+\frac{16631}{3888}\right) \xi_K^2 \nonumber \\ & - \left(\frac{16}{9} (4 \pi )^2 (6 L^r_{1}+6 L^r_{2}+3 L^r_{3}-3 L^r_{4}-2 L^r_{5})+\frac{4363}{3888}\right)\xi_\pi \xi_K \nonumber \\ & + \left(\frac{2}{9} (4 \pi )^2 (6 L^r_{1}+6 L^r_{2}+3 L^r_{3}-3 L^r_{4}-2 L^r_{5})+\frac{3713}{15552}\right) \xi_\pi^2\end{aligned}$$ $$\begin{aligned} \hat{C}^r_{4F} &= \frac{1}{9} (4 \pi )^2 \bigg\{ 8 ( 2 L^r_{1}+2 L^r_{2}+L^r_{3}) \xi_\pi \xi_K - (32 L^r_{1}+68 L^r_{2}+19 L^r_{3}) \xi_K^2 - (2 L^r_{1}+29 L^r_{2}+10 L^r_{3}) \xi_\pi^2 \bigg\} \nonumber \\ & + \frac{8}{9} (4\pi)^4 \bigg\{ 4 
\bigg( 6 C^r_{14}+6 C^r_{15}+9 C^r_{16}+6 C^r_{17}+6 C^r_{18}+(3 L^r_{4}+2 L^r_{5}) (21 L^r_{4}+4 L^r_{5}-24 L^r_{6}-12 L^r_{8}) \bigg) \xi_K^2 \nonumber \\ & - \bigg( 24 C^r_{14} - 6 C^r_{15} + 36 C^r_{16} + 24 C^r_{17} + 48 C^r_{18} - 252 (L^r_{4})^2 - 108 L^r_{4} L^r_{5} + 288 L^r_{4} L^r_{6} - 56 (L^r_{5})^2 + 48 L^r_{5} L^r_{6} \bigg) \xi_\pi \xi_K \nonumber \\ & + \bigg( 9 C^r_{14}-3 C^r_{15}+27 C^r_{16}+9 C^r_{17}+24 C^r_{18}+(3 L^r_{4}-L^r_{5}) (21 L^r_{4}+25 L^r_{5}-24 L^r_{6}-24 L^r_{8} ) \bigg) \xi_\pi^2 \bigg\}\end{aligned}$$ Due to the numerically large mass prefactors in the expression for $d^\eta_{\pi K K}$, the errors arising from the use of the truncated sunset expressions are magnified significantly, resulting in a poorly convergent expression if these approximate results for the sunsets are used. This can be seen in Figure \[FigLatticeFitEta\], where the divergence between the truncated and exact values is significant even for small values of $\rho$. Therefore, for $F_f$ we present an expansion in $\rho$ whose coefficients are numerical, obtained from the sunset integral series evaluated to a high order, and which therefore converges rapidly to the exact result. $$\begin{aligned} F_f [ \rho ] = 9.03816 & + \bigg( -7.82805 + 1.51852 \log (\rho) + 0.1875 \log ^2(\rho) \bigg) \rho \nonumber \\ & + \bigg( 2.69955 + 0.250529 \log (\rho) - 0.625 \log ^2(\rho) \bigg) \rho^2 \nonumber \\ & + \bigg( -1.08218 + 0.00176579 \log (\rho) - 0.229167 \log^2(\rho) \bigg) \rho^3 \nonumber \\ & + \bigg( 0.722228 - 0.306794 \log (\rho) + 0.0625 \log ^2(\rho) \bigg) \rho^4 + \mathcal{O}(\rho^5)\end{aligned}$$ Summary and Conclusion ====================== $SU(3)$ ChPT is the effective theory of the strong interactions at low energies, and describes the pseudo-scalar octet degrees of freedom and their interactions. Of the many properties associated with this sector, the masses and decay constants are amongst the most fundamental. 
The predictions for these from the effective theory and from the lattice constitute some of the most important tests of this part of the standard model, and of the standard picture of spontaneous breaking of the axial-vector symmetries associated with the massless limit of the theory. In the limit of isospin invariance, there are three masses in the theory, namely $m_\pi$, $m_K$ and $m_\eta$. At two-loop order, the meson mass expressions involve the computation of the sunset diagrams, while the decay constants also require calculation of the energy derivative of the sunsets, all evaluated on-shell. Sunset integrals have been investigated in great detail independently of ChPT, and much is known about them. In the most general mass configuration, it has been shown that any sunset can be expressed in terms of at most four master integrals (MI). If some of the masses are equal, the number of MI is reduced. On the other hand, if any of the masses is set to zero, the sunset is known to suffer from infrared problems. All of the above features contribute to the complexity of analyzing the masses and decay constants in ChPT. Analytic treatments of the pion mass and decay constant have been performed in [@Ananthanarayan:2017yhz; @Kaiser:2007kf], where, due to strangeness conservation, the only configuration not corresponding to a pseudo-threshold is a sunset with a kaon pair and an $\eta$ in the propagators. Since the pion mass is the smallest parameter in the theory, it is possible in this case to provide an expansion in this parameter to obtain the corresponding analytic expression. For the eta, on the other hand, a similar configuration appears, except with the pion mass in the propagator and the eta mass in the external momentum, and it is not possible to expand in the small parameter without encountering IR divergences. In the case of the kaon, the sole configuration not of the pseudo-threshold type is one in which all three particles are present in the propagators. 
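For orientation, the sunset diagrams referred to throughout correspond, schematically, to the two-loop integral (the precise normalization conventions vary between references): $$\begin{aligned} H\left\{m_1,m_2,m_3;p^2\right\} = \frac{1}{i^2}\int \frac{d^d q}{(2\pi )^d} \frac{d^d r}{(2\pi )^d} \frac{1}{\left(q^2-m_1^2\right)\left(r^2-m_2^2\right)\left((q+r-p)^2-m_3^2\right)}\end{aligned}$$ evaluated on-shell, $p^2 = m_P^2$, for the masses and decay constants discussed here.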
For the quantities of interest, there are then two mass ratios present, and one may wish to provide a double series representation in these mass ratios. In this work we have carried out precisely this exercise, by introducing Mellin-Barnes (MB) representations for the sunset diagrams at hand (see [@Ananthanarayan:2018] for details). Whereas for problems with a single MB parameter a simple approach exists, which allows one to carry out the evaluation and summation of residues of poles, closing the contour in the complex plane to the left or to the right, and then using simple ratio tests to figure out the regions of convergence in the single parameter, a more sophisticated analysis is required when two or (especially) more MB parameters appear. The case at hand is a concrete realization of this scenario with two parameters. Our work follows several steps, which we summarize now. 1. We decompose the vector, tensor and derivative sunset integrals appearing in the expressions for the $m_P$ and $F_P$ by applying integration by parts to express them in terms of the MI. 2. The resulting MI are of various mass configurations, and appear with up to three distinct mass scales. Each of these MI is then evaluated by using MB representations. The solutions of the one and two mass scale MI appearing in this analysis can all be written in closed form. The solutions of the three mass scale master integrals, however, are expressed as linear combinations of single and double infinite series. The full results are given in Appendix \[Sec:SunsetResults\]; see also [@Ananthanarayan:2017qmx] for an equivalent rewriting of these results in terms of Kampé de Fériet series. We show in Appendix \[Sec:PionSunsets\] how to get analytic results valid for the pion case. 3. We substitute the sunset integral results into the expressions of the $m_P$ and $F_P$. 4. The GMO relation is then applied to these expressions. 
As the GMO is a tree-level relation, and we wish to express the $m_P$ and $F_P$ in terms of physical meson masses, this involves calculating and including the contributions from the lower-order $\mathcal{O}(p^4)$ terms in the higher-order $\mathcal{O}(p^6)$ ones. The motivation for applying the GMO relation stems partly from the desire to provide simple expressions that can be compared against lattice simulations, in which the eta mass is generally calculated using the GMO relation and is not an independent parameter. 5. We isolate the contributions to the $m_P$ and $F_P$ from different terms, e.g. linear chiral log terms, bilinear chiral logs, terms involving the $\mathcal{O}(p^4)$ LECs, etc., to determine their relative weight in the final expressions of the masses and decay constants. 6. The $m_P$ and $F_P$ expressions for $P=K,\eta$ without application of the GMO relation, but separated into terms of different classes, are given in Appendix \[Sec:NonGMOExpr\]. 7. A set of results is given for the three mass scale sunsets that are truncations of the exact results, but which are numerically close to the latter for the lattice input sets of [@Durr:2010hr]. The approximate results for the sunsets appearing in the kaon and eta expressions are given in Section \[Sec:NumApproxSunsets\], and those for the pion are given in Appendix \[Sec:PionSunsets\]. The numerical justification for some of these approximations is presented in Section \[Sec:NumAnalysis\]. 8. Also presented as an ancillary tool to this paper is a `Mathematica` based code that allows one to obtain a truncated expression for the three mass master sunset integrals when the level of precision and values of the input meson masses are provided. This allows lattice practitioners, amongst others, to obtain analytic approximations for the sunsets for any given set of lattice inputs. These can then be used to construct relatively compact analytic expressions for easy comparison with lattice or experimental data. 9. 
A numerical study is done in Section \[Sec:NumAnalysis\] for $m_K$, $F_K$, $m_\eta$ and $F_\eta$ to provide a breakdown of the relative numerical contributions of their different constituents to the NNLO part. This shows that the sunset integral contribution is significant. 10. In Section \[Sec:NumAnalysis\], we also numerically justify the use of our GMO-simplified expressions by showing that the error on the various components constituting the NNLO contribution due to the use of the GMO relation does not exceed 5% in most cases, and that the final error on the NNLO contribution is effectively zero for the kaon mass, and very small for the kaon and eta decay constants. 11. We provide in Section \[Sec:LatticeFits\] a set of expressions for $m_K$, $m_\eta$, $F_K$ and $F_\eta$ that can easily be fit to lattice data, and in which the term that depends on the approximation of the loop integrals may easily be substituted by other approximations (calculated, for example, using tools such as the aforementioned supplementary `Mathematica` files). 12. We also calculate values of $m_K$, $F_K$, $m_\eta$ and $F_\eta$, and see that the comparison of our results with prior determinations shows good agreement when the BE14 LEC values are used. When the free fit LEC values are used, our results diverge from some literature values. In this paper, we adopt a phenomenology practitioner's perspective, and provide principally the final results that are of relevance in this respect. The results given in Appendix \[Sec:SunsetResults\], for example, are for the $\mathcal{O}(\epsilon^0)$ term, and are only convergent for the values of mass ratios shown in Figure \[Fig:RegOfConv\]. In a forthcoming publication [@Ananthanarayan:2018], we describe the calculation of the three mass scale sunset integrals in detail, and give the complete $\epsilon$-expansion for all possible values of the meson masses. 
An important field where analytic expressions may be of use, and one which we have emphasised strongly in this work, is lattice QCD. In [@Ananthanarayan:2017qmx], the use of analytic expressions to determine values of ChPT parameters was demonstrated. Data from recent lattice simulations for $m_K$, $m_\eta$, $F_K$ and $F_\eta$ are not publicly available, but we hope that the expressions and tools provided in this work will encourage and assist lattice practitioners in performing such a cross-disciplinary study. Acknowledgements {#acknowledgements .unnumbered} ================ SF thanks David Greynat for helpful discussions and correspondence. JB is supported in part by the Swedish Research Council grants contract numbers 2015-04089 and 2016-05996 and by the European Research Council under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 668679). BA is partly supported by the MSIL Chair of the Division of Physical and Mathematical Sciences, Indian Institute of Science. Expressions without the use of GMO \[Sec:NonGMOExpr\] ===================================================== We present here the expressions for the masses and decay constants in which the physical eta mass has been retained and not simplified by use of the GMO relation. We give only those terms that change when the GMO relation is used. 
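For reference, the Gell-Mann-Okubo (GMO) relation used for these simplifications is the tree-level mass relation $$\begin{aligned} 3 m_\eta^2 = 4 m_K^2 - m_\pi^2\end{aligned}$$ which is exact for the lowest-order masses and is used in the main text to eliminate the eta mass in favour of $m_\pi$ and $m_K$.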
Kaon Mass --------- The expression for the kaon mass, not simplified using the GMO relation, is: $$\begin{aligned} M^2_{K} = m^{2}_{K} + \left( m^{2}_{K} \right)^{(4)} + \left( m^{2}_{K} \right)^{(6)}_{CT} + \left( m^{2}_{K} \right)^{(6)}_{loop} + \mathcal{O}(p^8) \end{aligned}$$ where $m^{2}_{K}$ is given by Eq.(\[cTreeMass\]), $\left( m^{2}_{K} \right)^{(6)}_{CT}$ is given by Eq.(\[cCT\]), $$\begin{aligned} \frac{F_{\pi}^2}{m_K^2} \left( m^{2}_{K} \right)^{(4)} = 8(m_{\pi}^2 + 2 m_K^2)(2 L_6^r - L_4^r) + 8 m_K^2 (2L_8^r - L_5^r) + \frac{m_{\eta}^4}{2 m_{K}^2} l^r_{\eta} + \frac{m_{\pi}^2 m_{\eta}^2}{6 m_{K}^2} l^r_{\eta} \end{aligned}$$ and $$\begin{aligned} F_{\pi}^4 \left( m^{2}_{K} \right)^{(6)}_{loop} = c^{K}_{L_i} + c^{K}_{L_i \times L_j} + c^{K}_{log \times L_i} + c^{K}_{log} + c^{K}_{log \times log} + c^{K}_{sunset}\end{aligned}$$ where: $$\begin{aligned} 27 (16 \pi^2) c^{K}_{L_i} &= 108 m_K^6 L_1^r + 3 \left( 122 m_K^6 - 16 m_K^4 m_{\pi}^2 + 56 m_K^2 m_{\pi}^4 \right)L_2^r \nonumber \\ & + \left(89 m_K^6 - 4 m_K^4 m_{\pi}^2 + 41 m_K^2 m_{\pi}^4 \right) L_3^r \end{aligned}$$ $$\begin{aligned} c^{K}_{log \times L_i} =& -2 m_K^2 m_{\pi}^2 \bigg( 48 m_{\pi}^2 L_1^r + 12 m_{\pi}^2 L_2^r + 15 m_{\pi}^2 L_3^r - 4 \left(8 m_K^2+17 m_{\pi}^2\right) L_4^r - 4 \left(4 m_K^2+3 m_{\pi}^2\right) L_5^r \nonumber \\ & \quad + 8 \left(8 m_K^2+11 m_{\pi}^2\right) L_6^r + 8 \left(4 m_K^2+3 m_{\pi}^2\right) L_8^r \bigg) l_{\pi}^r \nonumber \\ & - 4 m_K^4 \bigg( 36 m_K^2 L_1^r + 18 m_K^2 L_2^r + 15 m_K^2 L_3^r - 4 \left(10 m_K^2 + m_{\pi}^2 \right) L_4^r - 16 m_K^2 L_5^r \nonumber \\ & \quad + 8 \left(8 m_K^2+m_{\pi}^2 \right) L_6^r + 32 m_K^2 L_8^r \bigg) l_K^r \nonumber \\ & - \frac{2}{9} m_{\eta}^2 \bigg( 48 m_{K}^2 \left(4 m_{K}^2-m_{\pi}^2\right) L_1^r + 12 m_{K}^2 \left(4 m_{K}^2 - m_{\pi}^2\right) L_2^r + 21 m_{K}^2 \left(4 m_{K}^2-m_{\pi}^2\right) L_3^r \nonumber \\ & \quad + 36 m_{K}^2 \left(m_{\pi}^2-8 m_{K}^2\right) L_4^r - 4 \left(28 m_{K}^4-3 m_{K}^2 m_{\pi}^2+2 
m_{\pi}^4\right) L_5^r + 24 m_{K}^2 \left(16 m_{K}^2-m_{\pi}^2\right) L_6^r \nonumber \\ & \quad + 96 \left(2 m_{K}^4-3 m_{K}^2 m_{\pi}^2+m_{\pi}^4\right) L_7^r + 24 \left(12 m_{K}^4-7 m_{K}^2 m_{\pi}^2+2 m_{\pi}^4\right) L_8^r \bigg) l_{\eta}^r\end{aligned}$$ $$\begin{aligned} \left( 16 \pi^2 \right) c_{log}^{K} & = \left(\frac{3}{8} m_{\eta}^2 m_{K}^2 m_{\pi}^2 - \frac{13}{4} m_{K}^4 m_{\pi}^2 - \frac{45}{16} m_{K}^2 m_{\pi}^4 \right) l_{\pi}^r \nonumber \\ & - \bigg( \frac{487}{72} m_{K}^6 + \frac{9}{16} m_{\eta}^4 m_{K}^2 - \frac{1}{12} m_{\eta}^2 m_{K}^4 + \frac{3}{4} m_{\eta}^2 m_{K}^2 m_{\pi}^2 + \frac{7}{4} m_{K}^4 m_{\pi}^2 + \frac{3}{16} m_{K}^2 m_{\pi}^4 \bigg) l_K^r \nonumber \\ & + \left(\frac{9}{16} m_{\eta}^4 m_{K}^2 - \frac{143}{36} m_{\eta}^2 m_{K}^4 - \frac{1}{8} m_{\eta}^2 m_{K}^2 m_{\pi}^2 \right) l_{\eta}^r\end{aligned}$$ $$\begin{aligned} c_{log \times log}^{K} &= \left(\frac{15}{16} m_{\eta}^4 m_{\pi}^2 - \frac{41}{16} m_{\eta}^2 m_{K}^2 m_{\pi}^2 + \frac{15}{16} m_{\eta}^2 m_{\pi}^4 + \frac{9}{4} m_{K}^4 m_{\pi}^2 + \frac{9}{4} m_{K}^2 m_{\pi}^4 \right) (l^r_{\pi})^2 \nonumber \\ & + \left(-\frac{9}{8} m_{\eta}^4 m_{K}^2 + \frac{11}{12} m_{\eta}^2 m_{K}^4 - \frac{3}{2} m_{\eta}^2 m_{K}^2 m_{\pi}^2 + \frac{143}{18} m_{K}^6 + \frac{7}{4} m_{K}^4 m_{\pi}^2 - \frac{3}{8} m_{K}^2 m_{\pi}^4 \right) (l^r_{K})^2 \nonumber \\ & + \left(-\frac{121}{72} m_{K}^2 m_{\eta}^4 + \frac{5}{32} m_{\eta}^4 m_{\pi}^2 + \frac{205}{36} m_{\eta}^2 m_{K}^4 - \frac{41}{16} m_{\eta}^2 m_{K}^2 m_{\pi}^2 + \frac{227}{288} m_{\eta}^2 m_{\pi}^4 \right) (l^r_{\eta})^2 \nonumber \\ & + \left(\frac{3}{2} m_{\eta}^2 m_{K}^2 m_{\pi}^2 - \frac{7}{2} m_{K}^4 m_{\pi}^2 + \frac{3}{4} m_{K}^2 m_{\pi}^4 \right) l^r_{\pi} l^r_{K} + \left(\frac{9}{4} m_{\eta}^4 m_{K}^2 - \frac{9}{2} m_{\eta}^2 m_{K}^4 + \frac{3}{2} m_{\eta}^2 m_{K}^2 m_{\pi}^2\right) l^r_{K} l^r_{\eta} \nonumber \\ & + \left(-\frac{15}{8} m_{\eta}^4 m_{\pi}^2 + \frac{5}{24} m_{\eta}^2 m_{K}^2 m_{\pi}^2 - \frac{37}{24} 
m_{\eta}^2 m_{\pi}^4 \right) l^r_{\pi} l^r_{\eta} \end{aligned}$$ $$\begin{aligned} c^K_{sunset} &= \frac{1}{(16\pi^2)^2} \Bigg\{ \left(\frac{413}{128}+\frac{97 \pi ^2}{384}\right) m_{\eta}^4 m_{K}^2 -\left(\frac{33}{8}+\frac{15 \pi ^2}{64}\right) m_{\eta}^6 - \left(\frac{427}{3456}+\frac{179 \pi ^2}{648}\right) m_{K}^6 \nonumber \\ & + \left(\frac{29}{64}-\frac{\pi ^2}{32}\right) m_{\eta}^2 m_{K}^4 - \left(\frac{3}{4}+\frac{3 \pi ^2}{64}\right) m_{\pi}^6 + \left(\frac{3}{8}-\frac{5 \pi ^2}{64}\right) m_{\eta}^2 m_{\pi}^4 + \left(\frac{3}{8}-\frac{5 \pi ^2}{64}\right) m_{\eta}^4 m_{\pi}^2 \nonumber \\ & +\left(\frac{209}{1728}-\frac{265 \pi ^2}{2592}\right) m_{K}^4 m_{\pi}^2-\left(\frac{67}{384}+\frac{9 \pi ^2}{128}\right) m_{K}^2 m_{\pi}^4 + \left(\frac{3}{2}+\frac{53 \pi ^2}{192}\right) m_{\eta}^2 m_{K}^2 m_{\pi}^2 \Bigg\} \nonumber \\ & + c^K_{K \pi \pi} + c^K_{K \eta \eta} + c^K_{K \pi \eta}\end{aligned}$$ $$\begin{aligned} c^K_{K \eta \eta} &= \left(\frac{45}{32} m_{\eta}^4 - \frac{19}{16} m_{\eta}^2 m_{K}^2 + \frac{25}{288} m_{K}^4 \right) \overline{H}^{\chi}_{K \eta \eta} - \left(\frac{15}{8} m_{\eta}^4 m_{K}^2 - \frac{4}{3} m_{\eta}^2 m_{K}^4 - \frac{13}{24} m_{K}^6 \right) \overline{H}^{\chi}_{2K \eta \eta}\end{aligned}$$ $$\begin{aligned} c^K_{K \pi \eta} &= \left(-\frac{3}{32} m_{\eta}^4 + \frac{5}{16} m_{\eta}^2 m_{K}^2 - \frac{3}{4} m_{\eta}^2 m_{\pi}^2 + \frac{13}{16} m_{K}^4 + \frac{5}{16} m_{K}^2 m_{\pi}^2 - \frac{3}{32} m_{\pi}^4 \right) \overline{H}^{\chi}_{K \pi \eta} \nonumber \\ & + \left(\frac{3}{32} m_{\eta}^6 - \frac{13}{32} m_{\eta}^4 m_{K}^2 + \frac{39}{32} m_{\eta}^4 m_{\pi}^2 + \frac{1}{8}m_{\eta}^2 m_{K}^4 - \frac{51}{32} m_{\eta}^2 m_{K}^2 m_{\pi}^2 + \frac{9}{16} m_{\eta}^2 m_{\pi}^4 \right) \overline{H}^{\chi}_{K \pi 2\eta} \nonumber \\ & + \left(\frac{9}{16} m_{\eta}^4 m_{\pi}^2 - \frac{51}{32} m_{\eta}^2 m_{K}^2 m_{\pi}^2 + \frac{39}{32} m_{\eta}^2 m_{\pi}^4 + \frac{1}{8} m_{K}^4 m_{\pi}^2 -\frac{13}{32} m_{K}^2 m_{\pi}^4 + 
\frac{3}{32} m_{\pi}^6 \right) \overline{H}^{\chi}_{K 2\pi \eta}\end{aligned}$$ where $c^{K}_{L_i \times L_j}$ is given by Eq.(\[cLiLj\]) and $c^K_{K \pi \pi}$ by Eq.(\[cBarkpp\]). Kaon Decay Constant ------------------- The expression for the kaon decay constant, not simplified using the GMO relation, is: $$\begin{aligned} \frac{F_K}{F_0} = 1 + F_K^{(4)} + \left( F_K \right)^{(6)}_{CT} + \left( F_K \right)^{(6)}_{loop} + \mathcal{O}(p^8) \end{aligned}$$ where: $$\begin{aligned} F_{\pi}^2 F_K^{(4)} = 4\left(2 m_{K}^2+m_{\pi}^2\right) L_4^r + 4 m_{K}^2 L_5^r - \frac{3}{4} m_{\pi}^2 l_{\pi}^r - \frac{3}{2} m_{K}^2 l_{K}^r - \frac{3}{4} m_{\eta}^2 l_{\eta}^r\end{aligned}$$ and $ \left( F_K \right)^{(6)}_{CT}$ is given by Eq.(\[dCT\]). We also have: $$\begin{aligned} F_{\pi}^4 \left( F_K \right)^{(6)}_{loop} = d^{K}_{L_i} + d^{K}_{L_i \times L_j} + d^{K}_{log \times L_i} + d^{K}_{log} + d^{K}_{log \times log} + d^{K}_{sunset}\end{aligned}$$ where: $$\begin{aligned} -54(16\pi^2) d^{K}_{L_i} = (108 m_{K}^4) L_1^r + 6 \left(61 m_{K}^4 - 8 m_{K}^2 m_{\pi}^2 + 28 m_{\pi}^4 \right) L_2^r + \left(89 m_{K}^4 - 4 m_{K}^2 m_{\pi}^2 + 41 m_{\pi}^4 \right) L_3^r\end{aligned}$$ $$\begin{aligned} d^{K}_{log \times L_i} =& \left(48 m_{\pi}^2 L_1^r + 12 m_{\pi}^2 L_2^r + 15 m_{\pi}^2 L_3^r - \left(38 m_{K}^2+47 m_{\pi}^2\right) L_4^r - \left(19 m_{K}^2+6 m_{\pi}^2\right) L_5^r \right) m_{\pi}^2 l_{\pi}^r \nonumber \\ & + 2 \left( 36 m_{K}^2 L_1^r + 18 m_{K}^2 L_2^r + 15 m_{K}^2 L_3^r - \left(30 m_{K}^2 + 7 m_{\pi}^2\right) L_4^r - \left(7 m_{K}^2 + 6 m_{\pi}^2\right) L_5^r \right) m_{K}^2 l_K^r \nonumber \\ & + \left(\frac{1}{3} \left(4 m_{K}^2-m_{\pi}^2\right) (16 L_1^r +4 L_2^r + 7 L_3^r ) - \left(22 m_{K}^2-m_{\pi}^2\right) L_4^r - 3 \left(m_{K}^2 + 2 m_{\pi}^2 \right) L_5^r \right) m_{\eta}^2 l_{\eta}^r \end{aligned}$$ $$\begin{aligned} \left(16\pi^2\right) d_{log}^{K} =& \left( - \frac{9}{16} m_{\eta}^2 m_{\pi}^2 + \frac{9}{8} m_{K}^2 m_{\pi}^2 + \frac{39}{32} m_{\pi}^4 \right)
l_{\pi}^r + \left( - \frac{27}{32} m_{\eta}^4 + \frac{41}{24} m_{\eta}^2 m_{K}^2 - \frac{5}{16} m_{\eta}^2 m_{\pi}^2 \right) l_{\eta}^r \nonumber \\ & + \left( \frac{27}{32} m_{\eta}^4 + \frac{5}{48} m_{\eta}^2 m_{K}^2 + \frac{9}{8} m_{\eta}^2 m_{\pi}^2 + \frac{643}{144} m_{K}^4 + \frac{27}{16} m_{K}^2 m_{\pi}^2 + \frac{9}{32} m_{\pi}^4 \right) l_K^r\end{aligned}$$ $$\begin{aligned} d_{log \times log}^{K} =& \left(-\frac{45}{32}\frac{m_{\eta}^4 m_{\pi}^2}{m_{K}^2} - \frac{45}{32} \frac{m_{\eta}^2 m_{\pi}^4}{m_{K}^2} + \frac{103}{32} m_{\eta}^2 m_{\pi}^2 - \frac{9}{8} m_{K}^2 m_{\pi}^2 + \frac{51}{32} m_{\pi}^4 \right) (l^r_{\pi})^2 \nonumber \\ & + \left(\frac{27}{16} m_{\eta}^4 - \frac{11}{12} m_{\eta}^2 m_{K}^2 + \frac{9}{4} m_{\eta}^2 m_{\pi}^2 + \frac{3}{8} m_{K}^4 - \frac{3}{2} m_{K}^2 m_{\pi}^2 + \frac{9}{16} m_{\pi}^4 \right) (l^r_{K})^2 \nonumber \\ & + \left(-\frac{45}{32}\frac{m_{\eta}^4 m_{\pi}^2}{m_{K}^2} + \frac{45}{16} m_{\eta}^4 - \frac{45}{32} \frac{m_{\eta}^2 m_{\pi}^4}{m_{K}^2} - \frac{19}{6} m_{\eta}^2 m_{K}^2 + \frac{7}{2} m_{\eta}^2 m_{\pi}^2 \right) (l^r_{\eta})^2 \nonumber \\ & + \left(-\frac{27}{8} m_{\eta}^4 + \frac{53}{24} m_{\eta}^2 m_{K}^2 - \frac{9}{4} m_{\eta}^2 m_{\pi}^2 \right) l^r_{K} l^r_{\eta} + \left( \frac{45}{16} \frac{m_{\eta}^4 m_{\pi}^2}{m_{K}^2} + \frac{45}{16} \frac{m_{\eta}^2 m_{\pi}^4}{m_{K}^2} -\frac{5}{8} m_{\eta}^2 m_{\pi}^2 \right) l^r_{\pi} l^r_{\eta} \nonumber \\ & + \left(-\frac{9}{4} m_{\eta}^2 m_{\pi}^2 + \frac{75}{8} m_{K}^2 m_{\pi}^2 - \frac{9}{8} m_{\pi}^4 \right) l^r_{\pi} l^r_{K}\end{aligned}$$ $$\begin{aligned} d^{K}_{sunset} &= \frac{1}{\left( 16 \pi ^2\right)^2} \Bigg\{ \left(\frac{99}{16}+\frac{45 \pi ^2}{128}\right) \frac{m_{\eta}^6}{m_{K}^2} - \left(\frac{9}{16}-\frac{15 \pi ^2}{128}\right) \frac{m_{\eta}^4 m_{\pi}^2}{m_{K}^2} - \left(\frac{1111}{256}+\frac{263 \pi ^2}{768}\right) m_{\eta}^4 \nonumber \\ & - \left(\frac{9}{16}-\frac{15 \pi ^2}{128}\right) \frac{m_{\eta}^2 m_{\pi}^4}{m_{K}^2} + 
\left(\frac{67}{192}+\frac{185 \pi ^2}{1728}\right) m_{\eta}^2 m_{K}^2-\left(\frac{9}{4}+\frac{139 \pi ^2}{384}\right) m_{\eta}^2 m_{\pi}^2 \nonumber \\ & + \left(\frac{583}{2304}+\frac{3 \pi ^2}{32}\right) m_{K}^4 + \left(\frac{9}{8}+\frac{9 \pi ^2}{128}\right) \frac{m_{\pi}^6}{m_{K}^2} - \left(\frac{5}{192}-\frac{19 \pi ^2}{192}\right) m_{K}^2 m_{\pi}^2 + \left(\frac{745}{768}+\frac{15 \pi ^2}{256}\right) m_{\pi}^4 \Bigg\} \nonumber \\ & + d^{K}_{K \pi \pi} + d^{K}_{K \eta \eta} + d^{K}_{K \pi \eta} \end{aligned}$$ $$\begin{aligned} d^K_{K \eta \eta} &= \left(- \frac{135}{64} \frac{m_{\eta}^4}{m_{K}^2} + \frac{25}{16} m_{\eta}^2 - \frac{229}{576}m_{K}^2 \right) \overline{H}^\chi_{K \eta \eta} + \left(\frac{45}{16} m_{\eta}^4 - \frac{41}{24} m_{\eta}^2 m_{K}^2 + \frac{37}{144} m_{K}^4 \right) \overline{H}^\chi_{2K \eta \eta}\end{aligned}$$ $$\begin{aligned} d^K_{K \pi \eta} &= \left(\frac{9}{64} \frac{m_{\eta}^4}{m_{K}^2}+ \frac{9}{8} \frac{m_{\eta}^2 m_{\pi}^2}{m_{K}^2} - \frac{5}{16} m_{\eta}^2 + \frac{9}{64} \frac{m_{\pi}^4}{m_{K}^2} + \frac{7}{32} m_{K}^2 - \frac{5}{16} m_{\pi}^2 \right) \overline{H}^\chi_{K \pi \eta} - \left( \frac{1}{2} m_{K}^4 \right) \overline{H}^\chi_{2K \pi \eta} \nonumber \\ & + \left(- \frac{9}{64} \frac{m_{\eta}^6}{m_{K}^2} - \frac{117}{64} \frac{m_{\eta}^4 m_{\pi}^2}{m_{K}^2} + \frac{29}{64}m_{\eta}^4 - \frac{27}{32} \frac{m_{\eta}^2 m_{\pi}^4}{m_{K}^2} - \frac{13}{16} m_{\eta}^2 m_{K}^2 + \frac{123}{64} m_{\eta}^2 m_{\pi}^2 \right) \overline{H}^\chi_{K \pi 2\eta} \nonumber \\ & + \left(- \frac{27}{32} \frac{m_{\eta}^4 m_{\pi}^2}{m_{K}^2} - \frac{117}{64} \frac{m_{\eta}^2 m_{\pi}^4}{m_{K}^2} + \frac{123}{64} m_{\eta}^2 m_{\pi}^2 - \frac{9}{64} \frac{m_{\pi}^6}{m_{K}^2} - \frac{13}{16} m_{K}^2 m_{\pi}^2 + \frac{29}{64} m_{\pi}^4 \right) \overline{H}^\chi_{K 2\pi \eta}\end{aligned}$$ where $d^{K}_{L_i \times L_j}$ is given by Eq.(\[dBarLiLj\]) and $d^K_{K \pi \pi} $ by Eq.(\[dBarkpp\]). 
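As a rough illustration of how the expansions above are evaluated numerically, the following Python sketch computes the $\mathcal{O}(p^4)$ correction $F_K^{(4)}$ at the scale $\mu = 0.77\:$GeV. It is a sketch under stated assumptions, not the code used for this work: the meson masses, $F_\pi$, and the values of $L_4^r$, $L_5^r$ are purely illustrative, and the chiral logarithm is taken in the conventional normalization $l_P^r = \log(m_P^2/\mu^2)/(32\pi^2)$.

```python
import math

# Illustrative meson masses and pion decay constant in GeV (assumed values).
m_pi, m_K, m_eta, F_pi = 0.1396, 0.4957, 0.5479, 0.0922
mu = 0.77  # renormalization scale in GeV

def l_r(m):
    # Assumed normalization of the chiral logarithm l^r_P:
    # l^r_P = log(m_P^2 / mu^2) / (32 pi^2).
    return math.log(m**2 / mu**2) / (32 * math.pi**2)

# Illustrative low-energy constants at mu = 0.77 GeV (not a fit from this work).
L4r, L5r = 0.0, 1.2e-3

# F_pi^2 * F_K^(4) = 4 (2 m_K^2 + m_pi^2) L4^r + 4 m_K^2 L5^r
#                    - (3/4) m_pi^2 l_pi^r - (3/2) m_K^2 l_K^r - (3/4) m_eta^2 l_eta^r
FK4 = (4 * (2 * m_K**2 + m_pi**2) * L4r + 4 * m_K**2 * L5r
       - 0.75 * m_pi**2 * l_r(m_pi)
       - 1.5 * m_K**2 * l_r(m_K)
       - 0.75 * m_eta**2 * l_r(m_eta)) / F_pi**2

print(f"F_K^(4) = {FK4:.3f}")
print(f"F_K/F_0 truncated at NLO = {1 + FK4:.3f}")
```

With these inputs the correction comes out of order a few tenths, consistent with the well-known slow convergence of the $SU(3)$ expansion for the decay constants.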
Eta Mass -------- The expression for the eta mass, not simplified using the GMO relation, is: $$\begin{aligned} M^2_{\eta} = m^{2}_{\eta} + \left( m^{2}_{\eta} \right)^{(4)} + \left( m^{2}_{\eta} \right)^{(6)}_{CT} + \left( m^{2}_{\eta} \right)^{(6)}_{loop} + \mathcal{O}(p^8) \end{aligned}$$ where $m^{2}_{\eta}$ is given by Eq.(\[cTreeMass\]), $\left( m^{2}_{\eta} \right)^{(6)}_{CT}$ is given by Eq.(\[cCT\]), $$\begin{aligned} \frac{F_{\pi}^2}{m_\eta^2} \left( m_{\eta}^{2} \right)^{(4)} =& -8 m_{\eta}^2 \left(2 m_{K}^2+m_{\pi}^2\right) L_4^r + \frac{8}{3} m_{\eta}^2 \left(m_{\pi}^2-4 m_{K}^2\right) L_5^r + \frac{16}{3} \left(8 m_{K}^4 + 2 m_{K}^2 m_{\pi}^2 - m_{\pi}^4 \right) L_6^r \nonumber \\ & + \frac{128}{3} \left(m_{K}^2-m_{\pi}^2\right)^2 L_7^r + \frac{16}{3} L_8^r \left(8 m_{K}^4-8 m_{K}^2 m_{\pi}^2+3 m_{\pi}^4\right) - m_{\pi}^4 l^r_{\pi} \nonumber \\ & + \frac{2}{3} m_{K}^2 \left(3 m_{\eta}^2 + m_{\pi}^2 \right) l^r_{K} + \frac{1}{9} m_{\eta}^2 \left(7 m_{\pi}^2 - 16 m_{K}^2 \right) l^r_{\eta}\end{aligned}$$ and $$\begin{aligned} F_{\pi}^4 \left( m^{2}_{\eta} \right)^{(6)}_{loop} = c^{\eta}_{L_i} + c^{\eta}_{L_i \times L_j} + c^{\eta}_{log \times L_i} + c^{\eta}_{log} + c^{\eta}_{log \times log} + c^{\eta}_{sunset}\end{aligned}$$ where: $$\begin{aligned} ( 16 \pi^2 ) c_{log}^{\eta} &= \left(-\frac{4}{3}\frac{m_K^6 m_{\pi}^2}{m_{\eta}^2} + \frac{5}{3}\frac{m_K^4 m_{\pi}^4}{m_{\eta}^2} - \frac{7}{12} \frac{m_K^2 m_{\pi}^6}{ m_{\eta}^2} + \frac{1}{16} \frac{m_{\pi}^8}{m_{\eta}^2} - 2 m_K^4 m_{\pi}^2 + \frac{2}{3}m_K^2 m_{\pi}^4 - \frac{41}{24} m_{\pi}^6 \right) l^r_{\pi} \nonumber \\ & + \left(\frac{20}{3} \frac{m_K^8}{m_{\eta}^2} - \frac{10}{3} \frac{m_K^6 m_{\pi}^2}{m_{\eta}^2} + \frac{5}{12} \frac{m_K^4 m_{\pi}^4}{m_{\eta}^2} - 24 m_K^6 + \frac{82}{9}m_K^4 m_{\pi}^2 - 3 m_K^2 m_{\pi}^4\right) l^r_K \nonumber \\ & + \bigg(-\frac{20}{3} \frac{m_K^8}{m_{\eta}^2} + \frac{14}{3} \frac{m_K^6 m_{\pi}^2}{m_{\eta}^2} - \frac{25}{12} \frac{m_K^4 
m_{\pi}^4}{m_{\eta}^2} - \frac{262}{243} m_{\eta}^2 m_K^4 + \frac{7}{12} \frac{m_K^2 m_{\pi}^6}{m_{\eta}^2} + \frac{823}{486} m_{\eta}^2 m_K^2 m_{\pi}^2 \nonumber \\ & \quad - \frac{1}{16} \frac{m_{\pi}^8}{m_{\eta}^2} - \frac{83}{486} m_{\eta}^2 m_{\pi}^4 + \frac{16}{9} m_K^6 - \frac{16}{9} m_K^4 m_{\pi}^2 + \frac{5}{3} m_K^2 m_{\pi}^4 - \frac{1}{3} m_{\pi}^6 \bigg) l^r_{\eta} \end{aligned}$$ $$\begin{aligned} c_{log \times log}^{\eta} &= \bigg( -\frac{20}{3} \frac{m_{K}^8 m_{\pi}^2}{m_{\eta}^4} + \frac{22}{3} \frac{m_{K}^6 m_{\pi}^4}{m_{\eta}^4} - \frac{29}{12} \frac{m_{K}^4 m_{\pi}^6}{m_{\eta}^4} + \frac{1}{4}\frac{m_{K}^2 m_{\pi}^8}{m_{\eta}^4} + 4 \frac{m_{K}^6 m_{\pi}^2}{m_{\eta}^2} - \frac{14}{3} \frac{m_{K}^4 m_{\pi}^4}{m_{\eta}^2} \nonumber \\ & \quad + \frac{11}{12} \frac{m_{K}^2 m_{\pi}^6}{m_{\eta}^2} + 3 m_{K}^2 m_{\pi}^4 + \frac{65}{12} m_{\pi}^6 \bigg) (l_{\pi}^r)^2 \nonumber \\ & + \bigg(- \frac{20}{3} \frac{m_{K}^8 m_{\pi}^2}{m_{\eta}^4} + \frac{22}{3} \frac{m_{K}^6 m_{\pi}^4}{m_{\eta}^4} - \frac{29}{12} \frac{m_{K}^4 m_{\pi}^6}{m_{\eta}^4} + \frac{1}{4} \frac{m_{K}^2 m_{\pi}^8}{m_{\eta}^4} - \frac{20}{3} \frac{m_{K}^8}{m_{\eta}^2 } + \frac{22}{3} \frac{m_{K}^6 m_{\pi}^2}{m_{\eta}^2} \nonumber \\ & \quad - \frac{61}{12} \frac{m_{K}^4 m_{\pi}^4}{m_{\eta}^2} + \frac{11}{12} \frac{m_{K}^2 m_{\pi}^6}{m_{\eta}^2} + \frac{100}{9} m_{K}^6 -\frac{71}{9} m_{K}^4 m_{\pi}^2 + \frac{23}{6} m_{K}^2 m_{\pi}^4 \bigg) (l_{K}^r)^2 \nonumber \\ & + \bigg( -\frac{25}{3} m_{\eta}^4 m_{K}^2 +\frac{55}{12} m_{\eta}^4 m_{\pi}^2 - \frac{20}{3} \frac{m_{K}^8}{m_{\eta}^2} + \frac{10}{3} \frac{m_{K}^6 m_{\pi}^2}{m_{\eta}^2} - \frac{5}{12} \frac{m_{K}^4 m_{\pi}^4}{m_{\eta}^2} + \frac{2312}{81} m_{\eta}^2 m_{K}^4 \nonumber \\ & \quad - \frac{1636}{81} m_{\eta}^2 m_{K}^2 m_{\pi}^2 + \frac{619}{162} m_{\eta}^2 m_{\pi}^4 + \frac{8}{9} m_{K}^6 + \frac{4}{9} m_{K}^4 m_{\pi}^2 - \frac{1}{6}m_{K}^2 m_{\pi}^4 \bigg) (l_{\eta}^r)^2 \nonumber \\ & + \bigg( \frac{40}{3} \frac{m_{K}^8 
m_{\pi}^2}{ m_{\eta}^4} - \frac{44}{3} \frac{m_{K}^6 m_{\pi}^4}{m_{\eta}^4} + \frac{29}{6} \frac{m_{K}^4 m_{\pi}^6}{ m_{\eta}^4} - \frac{1}{2} \frac{m_{K}^2 m_{\pi}^8}{m_{\eta}^4} - 8 \frac{m_{K}^6 m_{\pi}^2}{m_{\eta}^2} + \frac{28}{3} \frac{m_{K}^4 m_{\pi}^4}{m_{\eta}^2} \nonumber \\ & \quad -\frac{11}{6} \frac{m_{K}^2 m_{\pi}^6}{m_{\eta}^2} - \frac{32}{3} m_{K}^4 m_{\pi}^2 - \frac{8}{3} m_{K}^2 m_{\pi}^4 \bigg) l_{\pi}^r l_{K}^r \nonumber \\ & + \bigg( \frac{40}{3} \frac{m_{K}^8}{m_{\eta}^2} - \frac{20}{3} \frac{m_{K}^6 m_{\pi}^2}{m_{\eta}^2} + \frac{5}{6} \frac{m_{K}^4 m_{\pi}^4}{ m_{\eta}^2} - \frac{64}{9} m_{\eta}^2 m_{K}^4 + \frac{28}{9} m_{\eta}^2 m_{K}^2 m_{\pi}^2 - \frac{16}{9} m_{K}^6 - \frac{8}{9} m_{K}^4 m_{\pi}^2 \nonumber \\ & \quad + \frac{1}{3} m_{K}^2 m_{\pi}^4 \bigg) l_{K}^r l_{\eta}^r + \bigg( \frac{64}{9} m_{\eta}^2 m_{K}^2 m_{\pi}^2 - \frac{35}{9} m_{\eta}^2 m_{\pi}^4 \bigg) l_{\pi}^r l_{\eta}^r\end{aligned}$$ $$\begin{aligned} c^{\eta}_{sunset} = \frac{1}{\left(16 \pi ^2\right)^2} &\Bigg\{ -\left(\frac{80}{3}+\frac{20 \pi ^2}{9}\right) \frac{m_K^{10}}{m_{\eta}^4} + \left(\frac{58}{3}+2 \pi ^2\right) \frac{ m_K^8 m_{\pi}^2}{m_{\eta}^4} - \left(6+\frac{11 \pi ^2}{12}\right) \frac{m_K^6 m_{\pi}^4}{m_{\eta}^4} \nonumber \\ & + \left(\frac{49}{24}+\frac{2 \pi ^2}{9}\right) \frac{m_K^4 m_{\pi}^6}{m_{\eta}^4} - \left(\frac{7}{12}+\frac{\pi ^2}{48}\right) \frac{m_K^2 m_{\pi}^8}{m_{\eta}^4} + \frac{1}{16} \frac{m_{\pi}^{10}}{m_{\eta}^4} + \left(\frac{91}{6}+\frac{17 \pi ^2}{9}\right) \frac{m_K^8}{m_{\eta}^2} \nonumber \\ & - \left(\frac{77}{12}+\frac{23 \pi ^2}{18}\right) \frac{m_K^6 m_{\pi}^2}{m_{\eta}^2} + \left(\frac{127}{96}+\frac{73 \pi ^2}{144}\right) \frac{m_K^4 m_{\pi}^4}{m_{\eta}^2} + \left(\frac{455}{324}+\frac{32 \pi ^2}{243}\right) m_{\eta}^2 m_K^4 \nonumber \\ & -\left(\frac{45}{32}+\frac{11 \pi ^2}{144}\right) \frac{m_K^2 m_{\pi}^6}{m_{\eta}^2}-\left(\frac{911}{648}+\frac{28 \pi ^2}{243}\right) m_{\eta}^2 m_K^2 m_{\pi}^2 + \frac{119}{384} 
\frac{m_{\pi}^8}{m_{\eta}^2} \nonumber \\ & + \left(\frac{1547}{5184}+\frac{49 \pi ^2}{1944}\right) m_{\eta}^2 m_{\pi}^4+\left(\frac{6095}{972}-\frac{767 \pi ^2}{729}\right) m_K^6+\left(\frac{85 \pi ^2}{108}-\frac{451}{144}\right) m_K^4 m_{\pi}^2 \nonumber \\ & -\left(\frac{2857}{2592}+\frac{457 \pi ^2}{972}\right) m_K^2 m_{\pi}^4-\left(\frac{1417}{7776}+\frac{91 \pi ^2}{11664}\right) m_{\pi}^6 \Bigg\} + c_{\pi \pi \eta}^{\eta} + c_{K K \eta}^{\eta} + c_{\pi K K}^{\eta}\end{aligned}$$ $$\begin{aligned} c_{K K \eta}^{\eta} &= \left(10 \frac{m_K^8}{m_{\eta}^4} - 5 \frac{m_K^6 m_{\pi}^2}{m_{\eta}^4} + \frac{5}{8} \frac{m_K^4 m_{\pi}^4}{m_{\eta}^4} - \frac{20}{3} \frac{m_K^6}{m_{\eta}^2} + 2 \frac{m_K^4 m_{\pi}^2}{m_{\eta}^2} - \frac{1}{12} \frac{m_K^2 m_{\pi}^4}{m_{\eta}^2} - \frac{2}{3} m_K^4 + \frac{11}{9} m_K^2 m_{\pi}^2 - \frac{5}{24} m_{\pi}^4 \right) \overline{H}^\chi_{\eta K K} \nonumber \\ & + \left(- \frac{40}{3} \frac{m_K^8}{m_{\eta}^2} + \frac{20}{3} \frac{m_K^6 m_{\pi}^2}{m_{\eta}^2} - \frac{5}{6} \frac{m_K^4 m_{\pi}^4}{m_{\eta}^2} + \frac{56}{9} m_{\eta}^2 m_K^4 - \frac{44}{9} m_{\eta}^2 m_K^2 m_{\pi}^2 + \frac{5}{6} m_{\eta}^2 m_{\pi}^4 + \frac{64}{9} m_K^6 - \frac{16}{9} m_K^4 m_{\pi}^2 \right) \overline{H}^\chi_{2\eta K K}\end{aligned}$$ Eta Decay Constant ------------------ The expression for the eta decay constant, not simplified using the GMO relation, is: $$\begin{aligned} \frac{F_{\eta}}{F_0} = 1 + F_{\eta}^{(4)} + \left( F_{\eta} \right)^{(6)}_{CT} + \left( F_{\eta} \right)^{(6)}_{loop} + \mathcal{O}(p^8) \end{aligned}$$ where: $$\begin{aligned} F_{\pi}^2 F_{\eta}^{(4)} = 4 \left(2 m_K^2+m_{\pi}^2\right) L^r_{4} + \frac{4}{3} \left(4 m_K^2-m_{\pi}^2\right) L^r_{5} - 3 m_K^2 l^r_K\end{aligned}$$ and $\left( F_{\eta} \right)^{(6)}_{CT}$ is given by Eq.(\[dCT\]).
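Since these expressions are quoted without using the GMO relation, it may be helpful to note (as a cross-check, not a result quoted from the text) how the $\mathcal{O}(p^4)$ term simplifies once the lowest-order Gell-Mann–Okubo relation $3 m_{\eta}^2 = 4 m_K^2 - m_{\pi}^2$ is imposed: the $L^r_5$ coefficient collapses, since $\frac{4}{3}\left(4 m_K^2-m_{\pi}^2\right) = 4 m_{\eta}^2$, giving $$\begin{aligned} F_{\pi}^2 F_{\eta}^{(4)} \;\to\; 4 \left(2 m_K^2+m_{\pi}^2\right) L^r_{4} + 4 m_{\eta}^2 L^r_{5} - 3 m_K^2 l^r_K \end{aligned}$$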
We also have: $$\begin{aligned} F_{\pi}^4 \left( F_{\eta} \right)^{(6)}_{loop} = d^{\eta}_{L_i} + d^{\eta}_{L_i \times L_j} + d^{\eta}_{log \times L_i} + d^{\eta}_{log} + d^{\eta}_{log \times log} + d^{\eta}_{sunset}\end{aligned}$$ where: $$\begin{aligned} \left(16\pi^2\right) d_{log}^{\eta} &= \left( 2 \frac{m_K^6 m_{\pi}^2}{m_{\eta}^4} - \frac{5}{2} \frac{m_K^4 m_{\pi}^4}{m_{\eta}^4} + \frac{7}{8} \frac{m_K^2 m_{\pi}^6}{m_{\eta}^4} - \frac{3}{32} \frac{m_{\pi}^8}{m_{\eta}^4} - \frac{m_K^4 m_{\pi}^2}{m_{\eta}^2} + \frac{11}{6} \frac{m_K^2 m_{\pi}^4}{m_{\eta}^2} - \frac{19}{48} \frac{m_{\pi}^6}{m_{\eta}^2} + \frac{9}{8} m_{\pi}^4 \right) l^r_{\pi} \nonumber \\ & + \left(-10 \frac{m_K^8}{m_{\eta}^4} + 5 \frac{m_K^6 m_{\pi}^2}{m_{\eta}^4} - \frac{5}{8} \frac{m_K^4 m_{\pi}^4}{m_{\eta}^4} + \frac{4}{9} \frac{m_K^6}{m_{\eta}^2} - \frac{10}{9} \frac{m_K^4 m_{\pi}^2}{m_{\eta}^2} + \frac{1}{4} \frac{m_K^2 m_{\pi}^4}{m_{\eta}^2} + 6 m_K^4 + \frac{9}{4} m_K^2 m_{\pi}^2 \right) l^r_K \nonumber \\ & + \bigg( 10 \frac{m_K^8}{m_{\eta}^4} - 7 \frac{m_K^6 m_{\pi}^2}{m_{\eta}^4} + \frac{25}{8} \frac{m_K^4 m_{\pi}^4}{m_{\eta}^4} - \frac{7}{8} \frac{m_K^2 m_{\pi}^6}{m_{\eta}^4} + \frac{3}{32} \frac{m_{\pi}^8}{m_{\eta}^4} - \frac{4}{9} \frac{m_K^6}{m_{\eta}^2} + \frac{19}{9} \frac{m_K^4 m_{\pi}^2}{m_{\eta}^2} - \frac{25}{12} \frac{m_K^2 m_{\pi}^4}{m_{\eta}^2} \nonumber \\ & \quad + \frac{3}{4} m_{\eta}^2 m_K^2 + \frac{19}{48} \frac{m_{\pi}^6}{m_{\eta}^2} - \frac{3}{4} m_{\eta}^2 m_{\pi}^2 + \frac{550}{243} m_K^4 - \frac{1331}{972} m_K^2 m_{\pi}^2 + \frac{2221}{3888} m_{\pi}^4 \bigg) l^r_{\eta}\end{aligned}$$ $$\begin{aligned} d_{log \times log}^{\eta} &= \left( 10 \frac{m_K^8}{m_{\eta}^4} - 5 \frac{m_K^6 m_{\pi}^2}{m_{\eta}^4} + \frac{5}{8} \frac{m_K^4 m_{\pi}^4}{m_{\eta}^4} + \frac{9}{8} m_{\eta}^4 - \frac{8}{9} \frac{m_K^6}{m_{\eta}^2} - \frac{4}{9} \frac{m_K^4 m_{\pi}^2}{m_{\eta}^2} + \frac{1}{6} \frac{m_K^2 m_{\pi}^4}{m_{\eta}^2} - 3 m_{\eta}^2 m_K^2 + \frac{3}{4} m_{\eta}^2 m_{\pi}^2 \right)
(l^r_{\eta})^2 \nonumber \\ & + \left(10 \frac{m_K^8 m_{\pi}^2}{m_{\eta}^6} - 11 \frac{m_K^6 m_{\pi}^4}{m_{\eta}^6} + \frac{29}{8} \frac{m_K^4 m_{\pi}^6}{m_{\eta}^6} - \frac{3}{8} \frac{m_K^2 m_{\pi}^8}{m_{\eta}^6} - 2 \frac{m_K^6 m_{\pi}^2}{m_{\eta}^4} + \frac{11}{3} \frac{m_K^4 m_{\pi}^4}{m_{\eta}^4} - \frac{19}{24} \frac{m_K^2 m_{\pi}^6}{m_{\eta}^4} - \frac{9}{8} m_{\pi}^4 \right) (l^r_{\pi})^2 \nonumber \\ & + \left(-20\frac{m_K^8}{m_{\eta}^4} + 10\frac{m_K^6 m_{\pi}^2}{m_{\eta}^4} - \frac{5}{4} \frac{m_K^4 m_{\pi}^4}{m_{\eta}^4} + \frac{16}{9} \frac{m_K^6}{m_{\eta}^2} + \frac{8}{9} \frac{m_K^4 m_{\pi}^2}{m_{\eta}^2} - \frac{1}{3} \frac{m_K^2 m_{\pi}^4}{m_{\eta}^2}\right) l^r_{\eta} l^r_K \nonumber \\ & + \bigg( 10 \frac{m_K^8 m_{\pi}^2}{m_{\eta}^6} - 11 \frac{m_K^6 m_{\pi}^4}{m_{\eta}^6} + \frac{29}{8} \frac{m_K^4 m_{\pi}^6}{m_{\eta}^6} - \frac{3}{8} \frac{m_K^2 m_{\pi}^8}{m_{\eta}^6} + 10 \frac{m_K^8}{m_{\eta}^4} - 7 \frac{m_K^6 m_{\pi}^2}{m_{\eta}^4} + \frac{103}{24} \frac{m_K^4 m_{\pi}^4}{m_{\eta}^4} \nonumber \\ & \quad - \frac{19}{24} \frac{m_K^2 m_{\pi}^6}{m_{\eta}^4} - \frac{8}{9} \frac{m_K^6}{m_{\eta}^2} - \frac{4}{9} \frac{m_K^4 m_{\pi}^2}{m_{\eta}^2} + \frac{1}{6} \frac{m_K^2 m_{\pi}^4}{m_{\eta}^2} + 3 m_K^4-\frac{3}{2} m_K^2 m_{\pi}^2 \bigg) (l^r_K)^2 \nonumber \\ & + \bigg( -20 \frac{m_K^8 m_{\pi}^2}{m_{\eta}^6} + 22 \frac{m_K^6 m_{\pi}^4}{m_{\eta}^6} - \frac{29}{4} \frac{m_K^4 m_{\pi}^6}{m_{\eta}^6} + \frac{3}{4} \frac{m_K^2 m_{\pi}^8}{m_{\eta}^6} + 4 \frac{m_K^6 m_{\pi}^2}{m_{\eta}^4} - \frac{22}{3} \frac{m_K^4 m_{\pi}^4}{m_{\eta}^4} + \frac{19}{12} \frac{m_K^2 m_{\pi}^6}{m_{\eta}^4} \nonumber \\ & \quad +12 m_K^2 m_{\pi}^2 \bigg) l^r_K l^r_{\pi}\end{aligned}$$ $$\begin{aligned} d^{\eta}_{sunset} &= \frac{1}{\left( 16 \pi ^2\right)^2} \Bigg\{ \left(40+\frac{10 \pi ^2}{3}\right) \frac{m_K^{10}}{m_{\eta}^6} - \left(29+3 \pi ^2\right) \frac{m_K^8 m_{\pi}^2}{m_{\eta}^6} + \left(9+\frac{11 \pi ^2}{8}\right) \frac{ m_K^6 m_{\pi}^4}{m_{\eta}^6} 
-\left(\frac{49}{16}+\frac{\pi ^2}{3}\right) \frac{m_K^4 m_{\pi}^6}{m_{\eta}^6} \nonumber \\ & + \left(\frac{7}{8}+\frac{\pi ^2}{32}\right) \frac{m_K^2 m_{\pi}^8}{m_{\eta}^6} - \frac{3}{32} \frac{m_{\pi}^{10}}{m_{\eta}^6} - \left(\frac{91}{4}+\frac{43 \pi ^2}{18}\right) \frac{m_K^8}{m_{\eta}^4} + \left(\frac{151}{24}+\frac{49 \pi^2}{36}\right) \frac{m_K^6 m_{\pi}^2}{m_{\eta}^4} \nonumber \\ & - \left(\frac{125}{192}+\frac{131 \pi ^2}{288}\right) \frac{m_K^4 m_{\pi}^4}{m_{\eta}^4} + \left(\frac{301}{192}+\frac{19 \pi ^2}{288}\right) \frac{ m_K^2 m_{\pi}^6 }{m_{\eta}^4} - \frac{277}{768} \frac{m_{\pi}^8}{m_{\eta}^4} + \left(\frac{5 \pi ^2}{27}-\frac{23}{12}\right) \frac{m_K^6}{m_{\eta}^2} \nonumber \\ & + \left(\frac{13}{4}+\frac{\pi ^2}{18}\right) \frac{m_K^4 m_{\pi}^2}{m_{\eta}^2} - \left(\frac{331}{192}+\frac{\pi ^2}{48}\right) \frac{m_K^2 m_{\pi}^4}{m_{\eta}^2} + \left(\frac{59}{384}+\frac{\pi ^2}{72}\right) \frac{m_{\pi}^6}{m_{\eta}^2} +\left(\frac{12349}{7776}+\frac{5 \pi ^2}{24}\right) m_K^4 \nonumber \\ & +\left(\frac{\pi ^2}{48}-\frac{4133}{7776}\right) m_K^2 m_{\pi}^2+\left(\frac{6761}{15552}+\frac{5 \pi ^2}{96}\right) m_{\pi}^4 \Bigg\} + d^{\eta}_{\pi \pi \eta} + d^{\eta}_{K K \eta} + d^{\eta}_{\pi K K}\end{aligned}$$ $$\begin{aligned} d^{\eta}_{\pi \pi \eta} &= \left( \frac{1}{12} m_{\pi}^4 \right) \overline{H}^\chi_{2\eta \pi \pi} - \left( \frac{1}{12} \frac{m_{\pi}^4}{m_{\eta}^2} \right) \overline{H}^\chi_{\eta \pi \pi}\end{aligned}$$ $$\begin{aligned} d^{\eta}_{K K \eta} &= \bigg(-15 \frac{m_K^8}{m_{\eta}^6}+\frac{15}{2} \frac{m_K^6 m_{\pi}^2}{m_{\eta}^6} - \frac{15}{16} \frac{m_K^4 m_{\pi}^4}{m_{\eta}^6} + \frac{28}{3} \frac{m_K^6}{m_{\eta}^4} - \frac{10}{3} \frac{m_K^4 m_{\pi}^2}{m_{\eta}^4} + \frac{1}{4}\frac{m_K^2 m_{\pi}^4}{m_{\eta}^4} - \frac{8}{9} \frac{m_K^4}{m_{\eta}^2} \nonumber \\ & \quad - \frac{2}{9} \frac{m_K^2 m_{\pi}^2}{m_{\eta}^2} + \frac{1}{12} \frac{m_{\pi}^4}{m_{\eta}^2} - \frac{3}{4} m_K^2 + \frac{3}{16} m_{\pi}^2 \bigg)
\overline{H}^\chi_{K K \eta} \nonumber \\ & + \bigg( 20 \frac{m_K^8}{m_{\eta}^4} - 10 \frac{m_K^6 m_{\pi}^2}{m_{\eta}^4} + \frac{5}{4} \frac{m_K^4 m_{\pi}^4}{m_{\eta}^4} - \frac{88}{9} \frac{m_K^6}{m_{\eta}^2} + \frac{28}{9} \frac{m_K^4 m_{\pi}^2}{m_{\eta}^2} - \frac{1}{6} \frac{m_K^2 m_{\pi}^4}{m_{\eta}^2} + \frac{8}{9} m_K^4 \nonumber \\ & \quad + \frac{2}{9} m_K^2 m_{\pi}^2 - \frac{1}{12} m_{\pi}^4 \bigg) \overline{H}^\chi_{K K 2\eta}\end{aligned}$$ $$\begin{aligned} d^{\eta}_{\pi K K} &= \bigg(5 \frac{m_K^8}{m_{\eta}^6} - \frac{9}{2} \frac{m_K^6 m_{\pi}^2}{m_{\eta}^6} + \frac{45}{16} \frac{m_K^4 m_{\pi}^4}{m_{\eta}^6} - \frac{7}{8} \frac{m_K^2 m_{\pi}^6}{m_{\eta}^6} + \frac{3}{32} \frac{m_{\pi}^8}{m_{\eta}^6} - 2 \frac{m_K^6}{m_{\eta}^4} + \frac{10}{3} \frac{m_K^4 m_{\pi}^2}{m_{\eta}^4} - \frac{55}{24} \frac{m_K^2 m_{\pi}^4}{m_{\eta}^4} \nonumber \\ & \quad + \frac{19}{48} \frac{m_{\pi}^6}{m_{\eta}^4} +\frac{1}{2} \frac{m_K^4}{m_{\eta}^2} - \frac{1}{6} \frac{m_K^2 m_{\pi}^2}{m_{\eta}^2} + \frac{25}{96} \frac{m_{\pi}^4}{m_{\eta}^2} - \frac{9}{16} m_{\pi}^2 \bigg) \overline{H}^\chi_{\pi K K} \nonumber \\ & + \bigg( - 20 \frac{m_K^{10}}{m_{\eta}^6} + 25 \frac{m_K^8 m_{\pi}^2}{m_{\eta}^6} - \frac{59}{4} \frac{m_K^6 m_{\pi}^4}{m_{\eta}^6} + \frac{63}{16} \frac{m_K^4 m_{\pi}^6}{m_{\eta}^6} - \frac{3}{8} \frac{m_K^2 m_{\pi}^8}{m_{\eta}^6} + \frac{9}{m_{\eta}^4} m_K^8 - \frac{77}{6} \frac{m_K^6 m_{\pi}^2}{m_{\eta}^4} \nonumber \\ & \quad + \frac{355}{48} \frac{m_K^4 m_{\pi}^4}{m_{\eta}^4} - \frac{19}{16} \frac{m_K^2 m_{\pi}^6}{m_{\eta}^4} - \frac{m_K^6}{m_{\eta}^2} + \frac{1}{3} \frac{m_K^4 m_{\pi}^2}{m_{\eta}^2} - \frac{25}{48} \frac{m_K^2 m_{\pi}^4}{m_{\eta}^2} \bigg) \overline{H}^\chi_{\pi 2K K} \nonumber \\ & + \bigg(-\frac{m_K^6 m_{\pi}^4}{m_{\eta}^6}-\frac{m_K^4 m_{\pi}^6}{m_{\eta}^6} + \frac{11}{16} \frac{m_K^2 m_{\pi}^8}{m_{\eta}^6} - \frac{3}{32} \frac{m_{\pi}^{10}}{m_{\eta}^6} + \frac{m_K^6 m_{\pi}^2}{m_{\eta}^4} - \frac{3}{2} \frac{m_K^4 
m_{\pi}^4}{m_{\eta}^4} + \frac{91}{48} \frac{m_K^2 m_{\pi}^6}{m_{\eta}^4} \nonumber \\ & \quad - \frac{19}{48} \frac{m_{\pi}^8}{m_{\eta}^4} - \frac{1}{2} \frac{m_K^4 m_{\pi}^2}{m_{\eta}^2} + \frac{1}{6} \frac{m_K^2 m_{\pi}^4}{m_{\eta}^2} - \frac{25}{96} \frac{m_{\pi}^6}{m_{\eta}^2} \bigg) \overline{H}^\chi_{2\pi K K} \end{aligned}$$ where $d^{\eta}_{L_i \times L_j}$ is given by Eq.(\[dBarLiLj\]). Sunset Integral Results \[Sec:SunsetResults\] ============================================= The results presented in this Appendix have been checked by doing the calculations analytically in two different ways (see [@Ananthanarayan:2018] for details). They have also been checked numerically using `AMBRE` [@Gluza:2007rt; @Gluza:2010rn] and other related `Mathematica` packages, as well as using **CHIRON** [@Bijnens:2014gsa]. Equivalent expressions in terms of Kampé de Fériet series may be found in [@Ananthanarayan:2017qmx]. Three mass scale kaon sunsets ----------------------------- $$\begin{aligned} \label{Eq:Hkpe} & \overline{H}^{\chi}_{K \pi \eta} = \frac{m_{K}^2}{512\pi ^4} \Bigg\{ -\frac{1}{4}+\frac{5 \pi ^2}{6}-\frac{7}{4}\left(\frac{m_{\eta}^4}{m_{K}^4}+\frac{m_{\pi}^4}{m_{K}^4}\right) + \left(1-\frac{\pi^2}{2}\right)\left(\frac{m_{\eta}^2}{m_{K}^2}+\frac{m_{\pi}^2}{m_{K}^2}\right) +\frac{m_{\pi}^4}{2 m_{K}^4} \log\left[\frac{m_{\pi}^2}{m_{K}^2}\right] \nonumber \\ & \quad +\frac{m_{\pi}^2}{m_{K}^2} \frac{m_{\eta}^2}{m_{K}^2} \left(7+\frac{2 \pi^2}{3}-2 \log\left[\frac{m_{\eta}^2}{m_{K}^2}\right]-2 \log\left[\frac{m_{\pi}^2}{m_{K}^2}\right]+\log\left[\frac{m_{\eta}^2}{m_{K}^2}\right] \log\left[\frac{m_{\pi}^2}{m_{K}^2}\right]\right) +\frac{m_{\eta}^4}{2 m_{K}^4} \log\left[\frac{m_{\eta}^2}{m_{K}^2}\right] \nonumber \\ & \quad -\frac{m_{\pi}^2}{m_{K}^2} \log\left[\frac{m_{\pi}^2}{m_{K}^2}\right]^2-\frac{m_{\eta}^2}{m_{K}^2} \log\left[\frac{m_{\eta}^2}{m_{K}^2}\right]^2 +\frac{8 \pi }{3}\left(\frac{m_{\eta}^2}{m_{K}^2}\right)^{3/2} {}_2F_1 \bigg[ \begin{array}{c} 
\frac{1}{2},-\frac{1}{2} \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] +\frac{1}{36}\frac{m_{\eta}^6}{m_{K}^6} {}_3F_2 \bigg[ \begin{array}{c} 1,1,2 \\ \frac{5}{2},4 \\ \end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] \nonumber \\ & \quad + \frac{1}{36} \frac{m_{\pi}^6}{m_{K}^6} {}_3F_2 \bigg[ \begin{array}{c} 1,1,2 \\ \frac{5}{2},4 \\ \end{array} \bigg| \frac{m_\pi^2}{4m_K^2} \bigg] + \frac{1}{6} \frac{m_{\eta}^4}{m_K^4} \frac{m_{\pi}^2}{m_K^2} \left( 2\gamma_E - 1 + \log \left[\frac{m_{\eta}^2}{4 m_K^2}\right] + \log \left[\frac{m_{\pi}^2}{4 m_K^2}\right] \right) {}_2F_1 \bigg[ \begin{array}{c} 1,1 \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_\pi^2}{4m_K^2} \bigg] \nonumber \\ & \quad + 2 \sqrt{\pi} \frac{m_{\eta}^2}{m_{K}^2} \sum_{m=0}^{\infty} \frac{\Gamma(1+m)}{ \Gamma(\frac{5}{2}+m)} \left(\frac{m_{\pi}^2}{4m_{K}^2}\right)^{m+2} \bigg( 2 \psi(m+1)+\psi(m+2)+\psi(m+3)-2 \psi\left(m+\frac{5}{2}\right) \bigg) \nonumber \\ & \quad + 8 \sqrt{\pi} \sum_{m,n=0}^{\infty} \frac{ \Gamma (m+n+1) \Gamma (m+n+2) \Gamma (m+n+3)}{\Gamma (m+2) \Gamma (m+3) \Gamma (n+1) \Gamma (n+2) \Gamma \left(m+n+\frac{5}{2}\right)}\left(\frac{m_{\eta}^2}{4 m_{K}^2}\right)^{m+2}\left(\frac{m_{\pi}^2}{4 m_{K}^2}\right)^{n+1} \nonumber \\ & \qquad \times \bigg( \log \left[\frac{m_{\eta}^2}{4 m_{K}^2}\right]+\log \left[\frac{m_{\pi}^2}{4 m_{K}^2}\right] -\psi(m+2)-\psi(m+3)-\psi(n+1)-\psi(n+2)\nonumber \\ & \qquad \quad + 2 \psi(m+n+1) +2 \psi(m+n+2)+2 \psi(m+n+3) -2 \psi\left(m+n+\frac{5}{2}\right) \bigg) \nonumber \\ & \quad + \frac{32}{\pi^{3/2}} \left(\frac{m_{K}^2}{4m_{\eta}^2}\right)^{\frac{1}{2}} \sum_{m,n=0}^{\infty} \frac{\Gamma \left(m+\frac{1}{2}\right) \Gamma \left(m+\frac{3}{2}\right) \Gamma \left(n-\frac{1}{2}\right) \Gamma \left(n+\frac{1}{2}\right) \Gamma \left(n+\frac{3}{2}\right)}{\Gamma (n+1) \Gamma (m+n+2) \Gamma (m+n+3)} \left(\frac{m_{\pi}^2}{m_{\eta}^2}\right)^m\left(\frac{m_{\pi}^2}{4m_{K}^2}\right)^{n+2} \nonumber \\ & \qquad \times \left( \log 
\left[\frac{m_\pi^2}{m_\eta^2}\right] + \psi\left(m+\frac{1}{2}\right) + \psi\left(m+\frac{3}{2}\right) - \psi(m+n+2) - \psi(m+n+3) \right) \nonumber \\ & \quad - 8 \sqrt{\pi} \sum_{m,n=0}^{\infty} \frac{\Gamma \left(m+n-\frac{1}{2}\right) \Gamma \left(m+n+\frac{1}{2}\right) \Gamma \left(m+n+\frac{3}{2}\right)}{\Gamma \left(m+\frac{1}{2}\right) \Gamma \left(m+\frac{3}{2}\right) \Gamma (n+1) \Gamma (n+2) \Gamma (m+n+1)} \left(\frac{m_{\eta}^2}{4m_{K}^2}\right)^{\frac{1}{2}+m} \left(\frac{m_{\pi}^2}{4m_{K}^2}\right)^{1+n} \nonumber \\ & \qquad \times \left( \log \left[\frac{m_\pi^2}{m_\eta^2}\right] + \psi\left(m+\frac{1}{2}\right) + \psi\left(m+\frac{3}{2}\right) - \psi(n+1) - \psi(n+2) \right) \Bigg\}\end{aligned}$$ $$\begin{aligned} \label{Eq:H2kpe} & \overline{H}^{\chi}_{2K \pi \eta} = \frac{1}{512\pi ^4} \Bigg\{ \frac{5 \pi^2}{6} -1 -\frac{m_{\eta}^2}{m_{K}^2} \bigg( 1 + \frac{\pi^2}{3} + \frac{1}{2} \log ^2 \left[\frac{m_{K}^2}{m_{\eta}^2}\right] + \log \left[\frac{m_{K}^2}{m_{\eta}^2}\right] + \text{Li}_2 \left[ 1-\frac{m_{\pi}^2}{m_{\eta}^2} \right] \bigg) \nonumber \\ & \quad -\frac{m_{\pi}^2}{m_{K}^2} \bigg( 1 + \frac{\pi^2}{3} - \log \left[\frac{m_{\pi}^2}{m_{K}^2}\right] - \frac{1}{2} \log^2 \left[ \frac{m_{K}^2}{m_{\eta}^2} \right] - \log \left[\frac{m_{K}^2}{m_{\eta}^2}\right] \log \left[\frac{m_{\pi}^2}{m_{K}^2}\right] - \text{Li}_2 \left[1-\frac{m_{\pi}^2}{m_{\eta}^2}\right] \bigg) \nonumber \\ & \quad - \frac{m_{\pi}^4}{4 m_{K}^4} {}_3F_2 \bigg[ \begin{array}{c} 1,1,1 \\ \frac{3}{2},3 \\ \end{array} \bigg| \frac{m_\pi^2}{4m_K^2} \bigg] - \frac{m_{\eta}^4}{4 m_{K}^4} {}_3F_2 \bigg[ \begin{array}{c} 1,1,1 \\ \frac{3}{2},3 \\ \end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] + \frac{2 \pi}{3} \left(\frac{m_{\eta}^2}{m_{K}^2}\right)^{\frac{3}{2}} {}_2F_1 \bigg[ \begin{array}{c} \frac{1}{2},\frac{1}{2} \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] \nonumber \\ & \quad + 4 \sqrt{\pi} \sum_{m,n=0}^{\infty} \frac{\Gamma 
\left(m+n+\frac{1}{2}\right)^2 \Gamma \left(m+n+\frac{3}{2}\right)}{\Gamma \left(m+\frac{1}{2}\right) \Gamma \left(m+\frac{3}{2}\right) \Gamma (n+1) \Gamma (n+2) \Gamma (m+n+1)} \left(\frac{m_{\eta}^2}{4m_{K}^2}\right)^{\frac{1}{2}+m} \left(\frac{m_{\pi}^2}{4m_{K}^2}\right)^{1+n} \nonumber \\ & \qquad \times \left( \log \left[\frac{m_\pi^2}{m_\eta^2}\right] + \psi\left(m+\frac{1}{2}\right) + \psi\left(m+\frac{3}{2}\right) - \psi(n+1) - \psi(n+2) \right) \nonumber \\ & \quad - 4 \sqrt{\pi} \sum_{m,n=0}^{\infty} \frac{ \Gamma (m+n+1)^2 \Gamma (m+n+2) }{\Gamma (m+1) \Gamma (m+2) \Gamma (n+1) \Gamma (n+2) \Gamma \left(m+n+\frac{3}{2} \right) } \left(\frac{m_{\eta}^2}{4m_{K}^2}\right)^{1+m} \left(\frac{m_{\pi}^2}{4m_{K}^2}\right)^{1+n} \nonumber \\ & \qquad \times \bigg( \log \left[\frac{m_{\pi}^2}{4 m_{K}^2}\right] + \log \left[\frac{m_{\eta}^2}{4 m_{K}^2}\right] -\psi(m+1)-\psi(m+2) -\psi(n+1)-\psi(n+2) \nonumber \\ & \qquad \quad + 4 \psi(m+n+1)+2 \psi(m+n+2)-2 \psi\left(m+n+\frac{3}{2}\right) \bigg) \nonumber \\ & \quad - \frac{4}{\pi^{3/2}} \sum_{m,n=0}^{\infty} \frac{\Gamma \left(m+\frac{1}{2}\right) \Gamma \left(m+\frac{3}{2}\right) \Gamma \left(n+\frac{1}{2}\right)^2 \Gamma \left(n+\frac{3}{2}\right)}{\Gamma (n+1) \Gamma (m+n+2) \Gamma (m+n+3)} \left(\frac{m_{\pi}^2}{m_{\eta}^2}\right)^{m+\tfrac{1}{2}} \left(\frac{m_{\pi}^2}{4m_{K}^2}\right)^{n+\tfrac{3}{2}} \nonumber \\ & \qquad \times \left( \log \left[\frac{m_\pi^2}{m_\eta^2}\right] + \psi\left(m+\frac{1}{2}\right) + \psi\left(m+\frac{3}{2}\right) - \psi(m+n+2) - \psi(m+n+3) \right) \Bigg\}\end{aligned}$$ $$\begin{aligned} \label{Eq:Hk2pe} & \overline{H}^{\chi}_{K 2\pi \eta} = \frac{1}{512\pi ^4} \Bigg\{ \frac{1}{6} \frac{m_\eta^4}{m_K^4} \left( 2\gamma_E + \log \left[ \frac{m_{\eta}^2}{4 m_{K}^2} \right] + \log \left[ \frac{m_{\pi}^2}{4 m_{K}^2} \right] \right) {}_2F_1 \bigg[ \begin{array}{c} 1,1 \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] - 1 - \frac{\pi^2}{2} - 2\log \left[ 
\frac{m_{\pi}^2}{m_{K}^2} \right] \nonumber \\ & + \frac{m_{\eta}^2}{m_{K}^2} \left( 5 + \frac{2 \pi^2}{3} - 2 \log \left[ \frac{m_{\pi}^2}{m_{K}^2} \right] + \log \left[ \frac{m_{K}^2}{m_{\eta}^2}\right] - \log \left[ \frac{m_{K}^2}{m_{\eta}^2}\right] \log \left[ \frac{m_{\pi}^2}{m_{K}^2}\right] \right) + \frac{m_{\pi}^4}{12 m_{K}^4} {}_3F_2 \bigg[ \begin{array}{c} 1,1,2 \\ \frac{5}{2},3 \\ \end{array} \bigg| \frac{m_\pi^2}{4m_K^2} \bigg] \nonumber \\ & + \frac{1}{3} \frac{m_{\eta}^2}{m_{K}^2} \frac{m_{\pi}^2}{m_{K}^2}\left( 2\gamma_E - 1 + \log \left[ \frac{m_{\eta}^2}{4 m_{K}^2} \right] + \log \left[ \frac{m_{\pi}^2}{4 m_{K}^2} \right] \right) {}_3F_2 \bigg[ \begin{array}{c} 1,1,3 \\ 2,\frac{5}{2} \\ \end{array} \bigg| \frac{m_\pi^2}{4m_K^2} \bigg] -\frac{m_{\pi}^2}{m_{K}^2} \left(3-\log \left[ \frac{m_{\pi}^2}{m_{K}^2}\right] \right) \nonumber \\ & -\log^2 \left[\frac{m_{\pi}^2}{m_{K}^2} \right] + 2 \sqrt{\pi} \sum_{m=0}^{\infty} \frac{\Gamma (m+1)}{\Gamma \left(m+\frac{5}{2}\right)} \left(\frac{m_{\eta}^2}{4 m_{K}^2}\right)^{m+2} \bigg( 2 \psi(m+1) +\psi(m+2) + \psi(m+3)-2 \psi\left(m+\frac{5}{2}\right) \bigg) \nonumber \\ & + \frac{\sqrt{\pi}}{2} \frac{m_{\eta}^2}{m_{K}^2} \sum_{m=0}^{\infty} \frac{\Gamma (m+1) \Gamma (m+3)}{\Gamma (m+2) \Gamma \left(m+\frac{5}{2}\right)} \left(\frac{m_{\pi}^2}{4 m_{K}^2}\right)^{m+1} \bigg( 2 \psi(m+1)+2 \psi(m+3)-2 \psi\left(m+\frac{5}{2}\right) \bigg) \nonumber \\ & + \frac{2}{\pi^{3/2}} \sum_{m,n=0}^{\infty} \frac{\Gamma \left(m+\frac{1}{2}\right) \Gamma \left(m+\frac{3}{2}\right) \Gamma \left(n-\frac{1}{2}\right) \Gamma \left(n+\frac{1}{2}\right) \Gamma \left(n+\frac{3}{2}\right)}{\Gamma (n+1) \Gamma (m+n+2)^2} \left(\frac{m_{\pi}^2}{m_{\eta}^2}\right)^{m+\tfrac{1}{2}} \left(\frac{m_{\pi}^2}{4 m_{K}^2}\right)^{n+\tfrac{1}{2}} \nonumber \\ & \qquad \times \left( \log \left[ \frac{m_{\pi}^2}{m_{\eta}^2} \right] + \psi\left(m+\frac{1}{2}\right) + \psi\left(m+\frac{3}{2}\right) -2 \psi(m+n+2) \right) \nonumber \\ & - 2 
\sqrt{\pi} \sum_{m,n=0}^{\infty} \frac{\Gamma \left(m+n-\frac{1}{2}\right) \Gamma \left(m+n+\frac{1}{2}\right) \Gamma \left(m+n+\frac{3}{2}\right)}{\Gamma \left(m+\frac{1}{2}\right) \Gamma \left(m+\frac{3}{2}\right) \Gamma (n+1)^2 \Gamma (m+n+1)} \left(\frac{m_{\eta}^2}{4 m_{K}^2}\right)^{m+\frac{1}{2}} \left(\frac{m_{\pi}^2}{4 m_{K}^2}\right)^n \nonumber \\ & \qquad \times \left( \log \left[ \frac{m_{\pi}^2}{m_{\eta}^2} \right] + \psi\left(m+\frac{1}{2}\right)+\psi\left(m+\frac{3}{2}\right)-2 \psi(n+1) \right) \nonumber \\ & + 2 \sqrt{\pi} \sum_{m,n=0}^{\infty} \frac{\Gamma (m+n+2) \Gamma (m+n+3) \Gamma (m+n+4)}{\Gamma (m+2) \Gamma (m+3) \Gamma (n+2)^2 \Gamma \left(m+n+\frac{7}{2}\right)} \left(\frac{m_{\eta}^2}{4 m_{K}^2}\right)^{m+2} \left(\frac{m_{\pi}^2}{4 m_{K}^2}\right)^{n+1} \nonumber \\ & \qquad \times \bigg( \log \left[ \frac{m_{\pi}^2}{4 m_{K}^2} \right] + \log \left[ \frac{m_{\eta}^2}{4 m_{K}^2} \right] - \psi(m+2) -\psi(m+3) -2 \psi(n+2) \nonumber \\ & \qquad \quad + 2 \psi(m+n+2)+2 \psi(m+n+3)+2 \psi(m+n+4)-2 \psi\left(m+n+\frac{7}{2}\right) \bigg) \Bigg\}\end{aligned}$$ $$\begin{aligned} & \overline{H}^{\chi}_{K \pi 2\eta} = \frac{1}{512\pi^4} \Bigg\{ 2 \log \left[ \frac{m_K^2}{m_{\eta}^2} \right] - \log^2 \left[\frac{m_{K}^2}{m_{\eta}^2}\right] + \pi \left(\frac{m_{\eta}^2}{m_{K}^2}\right)^{\tfrac{1}{2}} \left(4-\frac{m_{\eta}^2}{m_{K}^2}\right)^{\tfrac{1}{2}} - \frac{m_{\eta}^2}{m_{K}^2} \left(3 + \log \left[\frac{m_{K}^2}{m_{\eta}^2}\right] \right) \nonumber \\ & \quad + \frac{m_{\pi}^2}{m_{K}^2} \left( 5 + \frac{2 \pi ^2}{3} + 2 \log \left[\frac{m_{K}^2}{m_{\eta}^2}\right] - \log \left[\frac{m_{\pi}^2}{m_{K}^2}\right] - \log \left[\frac{m_{\pi}^2}{m_{K}^2}\right] \log \left[ \frac{m_{K}^2}{m_{\eta}^2} \right] \right) + 2 \pi \frac{m_{\eta}}{m_K} {}_2F_1 \bigg[ \begin{array}{c} \frac{1}{2},\frac{1}{2} \\ \frac{3}{2} \\ \end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] \nonumber \\ & \quad + \frac{m_{\eta}^4}{12 m_{K}^4} {}_3F_2 \bigg[ 
\begin{array}{c} 1,1,2 \\ \frac{5}{2},3 \\ \end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] + \frac{1}{6} \frac{m_\pi^4}{m_K^4} \left( 2\gamma_E + \log \left[ \frac{m_{\eta}^2}{4 m_{K}^2} \right] + \log \left[ \frac{m_{\pi}^2}{4 m_{K}^2} \right] \right) {}_2F_1 \bigg[ \begin{array}{c} 1,1 \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_\pi^2}{4m_K^2} \bigg] \nonumber \\ & \quad + \frac{1}{3} \frac{m_{\eta}^2}{m_{K}^2} \frac{m_{\pi}^2}{m_{K}^2} \left( 2\gamma_E - 1 + \log \left[ \frac{m_{\eta}^2}{4 m_{K}^2} \right] + \log \left[ \frac{m_{\pi}^2}{4 m_{K}^2} \right] \right) {}_3F_2 \bigg[ \begin{array}{c} 1,1,3 \\ 2,\frac{5}{2} \\ \end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] - 1 -\frac{\pi^2}{2} \nonumber \\ & \quad + \frac{\sqrt{\pi}}{2} \frac{m_{\pi}^2}{m_{K}^2} \sum_{m=0}^{\infty} \frac{\Gamma (m+1) \Gamma (m+3)}{\Gamma (m+2) \Gamma \left(m+\frac{5}{2}\right)} \left(\frac{m_{\eta}^2}{4 m_{K}^2}\right)^{m+1} \bigg( 2 \psi(m+1)+2 \psi(m+3)-2 \psi\left(m+\frac{5}{2} \right) \bigg) \nonumber \\ & \quad + 2 \sqrt{\pi} \sum_{m=0}^{\infty} \frac{\Gamma (m+1)}{\Gamma \left(m+\frac{5}{2}\right)} \left(\frac{m_{\pi}^2}{4 m_{K}^2}\right)^{m+2} \bigg( 2 \psi(m+1)+\psi(m+2)+\psi(m+3)-2 \psi\left(m+\frac{5}{2}\right) \bigg) \nonumber \\ & \quad - 2 \sqrt{\pi} \sum_{m,n=0}^{\infty} \frac{\Gamma \left(m+n-\frac{1}{2}\right) \Gamma \left(m+n+\frac{1}{2}\right) \Gamma \left(m+n+\frac{3}{2}\right)}{\Gamma \left(m+\frac{1}{2}\right)^2 \Gamma (n+1) \Gamma (n+2) \Gamma (m+n+1)} \left(\frac{m_{\eta}^2}{4 m_{K}^2}\right)^{m-\tfrac{1}{2}} \left(\frac{m_{\pi}^2}{4 m_{K}^2}\right)^{n+1} \nonumber \\ & \quad \qquad \times \left( \log \left[\frac{m_{\pi}^2}{m_{\eta}^2}\right] + 2 \psi\left(m+\frac{1}{2}\right) -\psi(n+1)-\psi(n+2) \right) \nonumber \\ & \quad + 2 \sqrt{\pi} \sum_{m,n=0}^{\infty} \frac{\Gamma (m+n+2) \Gamma (m+n+3) \Gamma (m+n+4)}{\Gamma (m+2)^2 \Gamma (n+2) \Gamma (n+3) \Gamma \left(m+n+\frac{7}{2}\right)} \left(\frac{m_{\eta}^2}{4 m_{K}^2}\right)^{m+1} 
\left(\frac{m_{\pi}^2}{4 m_{K}^2}\right)^{n+2} \nonumber \\ & \quad \qquad \times \bigg( \log \left[\frac{m_{\pi}^2}{4 m_{K}^2}\right] + \log \left[\frac{m_{\eta}^2}{4 m_{K}^2}\right] -2 \psi(m+2) - \psi(n+2)-\psi(n+3) \nonumber \\ & \quad \qquad \quad + 2 \psi(m+n+2)+2 \psi(m+n+3)+2 \psi(m+n+4)-2 \psi\left(m+n+\frac{7}{2}\right) \bigg) \nonumber \\ & \quad - \frac{2}{\pi^{3/2}} \sum_{m,n=0}^{\infty} \frac{\Gamma \left(m+\frac{3}{2}\right)^2 \Gamma \left(n-\frac{1}{2}\right) \Gamma \left(n+\frac{1}{2}\right) \Gamma \left(n+\frac{3}{2}\right)}{\Gamma (n+1) \Gamma (m+n+2) \Gamma (m+n+3)} \left(\frac{m_{\pi}^2}{m_{\eta}^2}\right)^{m+\tfrac{3}{2}} \left(\frac{m_{\pi}^2}{4 m_{K}^2}\right)^{n+\tfrac{1}{2}} \nonumber \\ & \quad \qquad \times \left( \log \left[\frac{m_{\pi}^2}{m_{\eta}^2}\right] + 2 \psi\left(m+\frac{3}{2}\right) - \psi(m+n+2)-\psi(m+n+3) \right) \Bigg\} \label{Eq:Hkp2e}\end{aligned}$$ Three mass scale eta sunsets ---------------------------- $$\begin{aligned} & \overline{H}^\chi_{\pi K K} = \frac{m_{\pi}^2}{512 \pi ^4} \Bigg\{ \frac{\pi ^2}{6}-5 + 4 \log \left[\frac{m_{\pi}^2}{m_K^2}\right] - \log ^2 \left[\frac{m_{\pi}^2}{m_K^2}\right] + \frac{m_{\eta}^2}{m_{\pi}^2} \left( \log \left[ \frac{m_{K}^2}{m_{\eta}^2} \right] + \frac{5}{4} \right) + \frac{m_{K}^2}{m_{\pi}^2} \left(6 + \frac{\pi ^2}{3}\right) \nonumber \\ & \quad - \frac{1}{18} \frac{m_{\eta}^2}{m_{K}^2} \frac{m_{\eta}^2}{m_{\pi}^2} {}_3F_2 \bigg[ \begin{array}{c} 1,1,2 \\ \frac{5}{2},4 \\ \end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] - \frac{1}{3} \frac{m_{\pi}^2}{m_K^2} \log \left[ \frac{m_{\pi}^2}{4 m_K^2} \right] {}_2F_1 \bigg[ \begin{array}{c} 1,1 \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_{\pi}^2}{4 m_K^2} \bigg] \nonumber \\ & \quad - \sqrt{\pi} \sum_{m,n=0}^{\infty} \frac{\Gamma (m+n+1) \Gamma (m+n+2) \Gamma (m+n+3)}{\Gamma (m+2) \Gamma (m+3) \Gamma (n+1) \Gamma (n+2) \Gamma \left(m+n+\frac{5}{2}\right)} \left(\frac{m_{\eta}^2}{4 m_{K}^2}\right)^{m+1} \left(\frac{m_{\pi}^2}{4 
m_{K}^2}\right)^n \nonumber \\ & \quad \times \Bigg( \log \left[\frac{m_{\pi}^2}{4 m_{K}^2}\right]-\psi(n+1)-\psi(n+2) + \psi(m+n+1) + \psi(m+n+2) +\psi(m+n+3) - \psi\left( m+n+\frac{5}{2} \right) \Bigg) \nonumber \\ & \quad - \sqrt{\pi} \sum_{m=0}^{\infty} \frac{\Gamma (m+1)}{\Gamma \left(m+\frac{5}{2}\right)} \left(\frac{m_{\pi}^2}{4 m_{K}^2}\right)^{m+1} \Bigg( \psi(m+1) - \psi\left( m+\frac{5}{2} \right) \Bigg) \Bigg\} \label{Eq:Hpkk}\end{aligned}$$ $$\begin{aligned} & \overline{H}^\chi_{2\pi K K} = \frac{1}{512 \pi ^4} \Bigg\{ \frac{\pi ^2}{6} - 3 + 2 \log \left[ \frac{m_{\pi}^2}{m_{K}^2} \right] - \log^2 \left[ \frac{m_{\pi}^2}{m_{K}^2} \right] - \frac{1}{3} \frac{m_{\pi}^2}{m_K^2} \left( 1 + 2 \log \left[ \frac{m_{\pi}^2}{4 m_K^2} \right] \right) {}_2F_1 \bigg[ \begin{array}{c} 1,1 \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_{\pi}^2}{4 m_K^2} \bigg] \nonumber \\ & \quad - \frac{1}{30} \frac{m_{\pi}^4}{m_K^4} \log \left[ \frac{m_{\pi}^2}{4 m_K^2} \right] {}_2F_1 \bigg[ \begin{array}{c} 2,2 \\ \frac{7}{2} \\ \end{array} \bigg| \frac{m_{\pi}^2}{4 m_K^2} \bigg] - \sqrt{\pi} \sum_{m=0}^{\infty} \left(\frac{m_\pi^2}{4 m_K^2}\right)^{m+1} \frac{\Gamma (m+1) \Gamma (m+3)}{\Gamma (m+2) \Gamma \left(m+\frac{5}{2}\right)} \Bigg(\psi(m+1)-\psi\left(m+\frac{5}{2}\right)\Bigg) \nonumber \\ & \quad - \sqrt{\pi} \sum_{m,n=0}^{\infty} \frac{\Gamma (m+n+1) \Gamma (m+n+2) \Gamma (m+n+3)}{\Gamma (m+2) \Gamma (m+3) \Gamma (n+1)^2 \Gamma \left(m+n+\frac{5}{2}\right)} \left(\frac{m_{\eta}^2}{4 m_{K}^2}\right)^{m+1} \left(\frac{m_{\pi}^2}{4 m_{K}^2}\right)^n \nonumber \\ & \quad \times \Bigg( \log \left[\frac{m_{\pi}^2}{4m_{K}^2}\right] - \psi(n+1) - \psi(n+2) + \psi(m+n+1) + \psi(m+n+2) + \psi(m+n+3) - \psi \left( m+n+\frac{5}{2} \right) \Bigg) \nonumber \\ & \quad - \sqrt{\pi} \sum_{m,n=0}^{\infty} \frac{\Gamma (m+n+1) \Gamma (m+n+2) \Gamma (m+n+3)}{\Gamma (m+2) \Gamma (m+3) \Gamma (n+1) \Gamma (n+2) \Gamma \left(m+n+\frac{5}{2}\right)} \left(\frac{m_{\eta}^2}{4 
m_{K}^2}\right)^{m+1} \left(\frac{m_{\pi}^2}{4 m_{K}^2}\right)^n \Bigg\} \label{Eq:H2pkk}\end{aligned}$$ $$\begin{aligned} & \overline{H}^\chi_{\pi 2K K} = \frac{1}{512 \pi ^4} \Bigg\{ 1 + \frac{\pi^2}{6} + \frac{1}{60} \frac{m_{\pi}^6}{m_K^6} \log \left[ \frac{m_{\pi}^2}{4 m_K^2} \right] {}_2F_1 \bigg[ \begin{array}{c} 2,2 \\ \frac{7}{2} \\ \end{array} \bigg| \frac{m_\pi^2}{4 m_K^2} \bigg] - \frac{m_{\pi}^2}{m_{K}^2} \left(2 - \log \left[ \frac{m_{\pi}^2}{m_{K}^2} \right] \right) \nonumber \\ & \quad + \frac{1}{6} \frac{m_{\pi}^4}{m_K^4} \left( 1 + \log \left[ \frac{m_{\pi}^2}{4 m_K^2} \right] \right) {}_2F_1 \bigg[ \begin{array}{c} 1,1 \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_\pi^2}{4 m_K^2} \bigg] + \frac{1}{2} \frac{m_{\eta}^2}{ m_{K}^2} {}_3F_2 \bigg[ \begin{array}{c} 1,1,1 \\ \frac{3}{2},3 \\ \end{array} \bigg| \frac{m_\eta^2}{4 m_K^2} \bigg] \nonumber \\ & \quad + 2 \sqrt{\pi } \sum_{m=0}^\infty \frac{\Gamma (m+2)}{\Gamma \left(m+\frac{5}{2}\right)} \left(\frac{m_\pi^2}{4 m_K^2}\right)^{m+2} \Bigg( \psi(m+1)-\psi\left(m+\frac{5}{2}\right) \Bigg) \nonumber \\ & \quad+ 2 \sqrt{\pi} \sum_{m,n=0}^{\infty} \frac{\Gamma (m+n+2)^2 \Gamma (m+n+3)}{\Gamma (m+2) \Gamma (m+3) \Gamma (n+1) \Gamma (n+2) \Gamma \left(m+n+\frac{5}{2}\right)} \left(\frac{m_{\eta}^2}{4 m_{K}^2}\right)^{m+1} \left(\frac{m_{\pi}^2}{4 m_{K}^2}\right)^{n+1} \nonumber \\ & \quad \times \Bigg( \log \left[\frac{m_{\pi}^2}{4m_{K}^2}\right] - \psi(n+1)-\psi(n+2) +\psi(m+n+1)+ \psi(m+n+2) +\psi(m+n+3) - \psi \left( m+n+\frac{5}{2} \right) \Bigg) \nonumber \\ & \quad + 2 \sqrt{\pi} \sum_{m,n=0}^{\infty} \frac{\Gamma (m+n+1) \Gamma (m+n+2) \Gamma (m+n+3)}{\Gamma (m+2) \Gamma (m+3) \Gamma (n+1) \Gamma (n+2) \Gamma \left(m+n+\frac{5}{2}\right)} \left(\frac{m_{\eta}^2}{4 m_{K}^2}\right)^{m+1} \left(\frac{m_{\pi}^2}{4 m_{K}^2}\right)^{n+1} \Bigg\} \label{Eq:Hp2kk}\end{aligned}$$ One and two mass scale sunsets ------------------------------ $$\begin{aligned} \overline{H}^{\chi}_{K \eta \eta} &= 
\frac{m_{\eta}^2}{512\pi^4} \Bigg\{ 4 + \frac{\pi^2}{3} + \frac{m_{K}^2}{m_{\eta}^2} \left(\frac{\pi ^2}{6}-\frac{1}{4}\right) - \frac{m_{K}^2}{m_{\eta}^2} \log^2 \left[ \frac{m_{K}^2}{m_{\eta}^2} \right] + 2 \log \left[ \frac{m_K^2}{m_\eta^2} \right] \nonumber \\ & \qquad + 2 \left( \frac{m_\eta^2}{m_K^2} + \frac{m_K^2}{m_\eta^2}-2 \right) \left(\text{Li}_2\left[\frac{m_{K}^2}{m_{\eta}^2}\right] + \log \left[1-\frac{m_{K}}{m_{\eta}}\right] \log \left[\frac{m_K^2}{m_\eta^2}\right] \right) \Bigg\}\end{aligned}$$ $$\begin{aligned} \overline{H}^{\chi}_{2K \eta \eta} &= \frac{1}{512\pi^4} \Bigg\{ \frac{\pi^2}{6} - 1 - \log^2 \left[ \frac{m_{K}^2}{m_{\eta}^2} \right] + 2 \left(1-\frac{m_{\eta}^2}{m_{K}^2}\right) \left( \text{Li}_2 \left[ \frac{m_{K}^2}{m_{\eta}^2} \right] + \log^2 \left[ \frac{m_{K}^2}{m_{\eta}^2} \right] \log^2 \left[ 1 - \frac{m_{K}^2}{m_{\eta}^2} \right] \right) \Bigg\}\end{aligned}$$ $$\begin{aligned} \overline{H}^{\chi}_{K K K} &= \frac{m_K^2}{512 \pi^4} \left(\frac{15}{4}+\frac{\pi ^2}{2}\right)\end{aligned}$$ The Pion Mass and Decay Constant \[Sec:PionSunsets\] ==================================================== Analytic expressions for the pion mass and decay constant in ChPT at two loops are presented in [@Ananthanarayan:2017yhz]. The expressions given there are approximations obtained by taking an expansion of the three mass scale sunsets around zero external momentum $p^2=m_\pi^2=0$. To go beyond these approximations, one may use the exact result presented in Eq.(17) of [@Ananthanarayan:2017qmx], as well as its derivatives, and substitute them into Eq.(28) and Eq.(46) of [@Ananthanarayan:2017yhz]. 
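In practice, substituting the exact sunset results means evaluating the series representations numerically until the partial sums stabilize, the truncation criterion described in the footnotes. A minimal sketch of such a stability check for the single series appearing in Eq.(\[Eq:Hpkk\]) (without its overall $-\sqrt{\pi}$ prefactor); the meson masses are assumed physical values in GeV used only for illustration:

```python
# Stability check for the series-truncation criterion: keep adding
# terms until the partial sum stops changing. The series is the single
# sum from Eq. (Hpkk), without its overall -sqrt(pi) prefactor; the
# meson masses (GeV) are assumed physical values, not from the text.
from math import gamma, log

EULER = 0.5772156649015329        # Euler-Mascheroni constant
m_pi, m_K = 0.1396, 0.4957        # assumed masses in GeV
x = m_pi**2 / (4 * m_K**2)        # expansion variable, ~0.02

def psi_int(n):
    """Digamma at a positive integer: psi(n) = -gamma + H_{n-1}."""
    return -EULER + sum(1.0 / k for k in range(1, n))

def psi_half(n):
    """Digamma at n + 1/2: psi(1/2) = -gamma - 2 ln 2, then recurse up."""
    return -EULER - 2.0 * log(2.0) + sum(2.0 / (2 * j + 1) for j in range(n))

def partial_sum(n_terms):
    """Sum_m Gamma(m+1)/Gamma(m+5/2) * x^(m+1) * [psi(m+1) - psi(m+5/2)]."""
    return sum(gamma(m + 1) / gamma(m + 2.5) * x ** (m + 1)
               * (psi_int(m + 1) - psi_half(m + 2))
               for m in range(n_terms))

# Geometric convergence in x ~ 0.02: 40 terms already agree with 80
# terms to within floating-point precision, so the truncated sum is
# "exact" in the sense used in the text.
assert abs(partial_sum(40) - partial_sum(80)) < 1e-15
```

At physical masses the expansion variable is small, so single series of this type converge geometrically and far fewer terms than the 1000 quoted in the footnotes already suffice; the larger counts guard the slower double series.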
We would like to emphasize that it is proved in [@Ananthanarayan:2018] that although the expression in Eq.(\[Eq:Hpkk\]) of the present paper and the one in Eq.(17) of [@Ananthanarayan:2017qmx] do not come from summing up the same sets of residues in the Mellin-Barnes intermediate computations, each may be derived from the other by the swap $m_\pi^2 \leftrightarrow m_\eta^2$. This is also true for Eqs.(\[Eq:H2pkk\]) and (\[Eq:Hp2kk\]) and their pion analogues. In [@Ananthanarayan:2017qmx], some results of this paper and of [@Ananthanarayan:2017yhz] are used to obtain an expression for the quantity $F_K/F_\pi$. An approximate analytic expression that can be readily fit with lattice data is also presented there. The truncated three mass sunsets used to produce Eqs.(18)-(19) of [@Ananthanarayan:2017qmx] are: $$\begin{aligned} & \overline{H}^\chi_{\eta K K} \approx \frac{m_\pi^2}{512 \pi^4} \Bigg\{ \frac{5}{4} + \log \left[\frac{m_K^2}{m_\pi^2}\right] + \frac{1}{30} \frac{m_\eta^6}{m_K^4 m_\pi^2} \left(\gamma -1+\psi\left(\frac{7}{2}\right)\right) + \frac{1}{3} \frac{m_\eta^4}{m_K^2 m_\pi^2} \left(\gamma +\psi\left(\frac{5}{2}\right)\right) + \frac{m_K^2}{m_\pi^2} \left(6+\frac{\pi ^2}{3}\right) \nonumber \\ & \quad + \frac{m_\eta^2}{m_\pi^2} \left(\frac{\pi^2}{6}-5+\log (256) -\log ^2\left[\frac{m_\eta^2}{m_K^2}\right] \right) + \frac{ 4 m_\eta m_K}{m_\pi^2} \sqrt{4-\frac{m_\eta^2}{m_K^2}} \log \left[\frac{m_\eta^2}{4 m_K^2}\right] \csc ^{-1}\left[\frac{2 m_K}{m_\eta}\right] \nonumber \\ & \quad + \frac{1}{3} \frac{m_\eta^2}{m_K^2} \left(\frac{7}{6}-\log \left[\frac{m_\eta^2}{m_K^2}\right] \right) + \frac{1}{35} \frac{m_\eta^6}{m_K^6} \left(\frac{533}{420}-\log \left[\frac{m_\eta^2}{m_K^2}\right] \right) + \frac{1}{10} \frac{m_\eta^4}{m_K^4} \left(\frac{37}{30}-\log \left[\frac{m_\eta^2}{m_K^2}\right] \right) \Bigg\}\end{aligned}$$ $$\begin{aligned} & \overline{H}^\chi_{2\eta K K} \approx \frac{1}{512 \pi^4} \Bigg\{ \frac{m_\eta^4}{10 m_K^4} \left( \frac{31}{15}-\log 
(4) - \frac{1}{3} \, _2F_1 \left[2,2;\frac{7}{2};\frac{m_\eta^2}{4 m_K^2}\right] \log \left[\frac{m_\eta^2}{4 m_K^2}\right] \right) + \frac{1}{105} \frac{m_\eta^6}{m_K^6} \left(2 \gamma -3+2 \psi\left(\frac{9}{2}\right)\right) \nonumber \\ & \quad + \frac{1}{3} \frac{m_\pi^2}{m_K^2} \left(\frac{1}{6} - \log \left[\frac{m_\eta^2}{m_K^2}\right] \right) - \log^2 \left[\frac{m_K^2}{m_\eta^2}\right] - 2 \log \left[\frac{m_K^2}{m_\eta^2}\right] - 8 \log \left[ \frac{m_\eta^2}{4 m_K^2}\right] + \frac{3}{35}\frac{m_\eta^4 m_\pi^2 }{m_K^6} \left(\frac{131}{140} - \log \left[\frac{m_\eta^2}{m_K^2}\right] \right) \nonumber \\ & \quad + \frac{4 m_K}{m_\eta} \sqrt{4-\frac{m_\eta^2}{m_K^2}} \left(2 \log \left[ \frac{m_\eta^2}{4 m_K^2}\right] + 1 \right) \sin ^{-1} \left[\frac{m_\eta}{2 m_K}\right] + \frac{1}{5} \frac{m_\eta^2 m_\pi^2}{m_K^4} \left(\frac{11}{15}-\log \left[\frac{m_\eta^2}{m_K^2}\right] \right) + \frac{2}{3} \frac{m_\eta^2}{m_K^2} \left(\gamma +\psi\left(\frac{5}{2}\right)\right) \nonumber \\ & \quad + \frac{5}{462} \frac{m_\eta^8 m_\pi^2}{m_K^{10}} \left(\frac{18107}{13860} - \log \left[\frac{m_\eta^2}{m_K^2}\right] \right) + \frac{2}{63} \frac{m_\eta^6 m_\pi^2}{m_K^8} \left(\frac{1627}{1260} - \log \left[\frac{m_\eta^2}{m_K^2}\right] \right) + \frac{\pi^2}{6}-7 \Bigg\}\end{aligned}$$ $$\begin{aligned} & \overline{H}^\chi_{\eta 2K K} \approx \frac{1}{512 \pi^4} \Bigg\{ 1 + \frac{1}{2} \frac{m_\pi^2}{m_K^2} \, _3F_2 \left[1,1,1;\frac{3}{2},3;\frac{m_\pi^2}{4 m_K^2}\right] + \frac{m_\eta^8}{m_K^8} \left(\frac{\log (4)}{140}-\frac{389}{29400}\right) - \frac{1}{6} \frac{m_\eta^4}{m_K^4} \left(\gamma +\psi\left(\frac{5}{2}\right)\right) \nonumber \\ & \quad + \frac{\pi^2}{6} + \frac{1}{60} \frac{m_\eta^6}{m_K^6} \left(\, _2F_1 \left[ 2,2;\frac{7}{2};\frac{m_\eta^2}{4 m_K^2}\right] \log \left[\frac{m_\eta^2}{4 m_K^2}\right] - \frac{62}{15}+\log (16)\right) + \frac{m_\eta^2}{m_K^2} \left(3 \log \left[\frac{m_\eta^2}{4 m_K^2}\right] + \log (4)\right) \nonumber \\ & \quad 
+ \frac{1}{6} \frac{m_\eta^2 m_\pi^2}{m_K^4} \left(\log \left[\frac{m_\eta^2}{m_K^2}\right] - \frac{1}{6}\right) + \frac{1}{63} \frac{m_\eta^8 m_\pi^2}{m_K^{10}} \left(\log \left[\frac{m_\eta^2}{m_K^2}\right] - \frac{1627}{1260}\right) + \frac{3}{70} \frac{m_\eta^6 m_\pi^2}{m_K^8} \left(\log \left[\frac{m_\eta^2}{m_K^2}\right] - \frac{533}{420}\right) \nonumber \\ & \quad - \frac{2 m_\eta}{m_K} \sqrt{4-\frac{m_\eta^2}{m_K^2}} \left(\log \left[\frac{m_\eta^2}{4 m_K^2}\right] + 1 \right) \csc ^{-1} \left[\frac{2 m_K}{m_\eta}\right] + \frac{1}{10} \frac{m_\eta^4 m_\pi^2}{m_K^6} \left(\log \left[\frac{m_\eta^2}{m_K^2}\right]-\frac{11}{15}\right) \Bigg\}\end{aligned}$$ [99]{} B. Ananthanarayan, J. Bijnens, S. Friot and S. Ghosh, arXiv:1711.11328 \[hep-ph\], accepted for publication in Phys. Rev. D (Rapid Communication). B. Ananthanarayan, J. Bijnens, S. Ghosh and A. Hebbar, Eur. Phys. J. A [**52**]{} (2016) no.12, 374 doi:10.1140/epja/i2016-16374-8 \[arXiv:1608.02386 \[hep-ph\]\]. B. Ananthanarayan, J. Bijnens and S. Ghosh, Eur. Phys. J. C [**77**]{} (2017) no.7, 497 doi:10.1140/epjc/s10052-017-5019-y \[arXiv:1703.00141 \[hep-ph\]\]. J. Gasser and H. Leutwyler, Annals Phys.  [**158**]{} (1984) 142. doi:10.1016/0003-4916(84)90242-2 J. Gasser and H. Leutwyler, Nucl. Phys. B [**250**]{} (1985) 465. doi:10.1016/0550-3213(85)90492-4 J. Bijnens, G. Colangelo, G. Ecker, J. Gasser and M. E. Sainio, Nucl. Phys. B [**508**]{} (1997) 263 Erratum: \[Nucl. Phys. B [**517**]{} (1998) 639\] doi:10.1016/S0550-3213(97)80013-2, 10.1016/S0550-3213(97)00621-4, 10.1016/S0550-3213(98)00127-8 \[hep-ph/9707291\]. J. Bijnens, Prog. Part. Nucl. Phys.  [**58**]{} (2007) 521 doi:10.1016/j.ppnp.2006.08.002 \[hep-ph/0604043\]. J. Gasser and M. E. Sainio, Eur. Phys. J. C [**6**]{} (1999) 297 \[hep-ph/9803251\]. O. V. Tarasov, Nucl. Phys. B [**502**]{} (1997) 455 \[hep-ph/9703319\]. G. Ecker, P. Masjuan and H. Neufeld, Phys. Lett. 
B [**692**]{} (2010) 184 doi:10.1016/j.physletb.2010.07.037 \[arXiv:1004.3422 \[hep-ph\]\]. G. Ecker, P. Masjuan and H. Neufeld, Eur. Phys. J. C [**74**]{} (2014) 2, 2748 \[arXiv:1310.8452 \[hep-ph\]\]. R. Kaiser, JHEP [**0709**]{} (2007) 065 \[arXiv:0707.2277 \[hep-ph\]\]. F. A. Berends, A. I. Davydychev and N. I. Ussyukina, Phys. Lett. B [**426**]{} (1998) 95 \[hep-ph/9712209\]. H. Czyz, A. Grzelinska and R. Zabawa, Phys. Lett. B [**538**]{} (2002) 52 \[hep-ph/0204039\]. A. I. Davydychev and J. B. Tausk, Nucl. Phys. B [**397**]{} (1993) 123. F. A. Berends, M. Buza, M. Bohm and R. Scharf, Z. Phys. C [**63**]{} (1994) 227. doi:10.1007/BF01411014 L. Adams, C. Bogner and S. Weinzierl, PoS LL [**2016**]{} (2016) 033 \[arXiv:1606.09457 \[hep-ph\]\]. L. Adams, C. Bogner and S. Weinzierl, J. Math. Phys.  [**56**]{} (2015) no.7, 072303 doi:10.1063/1.4926985 \[arXiv:1504.03255 \[hep-ph\]\]. J. Ablinger, J. Blümlein, A. De Freitas, M. van Hoeij, E. Imamoglu, C. G. Raab, C.-S. Radu and C. Schneider, arXiv:1706.01299 \[hep-th\]. P. Post and J. B. Tausk, Mod. Phys. Lett. A [**11**]{} (1996) 2115 doi:10.1142/S0217732396002101 \[hep-ph/9604270\]. D. Greynat and A. Pich, talk given at the Flavianet Meeting, Orsay, November 14-16, 2007. D. Greynat, talk given at the Rencontres de Physique des Particules, Centre de Physique Théorique, École Polytechnique, March 23-25, 2009. D. Greynat, private communication. B. Ananthanarayan, S. Friot and S. Ghosh; to appear. S. Friot and D. Greynat, J. Math. Phys.  [**53**]{} (2012) 023508 doi:10.1063/1.3679686 \[arXiv:1107.0328 \[math-ph\]\]. J. P. Aguilar, D. Greynat and E. De Rafael, Phys. Rev. D [**77**]{} (2008) 093010 doi:10.1103/PhysRevD.77.093010 \[arXiv:0802.2618 \[hep-ph\]\]. S. Dürr [*et al.*]{}, Phys. Rev. D [**95**]{} (2017) no.5, 054513 doi:10.1103/PhysRevD.95.054513 \[arXiv:1601.05998 \[hep-lat\]\]. G. Amoros, J. Bijnens and P. Talavera, Nucl. Phys. B [**568**]{} (2000) 319 \[hep-ph/9907264\]. J. Bijnens and G. Ecker, Ann. Rev.
Nucl. Part. Sci.  [**64**]{} (2014) 149 \[arXiv:1405.6488 \[hep-ph\]\]. S. Dürr [*et al.*]{}, Phys. Rev. D [**81**]{} (2010) 054507 doi:10.1103/PhysRevD.81.054507 \[arXiv:1001.4692 \[hep-lat\]\]. J. Bijnens and I. Jemos, Nucl. Phys. B [**854**]{} (2012) 631 doi:10.1016/j.nuclphysb.2011.09.013 \[arXiv:1103.5945 \[hep-ph\]\]. J. Gluza, K. Kajda and T. Riemann, Comput. Phys. Commun.  [**177**]{} (2007) 879 doi:10.1016/j.cpc.2007.07.001 \[arXiv:0704.2423 \[hep-ph\]\]. J. Gluza, K. Kajda, T. Riemann and V. Yundin, Eur. Phys. J. C [**71**]{} (2011) 1516 doi:10.1140/epjc/s10052-010-1516-y \[arXiv:1010.1667 \[hep-ph\]\]. J. Bijnens, Eur. Phys. J. C [**75**]{} (2015) no.1, 27 doi:10.1140/epjc/s10052-014-3249-9 \[arXiv:1412.0887 \[hep-ph\]\]. [^1]: By complete results, we refer only to the finite $\mathcal{O}(\epsilon^0)$ term obtained using dimensional regularisation. The $\mathcal{O}(\epsilon^{-1})$ and $\mathcal{O}(\epsilon^{-2})$ terms are known exactly for all mass configurations [@Davydychev:1992mt]. [^2]: Some years ago, an attempt was made to pursue such a programme [@David1], but the investigations were not completed and no publications resulted [@David2]. [^3]: By exact expression we mean a partial sum retaining enough terms that adding further terms leaves the numerical result stable within the standard numerical precision of `Mathematica`. [^4]: 1000 terms for single series and 10000 terms for double series.
--- abstract: 'The X-ray properties of a relaxed cluster of galaxies are determined primarily by its gravitational potential well and the entropy distribution of its intracluster gas. That entropy distribution reflects both the accretion history of the cluster and the feedback processes which limit the condensation of intracluster gas. Here we present [*Chandra*]{} observations of the core entropy profiles of nine classic “cooling-flow" clusters that appear relaxed and contain intracluster gas with a cooling time less than a Hubble time. We show that those entropy profiles are remarkably similar, despite the fact that the clusters range over a factor of three in temperature. They typically have an entropy level of $\approx 130 \, {\rm keV \, cm^2}$ at 100 kpc that declines to a plateau $\sim 10 \, {\rm keV \, cm^2}$ at $\lesssim 10$ kpc. Between these radii, the entropy profiles are $\propto r^\alpha$ with $\alpha \approx 1.0 - 1.3$. The non-zero central entropy levels in these clusters correspond to a cooling time $\sim 10^8 \, {\rm yr}$, suggesting that episodic heating on this timescale maintains the central entropy profile in a quasi-steady state.' author: - 'Megan Donahue, Donald J. Horner, Kenneth W. Cavagnolo, and G. Mark Voit' bibliography: - 'entropy.bib' title: Entropy Profiles in the Cores of Cooling Flow Clusters of Galaxies --- Introduction {#sec:intro} ============ The global properties of a cluster of galaxies, such as its bolometric X-ray luminosity and its mean temperature, are determined primarily by the mass within a suitably chosen virial radius. A cluster’s temperature depends on mass because mass determines the depth of the cluster’s potential well. Its X-ray luminosity depends on mass because mass determines both the total number of baryons in the cluster and the potential well confining those baryons.
However, several secondary factors combine to produce a dispersion in both luminosity and temperature at a fixed mass, and understanding the nature of that dispersion is crucial to doing precision cosmology with clusters. One of those factors is merger shocks, which can temporarily raise both the luminosity and best-fitting temperature of a cluster [e.g., @2002ApJ...577..579R]. A second is the shape of the potential well, because clusters whose potentials are more centrally concentrated tend to have higher central temperatures [e.g., @voit02]. A third factor is the amount of intracluster gas with a cooling time less than the age of the universe. The presence of such gas leads to both a large peak in the central surface brightness of a cluster and a central temperature gradient that rises with radius. Consequently, clusters having larger amounts of gas with a short cooling time tend to have higher luminosity and lower temperature at a given mass [@1998MNRAS.297L..57A; @1994MNRAS.267..779F; @markevitch98]. Such clusters have often been called “cooling-flow clusters," because the central gas was thought to condense and flow toward the center of the cluster as it radiated away its thermal energy [see @2004cgpc.symp..144D for a recent review]. Observations from [*Chandra*]{} and [*XMM-Newton*]{} now show that the central gas is not simply cooling to low temperatures and condensing in the manner originally envisioned [e.g., @Peterson2001; @2003ApJ...590..207P]. Some form of feedback apparently prevents the central gas from condensing and forming stars, thereby truncating the high end of the galaxy luminosity function. The nature of that feedback is currently an active topic of both observational and theoretical research, focusing largely on the role of outflows from active galactic nuclei in cluster cores.
This paper analyzes archival [*Chandra*]{} data on nine cooling-flow clusters seeking clues to what keeps that gas from condensing and why clusters of a given mass have different amounts of gas with a short central cooling time. The tactic we take in our analysis is to focus on the entropy profiles of these clusters. We concentrate on entropy because it is a more fundamental property of the intracluster medium itself than either temperature or density alone. For example, the temperature of a cluster’s gas primarily reflects the cluster’s potential well depth; heating or cooling of the gas merely causes it to expand or contract in the potential well with only a modest change in temperature. The density of that gas depends on how much gravity can compress it in the cluster’s potential well, and it is the specific entropy of the gas that determines its density at a given pressure. Thus, the observable X-ray properties of a relaxed cluster of galaxies depend almost entirely on two physical attributes: (1) the shape and depth of the cluster’s dark-matter halo, and (2) the entropy distribution of the intracluster gas [e.g., @voit02]. Intracluster entropy is also intimately related to the cooling and feedback processes that govern galaxy evolution and that may also play a role in limiting condensation in cluster cores. Theories and simulations of cluster formation which ignore these processes fail to reproduce the observable properties of present-day clusters. If gravity alone were responsible for shaping the appearances of clusters and groups, then we would expect their properties to be nearly self-similar, with a luminosity-temperature relation like $L \propto T^2$. Furthermore, we would expect groups and clusters to have similar surface-brightness profiles, when scaled to the virial radius of the system.
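The $L \propto T^2$ expectation quoted above follows from a short scaling argument; the steps below assume a fixed overdensity at the virial radius, a constant gas fraction, and bremsstrahlung emissivity ($\Lambda \propto T^{1/2}$):

```latex
% Self-similar scalings (assumptions: fixed virial overdensity,
% constant gas fraction, bremsstrahlung cooling):
\begin{align*}
  M \propto \rho_{\rm vir} R^3 \propto R^3 , \qquad
  kT \propto \frac{GM}{R} \propto R^2
  \;\Rightarrow\; M \propto T^{3/2}, \quad R \propto T^{1/2}, \\
  L_X \propto n_e^2 \, \Lambda(T) \, R^3
      \propto T^{1/2} \, R^3 \propto T^{2} ,
\end{align*}
```

since $n_e$ is fixed by the overdensity and $\Lambda \propto T^{1/2}$ for thermal bremsstrahlung.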
However, observations indicate that $L \propto T^{2.6-2.9}$ [@es91a; @markevitch98] and that the surface-brightness profiles of groups are shallower than those of clusters [@hms99; @pcn99]. Many papers in the literature have attributed these deviations from self-similarity to an early episode of preheating by supernovae or active galaxies. Heat input that establishes a uniform minimum entropy level in the intergalactic medium breaks self-similarity because the extra entropy makes that gas harder to compress as it accretes into dark-matter haloes [e.g., @kaiser91; @eh91]. Thus, gas in the smaller potential wells of groups would be less centrally concentrated than gas in the larger potential wells of clusters, thereby reducing the luminosities of groups relative to clusters and flattening their surface-brightness profiles. Such a model predicts that the central entropy profile would flatten into a core reflecting the minimum entropy level of the IGM. However, several analyses have suggested that the energy input required to explain the observed relations through global preheating is implausibly extreme. [@vb01] have argued that the entropy scale responsible for similarity breaking is not a global property of the intergalactic medium but rather a requirement set by radiative cooling—the observed entropy at the core radii of groups and clusters turns out to be similar to the entropy level at which intracluster gas would cool within a Hubble time. Gas below this cooling threshold is therefore subject to cooling, condensation, and whatever feedback follows from that condensation, which may include both supernova and AGN activity. In this scenario, cooling and feedback conspire to deplete the amount of gas below the cooling threshold: some of the low-entropy gas condenses and feedback subsequently raises the entropy of the remaining gas until both cooling and feedback shut down. 
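The cooling threshold invoked here can be estimated to order of magnitude by asking what entropy gives a cooling time equal to the age of the universe. In the sketch below, the cooling-function value, the helium correction factors, and the assumed age are illustrative assumptions, not numbers taken from the text:

```python
# Order-of-magnitude estimate of the cooling-threshold entropy
# K_c = kT * n_e^(-2/3) at which the cooling time equals ~ a Hubble
# time. Lambda and the assumed age are illustrative values only.
def cooling_threshold_entropy(kT_keV, t_cool_gyr=14.0, lambda_cgs=1e-23):
    """Return K_c [keV cm^2] such that t_cool equals t_cool_gyr.

    Uses t_cool ~ 3 n k T / (2 n_e n_H Lambda) with n ~ 2.3 n_H and
    n_e ~ 1.2 n_H (fully ionized plasma with ~10% He by number).
    """
    kT_erg = kT_keV * 1.602e-9        # keV -> erg
    t_cool = t_cool_gyr * 3.156e16    # Gyr -> s
    # Solve t_cool = 3 * 2.3 * n_H * kT / (2 * 1.2 * n_H^2 * Lambda):
    n_H = 3 * 2.3 * kT_erg / (2 * 1.2 * lambda_cgs * t_cool)
    n_e = 1.2 * n_H
    return kT_keV * n_e ** (-2.0 / 3.0)

# For kT ~ 5 keV this lands at ~1e2 keV cm^2, the order of magnitude
# of the ~130 keV cm^2 entropy observed at 100 kpc in these clusters.
K_c = cooling_threshold_entropy(5.0)
```

The weak ($\propto T^{1/3}$ here) temperature dependence of $K_c$ is why a single entropy scale can plausibly describe the similarity breaking across groups and clusters.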
[@voit02] have built upon this principle to create cluster models that account for many observable X-ray properties, including the luminosity-temperature relation, the mass-temperature relation, the surface-brightness profiles of clusters, and their temperature gradients. However, those models do not explain why clusters differ in the amount of gas that still remains below the cooling threshold. Cluster entropy profiles have been presented by previous workers, with some of the earliest work coming from ROSAT and ASCA [@pcn99; @2000MNRAS.315..689L]. They pointed out that if gravitational collapse were the only physical process affecting the intracluster gas, all entropy profiles should be self-similar. The temperature and density profiles available from ROSAT and ASCA data were very limited, however. Results from XMM show that clusters are similar outside their core radii, but inside the core radius the XMM data are limited by the spatial resolution of the telescope. In order to better understand the sub-threshold gas, the processes that keep some of it from condensing, and the effects of those processes on the global X-ray properties of clusters, we are developing a [*Chandra*]{} library of intracluster core entropy distributions. We selected the first batch of clusters for the library using criteria designed to ensure high-quality radial entropy profiles. The number of X-ray events had to be sufficient for extraction of high-quality spectra in multiple annular regions, and the clusters had to be relaxed enough to have an unambiguous central emission peak. We therefore compiled a master list of clusters available in the [*Chandra*]{} public archive as of December 1, 2003, when this project began, and cross-correlated it with the NASA/IPAC Extragalactic Database (NED) to obtain redshifts and positional data for the clusters. We then added any observations noted as cluster observations (those with sequence numbers beginning with ‘8’) that did not contain NED clusters.
After compiling this master list, we created [*Chandra*]{} images of all of the clusters and examined them to eliminate any clusters with insufficient counts for our study or with obvious substructure, such as double peaks. We also eliminated observations badly affected by background flares. This selection process yielded a sample consisting entirely of nearby “cooling flow" clusters, most of which are considered classic examples of that category. Table \[tab:sample\] lists that sample of nine clusters, including the cluster name, [*Chandra*]{} observation identification numbers, coordinates, maximum extraction radius, the redshift of the cluster used in spectral fits, its X-ray luminosity, and its mean X-ray temperature. These last two quantities are taken from ASCA observations analyzed by D. Horner in his PhD thesis.[^1] That is why this paper, describing the first installment in our [*Chandra*]{} entropy library (which will ultimately contain clusters with a greater variety of core properties), focuses on the entropy profiles of classic cooling-flow clusters. In §\[sec:analysis\] we present our data analysis methods and techniques, and §\[sec:results\] presents our results. We find that the entropy profiles for the nine cooling-flow clusters in our sample are remarkably similar, suggesting that the process preventing condensation somehow maintains the entropy profiles in a quasi-steady state. In §\[sec:discussion\] we discuss those results in the context of some recent theoretical models, and §\[sec:summary\] summarizes the paper. Throughout the analysis, we assume a cosmology with $H_{0}$ = 70 km s$^{-1}$ Mpc$^{-1}$, $\Omega_{m} = 0.3$, and $\Omega_{\Lambda} = 0.7$. The derived data products for the Chandra Cluster Entropy Library, such as the X-ray spectra, associated response files, and surface brightness profiles, will be available through the NASA High Energy Astrophysics Science Archive Research Center (HEASARC), in the Chandra section of W3Browse[^2].
Data Analysis {#sec:analysis} ============= The goal of our analysis was to derive entropy profiles as a function of radius for the nine clusters in our sample, with entropy quantified in terms of the adiabatic constant $K = kT n_e^{-2/3}$. Thus, we needed to determine the electron density $n_e$ and the gas temperature $T$ as functions of radius. To do that, we first processed and cleaned the data as described in §\[sec:data\]. Then, we divided each cluster into concentric annuli and extracted a spectrum in each annulus as described in §\[sec:extraction\]. We describe our analysis of the projected spectra from gas within these annuli in §\[sec:projected\]. In order to obtain gas temperature as a function of physical rather than projected radius, we then performed a deprojection analysis of the projected spectra as described in §\[sec:deprojected\]. The deprojected cluster temperatures are not much different from the projected gas temperature for these clusters, presumably because the radial emission profiles are so steep. The resulting temperature gradients are rather coarse, with the number of radial bins ranging from four to thirteen. Because we desired a more finely-grained representation of the density gradients, we determined them by deprojecting the exposure-corrected source counts in the 0.5-2.0 keV band as described in §\[sec:nelec\_sbprofiles\]. In all the analysis, we use only the observations by the Advanced CCD Imaging Spectrometer (ACIS) backside illuminated S3 chip, which generally constrained the analysis to the inner $\sim 100$ kpc for these nearby clusters. Data Processing {#sec:data} --------------- We reprocessed the Level 1 events files from the Chandra archive with [CIAO 3.2.2]{} and [CALDB 3.1.0]{} to obtain processed (Level 2) events files. We applied updated gain maps and standard grade filtering to these events. 
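Once the temperature and density profiles described at the start of this section are in hand, forming $K = kT\,n_e^{-2/3}$ and measuring its logarithmic slope is a direct computation. The sketch below uses made-up profile values shaped like the profiles described in this paper, not measurements from any of the nine clusters:

```python
# Sketch of the entropy-profile step: combine a deprojected temperature
# profile with a deprojected electron-density profile to form
# K(r) = kT(r) * n_e(r)^(-2/3), then measure the logarithmic slope
# (K ~ r^alpha). All profile values below are illustrative only.
import math

def entropy_profile(kT_keV, ne_cm3):
    """Return K [keV cm^2] for matched temperature/density bins."""
    return [t * n ** (-2.0 / 3.0) for t, n in zip(kT_keV, ne_cm3)]

def log_slope(r_kpc, K):
    """Least-squares slope alpha of log K versus log r."""
    x = [math.log(r) for r in r_kpc]
    y = [math.log(k) for k in K]
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    return (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
            / sum((xi - xm) ** 2 for xi in x))

# Illustrative bins: temperature rising outward, density falling.
r = [10.0, 20.0, 40.0, 80.0]     # kpc
kT = [2.0, 2.6, 3.3, 4.0]        # keV
ne = [0.08, 0.04, 0.018, 0.008]  # cm^-3
K = entropy_profile(kT, ne)
alpha = log_slope(r, K)          # a slope near 1 for these numbers
```

With real data, each $(kT, n_e)$ pair comes from the deprojected spectral fits and surface-brightness deprojection described below, on their respective radial grids.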
Because emission from low-redshift clusters usually fills the whole S3 detector, we constructed matching blank-sky background event files for each observation, using Maxim Markevitch’s blank sky background database. We cleaned the cluster data of flare contamination, following Maxim Markevitch’s cookbook, in order to match his background maps. This process involved inspecting a light curve for events between 2.5–7.0 keV and removing time intervals with significant background flares, i.e., peaks with count rates $\gtrsim$ 3 $\sigma$ and/or a factor of $\gtrsim$ 1.2 off the mean background level of the observations, using Maxim Markevitch’s light curve cleaning routine [ lc\_clean.sl]{}.[^3] We then created a final events file for the purposes of studying the extended cluster emission by excluding any bright point sources near the cluster. The point sources were identified by a visual inspection of the image. Spectral Extraction {#sec:extraction} ------------------- In each cluster, we selected a region for spectral extraction by centroiding the cluster emission and placing it at the center of a circular aperture with the largest radius that would fit entirely within the chip. Because many clusters are centered at the aim point of the S3 chip rather than the center of the chip, that maximum radius can be as small as $\approx 2\arcmin$ (as opposed to $\approx 4\arcmin$ for the center of the chip). For many of the clusters, this is $\lesssim 0.1 r_{500}$, where $r_{500}$ (the radius at which the average cluster density is 500 times the critical density) is roughly the virial radius of the cluster. We report the maximum radius in units of arcminutes and kiloparsecs in Table \[tab:sample\]. We then divided this aperture into concentric annuli, determining the annular boundaries from background-subtracted, cumulative count profiles (0.3-8.0 keV) by fixing the number of counts in each annulus to be at least 10,000 - 20,000 counts. 
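The flare-rejection criterion described above can be sketched as follows. This is a simplified stand-in for the actual [lc\_clean.sl]{} routine, which implements additional logic; the function name and the iteration scheme are our own illustration of the stated cuts (deviations of $\gtrsim$3$\sigma$ or a factor of $\gtrsim$1.2 from the mean quiescent rate):

```python
import numpy as np

def clean_lightcurve(rates, nsigma=3.0, max_factor=1.2):
    """Return a boolean good-time mask for a binned 2.5-7.0 keV light
    curve. Iteratively rejects bins deviating by more than nsigma
    standard deviations, or by more than a factor of max_factor,
    from the mean rate of the currently accepted bins."""
    rates = np.asarray(rates, dtype=float)
    good = np.ones(rates.size, dtype=bool)
    for _ in range(10):                     # iterate until the mask converges
        mean, sigma = rates[good].mean(), rates[good].std()
        new_good = (np.abs(rates - mean) <= nsigma * sigma) & \
                   (rates <= max_factor * mean) & \
                   (rates >= mean / max_factor)
        if np.array_equal(new_good, good):
            break
        good = new_good
    return good
```

A usage pattern would be to bin the 2.5-7.0 keV events into a light curve, apply this mask, and keep only the surviving time intervals.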
The minimum number of total counts was chosen to allow us to test whether and where two-temperature spectral fits might be preferred over single-temperature fits, and to allow for a simple two-parameter fit to the temperature gradient. We then extracted spectra for each annulus, along with corresponding blank-sky background spectra, using the [*acisspec*]{} script. Two detector response files must be created for each individual spectrum, a Redistribution Matrix File (RMF) and an Ancillary Response File (ARF), which take into account the spectral response of the detector and its spatial variation across the chip. As of [CIAO 3.2]{} and [CALDB 3.0]{}, the [CIAO]{} tool [*mkacisrmf*]{} must be run after spectra are extracted for each cluster using the [*acisspec*]{} script. The purpose of [*mkacisrmf*]{} is to generate an RMF using a CCD spectral response with two components. One component does not include charge-transfer inefficiency (CTI) effects; the second incorporates the spatial variation in the chip response caused by CTI. The weight map created by [*acisspec*]{} is utilized by [*mkacisrmf*]{} to make a count-weighted RMF. Finally, the weighted ARF is created with [*mkwarf*]{}, using the same weight map and the new weighted RMF, in order to match the energy grid of the weighted RMF. (The last step utilizing [*mkwarf*]{} is only necessary because we fit our data using XSPEC.) For these older data on extended sources from the ACIS S3 chip, this new procedure results in temperatures and metallicities only marginally different from those obtained with [CALDB 2.29]{} and [CIAO 3.0]{}. Therefore, one can be confident that, for the purposes of this paper, the Chandra calibration has stabilized for archival data and our results are robust to the details of how the RMF or ARF were computed. 
However, in our experience since the beginning of this project, the evolution of the Chandra calibration software and the associated calibration data files has occasionally led to significant changes in the best-fitting spectral parameters for these same observations since their original publication. We discuss those changes in § \[sec:literature\]. As of late 2005, such changes appear to be a thing of the past, as the calibration of the ACIS S3 detector seems to be converging to consistent results. For the spectral fitting, we grouped the counts into bins with a minimum of 25 counts per bin, and restricted the fit to 0.7-7.0 keV. Projected X-ray Spectra {#sec:projected} ----------------------- Once we had extracted the spectra, we fit them with plasma emission models using the software package [XSPEC 11.3.1]{} for several classes of input assumptions, including one-temperature (1-T) and two-temperature (2-T) MEKAL models. We fit each annular spectrum between $0.7-7.0$ keV individually, so that all free parameters are independent in each annulus. Earlier versions of the [*Chandra*]{} calibration seemed to suggest that two-temperature models were required. However, after the [*Chandra*]{} calibration improved, we discovered that two-temperature models were no longer required to fit the vast majority of the cluster spectra in our sample. We also explored the effect of fixing the hydrogen column density (${N_{\rm H}}$) of X-ray absorbing gas to the Galactic value in the direction of the cluster, finding that discrepancies between the best-fit hydrogen column density and the assumed Galactic value resulted in a systematic shift in the derived cluster temperature. Again, with earlier versions of the calibration data and software, this source of uncertainty affected the hotter clusters more than the moderate-temperature clusters. 
As the calibration improved over the years we worked on this project, the impact of hydrogen column density anomalies on the best-fit temperature diminished. As of the last two calibration versions, the best-fit ${N_{\rm H}}$ is consistent with Galactic ${N_{\rm H}}$ from @DickeyLockman1990. Therefore, our spectral results are reported for fits for which ${N_{\rm H}}$ was constrained to an assumed Galactic value. Each one-temperature fit takes four parameters: a temperature $T$, a heavy-element abundance fraction $Z$ relative to the solar values, an absorbing column density $N_{\rm H}$, and a spectral normalization defined to be $$N = \frac{10^{-14}}{4 \pi d_A^2 (1+z)^2} \int n_e n_H dV$$ where $n_e$ and $n_{\rm H}$ are the number densities of electrons and hydrogen nuclei, respectively, in units of cm$^{-3}$, and $d_A$ is the angular-size distance in cm. Table \[tab:1Tprfixed\] gives the best fits for each annulus when $N_{\rm H}$ is fixed to the Galactic value toward the cluster taken from @DickeyLockman1990. The columns in Table \[tab:1Tprfixed\] are as follows:

1. Cluster name
2. Outer radius of annulus in arcminutes
3. $N_{\rm H}$ in units of $10^{20}$ cm$^{-2}$
4. Best-fitting temperature in keV with 90% confidence limits
5. Heavy-element abundance $Z$ in solar units along with 90% confidence limits \[The solar value of Fe/H is taken to be $4.68 \times 10^{-5}$ [@angr].\]
6. Spectral normalization $N$ of the MEKAL model fit
7. Statistical uncertainty of the spectral normalization $N$ with 90% confidence limits
8. Reduced $\chi^{2}$ value of the fit
9. The number of degrees of freedom in the fit

In some cases, the $\chi^{2}$ values of the 1-T fits to the projected data are poor. A poor fit could mean either that multiple temperature components are present at a given radius or that the projected spectrum in that annulus comes from gas of different temperatures because of a radial temperature gradient. 
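As an aside, the normalization defined above can be inverted for an electron density once an emitting volume is assumed, which is how normalizations translate into the density profiles used later in the paper. A minimal sketch, assuming a constant-density emitting volume and $n_e \simeq 1.2\,n_{\rm H}$ for a fully ionized plasma with roughly solar helium abundance (the function name and unit choices are ours):

```python
import numpy as np

KPC_CM = 3.0857e21            # cm per kpc
NE_OVER_NH = 1.2              # fully ionized plasma, ~solar He abundance

def ne_from_norm(norm, volume_kpc3, d_A_kpc, z):
    """Invert the MEKAL/XSPEC normalization
        N = 1e-14 / (4 pi d_A^2 (1+z)^2) * integral(n_e n_H dV)
    for a constant-density emitting volume, returning n_e in cm^-3."""
    d_A = d_A_kpc * KPC_CM                     # angular-size distance [cm]
    V = volume_kpc3 * KPC_CM**3                # emitting volume [cm^3]
    ne_nH = norm * 4.0 * np.pi * d_A**2 * (1.0 + z)**2 / (1e-14 * V)
    return np.sqrt(NE_OVER_NH * ne_nH)         # n_e = sqrt(1.2 * <n_e n_H>)
```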
To resolve this issue, we deprojected the annular spectra for each cluster. Spectral Deprojection {#sec:deprojected} --------------------- In a cluster with a centrally peaked surface-brightness profile, the spectrum from an annulus of a given projected radius is dominated by gas at radii similar to the projected radius but includes cluster emission along the entire line of sight through the cluster. In order to recover the properties of the gas as a function of physical radius, one must correct for this projected emission. To deproject spectra, one starts at the outermost projected annulus, fits an emission model, and then iteratively removes the contribution of the outer layers from the inner annuli. This procedure requires, at minimum, some assumption about the symmetry of the cluster. For this work, we assumed that the cluster emission is spherically symmetric. To deproject the cluster data, we fit the spectra using the built-in deprojection model [projct]{} in the software package XSPEC, tested and validated by @2005MNRAS.356..237J. The fit simultaneously includes the projected spectra from all annuli in the cluster, and the initial guesses for the fit parameters were taken from the fits to the projected spectra. Table \[tab:table1Tdeproj\] lists our best-fit parameters when $N_{\rm H}$ is fixed at the Galactic value. The columns in this table are the same as for Table \[tab:1Tprfixed\], except that the normalization in column (6) is that for a spherical shell, not for the individual spectrum at that annulus. The $\chi^2$ values quoted represent a single fit over all the spectra for the cluster instead of those for the individual annuli. The deprojected fits occasionally showed signs of instability in which the temperature and metallicity would oscillate, with large uncertainties, from shell to shell. 
Since an accurate measurement of metallicity requires more counts than an accurate measurement of temperature, we tied the metallicities of neighboring annuli together. The metallicity was constrained to be equal across groups of two or three annuli to make a common metallicity estimate at lower spatial resolution than temperature or normalization. Even so, the best-fit deprojected temperatures occasionally exhibited instability from shell to shell. However, the excursions were smaller than the statistical uncertainty in the temperatures. We also noticed that deprojections of spectra with background corrections based on the deep field resulted in the outermost bin having somewhat higher normalizations (see also @2005MNRAS.356..237J). The outermost bin in most cases is contaminated by emission originating even farther from the center of the cluster. To estimate the effect of cluster emission outside the outermost annuli, we deprojected the spectral datasets for each cluster where the spectrum from the outermost annulus was background-corrected based on local background instead of the deep fields. This choice reduced the best-fit normalization in that bin but did not change the best-fit temperature in that bin. Our results in this paper are therefore not sensitive to the treatment of this outermost bin. We report the deprojection results for deep-background subtracted data only in Table \[tab:table1Tdeproj\], except for the case of Abell 1795, which had a higher local background than average. (The temperature for the second outermost annulus had no upper limit for the A1795 data for which all annuli had a deep-background correction.) The outer temperature was statistically the same for either method. Using this deprojection algorithm, we obtained reasonable $\chi^2$ values for fits with a single-temperature plasma within each spherical shell. 
We tested the effect of including a second temperature component, and occasionally a second component in the central sphere improved the $\chi^2$ somewhat, but not significantly. The three clusters for which there may be evidence in our data for a second component are 2A0335+096, Abell 262, and Abell 2052. We report the results of a deprojection analysis where we allowed a second thermal component in the innermost bin in Table \[tab:table2Tdeproj\]. The metallicity of this inner sphere was constrained to be the same in both components. We did not require second components in as many cases as previous workers, particularly those who included soft energy bins in their fit ($E<0.7-0.8$ keV). This discrepancy is due to the immaturity of the Chandra calibration early in the mission. We found over the history of our own analysis that the need for a second temperature component decreased as the CALDB version number increased. We will discuss the sensitivity of our derived entropy profiles to the presence or absence of a soft component in the central cores in §\[sec:entropy\]. Electron Densities from Deprojected Surface Brightness Profiles {#sec:nelec\_sbprofiles} --------------------------------------------------------------- Electron density profiles can be derived with much higher resolution than the temperature and abundance profiles given in Tables \[tab:1Tprfixed\]-\[tab:table1Tdeproj\] because the count rate in a limited bandpass is much more sensitive to electron density than it is to temperature. We therefore derived high-resolution density profiles for each cluster by dividing our apertures into annuli of 2.5 and 5.0 arcseconds, and using interpolated normalization-to-count-rate ratios to solve for the electron density corresponding to a given projected count rate. We created differential surface-brightness profiles from 0.5-2.0 keV to minimize their dependence on temperature. 
We corrected the counts in each annular bin for vignetting and small variations in the net exposure time for each bin by extracting an exposure profile from a normalized exposure map created assuming a mono-energetic photon spectrum of 1 keV. These corrections were typically less than 5% per bin. Comparisons with maps made for 0.5 and 1.5 keV photons showed that the systematic uncertainty induced by assuming mono-energetic photons is negligible. We found that the ratio between the spectral normalization quantity $N$ and the count rate in this bandpass (0.5-2.0 keV) was relatively insensitive to temperature, changing by about 10-15% over the full range of temperature in a given cluster. The conversion was actually more sensitive to metallicity than temperature in this bandpass. Therefore, the derived electron density profiles have only limited sensitivity to the details of how we interpolate cluster temperatures or even how the cluster temperatures themselves were determined. A deprojected emission profile (count rate per unit volume) was then computed for each cluster using a standard technique [@KCC1983]. The deprojected emission profile was converted into electron densities using the appropriate ratio of a MEKAL spectral normalization to spectral count rate in the same bandpass as the surface brightness profile. This conversion takes into account both temperature and abundance variations that somewhat affect the emissivity of the gas in the 0.5-2.0 keV energy range. Results {#sec:results} ======= Here we describe the results of our data analysis. We find that the clusters in our sample have rising temperature gradients and declining abundance gradients, in agreement with previous work. We briefly discuss those results in §\[sec:abund\]. Then, in §\[sec:entropy\], we present high-resolution entropy profiles for our clusters derived by interpolating the observed temperature gradients onto our density-profile bins. 
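The onion-peeling geometry behind this standard technique can be sketched as follows: each annulus on the sky receives emission from every spherical shell it intersects along the line of sight, giving a triangular system that is solved from the outside in. This is a minimal illustration assuming spherical symmetry and shells sharing the annular boundaries, in the spirit of @KCC1983, not the exact pipeline code (function names are ours):

```python
import numpy as np

def shell_volumes(radii):
    """V[i, j] = volume of spherical shell i intersected with sky
    annulus j, for shells and annuli sharing the same boundary radii."""
    def cap(r, R):                       # sphere of radius r cut at cylinder R
        return (4 * np.pi / 3) * np.clip(r * r - R * R, 0, None) ** 1.5
    n = len(radii) - 1
    V = np.zeros((n, n))
    for i in range(n):                   # shell   radii[i]  .. radii[i+1]
        for j in range(n):               # annulus radii[j]  .. radii[j+1]
            V[i, j] = (cap(radii[i + 1], radii[j]) - cap(radii[i + 1], radii[j + 1])
                       - cap(radii[i], radii[j]) + cap(radii[i], radii[j + 1]))
    return V

def deproject(counts, radii):
    """Peel from the outside in: solve counts[j] = sum_i eps[i] * V[i, j]
    for the emissivity eps (counts per unit volume) in each shell."""
    V = shell_volumes(radii)
    n = len(counts)
    eps = np.zeros(n)
    for j in range(n - 1, -1, -1):
        outer = sum(eps[i] * V[i, j] for i in range(j + 1, n))
        eps[j] = (counts[j] - outer) / V[j, j]
    return eps
```

Because a shell contributes nothing to annuli outside its own radius, the volume matrix is triangular and the back-substitution above recovers the emissivity exactly for noise-free data.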
These entropy profiles turn out to show a striking regularity that is best fit with a power law in radius plus a constant entropy pedestal. We conclude the section with a cluster-by-cluster comparison of our results with previous observations of these clusters. Abundance and Temperature Profiles {#sec:abund} ---------------------------------- All the clusters in our sample have temperature profiles that rise with radius. In order to quantify those temperature gradients, we fit the deprojected temperature data for the seven clusters that had at least four annuli with a minimum of 20,000 counts with the power-law relation $$T(r) = T_{100} \left( \frac {r_{\rm mid}} {100~\rm{kpc}} \right)^{\alpha_T}$$ where $T_{100}$ is the temperature in keV at 100 $h_{70}^{-1}$ kpc and $r_{\rm mid}$ is the midpoint between the inner and outer radii of the annulus. We report the results of these fits in Table \[tab:fit-temp\]. This simple form gave adequate fits for all clusters except Abell 2052. We find that the typical power-law index for these temperature profiles is $\alpha_T \approx 1/3$ inside $4\arcmin$, in agreement with the results of @vf04. The abundance measurements in these clusters, which are dominated by the iron lines in the spectrum, generally decline with radius. The innermost regions tend to have Fe/H equal to 50-100% the solar value, decreasing to 30% solar at $\gtrsim 100$ kpc. This finding generally agrees with previously observed trends in cooling-flow clusters [e.g., @dm01]. In §\[sec:literature\] we discuss our abundance results for individual clusters in more detail. Entropy Profiles {#sec:entropy} ---------------- Our primary new results concern the entropy gradients of cooling-flow clusters. Using the spectral fitting and deprojection results of §\[sec:analysis\], we derived a radial entropy profile for each cluster under the assumption that the temperature and density distributions were spherically symmetric. 
Strict spherical symmetry is obviously an idealization of reality that applies better to some clusters than to others, but we wanted to derive entropy profiles for all the clusters in a uniform way. To obtain the entropy profiles, we linearly interpolated temperature profiles onto the fine radial grid used for the surface-brightness deprojection in §\[sec:nelec\_sbprofiles\]. That gave us $n_e(r)$ and $T(r)$, from which we constructed $K(r) = kT(r)n_e^{-2/3}(r)$. In order to test the effects of different binning schemes, we used bin widths of both $2.5\arcsec$ and $5\arcsec$ per annulus. Since deprojecting the spectra often did not give significantly different results from simpler analysis of projected spectra, and since projected temperature profiles were better behaved, we used the projected temperatures for the profile fits reported in Tables \[tab:fit-5arcsec\] and \[tab:fit-2.5arcsec\]. To show how much of an effect this choice had on our results, we also include the results for entropy profiles obtained by using the power-law fits to $T(r)$ from deprojected spectra (Table \[tab:fit-2.5arcsec-tdeproj\]). Because interpolation of temperature within the innermost temperature bin is not well constrained, we treated the temperature gradient in a few ways, in order to probe the sensitivity of our results to how we handle the temperature profile. Method 1 modeled the temperature gradient in the innermost bin with a linear extrapolation from the adjacent bin. Method 2 used a constant temperature within the innermost bin. This assumption may have the effect of inducing a core in the entropy profile. The differences between the two methods provide an estimate of the systematic uncertainties in the modelling of the temperature structure in the innermost bin. 
Method 3, where we used the best-fit power law to the deprojected temperatures, provides yet another estimate, which also minimizes the central entropy core since the temperature at $r=0$ is forced to be $T=0$ in such fits. This third model should be thought of as providing a lower limit on the central entropy, as there is no spectroscopic evidence for significant amounts of gas with such low temperatures in the centers of clusters. We fit the entropy profiles derived using Methods 1-3 with two different formulae. The first formula assumes that the entropy profile is a power law in radius plus a constant entropy pedestal $K_0$: $$K(r) = K_0 + K_{100} \left( \frac {r} {100~\rm{kpc}} \right)^{\alpha} \; \; .$$ The second formula sets $K_0 = 0$, making the entropy profile a pure power law: $$K(r) = K_{100} \left( \frac {r} {100~\rm{kpc}} \right)^{\alpha} \; \; .$$ Table \[tab:fit-5arcsec\] reports the best fits obtained with these two formulae using surface-brightness bins $5\arcsec$ in width. Table \[tab:fit-2.5arcsec\] reports the best fits using surface-brightness bins $2.5\arcsec$ in width. Table \[tab:fit-2.5arcsec-tdeproj\] reports the best fits using surface-brightness bins $2.5\arcsec$ in width and the best fit to the deprojected temperature profile. Our tables also report the range (in Mpc) of the fitted region. We used 1000 Monte-Carlo bootstrap realizations of the original surface brightness data to quantify the statistical uncertainties of the deprojection process. The outermost deprojected bins in our fit are noisy and were excluded from the fit. For one cluster, Abell 2052, the central surface brightness deprojection is strongly influenced by the presence of cavities and edges. The overall fits for this cluster consequently have large $\chi^2$ values because of these non-axisymmetric structures, which make the azimuthally averaged surface-brightness profile, and therefore the deprojected density profile, non-monotonic at $r \sim 10$ kpc. 
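The two fitting formulae above can be applied with any nonlinear least-squares routine; the sketch below uses `scipy.optimize.curve_fit` with illustrative starting guesses (our own choices of tool and initial values, not necessarily those used in the actual analysis):

```python
import numpy as np
from scipy.optimize import curve_fit

def K_model(r_kpc, K0, K100, alpha):
    """Entropy profile: power law in radius plus a constant pedestal K0."""
    return K0 + K100 * (r_kpc / 100.0) ** alpha

def fit_entropy_profile(r_kpc, K, K_err):
    """Fit both forms and return their best-fit parameters:
    (K0, K100, alpha) for the pedestal model and (K100, alpha)
    for the pure power law with K0 = 0."""
    p_ped, _ = curve_fit(K_model, r_kpc, K, sigma=K_err,
                         p0=[10.0, 150.0, 1.1], maxfev=10000)
    p_pow, _ = curve_fit(lambda r, K100, a: K_model(r, 0.0, K100, a),
                         r_kpc, K, sigma=K_err, p0=[150.0, 1.1], maxfev=10000)
    return p_ped, p_pow
```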
Our entropy-profile fitting revealed a remarkable uniformity among the clusters in our sample, which can be seen in Figures \[fig:gallery\] and \[fig:gallery\_dpt\], despite the variety of structures that can be seen in the cluster morphology. All of them have approximately the same entropy normalization at $\sim 100$ kpc: $K_{100} \approx 150\pm40$ keV cm$^2$, $K_{100} \approx 150\pm50$ keV cm$^2$, and $K_{100} \approx 140\pm30$ keV cm$^2$ for methods 1, 2, and 3 respectively. The best fits are generally obtained with a non-zero value for the entropy pedestal: $K_0 \approx 7\pm4$ keV cm$^2$, $K_0 \approx 11\pm5$ keV cm$^2$, and $K_0 \approx 6\pm4$ keV cm$^2$ for methods 1, 2, and 3 respectively. They also have similar power-law slopes in the 10-100 kpc range: $\alpha = 1.2\pm0.2$ for fits with $K_0 \neq 0$, and $\alpha = 1.0\pm0.2$ for fits with $K_0 = 0$. Only Abell 2029 has an entropy profile marginally consistent with a vanishing entropy value at $r=0$, having $K_0 = 3.0\pm2.1$ keV cm$^2$ (for 5$\arcsec$ bins) when Method 1 is used for temperature interpolation. These results are insensitive to the binning procedure, as one can see by comparing Table \[tab:fit-5arcsec\] with Table \[tab:fit-2.5arcsec\]. The main effect of using a deprojected temperature profile instead of a projected temperature profile is generally (but not always) a slightly lower central entropy quantity $K_0$ (compare Table \[tab:fit-2.5arcsec\] with Table \[tab:fit-2.5arcsec-tdeproj\]). The largest uncertainties come from the statistical uncertainty of the gas temperature and from the treatment of the temperature profiles in the central bin. Our results on central entropy values ($K_0$) pertain to the component that fills the majority of the volume in the central bin. In many cases, there is clearly gas of very low entropy ($K \sim 10^{-5} \, {\rm keV \, cm^2}$) near the center of the cluster in the form of H$\alpha$-emitting nebular filaments. 
Another example of cool, multiphase gas can be seen in the central 10 kpc of M87. The X-ray filaments along the radio source consist of gas at $\sim 1$ keV surrounded by a 2 keV cluster atmosphere [e.g., @Sparks2004], but those filaments do not constitute a large fraction of the volume in the 10 kpc sphere surrounding M87. In order to evaluate how the presence of a cool component affects the entropy derived for the hot volume-filling component, one can consider the emission from a two-component plasma with temperatures $T_h$ and $T_c$ and normalizations $N_h$ and $N_c$ for the hot and cool components, respectively. If the two components are in pressure equilibrium, then the electron density of the hot component is $\propto N_h^{1/2} \left[ 1+(N_c/N_h)(T_c/T_h)^2 \right]^{1/2}$. In the three clusters for which there is some evidence for a second temperature component, that component is present only within the central bin and has $N_c \sim 0.5 N_h$ and $T_c \sim 0.5 T_h$. Such a cool component comprises only $\sim 10$% of the X-ray emitting gas mass, and failing to account for it leads to a derived value of electron density that is only a few percent different from the actual value, if $N_h$ is properly measured. A larger uncertainty arises in the temperature measurement, because the best-fitting temperature in a single-temperature model will be a weighted mean of $T_c$ and $T_h$ that depends on the spectral band of the fit. Comparing our single-temperature fits with our two-temperature fits shows that $T_h$ is generally $\sim 50$% higher than the best-fitting single temperature. In addition, if a cool component is present, then the normalizations derived from single-temperature models are overestimates of the true normalization of the hot component in the central bin. 
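The correction factor quoted above is easy to evaluate; the following sketch (function name ours) reproduces the few-percent estimate in the text:

```python
import numpy as np

def hot_density_correction(Nc_over_Nh, Tc_over_Th):
    """Factor by which the hot-phase electron density exceeds the naive
    N_h^(1/2) estimate when a cool phase in pressure equilibrium is
    present: [1 + (N_c/N_h)(T_c/T_h)^2]^(1/2)."""
    return np.sqrt(1.0 + Nc_over_Nh * Tc_over_Th ** 2)

# For N_c ~ 0.5 N_h and T_c ~ 0.5 T_h, the correction is only ~6%:
# hot_density_correction(0.5, 0.5) ≈ 1.06
```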
In the most extreme case, in which $\sim 50$% of the counts come from the cool component, the electron density inferred from single-temperature models would be $\sim 40$% larger than the actual value in the hot phase. Combining all these effects would mean that the central entropy of the hot component could be as much as $\sim 80$% larger than what we estimate from our single temperature models, if a second component of cooler gas is indeed affecting our temperature measurements, and the entropy of a cool component with $T_c \sim 0.5 T_h$ would be $\sim 50$% of the single-temperature entropy estimate. Another source of uncertainty in entropy values derived under the assumption of a single-component plasma crops up in cases where radio plasma creates X-ray cavities. In such cases, the volume of the X-ray emitting plasma is smaller than assumed, meaning that the electron density is underestimated and the entropy is overestimated. However, this effect is not large compared with other sources of uncertainty. If the radio plasma displaces as much as 25% of the gas in a given annulus, then the true electron density would be $(0.75)^{-1/2} \sim 1.15$ times what was estimated, and the actual entropy would be $(0.75)^{1/3}\sim0.91$ times the estimated value. In summary, the entropy profiles we derive here are for the X-ray emitting component that fills the majority of the volume at each radius. Because the temperature structure of the central bin is difficult to establish, the systematic uncertainties in our central entropy measurements are approximately a factor of two. In three of our clusters, spectral deprojection suggests that there may be a second temperature component in the central bin at roughly half the temperature of the volume-filling component. Single-temperature modeling of the central bin could therefore be underestimating the entropy of the volume-filling component. 
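The cavity arithmetic above can be packaged the same way; a sketch (function name ours) of the two bias factors for a shell in which only a fraction of the volume is X-ray emitting:

```python
def cavity_bias(filled_fraction):
    """If radio plasma displaces (1 - filled_fraction) of the volume in
    an annulus, the true n_e is f^(-1/2) times, and the true entropy
    f^(1/3) times, the values derived assuming a fully filled shell."""
    ne_factor = filled_fraction ** -0.5
    K_factor = filled_fraction ** (1.0 / 3.0)
    return ne_factor, K_factor

# 25% of the gas displaced (f = 0.75): n_e is underestimated by ~15%
# and K is overestimated by ~9%, i.e. cavity_bias(0.75) ≈ (1.15, 0.91)
```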
Our finding that the entropy of the volume-filling component approaches a minimum value $\sim 10 \, {\rm keV \, cm^2}$ at small radii is therefore robust to the presence of a cooler component. The mass of gas in a cooler X-ray emitting component must be considerably less than that in the hotter component. Otherwise, it would emit very bright soft X-ray emission lines that would show up in our spectra as a pronounced soft excess. High-resolution spectroscopy of the cores of many of these clusters also severely limits the amount of gas cooler than about $1/2-1/3$ the virial temperature [@2003ApJ...590..207P]. In the future, it will be fruitful to compare these results with a parallel analysis of the predicted X-ray emission of simulated 3D clusters with realistic temperature profiles and emission structures. We are pursuing that work separately. Individual Clusters \[sec:literature\] -------------------------------------- The following paragraphs discuss the individual clusters in our sample. In each case, we give the radio and optical emission line characteristics of the cluster, which can affect the X-ray morphology. Then we discuss how our results on the individual clusters compare with previously published [*Chandra*]{} analyses. We also compare our results with published [*XMM-Newton*]{} results. All clusters in the sample have a central radio source in a single central bright galaxy; all but one (Abell 2029) have an optical emission line nebula in the brightest central galaxy. We also note that all of the clusters in our sample with H$\alpha$ emission have confirmed detections of vibrationally-excited molecular hydrogen at 2 microns [@Edge2002; @Donahue2000; @Jaffe2001; @Falcke1998; @JaffeBremer97], except for 2A0335+096, which instead has a CO detection by @Edge2001, and Abell 133, which has not yet been observed in these lines. 
For the most part, our results on the temperature and metallicity profiles of these clusters seem relatively robust to calibration and telescope differences. The main discrepancies we find stem from updates to the [*Chandra*]{} calibration that have reduced the prominence of second temperature components or anomalous absorbing column densities suggested or required by the data in previous analyses. For a similar reason, single temperatures derived from fits that included all or part of the 0.3-0.7 keV bandpass in those analyses of [*Chandra*]{} data were occasionally different from what we are now obtaining. Our results are in rough agreement with recently published [*XMM-Newton*]{} temperature profiles for individual clusters. [*XMM-Newton*]{} observations of six of the clusters in our sample were analyzed by and : 2A 0335+096, Abell 262, Abell 2052, Hydra A, Abell 496, and Abell 1795 for the first paper and 2A 0335+096, Abell 262, and Abell 2052 for the second. For single-temperature projected fits, we find that the Chandra temperatures are consistently about 0.5 keV higher than the XMM temperatures outside $0.5\arcmin$. For Abell 1795, the discrepancy is larger, $\sim+1$ keV, over the entire region. For single-temperature deprojected fits, we have no discrepancy with (Abell 262, Abell 2052, and 2A0335+096), except for the radii of $0.5-1\arcmin$ for 2A0335+096, where we obtain a temperature about 0.4 keV higher. For the 2-T deprojected fits where a cool component is posited for the center of the cluster, we agree with , but for a $\sim+0.2$ keV discrepancy in Abell 262’s $0.5-1.0\arcmin$ annulus. We shall show in the following discussion of individual clusters that our analysis rarely disagrees with that of other Chandra observers, so this discrepancy is part of a systematic difference between Chandra and XMM that appears to have diminished over time. The metallicities we find agree with those found by for all three clusters in their sample. 
The slope and normalization of our entropy measurements are consistent with recent measurements from XMM. derive and plot entropy profiles from XMM data for four clusters in our sample: 2A0335+096, A262, A2052, and A1795. After correcting for the different Hubble constants, and assuming that their outermost point is coincident with $R_{out}$ listed in their Table 1, we find excellent agreement between the entropy values plotted in their Figure 5 and our entropy measurements for those clusters. Encouragingly, their entropy profiles at large radii interpolate into the profiles we measure with very similar slope, and the entropy measurements at overlapping radii agree in their normalization. Our Chandra profiles have higher resolution but span a smaller range of radii. (Abell 496 is also in their sample, but we could not unambiguously identify its points in their entropy plot.) The fact that our [*Chandra*]{} results generally agree with the analysis, and the entropy profiles of , and yet have a nearly constant offset in temperature with respect to , suggests that the XMM results do not agree with each other. The discrepancy may arise from the use of different energy ranges in the fitting ( fit 0.2-10.0 keV for every spectrum, while fit 0.3-8.0 keV for the MOS detectors and 0.7-7.0 keV for the pn detector, the latter being similar to the energy range, 0.7-7.0 keV, that we use to fit the Chandra data). Furthermore, use a more recent version of the XMM calibration and software (SAS 5.3.0 vs SAS 5.3.3). The main difference between these versions is a new tool (especget) for the extraction of spectra and the computation of responses (RMFs and ARFs). Our consistency with later XMM and Chandra results suggests that the calibrations of the two missions are leading to increasingly consistent results, which is encouraging news. We discuss specific comparisons to published analyses of Chandra observations below. ### 2A 0335+096 {#sec:2a} The cluster 2A0335+096 is poor but popular. 
@SBO1995 presented VLA images showing that this cluster contains a double-lobed source with a nucleus. Each lobe lies about $12\arcsec$ on either side of the nucleus of the central galaxy. That galaxy has a spectacular, filamentary emission-line system with a bar and filaments in the central 20 kpc [@RomanishinHintzen88]. The X-ray images show two prominent cavities along with filamentary structures that are not correlated with the radio or optical line emission [@2003ApJ...596..190M]. The structure of the radio source suggests that it was produced by multiple outbursts. [*Chandra*]{} data on this cluster have previously been analyzed by two groups, @2003PASJ...55..585K and @2003ApJ...596..190M. Their results disagree somewhat with each other. Our results agree more with the latter, who may have used a later CALDB version than the former group. Our comparison with the [@2003PASJ...55..585K] analysis shows that our projected 1-T fit agrees in the region $0 < r < 20$ kpc, with temperatures between $1.65-1.76$ keV. However, the temperature gradient we derive rises more quickly over the $20 < r < 120$ kpc range. At $r \approx 40$ kpc we find a best-fit temperature value $T = 3.0\pm0.1$ keV while [@2003PASJ...55..585K] find $T = 1.9^{+0.1}_{-0.05}$ keV. At $r \approx 54-70$ kpc our best-fit temperature is $3.8\pm0.15$ keV while [@2003PASJ...55..585K] obtain $2.5\pm0.1$ keV. Our overall temperature fits are also more consistent with the ASCA temperature of $2.86\pm0.02$ keV [@Horner2001]. The difference between our best-fit temperatures and those of @2003PASJ...55..585K arises from the version of the CXC Calibration Database (CALDB) used to calibrate the data and the energy range used for spectral fitting. We used CALDB 3.0 in our work, while @2003PASJ...55..585K used CALDB 2.9. Also, we fit the spectral range $0.7-7.0$ keV, whereas @2003PASJ...55..585K fit the $0.5-10$ keV range.
In our experience, the earliest results from [*Chandra*]{} for the coolest clusters are the most affected by improvements in the calibration. Also, including bins at the highest energies without many counts in them can lead to unreliable results. We also compared our projected and deprojected one-temperature MEKAL fit results with those of @2003ApJ...596..190M, who used the @2003PASJ...55..585K data. They do not report the version of CALDB that they used. The temperature gradient we measure has a similar slope to the one they obtained. Over the region $0-240\arcsec$, their best-fit temperature rises from 1.8 keV to 4.2 keV, with a flattening at $100\arcsec < R < 200\arcsec$. The metallicity and ${N_{\rm H}}$ profiles are also similar in value and behavior. A second temperature component in the central region marginally improved our deprojected fit (Table \[tab:table2Tdeproj\]), but the lower limits on the normalization of this component and the modest improvement of $\chi^2$ are not indicative of a secure detection, particularly since the uncertainties of the calibration are probably highest at the energies of interest here. ### Abell 133 The center of Abell 133 hosts an impressive radio relic source spanning 55 kpc. @Slee2001 conclude from $4\arcsec$ resolution VLA observations that the central galaxy is not the current source of the relic but may have been where the radio source began. ROSAT X-ray observations show that the radio source is clearly interacting with the ICM [@Rizza2000]. The central galaxy also has a compact, low-ionization emission-line source [@HCW85]. @2002ApJ...575..764F report [*Chandra*]{} observations of Abell 133. We find statistical agreement with their projected temperature, metallicity, and ${N_{\rm H}}$ profiles for the fits in which @2002ApJ...575..764F restrict the spectral fitting to photons with energies greater than 0.9 keV. However, their fits for data which include the less energetic photons disagree with ours.
This finding is in accord with the general trend that early [*Chandra*]{} spectral fitting results involving soft X-ray data are probably not reliable because the low-energy calibration for [*Chandra*]{} was not well characterized until at least 2004. Hence, the energy range chosen for any given fit can have a significant impact on the best-fit temperatures, absorption column ${N_{\rm H}}$, and metallicity values. The fact that the fits of @2002ApJ...575..764F that were restricted to higher energies agree with ours supports our suspicion that the [*Chandra*]{} calibration of the soft X-ray bandpass has improved with time. ### Abell 262 {#sec:a262} Abell 262 hosts the weakest radio source of our sample, the double-lobed source B2 0149+35 [@Parma1986; @Fanti1986], but it also has an impressive emission line nebula [@Plana1998]. @2004ApJ...612..817B find that the radio source is anti-correlated with the X-ray emission in their [*Chandra*]{} observations but is correlated with optical (\[N II\]) emission. In order to have enough bins to fit a 2-parameter power law to the deprojected temperature profile, the spectral fits reported for A262 have 10,000 counts per spectrum instead of 20,000. Our derived temperatures are consistent with the analyses of the Chandra data in @2004ApJ...612..817B. Our projected one-temperature fits with ${N_{\rm H}}$ fixed to the Galactic value did not describe the inner annulus between $0$ and $0.35\arcmin$ ($<7$ kpc) well, with a reduced ${\chi^2}$ of 2.44. But, as [@2004ApJ...612..817B] also found, adding a second temperature component to the projected data improved our fit, yielding a reduced ${\chi^2}$ of 1.06 for 107 degrees of freedom. We also find good general agreement between their mean temperatures and metallicity and the average temperature and metallicity of our radial fits.
Fits to deprojected spectra are typically better than those for inner projected spectra, suggesting that departures from single-temperature emission in the projected spectra stem primarily from a steep radial temperature gradient, which leads to superpositions of multi-temperature plasma along the line of sight. This trend was also true for Abell 262. In the central bin for Abell 262 (i.e., $<0.35\arcmin$), adding a second temperature component to the deprojected model only marginally improved the fit (Table \[tab:table2Tdeproj\]). In these deprojected spectral fits, a single-temperature model in each radial shell gave a reduced $\chi^2 = 1.28$ for 923 degrees of freedom, while a two-temperature plasma model in the inner two shells gave $\chi^2 = 1.22$ for 921 degrees of freedom. ### Abell 496 {#sec:a496} A compact radio source (smaller than $\sim 1.5\arcsec$ based on the beam size reported) inhabits the central galaxy in Abell 496 [@ODea1995]. The central galaxy is also the locale for a bright emission-line nebula [e.g. @HBvM1989]. Since the observation of Abell 496 had only 60,000 total counts, we first divided this cluster into three regions. When our spectral analysis of these three regions turned out to be well-behaved, we expanded the analysis to six regions of 10,000 counts each in order to fit a 2-parameter power-law temperature profile, and it is this result which is reported in our spectral fits. @2003ApJ...583L..13D present the original [*Chandra*]{} analysis of these data. Comparison of our projected one-temperature fits with those from their analysis shows good agreement across the region $0-150$ kpc. Our temperature and metallicity profiles are also consistent with the results of deprojected, 1-T fits to the [*XMM-Newton*]{} data. ### Abell 1795 {#sec:a1795} The central galaxy of Abell 1795 contains a famous extended filament of optical line emission, mapped using long-slit spectroscopy by @Cowie83.
The central bright galaxy also hosts a compact radio source 4C 26.42 [@1967MNRAS.135..231C]. [*Chandra*]{} observations by @2001MNRAS.321L..33F revealed a $40\arcsec$ X-ray filament that substantially overlaps the optical emission-line filament. The results we report on this cluster agree well with previous X-ray results. Our projected and deprojected one-temperature and two-temperature fits are in excellent agreement with the [*Chandra*]{} data analysis by [@2002MNRAS.331..635E]. They also agree with the projected one-temperature, ${N_{\rm H}}$, and metallicity fits from [*XMM-Newton*]{} analysis of Abell 1795 by . ### Abell 2029 {#sec:a2029} Abell 2029 occupies a unique niche in our sample because it alone has no trace of an optical emission line nebula, but it is still a luminous radio source. Its X-ray image is remarkably smooth, exhibiting little of the structure seen in some of its sister cool core clusters. It has therefore served as a textbook case for attempts to constrain the self-interaction cross-section of dark matter by fitting dark matter potentials to the enclosed mass inferred from X-ray data [e.g., @2003ApJ...586..135L]. The work of @2002ApJ...573L..13L [@2003ApJ...586..135L] on [*Chandra*]{} observations of Abell 2029 adopts the APEC spectral model within XSPEC and fixes N$_H$ to the Galactic value. Our projected and deprojected temperature fits, along with our metallicities, agree with that group’s analysis. Our temperature fits confirm the flattening of the temperature profile inside of $18\arcsec$ found by @2003ApJ...586..135L. ### Abell 2052 {#sec:a2052} The central galaxy of Abell 2052 hosts the radio galaxy 3C 317. @Venturi2004 found a parsec-scale bipolar radio source with a radiative age of 170 years in VLBA observations of this galaxy and suggested that this source is a restarted radio galaxy. The kpc-scale appearance of the source is that of an amorphous halo with a bright core.
The [*Chandra*]{} X-ray emission shows two bubbles in the ICM [@Blanton2003; @2001ApJ...558L..15B]. This galaxy is also home to a bright emission-line nebula [e.g. @HBvM1989]. Our projected temperature profiles for fixed $N_{\rm H}$ agree with the [*Chandra*]{} analysis of [@2003ApJ...585..227B]. We also find that our best-fit metallicities track theirs, increasing to 0.75 solar at $r \approx 30$ kpc, then falling to values around 0.45 solar at larger radii. Our deprojected temperature and metallicity profiles also agree with theirs. In particular, we also see a notable increase in the deprojected metallicity at $r \approx 30$ kpc. We also obtained a minimal but likely insignificant improvement in $\chi^2$ when we included a second temperature component in the innermost bin for the deprojection analysis (Table \[tab:table2Tdeproj\]). The lower limit on the normalization of this component is quite low, which suggests that the detection should be treated as an upper limit for the purposes of this paper. ### Hydra A {#sec:hydra} Hydra A was one of the first clusters to be observed by Chandra, and it was the cluster that sparked our interest in doing this study. Many people have used the Hydra A data as a comparison for their theoretical predictions; we wanted to provide a larger sample of clusters with similar published measurements. Our deprojected single-temperature fits agree with the [*Chandra*]{} results for Hydra A of [@2000ApJ...534L.135M]. We also see the temperature jump at $r \approx 70$ kpc with a subsequent decrease in temperature at $r \approx 100$ kpc. Likewise, comparing our results with those of [@2001ApJ...557..546D] for projected 1-T and 2-T MEKAL fits shows similar agreement. Our best-fit values for N$_H$, metallicity, and temperature are consistent with the values in that paper. However, we do not find an improvement in ${\chi^2}$ for the innermost annulus, $0\arcsec < r < 20\arcsec$, when we allow for a second spectral temperature component.
This finding is consistent with our results on second temperature components in the cores of other clusters in our sample owing to the changing calibration in the soft energy band on Chandra’s ACIS-S detector. The entropy profile from @2001ApJ...557..546D is completely consistent in normalization and shape with the one we present here. ### PKS 0745-191 {#sec:pks} PKS0745-191, at $z=0.1028$, is the most distant cluster in our sample. It hosts a powerful radio source and a luminous emission line nebula. The results of [@2002ApJ...580..763H] on [*Chandra*]{} data for PKS 0745-191 are consistent with those we obtain from our projected one-temperature spectral fits. The metallicity profiles are also similar in that they remain nearly constant with a value of $\approx$0.45 solar. The projected two-temperature fits are not relevant here because we did not find a second temperature necessary for any annulus. Comparison with the results of for [*XMM-Newton*]{} data for PKS 0745-191 yields consistency between each group’s results. For projected and deprojected one-temperature spectral fits with fixed N$_H$, we find a temperature profile similar to . Our metallicity profiles are also consistent in values and trend with radius. Discussion {#sec:discussion} ========== The main new result emerging from this [*Chandra*]{} study of the core entropy profiles in cooling-flow clusters is that the profiles are quite similar, with several interesting features in common. Despite the fact that these clusters range in temperature from 2.2 keV to 7.4 keV, their core entropy profiles all have a similar normalization ($\approx 150-160 \, {\rm keV \, cm^2}$) at a radius of 100 kpc, they all have similar power-law slopes within that radius, and the profiles generally tend to flatten to a constant value $\approx 6-10$ keV cm$^2$ at the centers of the clusters. In this section we examine these features more closely and explore their implications. 
Observations of Non-Zero Central Entropy {#sec:non-zero-obs} ---------------------------------------- The presence of a non-zero central entropy pedestal in [*Chandra*]{} observations of Hydra A was noted by @2001ApJ...557..546D. However, it has not generally been recognized as a common feature in cooling-flow clusters. For example, analyzed the entropy profiles of thirteen cooling-flow clusters observed with [*XMM-Newton*]{}, finding that they were adequately fit by a pure power law with $\alpha \approx 0.95$ without the need for a central entropy pedestal. However, the lower spatial resolution of [*XMM-Newton*]{} ($4\arcsec$) makes it harder to resolve the central $\sim 10$ kpc where the entropy pedestal dominates the entropy profile. Other [*Chandra*]{} studies suggest that non-zero central entropy might not be universal in cooling-flow clusters. One possible counterexample from our own study is Abell 2029. Another possible counterexample is Abell 478 [@2003ApJ...587..619S; @Sanderson:2004yz]. Also, there are group-scale objects which seem to have entropy profiles that tend to zero entropy at the center [@2003ApJ...598..250S; @2005ApJ...622..187M]. Our finding that non-zero central entropy is common, if not universal, among cooling-flow clusters rests on the flattening of the observed surface-brightness profiles within $\sim 10$ kpc of the cluster’s center, which typically corresponds to an angular radius of $\sim 10\arcsec$ in these low-redshift clusters (see Figure \[fig:SB\_gallery\]). This flattening implies that the electron density profile does not diverge at the center, and therefore that the central entropy tends to some minimum value. In order to verify that the flattening we observe is not an artifact of the deprojection procedure, we compared our observed surface-brightness profiles with the profile one would expect if the power-law behavior observed at larger radii continued to hold all the way to $r=0$. 
For this comparison, we used a power-law core model with $n_e \propto r^{-1}$ and $T(r) \propto r^{1/3}$, so that $K(r) \propto r$. Assuming a thermal bremsstrahlung emissivity ($\propto T^{1/2}$) then yielded a predicted surface-brightness profile $S(\theta) \propto \theta^{-5/6}$. (If we had included line radiation and iron gradients, the central surface-brightness profile would have been even more sharply peaked.) We then convolved this power-law model with a two-dimensional Gaussian to simulate the effects of a point-spread function. To be conservative, we approximated the Chandra ACIS-S point spread function as a simple Gaussian, $e^{-r^2/\sigma^2}$ with $\sigma=1\arcsec$. This assumed PSF is actually a little broader than the actual one, so we are slightly overestimating the effect of PSF smearing on these profiles. Nevertheless, the surface-brightness profile predicted by this simple power-law model is significantly more peaked than the actual observed surface-brightness profiles, even if we do not include the divergent flux from the central pixel in the calculation. Figure \[SB\_Comparison\] shows this comparison, with both the observed surface-brightness profiles and the theoretical profile normalized to unity at $r=10\arcsec$. Therefore, unless the gas temperatures in the cores of these clusters drop much more rapidly than the power-law model we assumed (or indeed more than has been measured in high-resolution spectroscopy), they typically have nearly constant central entropy values of $\approx 6-10$ keV cm$^2$. One could argue that this result is not completely unexpected. For some time now, X-ray astronomers have been successfully fitting “beta-model” and even “double-beta model” profiles to X-ray surface brightness profiles of clusters [e.g., @1998ApJ...496...73M; @1999ApJ...517..627M; @2000MNRAS.318..715X].
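The comparison just described can be sketched numerically. The following is an illustrative reconstruction, not the actual pipeline used for the paper: the grid scale, image size, and binning are assumptions, and only the $\theta^{-5/6}$ model and the $\sigma=1\arcsec$ Gaussian PSF come from the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Illustrative sketch: convolve the assumed power-law surface-brightness
# model S(theta) ~ theta^(-5/6) with a Gaussian PSF of sigma = 1 arcsec
# and check that the smeared profile remains strongly peaked well
# outside the PSF scale. Grid scale and size are assumptions.
pix = 0.2                       # arcsec per pixel (assumed)
h = 200                         # half-width: grid spans +/- 40 arcsec
y, x = np.mgrid[-h:h + 1, -h:h + 1] * pix
r = np.hypot(x, y)
r[r == 0] = pix / 2             # soften the divergent central pixel

S = r ** (-5.0 / 6.0)           # power-law model, arbitrary normalization
S_psf = gaussian_filter(S, sigma=1.0 / pix)   # PSF smearing

def radial_profile(img, rmax=30.0, nbins=30):
    """Azimuthally average img in 1-arcsec annular bins."""
    edges = np.linspace(0.0, rmax, nbins + 1)
    idx = np.digitize(r.ravel(), edges)
    return np.array([img.ravel()[idx == i].mean() for i in range(1, nbins + 1)])

prof = radial_profile(S_psf)
prof /= prof[9]                 # normalize at ~10 arcsec, as in the figure
print(prof[2])                  # PSF-smeared model is still sharply peaked
```

Even after smearing by the assumed PSF, the azimuthally averaged model profile keeps rising steeply inward of $10\arcsec$; it is precisely this behavior that the observed, flattening surface-brightness profiles lack.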
The standard “beta-model” for a cluster’s X-ray surface brightness $I(\theta)$ is proportional to $[1+(\theta/\theta_c)^2]^{-3\beta+1/2}$, where $\theta_c$ is the projected core radius in appropriate units. The X-ray surface-brightness data for clusters and groups have consistently exhibited evidence for cores in the electron density profile, which is relatively insensitive to the temperature profile. The key feature of the beta-model is its flat central core. These high resolution Chandra observations show that even those clusters with significant peaks in X-ray surface brightness still have cores at $\theta < 10\arcsec$. Implications of Non-Zero Central Entropy {#sec:non-zero-imp} ---------------------------------------- The observed central entropy levels in these nine clusters suggest that the heating mechanism which prevents most of a cooling-flow cluster’s core gas from condensing is episodic [see also, @2001ApJ...557..546D; @2003MNRAS.338..837K]. Intracluster gas of entropy $K$ and temperature $T$ that radiates pure thermal bremsstrahlung emission has a cooling time $$t_c \approx 10^8 \, {\rm yr} \left( \frac {K} {10 \, {\rm keV \, cm^2}} \right)^{3/2} \left( \frac {kT} {5 \, {\rm keV}} \right)^{-1} \; \; .$$ The bulk of the gas currently at the centers of these X-ray clusters therefore will not begin to condense for at least $\sim 100$ Myr. That timescale for the introduction of feedback is consistent with the periodicity of AGN feedback inferred from X-ray studies of the cavities associated with the radio sources at the centers of cooling-flow clusters [e.g., @2004ApJ...607..800B]. @VoitDonahue2005 show that an outburst of kinetic power from an AGN at the level of $\sim 10^{45} \, {\rm erg \, s^{-1}}$ can produce such an entropy pedestal through shock heating that raises the entropy of the core gas of a cooling-flow cluster by a uniform increment of $\sim 10$ keV cm$^2$.
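The cooling-time scaling above is simple enough to encode directly; the following sketch merely restates the equation's fiducial normalization (no new data):

```python
# Numerical form of the bremsstrahlung cooling-time scaling quoted above:
# t_c ~ 1e8 yr (K / 10 keV cm^2)^(3/2) (kT / 5 keV)^(-1).
def t_cool_yr(K_keV_cm2, kT_keV):
    """Cooling time in years for entropy K (keV cm^2) and temperature kT (keV)."""
    return 1e8 * (K_keV_cm2 / 10.0) ** 1.5 * (kT_keV / 5.0) ** -1

print(t_cool_yr(10, 5))     # ~1e8 yr: the central pedestal of these clusters
print(t_cool_yr(40, 5))     # ~8e8 yr: a 30-50 keV cm^2 pedestal
```

The second value, $\approx 8\times10^8$ yr, corresponds to the 30-50 keV cm$^2$ pedestals of the “passive clusters” discussed below, for which feedback is not currently required.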
Many clusters have much higher central entropy levels, which correspond to cooling times greater than the age of the universe, meaning that they were never suspected of harboring cooling flows. One classic example is the Coma cluster. It is the nearest, richest X-ray cluster, and it has two bright central cluster galaxies rather than a single dominant one. The central entropy in the Coma cluster implied by the [*ROSAT*]{} X-ray observations of is $340^{+170}_{-80} h_{70}^{-1/3}$ keV cm$^2$. Such a large central entropy value is difficult to generate purely through feedback, requiring an AGN outburst $\gtrsim 10^{47} \, {\rm erg \, s^{-1}}$ in the heating framework of @VoitDonahue2005. We therefore suspect that Coma’s core achieved its high entropy level through merger shocks or some other dynamical means. However, there are other clusters with central entropy levels intermediate between the Coma cluster and the cooling-flow clusters in the present sample. These came to our attention when we were investigating examples of cooling-flow clusters without evidence for feedback. Every cluster in the sample we present here has a certain amount of central AGN activity, indicated by the radio power from the central galaxy. Most of them (all but Abell 2029) also show evidence for active star formation in the form of optical emission-line nebulae. For the purposes of this discussion, we will call such clusters “active clusters.” In order to isolate the relationship between AGN activity and the cooling-flow phenomenon, we used [*Chandra*]{} to observe two other clusters, Abell 1650 and Abell 2244, that had been classified as cooling-flow clusters by @Peres1998 but that did not contain measurable radio power from a central source or an optical emission-line nebula. We will call these two clusters “passive clusters." @Donahue2005A show that the two passive clusters have substantially higher central entropy values than the active clusters. 
These central entropy levels, amounting to 30-50 keV cm$^2$, correspond to central cooling times $\sim 1$ Gyr, suggesting that these clusters show no signs of feedback because it is not currently necessary to prevent condensation and may not have been necessary for quite some time in the past. One possibility is that these clusters were heated by an extraordinarily strong AGN outburst ($\sim 10^{46} \, {\rm erg \, s^{-1}}$) that raised the central entropy level to $\gtrsim 50$ keV cm$^2$ roughly a Gyr or more ago, so that the dynamical traces of that feedback event have now dissipated. Another possibility is that gravitationally driven effects like merger shocks [e.g., @Buote2005] have kept the central entropy relatively high for the last several Gyr and that these clusters will become more like classic cooling-flow clusters about a Gyr from now [see also @2004ApJ...613..811M]. Slopes of Core Entropy Profiles {#sec:slopes} ------------------------------- We find a mean power-law slope for core entropy profiles in our sample of $1.2-1.3\pm0.2$ when the central entropy is allowed to be non-zero and $0.9-1.0\pm0.2$ for a pure power-law fit that goes to zero at the origin. Our result for a pure power law agrees with , who found $\alpha \approx 0.95$ in [*XMM-Newton*]{} observations. However, the addition of a $\sim 10$ keV cm$^2$ central entropy pedestal along with a slight steepening of the power-law slope clearly produces a better fit to the [*Chandra*]{} data, as can be seen in Figure \[fig:gallery\]. In either case, the power-law slope is similar to the power-law index of $\alpha \approx 1.1$ observed for the entropy profiles at larger radii in clusters. That index seems to be a natural consequence of gravitational structure formation, which produces profiles having $K(r) \propto r^\alpha$ with $\alpha \sim 1.1$ outside the cores of clusters.
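The two parameterizations whose slopes are quoted above, a pure power law $K = K_{100}(r/100\,{\rm kpc})^{\alpha}$ and a power law plus constant pedestal $K = K_0 + K_{100}(r/100\,{\rm kpc})^{\alpha}$, can be compared with a least-squares sketch. All numbers below are illustrative, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: fit both entropy-profile parameterizations to synthetic data
# drawn from a pedestal-like profile (K0 = 8, K100 = 150, alpha = 1.2;
# illustrative values only, with 3% fractional scatter).
def pure_power(r, K100, alpha):
    return K100 * (r / 100.0) ** alpha

def pedestal(r, K0, K100, alpha):
    return K0 + K100 * (r / 100.0) ** alpha

r = np.logspace(0.3, 2, 25)                    # ~2-100 kpc
K_true = pedestal(r, 8.0, 150.0, 1.2)
rng = np.random.default_rng(0)
K_obs = K_true * rng.normal(1.0, 0.03, r.size)

p1, _ = curve_fit(pure_power, r, K_obs, p0=[150, 1.0])
p2, _ = curve_fit(pedestal, r, K_obs, p0=[5, 150, 1.0])

chi1 = np.sum((K_obs - pure_power(r, *p1)) ** 2 / (0.03 * K_obs) ** 2)
chi2 = np.sum((K_obs - pedestal(r, *p2)) ** 2 / (0.03 * K_obs) ** 2)
print(p1, chi1)   # pure power law: artificially shallow slope, poor fit
print(p2, chi2)   # pedestal model: recovers K0 ~ 8, alpha ~ 1.2
```

Because the pedestal model contains the pure power law as a special case ($K_0 = 0$), its $\chi^2$ can only improve; the point of the sketch is that forcing a pure power law through a pedestal-like profile drags the fitted slope to an artificially shallow value.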
However, it is not clear why that power-law behavior should continue inside the core, where the physics of cooling and feedback ought to dominate the thermodynamics of intracluster entropy. Entropy Normalization at 100 kpc {#sec:100kpc} -------------------------------- The agreement between the entropy levels at radii of 100 kpc in these nine cooling-flow clusters indeed appears to be related to the non-gravitational processes that break self-similarity in the whole population of galaxy clusters. Gravitational structure formation, acting alone, should produce nearly self-similar clusters whose virial radii scale with mass as $r_{\rm vir} \propto M^{1/3}$ and whose temperatures then scale with virial radius as $T \propto r_{\rm vir}^2$. Because the gas density at a given fraction of the virial radius should be the same for all self-similar clusters, the gas entropy at that fraction of the virial radius should be $K(r/r_{\rm vir}) \propto T$. For a power-law slope in entropy of $\alpha = 1.1$, the entropy at a given physical radius in self-similar clusters should then scale as $K(r) \propto T^{1-\alpha/2} \propto T^{0.45}$. Yet we see no such systematic trend in our sample, even though the clusters range over a factor of three in temperature. This finding implies that non-gravitational processes play a role in regulating the entropy level at 100 kpc. That result should not be surprising, in light of the fact that cooling and feedback are thought to alter the luminosity-temperature relation of clusters through their effects on the core entropy distribution [e.g., @eh91; @kaiser91; @voit02; @2004MNRAS.348.1078B]. According to the model of @vb01, the entropy levels at the core radii of clusters should be related to the entropy level at which the cooling time of the gas equals a Hubble time, which is $$K_c(T) \approx 250 \, {\rm keV \, cm^2} \: \left( \frac {kT} {5 \, {\rm keV}} \right)^{2/3}$$ for pure bremsstrahlung cooling.
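For the fiducial values quoted in this subsection, these scalings evaluate as follows (a quick numerical check; the 2.2 and 7.4 keV endpoints are the sample's temperature range):

```python
# Fiducial scalings from the text, evaluated numerically (a sketch):
alpha = 1.1                                # entropy power-law index
print(1 - alpha / 2)                       # 0.45: self-similar K(r) ~ T^0.45

def K_c(kT_keV):
    """Cooling threshold K_c ~ 250 keV cm^2 (kT / 5 keV)^(2/3)."""
    return 250.0 * (kT_keV / 5.0) ** (2.0 / 3.0)

for kT in (2.2, 7.4):                      # temperature range of the sample
    print(kT, round(K_c(kT), 1))           # ~145 and ~325 keV cm^2
```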
In that case, one expects entropy at the core radius of a cluster to be $\propto T^{2/3}$, and observations of entropy at a typical core radius of $0.1 r_{\rm vir}$ do show a general correspondence between $K_c(T)$ and $K(0.1 r_{\rm vir})$ [@2003MNRAS.343..331P; @voit03]. Such a scaling of entropy within the cores of clusters is in accord with the agreement we find in entropy levels at 100 kpc. If the entropy profiles within the cores of clusters really do scale as $K(r) \propto T^{2/3} (r/r_{\rm vir})^{1.1}$, then we expect $K(100 \, {\rm kpc}) \propto T^{0.12}$ [see also @VoitDonahue2005]. In other words, the observed lack of a trend in temperature in the entropy level at 100 kpc is consistent with the idea that cooling and feedback regulate the entropy levels within the cores of these clusters. A Framework for Cooling and Feedback {#sec:framework} ------------------------------------ The framework for cooling and feedback presented by @VoitDonahue2005 was motivated by these observational findings and supports the notion that episodic AGN heating is responsible for governing these core entropy profiles. There we show that the core entropy profile one expects when pure radiative cooling is unopposed by feedback forms a lower bound on the set of entropy profiles we observe here. That bounding profile was computed by @voit02 by simply allowing radiative losses over a Hubble time to remove entropy from the baseline entropy profile characteristic of gravitational structure formation. The observed profiles converge to this bounding profile at $\gtrsim 100$ kpc and depart from it at smaller radii, where the bounding profile drops to zero entropy as $r \rightarrow 0$. Thus, it would appear that some sort of feedback is preventing the observed entropy profiles from converging to the bounding profile. The magnitude of the central entropy level is an important clue to the feedback mechanism at work. 
We show in @VoitDonahue2005 that adding a constant entropy pedestal of 10 keV cm$^2$ to the bounding profile reproduces the behavior of the observed profiles at small radii. In the framework we suggest, shock heating by an AGN outflow with constant kinetic power naturally produces a constant entropy pedestal out to $\sim 30$ kpc. As mentioned above, a kinetic power output $\sim 10^{45} \, {\rm erg \, s^{-1}}$ is required to raise the inner entropy level by $\sim 10 \, {\rm keV \, cm^2}$. Because that inner entropy level corresponds to a cooling time of $\sim 10^8$ years, episodic AGN outbursts on this timescale are needed to maintain the quasi-steady nature of the entropy profiles of these active clusters. Summary and Conclusions {#sec:summary} ======================= We present temperature gradients, metallicity gradients, flux normalizations, and entropy profiles for a sample of X-ray luminous, nearby clusters of galaxies. Because we selected this particular sample from the [*Chandra*]{} archive to have large numbers of X-ray counts and singly-peaked surface brightness profiles in order to derive high-quality core entropy profiles, we ended up with a sample of nine classic cooling-flow clusters. All of them show evidence for feedback, with radio emission from the central galaxy in all cases and central emission-line nebulae in eight out of nine cases, with Abell 2029 being the only exception. We demonstrate that the entropy profiles in the cores of these clusters are quite similar, with power-law slopes in radius of $\sim1.0-1.3$, flattening to a central entropy plateau $\sim 6-10$ keV cm$^2$ in the central 10 kpc or so. We suggest that the core entropy levels are maintained by periodic feedback from a centrally located AGN, with a duty cycle of about $10^8$ years. 
We demonstrate that these non-zero central entropy levels are not an artifact of the finite angular resolution of Chandra; in fact, it is the exquisite resolution of Chandra that allows us to unambiguously detect these central entropy levels. We suggest that classifying clusters of galaxies based on their central entropy levels is a promising way to identify the mechanisms that prevent gas from condensing at their cores. The “active clusters” in the sample we present here, with entropy levels $\sim 10 \, {\rm keV \, cm^2}$, all show evidence for recent feedback, but that feedback has not produced entropy inversions in the azimuthally averaged entropy profiles. We have observed two other clusters with [*Chandra*]{} that have been classified as cooling-flow clusters, based on the fact that their central cooling time is less than a Hubble time, but that show no evidence for recent feedback. Those “passive clusters” turn out to have central entropy levels of $\sim 30-50 \, {\rm keV \, cm^2}$, even though they are as regular in appearance as Abell 2029, which has low central entropy [@Donahue2005A]. Such elevated central entropy levels can be produced by an especially strong episode of AGN feedback sometime in the past [@VoitDonahue2005]. It is also plausible that these elevated central entropy levels could be preserved by thermal conduction [@Donahue2005A]. Then there are objects like the Coma cluster, with central entropy levels $\sim 350 \, {\rm keV \, cm^2}$, which are difficult to achieve with AGN heating and would therefore seem to originate through gravitationally-driven processes like merger shocks. A larger survey of clusters with a greater variety of central entropy levels is needed to explore these issues. This work was supported through Chandra grants from the Smithsonian Astrophysical Observatory (GO3-4159X, AR3-4017A, AR5-6016X) and an Astrophysics Theory Program grant (NNG04GI89G).
This research has made use of data obtained from the Chandra Data Archive (CDA), which is part of the Chandra X-ray Observatory Science Center, operated for the National Aeronautics and Space Administration (NASA) by the Smithsonian Astrophysical Observatory. This research has also made use of data obtained from the High Energy Astrophysics Science Archive Research Center (HEASARC), provided by NASA’s Goddard Space Flight Center. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA, and of NASA’s Astrophysical Data System Bibliographic Services. ![image](f3.ps){width="7in"} [lrrrccccccc]{} 2A0335+096 & 919 & 54.6699 & 9.9668 & 19.98 & 4.0 & 165 & 10.70 & 0.0347 & 44.71 & 2.88\ A133 & 2203 & 15.6759 & -21.8809 & 35.91 & 2.5 & 161 & 2.36 & 0.0554 & 44.52 & 3.71\ A262 & 2215 & 28.1948 & 36.1527 & 29.12 & 3.0 & 60 & 2.32 & 0.0163 & 43.73 & 2.17\ A496 & 3361 & 68.4045 & -13.2608 & 10.13 & 2.5 & 95 & 6.49 & 0.0317 & 44.61 & 3.89\ A1795 & 493 & 207.2207 & 26.5907 & 19.88 & 3.0 & 216 & 10.44 & 0.0622 & 45.21 & 5.49\ A2029 & 891 & 227.7248 & 5.7451 & 20.59 & 3.0 & 260 & 11.57 & 0.0761 & 45.49 & 7.38\ A2052 & 890 & 229.1834 & 7.0211 & 37.23 & 2.3 & 97 & 4.63 & 0.0353 & 44.44 & 2.96\ Hydra A & 576 & 139.5241 & -12.0955 & 19.88 & 2.0 & 122 & 5.93 & 0.0522 & 44.79 & 3.54\ PKS0745-191 & 2427 & 116.8803 & -19.2944 & 18.09 & 3.0 & 340 & 5.96 & 0.1028 & 45.66 & 6.25\ [lrrrrrrrrrr]{} 2A0335+096 & 12.44 & 18.11 & 1.70$^{+0.04 }_{-0.04 }$ & 6.91e-03 & 3.00e-04 & 0.57$^{+0.06 }_{-0.05 }$ & 1.71 & 166\ & 19.07 & – & 2.04$^{+0.05 }_{-0.05 }$ & 5.98e-03 & 2.60e-04 & 0.80$^{+0.09 }_{-0.08 }$ & 1.28 & 173\ & 25.71 & – & 2.24$^{+0.05 }_{-0.05 }$ & 6.82e-03 & 2.60e-04 & 0.77$^{+0.08 }_{-0.07 }$ & 1.31 & 183\ & 33.17 & – & 2.64$^{+0.07 }_{-0.07 }$ & 5.78e-03 & 2.20e-04 & 0.93$^{+0.10 }_{-0.09 }$ & 1.48 & 193\ & 42.29 & – & 3.03$^{+0.08 }_{-0.08 }$ & 6.37e-03 & 
2.10e-04 & 0.81$^{+0.09 }_{-0.08 }$ & 1.42 & 207\ & 54.31 & – & 3.35$^{+0.11 }_{-0.08 }$ & 6.41e-03 & 2.10e-04 & 0.82$^{+0.09 }_{-0.09 }$ & 1.32 & 218\ & 70.48 & – & 3.85$^{+0.13 }_{-0.12 }$ & 6.80e-03 & 2.00e-04 & 0.74$^{+0.10 }_{-0.08 }$ & 1.40 & 233\ & 90.80 & – & 4.00$^{+0.14 }_{-0.13 }$ & 7.20e-03 & 2.20e-04 & 0.78$^{+0.10 }_{-0.09 }$ & 1.11 & 246\ & 165.84 & – & 4.43$^{+0.12 }_{-0.11 }$ & 1.77e-02 & 3.00e-04 & 0.61$^{+0.06 }_{-0.06 }$ & 1.45 & 346\ A133 & 23.24 & 1.58 & 2.31$^{+0.07 }_{-0.06 }$ & 2.14e-03 & 1.00e-04 & 1.19$^{+0.14 }_{-0.11 }$ & 1.29 & 171\ & 49.07 & – & 3.00$^{+0.10 }_{-0.10 }$ & 2.58e-03 & 9.00e-05 & 0.92$^{+0.11 }_{-0.10 }$ & 1.21 & 190\ & 89.74 & – & 4.10$^{+0.17 }_{-0.15 }$ & 3.12e-03 & 1.00e-04 & 0.69$^{+0.11 }_{-0.09 }$ & 1.06 & 223\ & 161.40 & – & 4.44$^{+0.19 }_{-0.17 }$ & 4.53e-03 & 1.20e-04 & 0.43$^{+0.08 }_{-0.08 }$ & 0.97 & 274\ A262 & 6.97 & 5.46 & 1.12$^{+0.02 }_{-0.02 }$ & 1.92e-03 & 1.20e-04 & 0.28$^{+0.04 }_{-0.03 }$ & 2.44 & 109\ & 13.74 & – & 1.66$^{+0.04 }_{-0.04 }$ & 1.27e-03 & 8.00e-05 & 1.12$^{+0.15 }_{-0.13 }$ & 1.79 & 129\ & 21.12 & – & 2.03$^{+0.07 }_{-0.06 }$ & 1.46e-03 & 1.00e-04 & 1.21$^{+0.19 }_{-0.14 }$ & 1.12 & 137\ & 30.48 & – & 2.25$^{+0.08 }_{-0.07 }$ & 1.86e-03 & 9.00e-05 & 1.07$^{+0.14 }_{-0.13 }$ & 1.13 & 154\ & 40.64 & – & 2.49$^{+0.09 }_{-0.09 }$ & 2.29e-03 & 1.10e-04 & 0.86$^{+0.11 }_{-0.11 }$ & 1.01 & 169\ & 59.76 & – & 2.32$^{+0.06 }_{-0.06 }$ & 4.93e-03 & 1.50e-04 & 0.51$^{+0.06 }_{-0.05 }$ & 1.38 & 223\ A496 & 15.57 & 4.80 & 2.37$^{+0.09 }_{-0.10 }$ & 4.86e-03 & 2.80e-04 & 0.95$^{+0.15 }_{-0.13 }$ & 1.39 & 138\ & 28.11 & – & 3.09$^{+0.14 }_{-0.14 }$ & 5.48e-03 & 2.60e-04 & 0.79$^{+0.14 }_{-0.12 }$ & 0.85 & 147\ & 41.78 & – & 3.82$^{+0.19 }_{-0.19 }$ & 5.42e-03 & 2.60e-04 & 0.95$^{+0.18 }_{-0.15 }$ & 1.04 & 156\ & 57.35 & – & 4.06$^{+0.23 }_{-0.21 }$ & 5.93e-03 & 2.50e-04 & 0.69$^{+0.14 }_{-0.13 }$ & 1.23 & 160\ & 78.24 & – & 4.82$^{+0.31 }_{-0.28 }$ & 6.57e-03 & 2.60e-04 & 0.62$^{+0.16 }_{-0.14 }$ 
& 1.08 & 172\ & 94.95 & – & 5.11$^{+0.42 }_{-0.37 }$ & 4.92e-03 & 2.40e-04 & 0.50$^{+0.17 }_{-0.18 }$ & 1.19 & 155\ A1795 & 20.86 & 1.17 & 3.56$^{+0.13 }_{-0.13 }$ & 4.47e-03 & 1.40e-04 & 0.62$^{+0.08 }_{-0.09 }$ & 1.06 & 186\ & 33.81 & – & 4.33$^{+0.19 }_{-0.18 }$ & 4.61e-03 & 1.30e-04 & 0.60$^{+0.10 }_{-0.09 }$ & 1.04 & 197\ & 46.04 & – & 4.66$^{+0.21 }_{-0.20 }$ & 4.51e-03 & 1.40e-04 & 0.72$^{+0.11 }_{-0.11 }$ & 1.11 & 199\ & 59.71 & – & 5.20$^{+0.28 }_{-0.26 }$ & 4.79e-03 & 1.20e-04 & 0.46$^{+0.10 }_{-0.09 }$ & 1.01 & 206\ & 74.82 & – & 5.34$^{+0.29 }_{-0.26 }$ & 4.92e-03 & 1.30e-04 & 0.47$^{+0.10 }_{-0.11 }$ & 0.86 & 210\ & 93.52 & – & 5.80$^{+0.33 }_{-0.30 }$ & 4.96e-03 & 1.30e-04 & 0.42$^{+0.11 }_{-0.09 }$ & 1.03 & 214\ & 114.38 & – & 5.89$^{+0.34 }_{-0.29 }$ & 4.92e-03 & 1.40e-04 & 0.55$^{+0.12 }_{-0.12 }$ & 1.19 & 216\ & 139.56 & – & 6.57$^{+0.42 }_{-0.38 }$ & 5.32e-03 & 1.30e-04 & 0.31$^{+0.11 }_{-0.11 }$ & 1.02 & 226\ & 171.22 & – & 6.39$^{+0.63 }_{-0.53 }$ & 3.85e-03 & 1.60e-04 & 0.50$^{+0.19 }_{-0.16 }$ & 1.11 & 231\ & 215.82 & – & 6.47$^{+0.41 }_{-0.38 }$ & 6.79e-03 & 1.50e-04 & 0.12$^{+0.09 }_{-0.08 }$ & 1.09 & 252\ A2029 & 9.52 & 3.15 & 4.14$^{+0.28 }_{-0.27 }$ & 1.28e-03 & 8.00e-05 & 1.32$^{+0.30 }_{-0.27 }$ & 1.31 & 124\ & 15.58 & – & 5.94$^{+0.58 }_{-0.48 }$ & 1.72e-03 & 9.00e-05 & 0.65$^{+0.22 }_{-0.21 }$ & 1.21 & 138\ & 20.78 & – & 6.04$^{+0.62 }_{-0.51 }$ & 1.36e-03 & 9.00e-05 & 1.26$^{+0.36 }_{-0.31 }$ & 1.04 & 129\ & 34.63 & – & 7.16$^{+0.43 }_{-0.39 }$ & 4.99e-03 & 1.40e-04 & 0.67$^{+0.14 }_{-0.14 }$ & 1.05 & 225\ & 48.48 & – & 7.07$^{+0.43 }_{-0.39 }$ & 5.02e-03 & 1.40e-04 & 0.59$^{+0.14 }_{-0.13 }$ & 0.97 & 226\ & 62.34 & – & 7.39$^{+0.51 }_{-0.46 }$ & 5.24e-03 & 1.30e-04 & 0.40$^{+0.11 }_{-0.11 }$ & 1.14 & 227\ & 77.92 & – & 7.73$^{+0.51 }_{-0.46 }$ & 5.20e-03 & 1.40e-04 & 0.59$^{+0.14 }_{-0.13 }$ & 0.93 & 226\ & 95.24 & – & 8.56$^{+0.68 }_{-0.58 }$ & 5.23e-03 & 1.50e-04 & 0.49$^{+0.16 }_{-0.14 }$ & 1.12 & 229\ & 115.15 & – & 
8.12$^{+0.71 }_{-0.61 }$ & 4.92e-03 & 1.40e-04 & 0.44$^{+0.15 }_{-0.13 }$ & 1.26 & 230\ & 138.53 & – & 8.67$^{+0.68 }_{-0.60 }$ & 5.63e-03 & 1.40e-04 & 0.36$^{+0.13 }_{-0.13 }$ & 1.05 & 235\ & 167.10 & – & 9.19$^{+0.79 }_{-0.66 }$ & 5.71e-03 & 1.50e-04 & 0.38$^{+0.14 }_{-0.14 }$ & 0.98 & 240\ & 202.60 & – & 8.74$^{+0.71 }_{-0.60 }$ & 5.85e-03 & 1.60e-04 & 0.44$^{+0.15 }_{-0.14 }$ & 1.12 & 244\ & 259.74 & – & 9.01$^{+0.65 }_{-0.57 }$ & 7.60e-03 & 1.80e-04 & 0.40$^{+0.14 }_{-0.12 }$ & 1.05 & 275\ A2052 & 13.90 & 2.85 & 1.67$^{+0.04 }_{-0.04 }$ & 2.42e-03 & 9.00e-05 & 0.49$^{+0.05 }_{-0.05 }$ & 2.59 & 150\ & 20.22 & – & 1.91$^{+0.05 }_{-0.05 }$ & 2.54e-03 & 9.00e-05 & 0.59$^{+0.06 }_{-0.06 }$ & 1.83 & 164\ & 26.96 & – & 2.77$^{+0.08 }_{-0.08 }$ & 2.24e-03 & 8.00e-05 & 0.97$^{+0.10 }_{-0.10 }$ & 1.30 & 181\ & 35.38 & – & 2.93$^{+0.09 }_{-0.09 }$ & 2.55e-03 & 8.00e-05 & 0.68$^{+0.08 }_{-0.07 }$ & 1.17 & 186\ & 46.75 & – & 3.23$^{+0.11 }_{-0.11 }$ & 2.59e-03 & 9.00e-05 & 0.72$^{+0.10 }_{-0.08 }$ & 1.12 & 197\ & 59.39 & – & 3.34$^{+0.12 }_{-0.11 }$ & 2.78e-03 & 8.00e-05 & 0.57$^{+0.08 }_{-0.08 }$ & 1.08 & 206\ & 73.71 & – & 3.51$^{+0.13 }_{-0.12 }$ & 2.94e-03 & 9.00e-05 & 0.50$^{+0.08 }_{-0.07 }$ & 1.09 & 209\ & 96.88 & – & 3.33$^{+0.10 }_{-0.11 }$ & 4.51e-03 & 1.10e-04 & 0.46$^{+0.06 }_{-0.05 }$ & 1.15 & 248\ Hydra A & 19.55 & 4.84 & 3.06$^{+0.11 }_{-0.10 }$ & 5.50e-03 & 1.50e-04 & 0.49$^{+0.06 }_{-0.06 }$ & 1.12 & 186\ & 33.59 & – & 3.17$^{+0.12 }_{-0.11 }$ & 5.38e-03 & 1.50e-04 & 0.37$^{+0.05 }_{-0.06 }$ & 1.28 & 182\ & 49.47 & – & 3.47$^{+0.14 }_{-0.13 }$ & 5.16e-03 & 1.40e-04 & 0.41$^{+0.07 }_{-0.06 }$ & 1.26 & 191\ & 72.69 & – & 3.59$^{+0.15 }_{-0.14 }$ & 5.38e-03 & 1.50e-04 & 0.35$^{+0.06 }_{-0.06 }$ & 1.17 & 195\ & 122.16 & – & 3.66$^{+0.14 }_{-0.13 }$ & 8.45e-03 & 1.90e-04 & 0.28$^{+0.05 }_{-0.05 }$ & 1.00 & 242\ PKS0745-191 & 31.75 & 43.39 & 4.08$^{+0.14 }_{-0.13 }$ & 1.17e-02 & 3.00e-04 & 0.50$^{+0.07 }_{-0.07 }$ & 1.24 & 261\ & 57.83 & – & 5.87$^{+0.28 
}_{-0.26 }$ & 1.12e-02 & 3.00e-04 & 0.40$^{+0.08 }_{-0.07 }$ & 0.99 & 276\ & 91.85 & – & 6.96$^{+0.37 }_{-0.34 }$ & 1.09e-02 & 2.00e-04 & 0.50$^{+0.09 }_{-0.09 }$ & 1.03 & 286\ & 147.42 & – & 7.47$^{+0.44 }_{-0.39 }$ & 1.20e-02 & 3.00e-04 & 0.37$^{+0.08 }_{-0.08 }$ & 1.14 & 293\ & 340.20 & – & 8.56$^{+0.46 }_{-0.41 }$ & 2.04e-02 & 3.00e-04 & 0.26$^{+0.07 }_{-0.07 }$ & 1.05 & 353\ [lrrrrrrrrrr]{} 2A0335+096 & 12.44 & 18.11 & 1.37$^{+0.06 }_{-0.04 }$ & 2.55e-03 & 2.50e-04 & 0.57$^{+0.09 }_{-0.08 }$ & 1.34 & 1970\ & 19.07 & – & 1.69$^{+0.12 }_{-0.12 }$ & 3.62e-03 & 2.70e-04 & – & – & –\ & 25.71 & – & 1.97$^{+0.10 }_{-0.08 }$ & 6.00e-03 & 3.50e-04 & 0.87$^{+0.10 }_{-0.09 }$ & – & –\ & 33.17 & – & 2.23$^{+0.13 }_{-0.11 }$ & 5.15e-03 & 3.00e-04 & – & – & –\ & 42.29 & – & 2.74$^{+0.15 }_{-0.14 }$ & 6.36e-03 & 2.60e-04 & 0.86$^{+0.07 }_{-0.07 }$ & – & –\ & 54.31 & – & 2.95$^{+0.18 }_{-0.16 }$ & 6.51e-03 & 2.50e-04 & – & – & –\ & 70.48 & – & 3.63$^{+0.31 }_{-0.30 }$ & 5.77e-03 & 2.30e-04 & – & – & –\ & 90.80 & – & 3.69$^{+0.23 }_{-0.22 }$ & 8.58e-03 & 2.30e-04 & 0.65$^{+0.05 }_{-0.05 }$ & – & –\ & 165.84 & – & 4.45$^{+0.12 }_{-0.11 }$ & 2.50e-02 & 4.00e-04 & – & – & –\ A133 & 23.24 & 1.58 & 2.07$^{+0.07 }_{-0.07 }$ & 1.44e-03 & 7.00e-05 & 1.18$^{+0.11 }_{-0.11 }$ & 1.10 & 860\ & 49.07 & – & 2.72$^{+0.12 }_{-0.12 }$ & 1.99e-03 & 9.00e-05 & – & – & –\ & 89.74 & – & 3.69$^{+0.29 }_{-0.26 }$ & 2.56e-03 & 8.00e-05 & 0.55$^{+0.07 }_{-0.06 }$ & – & –\ & 161.40 & – & 4.53$^{+0.18 }_{-0.17 }$ & 6.34e-03 & 1.40e-04 & – & – & –\ A262 & 6.97 & 5.46 & 0.95$^{+0.02 }_{-0.03 }$ & 1.21e-03 & 1.60e-04 & 0.26$^{+0.06 }_{-0.04 }$ & 1.28 & 923\ & 13.74 & – & 1.46$^{+0.06 }_{-0.05 }$ & 5.94e-04 & 8.50e-05 & 1.59$^{+0.25 }_{-0.21 }$ & – & –\ & 21.12 & – & 1.90$^{+0.14 }_{-0.12 }$ & 9.12e-04 & 9.00e-05 & – & – & –\ & 30.48 & – & 2.02$^{+0.17 }_{-0.14 }$ & 1.04e-03 & 1.17e-04 & – & – & –\ & 40.64 & – & 2.65$^{+0.34 }_{-0.30 }$ & 9.75e-04 & 2.39e-04 & 1.86$^{+1.05 }_{-0.39 }$ & – & –\ & 59.76 & – & 
2.35$^{+0.08 }_{-0.06 }$ & 8.46e-03 & 2.80e-04 & 0.53$^{+0.07 }_{-0.05 }$ & – & –\ A496 & 15.57 & 4.80 & 2.00$^{+0.14 }_{-0.12 }$ & 2.64e-03 & 3.10e-04 & 0.98$^{+0.30 }_{-0.23 }$ & 1.12 & 928\ & 28.11 & – & 2.54$^{+0.28 }_{-0.25 }$ & 3.95e-03 & 4.40e-04 & 0.69$^{+0.28 }_{-0.21 }$ & – & –\ & 41.78 & – & 3.55$^{+0.45 }_{-0.42 }$ & 3.90e-03 & 4.90e-04 & 1.19$^{+0.51 }_{-0.40 }$ & – & –\ & 57.35 & – & 3.61$^{+0.43 }_{-0.37 }$ & 5.85e-03 & 5.50e-04 & 0.76$^{+0.32 }_{-0.25 }$ & – & –\ & 78.24 & – & 4.37$^{+0.92 }_{-0.65 }$ & 4.91e-03 & 6.40e-04 & 0.80$^{+0.53 }_{-0.40 }$ & – & –\ & 94.95 & – & 5.19$^{+0.47 }_{-0.41 }$ & 1.20e-02 & 6.00e-04 & 0.48$^{+0.18 }_{-0.18 }$ & – & –\ A1795 & 20.86 & 1.17 & 2.84$^{+0.25 }_{-0.23 }$ & 1.96e-03 & 9.00e-05 & 0.69$^{+0.09 }_{-0.09 }$ & 1.06 & 2144\ & 33.81 & – & 4.11$^{+0.66 }_{-0.54 }$ & 2.63e-03 & 1.30e-04 & – & – & –\ & 46.04 & – & 3.70$^{+0.55 }_{-0.58 }$ & 3.70e-03 & 1.60e-04 & – & – & –\ & 59.71 & – & 5.69$^{+1.64 }_{-1.25 }$ & 3.68e-03 & 1.70e-04 & 0.51$^{+0.09 }_{-0.10 }$ & – & –\ & 74.82 & – & 4.49$^{+0.88 }_{-0.60 }$ & 4.90e-03 & 2.20e-04 & – & – & –\ & 93.52 & – & 6.07$^{+1.42 }_{-1.15 }$ & 4.29e-03 & 2.00e-04 & – & – & –\ & 114.38 & – & 4.93$^{+1.06 }_{-0.70 }$ & 4.82e-03 & 2.00e-04 & 0.42$^{+0.07 }_{-0.07 }$ & – & –\ & 139.56 & – & 6.88$^{+1.05 }_{-0.85 }$ & 7.44e-03 & 2.10e-04 & – & – & –\ & 171.22 & – & 5.79$^{+1.62 }_{-1.14 }$ & 4.70e-03 & 2.30e-04 & – & – & –\ & 215.82 & – & 6.98$^{+1.20 }_{-0.96 }$ & 7.95e-03 & 2.40e-04 & – & – & –\ A2029 & 9.52 & 3.15 & 2.51$^{+0.35 }_{-0.28 }$ & 5.70e-04 & 4.70e-05 & 0.75$^{+0.08 }_{-0.10 }$ & 1.10 & 2756\ & 15.58 & – & 6.35$^{+2.48 }_{-0.77 }$ & 9.85e-04 & 7.50e-05 & – & – & –\ & 20.78 & – & 3.07$^{+1.17 }_{-0.68 }$ & 6.48e-04 & 7.80e-05 & – & – & –\ & 34.63 & – & 7.39$^{+1.37 }_{-0.99 }$ & 3.24e-03 & 1.30e-04 & – & – & –\ & 48.48 & – & 6.51$^{+1.24 }_{-1.00 }$ & 3.19e-03 & 1.60e-04 & – & – & –\ & 62.34 & – & 6.96$^{+1.58 }_{-1.11 }$ & 4.22e-03 & 2.10e-04 & 0.49$^{+0.15 }_{-0.16 
}$ & – & –\ & 77.92 & – & 6.52$^{+1.60 }_{-1.10 }$ & 4.51e-03 & 2.20e-04 & – & – & –\ & 95.24 & – & 9.71$^{+3.21 }_{-2.01 }$ & 5.32e-03 & 2.90e-04 & 0.65$^{+0.25 }_{-0.12 }$ & – & –\ & 115.15 & – & 6.44$^{+2.54 }_{-1.32 }$ & 3.72e-03 & 2.80e-04 & – & – & –\ & 138.53 & – & 8.41$^{+2.95 }_{-1.71 }$ & 5.77e-03 & 3.20e-04 & 0.30$^{+0.17 }_{-0.20 }$ & – & –\ & 167.10 & – & 9.95$^{+3.26 }_{-2.07 }$ & 6.34e-03 & 2.80e-04 & – & – & –\ & 202.60 & – & 8.43$^{+1.38 }_{-1.74 }$ & 5.79e-03 & 4.40e-04 & 0.01$^{+0.39 }_{-0.01 }$ & – & –\ & 259.74 & – & 9.02$^{+0.64 }_{-0.48 }$ & 1.55e-02 & 3.00e-04 & 0.75$^{+0.08 }_{-0.10 }$ & – & –\ A2052 & 13.90 & 2.85 & 1.13$^{+0.10 }_{-0.08 }$ & 3.59e-04 & 7.20e-05 & 0.49$^{+0.09 }_{-0.08 }$ & 1.21 & 1545\ & 20.22 & – & 1.40$^{+0.06 }_{-0.06 }$ & 1.80e-03 & 1.30e-04 & – & – & –\ & 26.96 & – & 2.54$^{+0.15 }_{-0.14 }$ & 1.95e-03 & 1.00e-04 & 1.02$^{+0.12 }_{-0.11 }$ & – & –\ & 35.38 & – & 2.82$^{+0.17 }_{-0.15 }$ & 2.28e-03 & 1.20e-04 & – & – & –\ & 46.75 & – & 3.01$^{+0.28 }_{-0.25 }$ & 2.04e-03 & 1.10e-04 & 0.74$^{+0.12 }_{-0.10 }$ & – & –\ & 59.39 & – & 3.13$^{+0.29 }_{-0.27 }$ & 2.43e-03 & 1.30e-04 & – & – & –\ & 73.71 & – & 4.01$^{+0.69 }_{-0.53 }$ & 1.99e-03 & 1.00e-04 & 0.47$^{+0.05 }_{-0.04 }$ & – & –\ & 96.88 & – & 3.33$^{+0.11 }_{-0.10 }$ & 9.19e-03 & 1.90e-04 & – & – & –\ Hydra A & 19.55 & 4.84 & 2.92$^{+0.19 }_{-0.18 }$ & 3.17e-03 & 1.20e-04 & 0.48$^{+0.07 }_{-0.06 }$ & 1.16 & 999\ & 33.59 & – & 3.02$^{+0.26 }_{-0.23 }$ & 3.87e-03 & 1.50e-04 & – & – & –\ & 49.47 & – & 3.32$^{+0.28 }_{-0.26 }$ & 5.32e-03 & 1.50e-04 & 0.33$^{+0.04 }_{-0.03 }$ & – & –\ & 72.69 & – & 3.46$^{+0.33 }_{-0.28 }$ & 4.93e-03 & 1.50e-04 & – & – & –\ & 122.16 & – & 3.71$^{+0.14 }_{-0.13 }$ & 1.26e-02 & 2.00e-04 & – & – & –\ PKS0745-191 & 31.75 & 43.39 & 3.34$^{+0.19 }_{-0.18 }$ & 7.42e-03 & 2.80e-04 & 0.51$^{+0.07 }_{-0.08 }$ & 1.09 & 1472\ & 57.83 & – & 5.31$^{+0.47 }_{-0.41 }$ & 9.03e-03 & 2.80e-04 & – & – & –\ & 91.85 & – & 6.76$^{+0.68 }_{-0.60 }$ & 
1.07e-02 & 3.00e-04 & 0.35$^{+0.05 }_{-0.04 }$ & – & –\ & 147.42 & – & 7.12$^{+0.56 }_{-0.51 }$ & 1.38e-02 & 3.00e-04 & – & – & –\ & 340.20 & – & 8.57$^{+0.45 }_{-0.41 }$ & 2.52e-02 & 3.00e-04 & – & – & –\ [lrrrrrrrrrrr]{} 2A0335+096 & 12.44 & 18.11 & 2.04$^{+0.36 }_{-0.24 }$ & 1.48e-03 & 3.00e-04 & 1.06$^{+0.10 }_{-0.09 }$ & 6.02e-04 & 1.09e-03 & 0.90$^{+0.20 }_{-0.16 }$ & 1.32 & 1968\ & 19.07 & – & 1.70$^{+0.09 }_{-0.08 }$ & 2.87e-03 & 3.40e-04 & & & & – & – & –\ & 25.71 & – & 1.97$^{+0.08 }_{-0.08 }$ & 6.18e-03 & 3.80e-04 & & & & 0.81$^{+0.05 }_{-0.05 }$ & – & –\ & 33.17 & – & 2.21$^{+0.12 }_{-0.11 }$ & 5.27e-03 & 2.90e-04 & & & & – & – & –\ & 42.29 & – & 2.73$^{+0.15 }_{-0.14 }$ & 6.34e-03 & 2.70e-04 & & & & 0.86$^{+0.04 }_{-0.07 }$ & – & –\ & 54.31 & – & 2.95$^{+0.17 }_{-0.16 }$ & 6.50e-03 & 2.50e-04 & & & & – & – & –\ & 70.48 & – & 3.62$^{+0.31 }_{-0.29 }$ & 5.76e-03 & 2.30e-04 & & & & – & – & –\ & 90.80 & – & 3.69$^{+0.23 }_{-0.22 }$ & 8.58e-03 & 2.30e-04 & & & & 0.65$^{+0.05 }_{-0.05 }$ & – & –\ & 165.84 & – & 4.45$^{+0.11 }_{-0.11 }$ & 2.50e-02 & 4.00e-04 & & & & – & – & –\ A262 & 6.97 & 5.46 & 1.64$^{+0.22 }_{-0.17 }$ & 2.66e-04 & 9.70e-05 & 0.80$^{+0.03 }_{-0.03 }$ & 1.58e-04 & 1.80e-04 & 1.52$^{+1.34 }_{-0.57 }$ & 1.22 & 921\ & 13.74 & – & 1.43$^{+0.04 }_{-0.04 }$ & 5.80e-04 & 7.30e-05 & & & & 1.51$^{+0.13 }_{-0.18 }$ & – & –\ & 21.12 & – & 1.90$^{+0.05 }_{-0.11 }$ & 9.42e-04 & 7.10e-05 & & & & – & – & –\ & 30.48 & – & 2.01$^{+0.14 }_{-0.13 }$ & 1.06e-03 & 9.70e-05 & & & & – & – & –\ & 40.64 & – & 2.65$^{+0.29 }_{-0.26 }$ & 9.51e-04 & 1.74e-04 & & & & 1.91$^{+0.91 }_{-0.27 }$ & – & –\ & 59.76 & – & 2.35$^{+0.07 }_{-0.05 }$ & 8.46e-03 & 2.50e-04 & & & & 0.53$^{+0.06 }_{-0.05 }$ & – & –\ A2052 & 13.90 & 2.85 & 1.59$^{+0.54 }_{-0.27 }$ & 2.50e-04 & 7.80e-05 & 0.85$^{+0.12 }_{-0.07 }$ & 1.05e-04 & 1.90e-04 & 0.62$^{+0.14 }_{-0.05 }$ & 1.20 & 1543\ & 20.22 & – & 1.40$^{+0.05 }_{-0.04 }$ & 1.56e-03 & 1.30e-04 & & & & – & – & –\ & 26.96 & – & 2.56$^{+0.13 
}_{-0.14 }$ & 1.98e-03 & 8.00e-05 & & & & 0.99$^{+0.06 }_{-0.11 }$ & – & –\ & 35.38 & – & 2.80$^{+0.17 }_{-0.15 }$ & 2.30e-03 & 5.00e-05 & & & & – & – & –\ & 46.75 & – & 3.01$^{+0.29 }_{-0.24 }$ & 2.03e-03 & 1.20e-04 & & & & 0.75$^{+0.11 }_{-0.11 }$ & – & –\ & 59.39 & – & 3.13$^{+0.29 }_{-0.26 }$ & 2.43e-03 & 1.20e-04 & & & & – & – & –\ & 73.71 & – & 4.01$^{+0.67 }_{-0.53 }$ & 1.99e-03 & 1.00e-04 & & & & 0.47$^{+0.05 }_{-0.04 }$ & – & –\ & 96.88 & – & 3.33$^{+0.11 }_{-0.10 }$ & 9.19e-03 & 1.90e-04 & & & & – & – & –\ [lrlccccc]{} 2A0335+096 & 9 & $4.0\pm0.2$ & $0.41\pm0.03$ & 3.5 & 0.75\ A133 & 4 & $4.1\pm0.3$ & $0.32\pm0.04$ & 1.0 & 0.31\ A262 & 6 & $3.1\pm0.2$ & $0.34\pm0.03$ & 2.0 & 0.57\ A496 & 6 & $5.1\pm0.7$ & $0.37\pm0.08$ & 0.8 & 0.86\ A1795 & 10 & $5.5\pm0.7$ & $0.30\pm0.09$ & 1.1 & 0.99\ A2029 & 13 & $7.2\pm0.6$ & $0.31\pm0.06$ & 3.3 & 0.97\ A2052 & 8 & $3.8\pm0.2$ & $0.47\pm0.05$ & 15.4 & 0.009\ Hydra A & 5 & $3.7\pm0.3$ & $0.11\pm0.06$ & 0.2 & 0.92\ PKS0745-191 & 5 & $6.5\pm0.4$ & $0.34\pm0.05$ & 0.9 & 0.64\ [lrccccccc]{} 2A0335& 38 &0.13&1&$5.438\pm0.34$&$135.4\pm2.5$&$1.41\pm0.026$& 61& 1e-3\ &&&1&$=0$&$129.6\pm2.3$&$1.14\pm0.015$& 275& 0\ &&&2&$7.240\pm0.33$&$134.7\pm2.5$&$1.49\pm0.028$& 74& 3e-05\ &&&2&$=0$&$126.1\pm2.3$&$1.10\pm0.015$& 468& 0\ A133& 25 &0.13&1&$11.82\pm 1.1$&$149.1\pm3.8$&$1.31\pm0.050$& 27& 0.09\ &&&1&$=0$&$150.1\pm3.5$&$0.967\pm0.021$& 112& 2e-14\ &&&2&$15.70\pm 1.0$&$145.1\pm3.8$&$1.39\pm0.054$& 33& 0.02\ &&&2&$=0$&$144.8\pm3.5$&$0.906\pm0.020$& 185& 0\ A1795& 31 &0.18&1&$14.57\pm 1.7$&$120.8\pm3.8$&$1.18\pm0.063$& 5.8& 0.99\ &&&1&$=0$&$132.3\pm3.3$&$0.842\pm0.025$& 51& 0.003\ &&&2&$19.97\pm 1.6$&$114.2\pm3.8$&$1.29\pm0.069$& 9.3& 0.99\ &&&2&$=0$&$129.2\pm3.3$&$0.784\pm0.024$& 100& 2e-10\ A2029& 28 &0.20&1&$6.347\pm 3.5$&$162.4\pm6.2$&$0.891\pm0.068$& 5.0& 0.99\ &&&1&$=0$&$168.6\pm5.0$&$0.793\pm0.034$& 7.5& 0.99\ &&&2&$10.18\pm 3.3$&$158.1\pm6.2$&$0.928\pm0.070$& 3.9& 0.99\ &&&2&$=0$&$167.8\pm5.0$&$0.764\pm0.033$& 10& 0.99\ 
A2052& 20 &0.070&1&$9.221\pm0.82$&$188.0\pm9.7$&$1.56\pm0.066$& 70& 2e-09\ &&&1&$=0$&$153.5\pm5.9$&$1.10\pm0.029$& 176& 0\ &&&2&$11.73\pm0.78$&$197.9\pm11.$&$1.70\pm0.075$& 87& 1e-14\ &&&2&$=0$&$147.9\pm5.8$&$1.07\pm0.029$& 273& 0\ A262& 31 &0.050&1&$1.022\pm0.38$&$230.0\pm8.8$&$1.11\pm0.028$& 154& 0\ &&&1&$=0$&$218.2\pm6.6$&$1.05\pm0.013$& 162& 0\ &&&2&$3.606\pm0.34$&$241.6\pm10.$&$1.20\pm0.031$& 190& 0\ &&&2&$=0$&$195.9\pm5.9$&$0.979\pm0.013$& 290& 0\ A496& 26 &0.080&1&$7.109\pm 1.0$&$175.0\pm9.6$&$1.20\pm0.065$& 5.8& 0.99\ &&&1&$=0$&$150.8\pm6.6$&$0.902\pm0.029$& 39& 0.01\ &&&2&$11.32\pm0.97$&$177.2\pm10.$&$1.31\pm0.073$& 8.2& 0.99\ &&&2&$=0$&$133.0\pm5.8$&$0.788\pm0.027$& 90& 3e-10\ Hydra A& 20 &0.10&1&$13.43\pm 1.1$&$98.74\pm3.9$&$1.22\pm0.078$& 2.2& 0.99\ &&&1&$=0$&$92.18\pm2.8$&$0.684\pm0.024$& 72& 3e-09\ &&&2&$14.06\pm 1.1$&$98.44\pm4.0$&$1.24\pm0.080$& 2.2& 0.99\ &&&2&$=0$&$91.23\pm2.8$&$0.671\pm0.023$& 79& 2e-10\ PKS0745& 32 &0.30&1&$5.908\pm 1.1$&$111.3\pm3.2$&$1.18\pm0.037$& 13& 0.98\ &&&1&$=0$&$121.7\pm2.3$&$1.05\pm0.020$& 35& 0.16\ &&&2&$12.37\pm 1.1$&$101.9\pm3.2$&$1.27\pm0.041$& 20& 0.78\ &&&2&$=0$&$123.5\pm2.3$&$0.978\pm0.019$& 119& 3e-13\ [lrccccccc]{} 2A0335& 76 &0.13&1&$4.680\pm0.28$&$131.9\pm2.4$&$1.39\pm0.023$& 95& 0.02\ &&&1&$=0$&$123.9\pm2.2$&$1.14\pm0.014$& 330& 0\ &&&2&$6.562\pm0.27$&$131.7\pm2.5$&$1.48\pm0.025$& 115& 6e-3\ &&&2&$=0$&$119.2\pm2.2$&$1.09\pm0.014$& 593& 0\ A133& 49 &0.13&1&$11.19\pm0.99$&$149.7\pm3.7$&$1.32\pm0.046$& 37& 0.72\ &&&1&$=0$&$146.8\pm3.3$&$0.974\pm0.019$& 129& 3.51186e-10\ &&&2&$15.37\pm0.92$&$145.7\pm3.7$&$1.42\pm0.050$& 43& 0.47\ &&&2&$=0$&$139.8\pm3.2$&$0.900\pm0.018$& 219& 0\ A262& 61 &0.050&1&$1.174\pm0.30$&$240.0\pm9.6$&$1.15\pm0.027$& 141& 1e-09\ &&&1&$=0$&$223.2\pm7.2$&$1.08\pm0.014$& 156& 3e-11\ &&&2&$3.619\pm0.28$&$249.4\pm10.$&$1.24\pm0.030$& 163& 1e-14\ &&&2&$=0$&$193.6\pm6.3$&$0.990\pm0.013$& 304& 0\ A496& 51 &0.080&1&$6.771\pm 1.0$&$170.2\pm8.6$&$1.19\pm0.062$& 19& 0.99\ 
&&&1&$=0$&$147.8\pm5.8$&$0.912\pm0.026$& 48& 0.40\ &&&2&$11.44\pm0.93$&$172.4\pm9.6$&$1.32\pm0.070$& 23& 0.99\ &&&2&$=0$&$129.9\pm5.1$&$0.793\pm0.023$& 105& 2e-06\ A1795& 61 &0.18&1&$13.25\pm 1.7$&$121.5\pm3.1$&$1.17\pm0.053$& 15& 1.00\ &&&1&$=0$&$132.6\pm2.6$&$0.876\pm0.020$& 56& 0.51\ &&&2&$18.95\pm 1.5$&$114.7\pm3.1$&$1.28\pm0.058$& 20& 0.99\ &&&2&$=0$&$130.1\pm2.6$&$0.824\pm0.019$& 111& 2e-05\ A2029& 56 &0.20&1&$3.026\pm 2.1$&$164.9\pm4.3$&$0.863\pm0.044$& 13& 1.0\ &&&1&$=0$&$167.7\pm3.7$&$0.814\pm0.025$& 15& 1.0\ &&&2&$9.386\pm 1.9$&$157.9\pm4.2$&$0.930\pm0.047$& 10& 1.0\ &&&2&$=0$&$166.2\pm3.7$&$0.766\pm0.024$& 27& 0.99\ A2052& 41 &0.070&1&$7.270\pm0.81$&$179.3\pm7.3$&$1.48\pm0.054$& 116& 9e-11\ &&&1&$=0$&$158.4\pm5.1$&$1.16\pm0.024$& 190& 0\ &&&2&$9.648\pm0.77$&$186.2\pm8.2$&$1.60\pm0.060$& 142& 7e-15\ &&&2&$=0$&$155.0\pm5.0$&$1.14\pm0.024$& 280& 0\ Hydra A& 40 &0.10&1&$13.38\pm 1.0$&$98.58\pm3.4$&$1.23\pm0.068$& 6.7& 1.0\ &&&1&$=0$&$92.28\pm2.4$&$0.705\pm0.020$& 91& 1e-06\ &&&2&$14.08\pm 1.0$&$98.25\pm3.4$&$1.26\pm0.070$& 6.7& 1.0\ &&&2&$=0$&$91.28\pm2.4$&$0.691\pm0.020$& 101& 4e-08\ PKS0745& 65 &0.30&1&$4.831\pm0.78$&$109.4\pm2.3$&$1.17\pm0.027$& 33& 0.99\ &&&1&$=0$&$117.4\pm1.8$&$1.06\pm0.017$& 69& 0.21\ &&&2&$11.33\pm0.74$&$100.5\pm2.3$&$1.25\pm0.030$& 50& 0.79\ &&&2&$=0$&$118.1\pm1.9$&$0.962\pm0.015$& 241& 0\ [lrccccccc]{} 2A0335& 76 &0.13&3&$2.673\pm0.19$&$123.7\pm2.2$&$1.27\pm0.019$& 114& 7e-3\ &&&3&$=0$&$115.7\pm2.0$&$1.09\pm0.013$& 270& 0\ A133& 49 &0.13&3&$6.949\pm0.79$&$152.8\pm3.1$&$1.15\pm0.031$& 44& 0.41\ &&&3&$=0$&$150.0\pm2.9$&$0.955\pm0.013$& 105& 8e-07\ A262& 61 &0.050&3&$1.214\pm0.27$&$206.4\pm8.2$&$1.08\pm0.024$& 82& 0.01\ &&&3&$=0$&$190.1\pm6.4$&$1.00\pm0.013$& 101& 3e-3\ A496& 51 &0.080&3&$3.755\pm0.97$&$159.1\pm7.9$&$1.05\pm0.053$& 17& 0.99\ &&&3&$=0$&$145.3\pm5.9$&$0.891\pm0.023$& 28& 0.98\ A1795& 61 &0.18&3&$11.75\pm 1.7$&$116.1\pm4.2$&$1.19\pm0.069$& 7.9& 1.0\ &&&3&$=0$&$127.2\pm3.6$&$0.872\pm0.023$& 36& 0.98\ A2029& 56 
&0.20&3&$4.783\pm 1.2$&$144.7\pm5.5$&$1.03\pm0.059$& 5.5& 1.0\ &&&3&$=0$&$148.9\pm5.3$&$0.867\pm0.029$& 16& 0.99\ A2052& 46 &0.080&3&$3.351\pm0.53$&$141.8\pm6.0$&$1.33\pm0.043$& 58& 0.03\ &&&3&$=0$&$130.7\pm5.1$&$1.15\pm0.027$& 93& 9e-06\ Hydra A& 40 &0.10&3&$10.90\pm 1.0$&$99.18\pm2.8$&$1.13\pm0.058$& 6.4& 1.0\ &&&3&$=0$&$96.67\pm2.2$&$0.736\pm0.018$& 71& 4e-3\ PKS0745& 65 &0.30&3&$5.053\pm0.57$&$110.5\pm1.9$&$1.18\pm0.022$& 38& 0.98\ &&&3&$=0$&$119.6\pm1.6$&$1.05\pm0.013$& 106& 3e-3\ [^1]: D. Horner’s catalog and Ph.D. thesis are on-line at http://lheawww.gsfc.nasa.gov/horner/thesis.html. [^2]: http://heasarc.gsfc.nasa.gov/W3Browse/chandra [^3]: Available at http://hea-www.harvard.edu/$\sim$maxim/axaf/acisbg/
--- abstract: | We implemented a GPU based parallel code to perform Monte Carlo simulations of the two dimensional q-state Potts model. The algorithm is based on a checkerboard update scheme and assigns independent random number generators to each thread. The implementation makes it possible to simulate systems of up to $\sim 10^9$ spins with an average time per spin flip of $0.147\,$ns on the fastest GPU card tested, representing a speedup of up to 155x compared with an optimized serial code running on a high-end CPU. The possibility of performing high speed simulations at large enough system sizes allowed us to provide positive numerical evidence for the existence of metastability in very large systems based on Binder’s criterion, namely, on whether or not specific heat singularities occur at spinodal temperatures different from the transition temperature. address: - 'CONICET, Centro Atómico Bariloche, 8400 San Carlos de Bariloche, Río Negro, Argentina' - 'Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba, Ciudad Universitaria, 5000 Córdoba, Argentina' - 'Instituto de Física Enrique Gaviola (IFEG-CONICET), Ciudad Universitaria, 5000 Córdoba, Argentina' author: - 'Ezequiel E. Ferrero' - Juan Pablo De Francesco - Nicolás Wolovick - 'Sergio A. Cannas' title: 'q-state Potts model metastability study using optimized GPU-based Monte Carlo algorithms' --- Monte Carlo, GPU, CUDA, Potts model, Metastability Introduction {#Intro} ============ The tremendous advances allowed by the usage of numerical simulations in the last decades have promoted these techniques to the status of indispensable tools in modern Statistical Mechanics research. Notwithstanding, many important theoretical problems in the field still remain difficult to handle due to limitations in the available computational capabilities. 
Among many others, typical issues that challenge the numerical treatment concern systems with slow dynamics (i.e., dynamical processes that involve very different time scales) and/or strong finite size effects, which require fast simulations of a very large number of particles. Some typical examples we may cite are spin glass transitions [@Fisher-Hertz-book1993], glassy behavior [@Ko2003; @Binder-Kob-book2005] and grain growth [@Cu2010]. In such kinds of problems the state of the art is usually advanced by novel numerical approaches or extensive computer simulations. In this sense, the advent of massive parallel computing continuously opens new possibilities but, at the same time, creates a demand for new, improved algorithms. In particular, the usage of GPU cards (short for Graphics Processing Units) as parallel processing devices is emerging as a powerful tool for numerical simulations of Statistical Mechanics systems [@PrViPaSc2009; @BlViPr2010; @BePaPa2011; @HaLePl2010; @We2010; @We2011], as well as in other areas of physics [@HeSiBeGuTi2010; @Ti2010; @ClBaBaBrRe2010]. These GPUs have a Toolkit that abstracts the end-user from many low-level implementation details, yet all the typical problems of concurrency exist, and they are magnified by the massive number of (virtual) threads the device is capable of handling. An extremely fine-grained concurrency is possible and advisable thanks to the Single Instruction Multiple Thread (SIMT) model. Therefore, any non-trivially independent problem requires correct concurrency control (synchronization), and the lack of it hinders correctness in a much more dramatic way than on current 4- or 8-way multicore CPU systems. The other challenge, apart from correctness, is performance, and here is where the practice of algorithm design excels. 
Taking into account the internal memory structure, the memory/computation ratio, the division of threads into blocks and the size of the threads’ internal state can boost an algorithm’s performance tenfold relative to a trivial implementation [@Ryoo08]. It is also customary to give an approximation of the speedup obtained from a CPU to a GPU implementation in terms of “$N$x”, even though, as we discuss later, this number will always depend on the corresponding efforts devoted to optimally programming for each architecture. In this work we focus on GPU based Statistical Mechanics simulations of lattice spin systems. In particular, we study the metastability problem in the ferromagnetic $q$-state Potts model [@Wu1982] in two dimensions when $q > 4$. While this phenomenon is clearly observed in finite size systems, its persistence in the thermodynamic limit is still an unsolved problem and a subject of debate [@Bi1981; @MeMo2000; @PeIbLo2008; @BaBeDu2008; @LoFeGrCa2009]. In an earlier work, Binder proposed a numerical criterion to determine whether metastability remains in the thermodynamic limit or not, based on the scaling properties of the average energy in the vicinity of the transition temperature [@Bi1981]. However, the narrow range of temperature values of the metastable region requires high precision calculations for the criterion to work. Hence, to reduce finite size bias and statistical errors down to an appropriate level, large enough system sizes are needed. The computational capabilities required to carry out such calculations in a reasonable time were unavailable until recently. We developed an optimized algorithm to perform Monte Carlo numerical simulations of the $q$-state Potts model on GPU cards. 
This algorithm allowed us to simulate systems of up to $N=32768\times32768 \sim 1.073 \times 10^9$ spins with a lower-bound time of 0.147ns per spin flip using an NVIDIA GTX 480 Fermi card and, in terms of speedup, we obtained 155x with respect to an optimized sequential CPU version running on an Intel Core 2 Duo E8400 at 3.0GHz. What is remarkable about the speedup is that it allowed us to explore bigger systems, simulate more iterations and sample parameters more finely, all at a relatively small cost in terms of time, hardware and coding effort. With this extremely well performing algorithm we obtained positive numerical evidence of the persistence of metastability in the thermodynamic limit for $q>4$, according to Binder’s criterion. The paper is structured as follows. In Section \[model\] we briefly review the main properties of the Potts model and the particular physical problem we are interested in. In Section \[algorithm\] we introduce the simulation algorithm and in Section \[validate\] we compare the predictions of our numerical simulations against some known equilibrium properties of the model to validate the code. In Section \[performance\] we check the performance of the code. In Section \[metastability\] we present our numerical results concerning the metastability problem. Some discussions and conclusions are presented in Section \[discussion\]. The q-state Potts model {#model} ======================= The model --------- The $q$-state Potts model [@Wu1982] without external fields is defined by the Hamiltonian $$H = - J \sum_{<i,j>} \delta (s_i,s_j) \label{Hamiltonian}$$ where $s_i=1,2,\ldots,q$, $\delta (s_i,s_j)$ is the Kronecker delta and the sum runs over all nearest-neighbor pairs of spins in a Bravais lattice with $N$ sites. Being a generalization of the Ising model ($q=2$), this model displays a richer behavior than the former. 
One of the main points of interest is that the two-dimensional ferromagnetic version ($J>0$) exhibits a first order phase transition at some finite temperature when $q>4$, while for $q\leq 4$ the transition is continuous [@Wu1982]. Hence, it has become a paradigmatic model in the study of phase transitions and their associated dynamics, like, for instance, domain growth kinetics [@ViGr1987; @GrAnSr1988; @SiMa1995b; @FeCa2007; @LoArCuSi2010] and nucleation as an equilibration mechanism [@MeMo2000; @Ru2002; @BaGuTrTrHu2010]. Some equilibrium properties of the two-dimensional model are known exactly, which allows testing of numerical algorithms. We list here some of them that are used for comparison with the numerical results in the present work. For instance, the transition temperature for the square lattice in the thermodynamic limit is given by [@Ba1973] $$\label{Tc} \frac{k_B T_c}{J} = \frac{1}{\ln (1+\sqrt{q})}$$ where $k_B$ is the Boltzmann constant. Hereafter we will choose $k_B/J=1$. Considering the energy per spin $e=\langle H \rangle /N$, in the thermodynamic limit the latent heat for $q>4$ is [@Ba1973] $$\label{Ujump} e_d - e_o = 2 \left(1+\frac{1}{\sqrt{q}} \right)\, \tanh\frac{\Theta}{2}\,\prod_{n=1}^\infty (\tanh\, n\Theta)^2$$ where $\Theta={\rm arccosh}\left(\sqrt{q}/2\right)$ and $$\begin{aligned} \label{Ucpm} e_d= \lim_{N\to\infty} \frac{1}{N} \lim_{T\to T_c^+} \langle H \rangle,\\ e_o= \lim_{N\to\infty} \frac{1}{N} \lim_{T\to T_c^-} \langle H \rangle.\end{aligned}$$ Also $$\label{Uadd} e_d + e_o = -2(1+1/\sqrt{q})$$ from which the individual values of $e_d$ and $e_o$ can be obtained [@KiMiSh1954]. The order parameter is defined as $$\label{magnetization} m = \frac{q\,(N_{max}/N) - 1}{q-1}$$ where $N_{max} = \max(N_1, N_2,\ldots,N_q)$ and $N_i$ is the number of spins in state $i$. 
At the transition the jump in the order parameter (for $q>4$) is given by [@Ba1982] $$\Delta m = 1- q^{-1} - 3q^{-2} - 9q^{-3} - 27q^{-4} - \ldots \label{magne-exact}$$ Metastability ------------- The problem of metastability in the infinite size $q$-state Potts model (for $q>4$) is a long-standing problem in statistical mechanics  [@Bi1981; @VeBeHe2003; @FeCa2007; @IbLoPe2007; @PeIbLo2008; @LoFeGrCa2009]. It has also held the attention of the Quantum Chromodynamics (QCD) community for many years [@Me1996; @KaSt2000; @VeBeHe2003; @BaBeDu2008; @BoEl2010], because it has some characteristics in common with the deconfining (temperature-driven) phase transition in heavy quarks. Metastability is a verified fact in finite systems. It is known [@MeMo2000; @FeCa2007; @PeIbLo2008] that below but close to $T_c$ the system quickly relaxes to a disordered (paramagnetic) metastable state, with a lifetime that diverges as the quench temperature $T$ approaches $T_c$ (see, for example, Fig. 4 in Ref. [@LoFeGrCa2009]). This state is indistinguishable from an equilibrium one in the sense of local dynamics, namely, two-time correlations depend only on the difference between the times, while one-time averages are stationary [@PeIbLo2008]. Nevertheless, the existence of metastability in the thermodynamic limit is still an open problem [@PeIbLo2008]. In Ref. [@Bi1981] Binder studied the static and dynamic critical behavior of the model (\[Hamiltonian\]) for $q=3,4,5,6$. Using standard Monte Carlo procedures, he obtained good agreement with exact results for the energy and free energy at the critical point, and critical exponent estimates for $q=3$ in agreement with high-temperature series extrapolations and real-space renormalization-group methods. When analyzing the $q=5$ and $6$ cases he realized that the transition is, in fact, a very weak first order transition, where pronounced “pseudocritical” phenomena occur. 
He studied system sizes from $N=16\times16$ up to $N=200\times200$, and observation times up to $10^4 \mathit{MCS}$ (a Monte Carlo step $\mathit{MCS}$ is defined as a complete cycle of $N$ spin update trials, according to the Metropolis algorithm). Within his analysis he was unable to distinguish between two different scenarios for the transition at $q \geq 5$, due to finite size effects in the simulations. He proposed two mutually exclusive possible scenarios for the transition. In the first one, the energy per spin reaches the transition temperature with a finite slope, coming both from higher and from lower temperatures, thus projecting metastable branches at both sides of the transition that end at temperatures $T_{sp}^+$ and $T_{sp}^-$, both different from $T_c$. In the second scenario, the energy reaches $T_c$ with an infinite slope, which would imply a first order phase transition with a true divergence of the specific heat at $T_c$. On the other hand, other approaches based on different definitions of the spinodal temperatures predict either the convergence of the finite size spinodal temperatures to $T_c$ [@MeMo2000; @BaBeDu2008] or a convergence to limit values different from, but closely located to, $T_c$ [@LoFeGrCa2009]. Optimized GPU-based Monte Carlo algorithm for the q-state Potts model {#algorithm} ===================================================================== We developed a GPU based code to simulate the two dimensional Potts model, using classical Metropolis dynamics on square lattices of size $N= L \times L$ sites with periodic boundary conditions. For the spin update we partition the lattice sites into two sets, the whites and the blacks, laid out in a framed checkerboard pattern, in order to update all the white cells first and then all the black ones in a completely asynchronous way (given that the interactions are confined to nearest neighbors). This technique is also known as Red-Black Gauss-Seidel [@pres92nrinc]. 
We analyzed equilibrium states of systems ranging from $N=16\times16$ to $N=32768\times 32768$ ($2^{15}\times2^{15} \simeq 1.073\times 10^9$ spins). The typical simulation protocol is the following. Starting from an initial ordered state ($s_i=1$ $\forall i$), we fix the temperature to $T=T_{min}$ and run $t_{\mathit{tran}}$ steps to attain equilibrium; we then run $t_{\mathit{max}}$ steps, taking one measurement every $\delta t$ steps to perform averages. After that, we keep the last configuration of the system and use it as the initial state for the next temperature, $T=T_{\mathit{min}}+\delta T$. This process is repeated until some maximum temperature $T_{\mathit{max}}$ is reached. We repeat the whole loop for several samples to average over different realizations of the thermal noise. In a similar way we perform equilibrium measurements going from $T_{\mathit{max}}$ to $T_{\mathit{min}}$, starting from a completely random state.

GPU: device architecture and CUDA programming generalities
----------------------------------------------------------

In 2006, NVIDIA decided to take a new route in GPU design and launched the G80 graphics processing unit, deviating from the standard pipeline design of previous generations and transforming the GPU into an almost general-purpose computing unit. Although this decision may have been driven by the gaming community asking for more frames per second, NVIDIA took advantage of its General Purpose Graphics Processing Units (GPGPU), and in 2007 launched the CUDA SDK, a software development kit tailored to program the G80 using the C language plus minor extensions. The G80 hardware and the CUDA compiler quickly proved to have an extremely good ratio of GFLOPS per watt and GFLOPS per dollar with respect to the CPU alternatives in the field of numerical algorithms. The architecture has since evolved through two generations: the GT200 in 2008 and 2009, and the GF100 in 2010, also known as the Fermi architecture.
All of them share the same Single Instruction Multiple Thread (SIMT) concurrency paradigm in order to exploit the high parallelism (up to 480 computing cores) and the high memory bandwidth (up to 177GBps). The SIMT model is a convenient abstraction that lies midway between SIMD (Single Instruction Multiple Data) and MIMD (Multiple Instruction Multiple Data): the former reigned in the 80’s with the vector computers, while the latter is commonplace in almost every computing device nowadays, from cellphones to supercomputers. Under the SIMT paradigm, parallel algorithm development changes greatly, since it is possible to code in a one-thread-per-cell fashion. Thread creation, switching and destruction have such a low performance impact that a matrix scaling reduces to launching one thread per matrix cell, even if the matrix consists of $32768\times 32768$ single-precision floating point numbers, summing up to 1 GThread all proceeding in parallel. In fact, for the implementation, the more threads the better, since the high latency of global memory (on the order of 200 cycles) is hidden by swapping out warps (vectors of 32 threads that execute synchronously) waiting for the memory to become available. It is important to emphasize the role of blocks in the SIMT model. Threads are divided into blocks, where each block of threads has two special features: a private shared memory and the ability to barrier-synchronize. Using these capabilities, the shared memory can be used as a manually managed cache that in many cases greatly improves the performance. We used the GTX 280, GTX 470 and GTX 480 boards. The relevant hardware parameters for these boards are shown in Table \[hardware-parameters-table\].
\[hardware-parameters-table\]

  Board Model                     GTX 280     GTX 470     GTX 480
  ------------------------------- ----------- ----------- -----------
  Available                       Q2 2008     Q1 2010     Q1 2010
  GPU                             GT200       GF100       GF100
  CUDA capability                 1.3         2.0         2.0
  CUDA cores                      240         448         480
  Processor Clock                 1.30GHz     1.22GHz     1.40GHz
  Global Memory                   1GB         1.25GB      1.50GB
  Memory Bandwidth                141.7GBps   133.9GBps   177.4GBps
  L1 Cache                        N/A         16/48KB     16/48KB
  L2 Cache                        N/A         640KB       768KB
  Max \# of Threads per Block     512         1024        1024
  Shared Memory per Block         16KB        16/48KB     16/48KB
  Max \# of Registers per Block   16384       32768       32768
  ------------------------------- ----------- ----------- -----------

  : Key features of the NVIDIA GTX 280, GTX 470, and GTX 480 graphics cards.

The improvements of the Fermi architecture lie in the new computing capabilities (improved Instruction Set Architecture – ISA), the doubling of the cores, the inclusion of L1 and L2 caches, and the increased per-block amounts of parallelism and shared memory. As in every modern computing architecture, the memory-wall effect has to be relieved by a hierarchy of memories that become faster, more expensive and smaller towards the top. The bottom level is the global memory, accessible by every core, with from 1GB to 1.5GB of size[^1] and a latency of about 200 cycles. The next level is the shared memory, configurable to 16KB or 48KB per block, with a latency of only 2 cycles. At the top there are 32K registers per block. There are also texture and constant memories, which have special addressing capabilities, but they do not bring any performance improvement in our application. The Fermi architecture has also incorporated ECC memory support to eventually deal with internal data corruption. The programming side of this architecture is “C for CUDA”, an extension of the C Programming Language [@KR78] that enables the *host* processor to launch *device* kernels [@Kirk10]. A kernel is a (usually small) piece of code that is compiled by `nvcc`, the NVIDIA CUDA Compiler, to the PTX assembly language that the architecture is able to execute.
The kernel is executed simultaneously by many threads, organized in a two-level hierarchical set of parallel instances indexed as $(\mathit{grid},\mathit{block})$ (a grid of thread blocks). Internally, each grid can be divided in up to two dimensions and each block in up to three dimensions, in order to establish a simple thread-to-data mapping. Special variables store the block and thread identifiers $(\mathit{bid},\mathit{tid})$ that distinguish the threads executing the kernel. It is interesting to note that although the unit of synchronous execution is a warp of 32 threads, the threads inside a warp may diverge in their execution paths (occurrence of bifurcations), at the cost of having to re-execute the warp once for each choice taken. Needless to say, in general this impacts performance negatively and has to be avoided. The present code is divided into two main functions: spin update, and energy and magnetization computation. The first function is implemented in host code by the function [`update`]{}, which comprises calling the device kernel [`updateCUDA`]{} twice: first to update the white cells and then the black ones, in a checkerboard scheme. The energy and magnetization (and their related moments) summarization is done by [`calculate`]{}, which calls the kernel [`calculateCUDA`]{} and two more auxiliary kernels: [`sumupECUDA`]{} and [`sumupMCUDA`]{}.

Random Number Generator
-----------------------

The Potts model simulation requires a great number of random numbers. Namely, each cell updating its spin needs one integer random number in $\{0,\dots, q\!-\!1\}$ and possibly a second one in the real range $[0,1)$ to decide the acceptance of the flip. Hence, a key factor for performance is using a good parallel random number generator.
Given the strong requirements in terms of time (it has to be fast) and space (a small number of per-thread variables), we find Multiply-With-Carry (MWC) [@Marsaglia:2003:RNG] ideal in both respects. Its state is only $64$ bits, and obtaining a new number amounts to computing $x_{n+1} = (x_n \times a + c_n)\bmod{b}$, where $a$ is the multiplier, $b$ is the base, and $c_n$ is the carry from the previous modulus operation. We took the implementation from the CUDAMCML package [@cudamcml_rng], which fixes $b=2^{32}$ in order to use bit masks for the modulus computation. For independent random number sequences, MWC uses different multipliers, and they have to be *good* in the following sense: $a\times b-1$ should be a safe prime, where $p$ is a safe prime if both $p$ and $(p-1)/2$ are primes. Having fixed $b=2^{32}$, the process of obtaining safe primes boils down to testing two numbers for primality: $\mathit{goodmult}(a) \equiv \mathit{prime}(a\times 2^{32}-1) \land \mathit{prime}((a\times 2^{32}-2)/2)$. It is important to remark that the nearer $a$ is to $2^{32}$, the longer the period of the MWC (for $a$ close to its maximum, the period is near $2^{64}$); therefore it is always advisable to start looking for $\mathit{goodmult}$ downwards from $2^{32}-1$. We limit the number of independent random number generators (RNG) to $512^2/2 = 131072$, which is slightly lower than the $150000$ good multipliers that CUDAMCML provides in its file [`safe_primes_base32.txt`]{}. The state needed comprises 12 bytes per independent RNG, totaling 1.5MB of global memory, less than $0.15\%$ of the total available in the GTX 280. We consider this a good trade-off between independence in number generation and memory consumption. This design decision is crucial in the parallelization of the spin update function, as we frame the lattice in rectangles of $512\times 512$ to give each thread an independent RNG[^2]. Moreover, this implies that the larger the lattice, the more work will be done by a single thread.
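As a sketch, the MWC update can be written in a few lines of C; the 64-bit state holds $x_n$ in the low word and the carry in the high word (the struct layout is ours, and the multiplier used in the test below is arbitrary, not claimed to be one of the CUDAMCML good multipliers):

```c
#include <stdint.h>

/* Multiply-With-Carry with base b = 2^32.  With this base the modulus
 * reduction is just a truncation to 32 bits, and the new carry is the
 * high word of the 64-bit product. */
typedef struct { uint32_t x, c; } mwc_state;

static uint32_t mwc_next(mwc_state *st, uint32_t a)
{
    uint64_t t = (uint64_t)a * st->x + st->c;  /* x_n * a + c_n       */
    st->x = (uint32_t)t;                       /* mod 2^32 (low word) */
    st->c = (uint32_t)(t >> 32);               /* next carry          */
    return st->x;
}
```

Each GPU thread keeps one such state with its own multiplier, which is what makes the per-thread sequences independent.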
It is important to remark that we are well below the RNG period, even for the largest simulations.

Spin update
-----------

On top of the checkerboard division, we first have to frame the lattice in rectangles of $512\times 512$ in order to use the limited number of independent RNGs (Fig.\[fig:checkerboard\], left). This implies launching two consecutive kernels (black/white) of $512\times 512/2$ threads, typically organized into a grid of $32\times 16$ blocks of $16\times 16$ threads. The second step comprises the remapping of a two-dimensional stencil of four points in order to save memory transfers. The row-column pair $(i,j)$ is mapped to $(((i\!+\!j)\ \textit{mod}\ 2 \times L + i)/2, j)$, which allows packing all white and all black cells in contiguous memory locations, improving locality and allowing wider reads of 3 consecutive bytes (Fig.\[fig:checkerboard\], right).

![On the left: an $8\times 8$ checkerboard framed in $4\times 4$ (red marking); the cells updated by thread $t_0$ are singled out, and the north, east, south and west neighbors of cell $\bullet$ are marked. On the right: the packed checkerboard showing the first half of the whites, where the neighboring cells $n,e,s,w$ are marked; in the second half of black cells $\bullet$ is singled out. ](fig1.eps)

\[fig:checkerboard\]

We encode each spin in a byte, allowing simulations with $q\leq256$ and $L^2\leq\mathit{available\ RAM}$. Since some extra space is needed for the RNG state and for the energy and magnetization summarization, this upper bound is not reached. The biggest simulation we achieved is $L=32768, q=45$ on the GTX 480. It is important to remark that shared memory is not used, since we could not improve performance with it and it hindered the readability of the code. Texture memory techniques were not used for the same reasons.
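The coordinate remapping above can be checked in isolation. The following plain-C sketch (the function name is ours) verifies the property the packing relies on: within each column, the white cells map bijectively onto rows $[0, L/2)$ and the black cells onto rows $[L/2, L)$:

```c
#define L 8   /* any even frame size works for this illustration */

/* Packed row of cell (i, j): whites ((i+j) even) land in the top half
 * of the array, blacks in the bottom half, each half contiguous. */
static int packed_row(int i, int j)
{
    return (((i + j) % 2) * L + i) / 2;
}
```

This is what lets the kernel read all cells of one color, and all their neighbors of the other color, from contiguous memory.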
Computation of Energy and Magnetization
---------------------------------------

During the evolution of the system we periodically extract two quantities: the energy, Eq.(\[Hamiltonian\]), and the magnetization, Eq.(\[magnetization\]). The kernel responsible for this job is [`calculateCUDA`]{}. It first partitions the cells into CUDA blocks. Within each block we have easy access to barrier synchronization and to shared memory among its threads. Each block adds up the local energies of its cells and accumulates in a partial vector $(n_1, n_2,\ldots, n_q)$ the number of spins in each state. This is performed in shared memory, using atomic increments to avoid race conditions. After that, the blocks’ results are added up in parallel using a butterfly-like algorithm [@Kirk10] by the kernels [`sumupECUDA`]{} and [`sumupMCUDA`]{}, but none of the known optimizations [@harris07cuda] are applied, since they would obfuscate the code for a marginal global speedup. These kernels end up with up to approximately a thousand partial energies and vectors of spin counters, which are finally added up on the CPU. It has to be noticed that the device memory consumption of this part is linear not only in $N$, but also in $q$.

Defensive Programming Techniques and Code Availability
------------------------------------------------------

Writing scientific code that is maintainable, robust and repeatable is of utmost importance for the fields of science where computer simulation and experimentation are everyday practice [@merali10error]. CUDA coding in particular is hard, not only in creating the algorithms, choosing a good block division and trying to take advantage of all the hardware capabilities, but also in the debugging and maintenance cycle. Debugging tools are evolving rapidly; for example, a memory debugger, [`cuda-memcheck`]{}, is shipped with the current CUDA SDK.
Nevertheless, we would rather adhere to some passive and active security measures within our code, to make it easier to understand and modify and, at the same time, to make it robust in the sense of no unexpected hangs, miscalculations or silent failures. Among the passive security measures, we use assertions (boolean predicates) related to hardware limitations, like the maximum of 512 threads per block. Another use of assertions is checking for the limitations of the integer representation: given the computing power that GPGPU brings, lattices of $32768\times 32768$ are feasible to simulate, and integer overflow becomes a real possibility, for example when computing the total energy. Assertions are also used to enforce preconditions of the algorithms; for example, the spin update cannot work correctly if $L$ is not a multiple of the frame size. We also check every return condition of CUDA library calls and kernels, in order to lessen the asynchrony of error detection in CUDA. The same practice is used for standard library calls for file handling. Active security measures are also taken. We use tight types in order to detect problems at compile time. We also decorate parameters and variable names with [`const`]{} modifiers where applicable. For immutable pointer parameters we forbid the modification of the pointed data as well as of the pointer itself. The scope of automatic variables is as narrow as possible, declaring them within blocks, in order to decrease the namespace size on every line of code. We put in practice the simple but effective idea of using meaningful variable names in order to improve readability. We also adhere to the practice of publishing the code [@barnes10publish], in the line of [@PrViPaSc2009; @BlViPr2010; @We2010], since it benefits from community debugging and development. It can be found at [@potts3site].

Algorithm checking {#validate}
==================

In order to validate our CUDA code we ran some typical simulations to measure well-established results.
First we calculate the energy per spin $e$ and the magnetization $m$ above and below the transition temperature, by cooling (heating) from an initially disordered (ordered) state. The behaviors of $e$ and $m$ as functions of $T$ for different values of $q$ are shown in Fig.\[fig3\]. From these calculations we obtain the values of the energies ($e_d$ and $e_o$) and of the magnetization jump $\Delta m$ at the exact transition temperature (see Section \[model\]). The results are compared with the exact values in Table \[valores-medidos-vs-exactos\].

![(Color online) Equilibrium energy per spin $e$ and magnetization $m$ (inset) versus temperature for $q=9,12,15, 96$. Exact values at the transition point from equations (\[Ujump\]), (\[Uadd\]) and (\[magne-exact\]) are marked as crosses. Data come from averages over $10$ samples of linear system size $L=2048$. Error bars are smaller than the symbol size.[]{data-label="fig3"}](fig2.eps)

\[valores-medidos-vs-exactos\]

  ----- ------------- ------------ ------------- ------------ ------------- ------------
        $e_o$                      $e_d$                      $\Delta m$
  $q$   exact         calculated   exact         calculated   exact         calculated
  6     1.508980...   1.51(2)      1.307516...   1.306(1)     0.677083...   0.674(2)
  9     1.633167...   1.6332(5)    1.033499...   1.0334(5)    0.834019...   0.8338(4)
  15    1.765905...   1.7659(2)    0.750492...   0.7509(4)    0.916693...   0.9167(3)
  96    1.960306...   1.96030(3)   0.243817...   0.24382(4)   0.989247...   0.98924(2)
  ----- ------------- ------------ ------------- ------------ ------------- ------------

  : Comparison between the calculated and the known exact values of $e_o$, $e_d$, and $\Delta m$ at the transition for different values of $q$. Results were obtained from averages over $10$ samples of linear system size $L=2048$, with equilibration and measurement times of at least $5\times10^5$ MCS each.

We can see a very good agreement between the data and the exact results.
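As a small consistency check, the series (\[magne-exact\]) can be evaluated directly. The C sketch below (names and tolerances are ours) uses only the coefficients printed in the text, so it reproduces the tabulated $\Delta m$ to high accuracy at large $q$, while at $q=6$ the neglected tail of the series is visible at the $10^{-2}$ level:

```c
#include <math.h>

/* Truncation of Eq. (magne-exact) at the q^{-4} term, i.e. using only
 * the coefficients printed in the text (1, 3, 9, 27). */
static double delta_m_truncated(double q)
{
    return 1.0 - 1.0 / q - 3.0 / (q * q) - 9.0 / (q * q * q)
               - 27.0 / (q * q * q * q);
}
```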
It is worth noting that the data in Table \[valores-medidos-vs-exactos\] are not the result of extrapolations of some finite-size analysis, but the values of the curves in Fig.\[fig3\] at the transition itself. Since we measure one point every $\Delta T$ in temperature, the cooling and heating procedures will not necessarily yield a point measured exactly at $T_c$. We therefore interpolate points close to $T_c$ to deduce the corresponding values of $e_o$, $e_d$ and $m_o$ at $T_c$. The differences between interpolations using points separated by $\Delta T$ and points separated by $2\times \Delta T$ determine the estimated errors. We also calculate the fourth-order cumulant of the energy [@Ja1993; @Bi1997] $$V_L = 1- \frac{\langle H^4 \rangle}{3 \langle H^2 \rangle^2}$$ as a function of the temperature for $q=6$ and different system sizes. As is well known, $V_L$ is almost constant far away from the transition temperature and exhibits a minimum at a pseudo-critical temperature $$T_c^*(L) = T_c + \frac{T_c^2 \ln(q e_o^2 / e_d^2)}{e_d - e_o} \frac{1}{L^d}$$ In Fig.\[VL-q6\]b we show $T_c^*(L)$ vs. $1/L^2$ for $q=6$. The extrapolated value of $T_c^*(L)$ for $L\to\infty$, $0.8078\pm0.0002$, agrees with the exact value $T_c=0.8076068...$ to within $0.025\%$. ![ (Color online) Finite-size scaling of the fourth-order cumulant for $q=6$. (a) $V_L$ as a function of temperature for different system sizes. Averages were taken over several samples, ranging from $300$ to $400$ for small system sizes down to $50$ and $20$ for $L=128$ and $L=256$. The orange line indicates the analytically predicted location of the minimum in the thermodynamic limit. (b) Pseudo-critical temperature $T_c^*$ [*vs.*]{} $1/L^2$. Error bars, estimated from the uncertainty when locating the minimum of $V_L$, are shown only when larger than the symbol size.
[]{data-label="VL-q6"}](fig3.eps) Let us emphasize that, as is well known, it is very difficult to obtain good measurements of cumulants with a single-spin-flip MC algorithm. In order to get reliable averages of the location of the cumulant minimum, one should guarantee a measurement time long enough to let the system overcome the phase-separating energy barrier back and forth several times. Moreover, the characteristic activation time to overcome the barrier increases both with $q$ and with $L$ (it increases exponentially with $L$). For instance, simulation times of the order of $10^7$ MCS for each temperature are needed to obtain a good sampling for $q=6$ and $L=256$.

![(Color online) Finite-size scaling of the susceptibility for $q=2$. (main plot) $\chi$ as a function of temperature for different linear system sizes. Averages were taken over several samples, ranging from $300$ for small system sizes down to $50$ and $15$ for $L=1024$ and $L=2048$, respectively. We used equal equilibration and measurement times of $2\times10^5 \mathit{MCS}$, measuring quantities every $10 \mathit{MCS}$, thus totaling averages over $6\times 10^6$ down to $3\times 10^5$ measurements as we increase the system size. (upper inset) Maximum value of the susceptibility peak $\chi_{max}$ [*vs.*]{} $L$. Error bars, estimated from the uncertainty when evaluating the maximum, are smaller than the symbol size. (lower inset) Pseudo-critical temperature $T^*(L)$ [*vs.*]{} $1/L$. Error bars, estimated from the uncertainty when locating the position of the maximum, are shown only when larger than the symbol size. []{data-label="q2chi"}](fig4.eps)

In addition, we test our code for the $q=2$ (Ising) case.
Fig.\[q2chi\] shows the susceptibility of the order parameter, calculated as $$\chi = \frac{N}{T} \left[\left<m^2\right> - \left<m\right>^2\right]$$ The extrapolated value of the pseudo-critical temperature $T^*(L)$ (defined as the location of the susceptibility maximum) for $L\to\infty$, $1.1345\pm0.0001$, agrees with the exact value[^3] $T_c(q=2)=1.1345926...$ to within $0.009\%$. Moreover, if we plot the maximum value of $\chi$ against the linear size $L$, a finite-size scaling of the form $\chi_{max} \sim L^{\gamma/\nu}$ is expected [@Landau-Binder-2009], where $\gamma$ and $\nu$ are the exactly known critical exponents of the 2D Ising model. We obtain such a scaling with a combined exponent $\gamma/\nu = 1.77\pm0.02$, in good agreement with the exact value $\gamma/\nu = \frac{7/4}{1} = 1.75$.

Algorithm performance {#performance}
=====================

The first step towards performance analysis is the breakdown of kernel function calls. In this case, it is done using the CUDA profiling capabilities and some scripting to analyze a 2.9GB [`cuda_profile_0.log`]{} file produced after $12.6$ hours of computation. The parameters used for this profiling are $q=9, N=2048\times 2048$, $T_{\mathit{min}} = 0.721200$, $T_{\mathit{max}} = 0.721347$, $\delta T = 10^{-5}$, $t_{tran}=10^{5} \mathit{MCS}$, $t_{max}=10^{4} \mathit{MCS}$ and $\delta t=500 \mathit{MCS}$. The profile shows that there are approximately 32 million calls to [`updateCUDA`]{} and just a few thousand calls to each of the other three kernels. Since the individual GPU time consumptions per call are comparable, the only relevant kernel to analyze is [`updateCUDA`]{}. The result is shown in Table \[table:breakdown\].
\[table:breakdown\]

  method          calls      \%
  --------------- ---------- -------
  updateCUDA      32340000   1.000
  calculateCUDA   2940       0.000
  sumupECUDA      2940       0.000
  sumupMCUDA      2940       0.000
  memcpyHtoD      2          0.000
  memcpyDtoH      5880       0.000
  --------------- ---------- -------

  method          metric      Min        Avg        Max
  --------------- ----------- ---------- ---------- -----------
  updateCUDA      gpu time    1309.184   1398.705   3136.768
                  occupancy   0.500      0.500      0.500
                  cpu time    0.000      0.759      25780.000
  calculateCUDA   gpu time    8322.720   8353.981   8393.376
                  occupancy   1.000      1.000      1.000
                  cpu time    1.000      1.727      12.000
  sumupECUDA      gpu time    8.800      8.963      9.248
                  occupancy   1.000      1.000      1.000
                  cpu time    0.000      0.817      4.000
  sumupMCUDA      gpu time    46.432     48.714     51.072
                  occupancy   0.250      0.250      0.250
                  cpu time    1.000      1.461      5.000
  memcpyHtoD      gpu time    192.480    287.488    382.496
                  cpu time    331.000    681.500    1032.000
  memcpyDtoH      gpu time    3.168      3.364      3.616
                  cpu time    10.000     11.315     19.000
  --------------- ----------- ---------- ---------- -----------

  : Summary of the kernel and memory movement profiling.

To analyze the kernel [`updateCUDA`]{} we sweep $L$ in the range from $512$ to $32768$ in powers of two, measuring the average execution time of the kernel and normalizing it to nanoseconds per spin flip. We compare the three GPUs, using the same machine code (Compute Capability – CC 1.3, generated by NVCC 3.2)[^4] and the same video driver (driver version 260.19). We also compare the GPUs’ performance with a CPU implementation. For this version, we tried to keep the structure of the CUDA code, in order to compare the execution of the same physical protocol on each architecture. We replaced the calls to CUDA kernels with loops running over all the spins in the same checkerboard scheme, and we used the same MWC random number generator.
We also added some optimizations to improve the CPU performance, like a precomputed table of Boltzmann weights for the spin-flip acceptance at each simulated temperature, since the CPU has no mechanism for hiding memory latency and the impact of any floating-point unit (FPU) computation is noticeable. We ran the CPU code on a Core 2 Duo architecture (E8400 – Q1 2008) using GCC 4.4.5 with carefully chosen optimization flags[^5]. We also varied $q$ in the set $\{6,8,9,12,15,24,48,96,192\}$. We do not find any significant variation of the performance with $q$, except in the $q=2^k$ cases for the GTX 280, where the compiler obtains slight performance advantages using bitwise operators for the modulus operation. The Fermi board has an improved modulus, rendering that difference imperceptible. In the GPU cases the profiling measurement is done using the CUDA profiling capabilities, which give very precise results and avoid any code instrumentation. For the CPU version it is necessary to instrument the code with simple system calls to obtain the wall time. In order to make the measurement independent of the temperature range covered, given that the transition temperature (and therefore the flip acceptance rate) changes with $q$, we choose a deterministic write, i.e., we always write the spin value irrespective of whether the spin changes its state or not. Writing the spin value only when it changes its state brings a slight performance improvement, of around 2% in the general case. ![Spin flip time in nanoseconds vs. lattice size running on an Intel Core 2 Duo E8400@3.0GHz CPU, and on GTX 280, GTX 470 and GTX 480 NVIDIA GPUs. Averages are performed over $400$ runs for the GPUs and $60$ runs for the CPU. Error bars are smaller than the symbol sizes when not shown.
[]{data-label="figure:tsfL"}](fig5.eps) In Figure \[figure:tsfL\] we can see that the curve corresponding to the CPU implementation is flat at around[^6] 22.8ns, showing no dependence of the average spin flip time on the system size. For the GPU cases, instead, we do see variations with $L$. The slowest card is the GTX 280, with spin flip times in the range \[0.48ns, 0.54ns\], which are 47x to 42x faster than those of the CPU code. The GTX 470 varies between 0.21ns and 0.30ns, giving a speedup between 108x and 76x. The fastest card is the GTX 480, with spin flip times in \[0.18ns, 0.24ns\], achieving a speedup from 126x to 95x. There is also another curve corresponding to a version specifically tuned for the GTX 480 card[^7] and CC 2.0, obtaining 155x (0.147ns) in the fastest case. It is important to notice that even when using newer CPU architectures like Nehalem (X5550 – Q1 2009), the spin flip time only drops by 2ns in the best case with respect to the Core 2 Duo, and the Intel C++ Compiler (ICC) cannot do any better than that. Nevertheless, it should be noted that better CPU implementations are possible, since the most appropriate implementation for each architecture can be quite different. For example, lower times can be attained on the CPU using a typewriter update scheme instead of a checkerboard one. For that reason, we hold that a good measure to compare performance between GPU implementations is the “time per spin flip”, while the speedup with respect to a CPU implementation is just additional illustrative information. The variations for the GPU cards are due to two competing factors in the loop of the update kernel. One is strictly decreasing with $L$ and is related to the amount of global memory movement per cell.
Since there is one RNG per thread, the global memory for the RNG state is retrieved once at the beginning and stored once at the end; therefore, the larger $L$ is, the more cells this single load/store global memory latency is amortized over. The second factor is increasing in $L$ and is given by the inherent overhead incurred by a loop (comparison and branching), which for $L=32768$ amounts to 4096 repetitions. The bottom panel of Fig.\[figure:tsfL\] shows a close-up of the GPU curves, where we also add four more curves that are representative of the twenty possible combinations [^8] of cache settings for the Fermi architecture (see tables 81-82 in  [@nvidia10ptx22]). Given that both generations execute exactly the same machine code, we attribute the variations in speed to their different memory architectures. It is interesting to see how the curve [`cgcg`]{}, a setting that skips the L1 cache, slightly improves the performance without modifying the code. The worst possible cache setting renders a performance strictly slower than that of the older GT200 generation. We also tried framings of $256\times 256$ and $1024\times 1024$, obtaining a 25% performance penalty for the former and a performance increase of 2% for the latter. This gives us more evidence that the framing at $512\times 512$ is an appropriate trade-off between the memory consumed by the RNG and the speed of the code. Although there are divergent branches inside the code, even for deterministic cell writes (the boolean “or” operator semantics is short-circuited), eliminating all divergent branches by means of an arithmetic transformation does not bring any performance improvement. This shows the dominance of memory requests over the integer and floating point operations, and the ability of the hardware scheduler to hide the divergent-branch performance penalty in between the memory operations.
To our knowledge this is the first time the Potts model has been implemented on GPUs, so there is no direct performance comparison. There are, however, previous works dealing with similar problems that report performance measurements. Preis *et al.* [@PrViPaSc2009] implemented the 2D Ising model on GPUs, reporting a speedup of $60x$ over their CPU implementation using a GTX 280. Their implementation has the disadvantage that the system size is limited by the *maximum number of threads per block* allowed (enforcing $L\leq 1024$ on GT200 and $L\leq 2048$ on GF100). Later on, Block, Virnau and Preis [@BlViPr2010] simulated the 2D Ising model using multi-spin coding techniques, obtaining 0.126ns per spin flip on a GT200 architecture. Weigel [@We2010; @We2011] has also considered the 2D Ising model, obtaining a better 0.076ns per spin flip [@weigel10site] on the same architecture, which is improved to 0.034ns per spin flip on a Fermi (GF100) architecture. Moreover, this was obtained with a single-spin coded implementation; however, the gain is partially due to the use of a multi-hit technique, updating up to $k=100$ times a set of cells while others remain untouched. Notwithstanding, Weigel obtains [@We2011] 0.13ns per spin flip for the update without multi-hit and multi-spin, which is comparable with the result of the multi-spin coded version of [@BlViPr2010]. Performance results for the 3D Ising model are also available [@PrViPaSc2009; @We2011]. The Heisenberg spin glass model is simulated on a GPU in Ref.[@BePaPa2011]; for this floating-point vector spin, they achieve a 0.63ns per spin flip update on a GF100 architecture. Implementations of the Heisenberg model are also reported in [@We2011], with times per spin flip down to 0.18ns on a Fermi architecture, representing impressive speedups (up to 1029x).
Recently, a GPU parallelization of the Cellular Potts Model was implemented for the GF200 architecture [@TaSo2011], with a $\sim 80x$ speedup with respect to serial implementations. We also conducted end-to-end benchmarks of a small simulation ($q\!=\!9$, $L\!=\!1024$, \# of samples=3, $T_{min}\!=\!0.71$, $T_{max}\!=\!0.73$, $\delta T\!=\!0.002$, $t_{tran}\!=\!2000$, $t_{max}\!=\!8000$, $\delta t\!=\!50$). We obtain 193s for the GTX 280 and 8115s for the Intel Core 2 architecture, a global speedup of 42x, very similar to the speedup reported by the microbenchmarks. The agreement between the microbenchmark and end-to-end benchmark results reaffirms that all optimization efforts should go into the update kernel [`updateCUDA`]{}.

Metastability in the q-state Potts model {#metastability}
========================================

Based on Binder’s criterion described in Section \[model\], we analyze the existence of metastability for $q>4$ as the system size increases. From Fig.\[fig3\] we see that for large enough values of $q$ the energy branches attain the transition temperature from both sides with a finite slope, even with a relatively poor temperature resolution. As $q$ decreases, a closer approach to $T_c$ is needed in order to distinguish whether a true singularity at $T_c$ is present or not, since the spinodal temperatures are expected to be located very close to $T_c$ [@LoFeGrCa2009].
A power law divergence of the specific heat at $T_c$ would imply the following behavior $$\begin{aligned} e_{T<T_c} & = & e_o - A^- (1-T/T_c)^{1-\alpha_-} \label{divergence-at-Tc}\\ e_{T>T_c} & = & e_d - A^+ (1-T_c/T)^{1-\alpha_+}\label{divergence-at-Tc-2}\end{aligned}$$ with $\alpha_-,\alpha_+ > 0$.\ On the other hand, if well defined metastable states occur, the energy could be represented in terms of a specific heat diverging at pseudospinodal temperatures $T_{sp}^+ , T_{sp}^-$ $$\begin{aligned} e_{T<T_c} & = & e_{sp}^- - A^- (1-T/T_{sp}^+)^{1-\alpha_-} \label{divergence-at-Tsp}\\ e_{T>T_c} & = & e_{sp}^+ - A^+ (1-T_{sp}^-/T)^{1-\alpha_+}\label{divergence-at-Tsp-2}\end{aligned}$$ If the divergences of the specific heat occur at the pseudospinodals, we should see exponents $\alpha_- = \alpha_+ \approx 0$ in Eqs.(\[divergence-at-Tc\]) and (\[divergence-at-Tc-2\]), since Eqs.(\[divergence-at-Tsp\]) and (\[divergence-at-Tsp-2\]) imply finite slopes at $T_c$.\ We measure equilibrium curves for $e_{T<T_c}$ ($e_{T>T_c}$) starting from an ordered (disordered) initial state and performing a cooling (heating) procedure approaching $T_c$, as described in Section \[algorithm\]. The results are presented in Figs.\[pendsup\] and \[pendinf\]. In both figures a crossover of the curves’ slope as we approach $T_c$ can be observed for all values of $q$. Close enough to $T_c$, the curves for $q=9,15,96$ show exponents which are indistinguishable from $1$, consistent with the existence of metastability and divergences at spinodal temperatures different from $T_c$, at least for $q \geq 9$. \[pendsup\] ![(Color online) Log-log plot of energy differences versus temperatures $T>T_c$ for various $q$. Data correspond to averages over $20$ samples of systems of size $L=2048$, equilibration times ranging from $\sim 5 \times 10^4[MCS]$ to $\sim 2\times 10^5[MCS]$ and measurement times of $\sim 5 \times 10^4[MCS]$, with sampling every $100[MCS]$.
Error bars were estimated considering a $90\%$ confidence interval (only some representative error bars are shown for clarity). Full color lines are power-law fits of the form $|(e-e_d)/e_d| = A (1-T_c/T)^a$ (the resulting exponents $a$ are shown in the labels). Dashed vertical lines of different colors correspond to $T=T_c+ \Delta T(q)$, with $\Delta T =T_c-T_{sp}^-$ and $T_{sp}^-$ from Eq.(\[Tsp-STD\]). The inset shows $q=9$ curves for different system sizes; the full orange curve indicates the slope 1.](fig6.eps "fig:") \[pendinf\] ![(Color online) Log-log plot of energy differences versus temperatures $T<T_c$ for various $q$. Data correspond to averages over $20$ samples of systems of size $L=2048$, equilibration times ranging from $\sim 5 \times 10^4[MCS]$ to $\sim 2\times 10^5[MCS]$ and measurement times of $\sim 5 \times 10^4[MCS]$, with sampling every $100[MCS]$. Error bars were estimated considering a $90\%$ confidence interval (only some representative error bars are shown for clarity). Full color lines are power-law fits of the form $(e-e_o)/e_o = A (1-T/T_c)^a$ (the resulting exponents $a$ are shown in the labels). ](fig7.eps "fig:") As pointed out by Binder [@Bi1981], to observe the crossover (if it exists at all) a temperature resolution of at least $\Delta T =T_c-T_{sp}^-$ for the high energy branch (or $\Delta T =T_{sp}^+ - T_c$ for the low energy branch) is needed, where $\Delta T \equiv |T-T_c|$. A numerical estimate of the lower spinodal temperature predicted by Short Time Dynamics [@LoFeGrCa2009] is given by $$\label{Tsp-STD} \frac{T_c-T_{sp}^-}{T_c}\simeq 0.0007 \left(\ln(1+q-4)\right)^{2.81}.$$ The vertical dashed lines in Fig.\[pendsup\] correspond to $T=T_c + \Delta T(q)$, as predicted from Eq.(\[Tsp-STD\]) according to the previous criterion. The coincidence with the crossover points for all values of $q$ shows complete agreement between the present results and those from the Short Time Dynamics calculations.
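Equation (\[Tsp-STD\]) and the finite-size requirement it implies can be evaluated directly. A small sketch, assuming the exact transition temperature $T_c=1/\ln(1+\sqrt{q})$ in the convention $J_{Potts}=2J_{Ising}$ used here (consistent with the simulation range $T\in[0.71,0.73]$ for $q=9$); for $q=9$ it reproduces $\Delta T\approx 0.0026$ and a rough minimum size $1/\Delta T\approx 400$.

```python
import math

# Evaluate Eq. (Tsp-STD), the Short Time Dynamics estimate of the lower
# pseudospinodal distance, and the rough finite-size requirement
# L ~ 1/Delta_T. Tc = 1/ln(1+sqrt(q)) is the exact Potts transition
# temperature in this paper's convention (J_Potts = 2 J_Ising).

def T_c(q):
    return 1.0 / math.log(1.0 + math.sqrt(q))

def delta_T(q):
    """T_c - T_sp^- as estimated by Short Time Dynamics, Eq. (Tsp-STD)."""
    return T_c(q) * 0.0007 * math.log(1 + q - 4) ** 2.81

for q in (9, 15, 96):
    dT = delta_T(q)
    print(f"q={q:3d}  Tc={T_c(q):.4f}  dT={dT:.5f}  L_min~{1.0/dT:.0f}")
```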
To attain the desired temperature resolution the system size has to be large enough, since finite size rounding errors are expected to decay as $1/L$ [@Bi1981; @Ja1993]. This is illustrated in the inset of Fig.\[pendsup\] for the particular case $q=9$, where a strong finite size effect is observed for $L=128$. A rough estimate of the minimum size required to suppress this error, $L \approx 1/\Delta T$, predicts $L=400$. We see that this finite size effect is suppressed for sizes $L \geq 1000$. Moreover, further increase of the system size does not change the behavior of the curves close to $T_c$. We have no estimates of $T_{sp}^+$ for arbitrary values of $q$, but a close look at the curves in Fig.\[fig3\] suggests that $T_{sp}^+$ is closer to $T_c$ than $T_{sp}^-$ is. This is consistent with the behavior observed in Fig.\[pendinf\], where the crossovers occur closer to $T_c$ than in Fig.\[pendsup\]. Our results for $q=6$ are not conclusive. For instance, in the high energy branch we observe the previously discussed crossover, but the slope changes only from $0.6$ to $0.8$, a variation of the same order as the fitting error below the crossover. This is because statistical fluctuations in the energy become very important at the required temperature resolution level ($\Delta T/T_c \leq 10^{-4}$), as can be seen in Fig.\[pendsup\]. Hence, to obtain a clear answer a very large sample size (one can roughly estimate $\sim 2000$) and probably a larger system size are needed. In fact, we performed simulations with a sample size of $50$ (for $L=2048$) without any improvement in the results. We even simulated systems of $L=8192$ with a sample size on the order of $10$, with no appreciable change. The situation is more difficult for the low energy branch, where no clear evidence of a crossover is observed (see Fig.\[pendinf\]).
However, one could expect the existence of an upper spinodal temperature $T_{sp}^+$ located closer to $T_c$ than the lower one $T_{sp}^-$, and therefore a higher temperature resolution (together with larger system and sampling sizes) would be needed to elucidate whether there is metastability or not. Discussion ========== We implemented a CUDA-based parallel Monte Carlo algorithm to simulate the Statistical Mechanics of the q-state Potts model. The code achieves a speedup (compared with an optimized serial code running on a CPU) from 42x on the GTX 280 card up to 155x on a GTX 480, with average times per spin flip of 0.54ns and 0.147ns, respectively. These times are of the same order as previous implementations of the simpler Ising model, without the use of sophisticated programming techniques such as multi-spin coding. Besides the speedup, the present algorithm allows the simulation of very large systems in very short times, namely $\sim10^9$ spins with an average time per $\mathit{MCS}$ of 0.15s. This performance is almost independent of the value of $q$. The key factors in achieving these numbers are the per-thread independent RNG, which is fast and takes only a few registers; the framing scheme, which increases the amount of computation done by each thread and at the same time bounds the number of independent RNGs needed; and finally the cell-packing mapping, which orders the memory access. The possibility of performing high speed simulations at large enough system sizes allowed us to study the metastability problem in the two dimensional system based on Binder’s criterion, namely, on the existence or not of specific heat singularities at spinodal temperatures different from (but very close to) the transition one. Our results provide positive numerical evidence of the existence of metastability in very large systems, at least for $q\geq 9$.
Even though our results for $q=6$ suggest the same behavior as for larger values of $q$, they could also be consistent with the absence of metastability. Hence, one cannot exclude the existence of a second critical value $4<q^*\leq 9$ such that metastability disappears when $4<q<q^*$. Although the present implementation was done for a two dimensional system with nearest neighbors interactions (checkerboard update scheme), its generalization to three dimensional systems and/or longer ranged interactions is feasible, although some features should be adjusted. For the generalization to the 3D case, the checkerboard scheme defining two independent sub-networks persists, but the cell-packing scheme should be adapted accordingly. For the 2D case with first and second neighbors interactions, there are nine independent sub-networks to update instead of two. The combination of both generalizations is straightforward. The present implementation is based on the simplest single-spin flip algorithm, namely Metropolis. Its extension to more sophisticated single spin flip algorithms (see for example Refs.[@LoQiScJi2004], [@SuTo2010]) is also straightforward and represents an interesting prospect in the field. In particular, temperature reweighting [@FeSw1989] or other histogram based techniques (see for example [@Landau-Binder-2009]) can be implemented by keeping track of the energy changes at each spin flip, instead of computing the energy over the whole system at each step. This kind of tracking could be done without loss of performance by implementing a parallel accumulation of local energy changes *on-the-fly*, taking advantage of the GPU’s hierarchical memory scheme.
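As a concrete illustration of the update scheme discussed above, here is a minimal serial sketch of the checkerboard Metropolis step for the Hamiltonian $H=-J\sum_{\langle ij\rangle}\delta(s_i,s_j)$ with $J=1$. It is a sketch, not the paper's GPU kernel: all names are illustrative and no CUDA-specific machinery (framing, cell packing, per-thread RNGs) is shown. The point is the sublattice independence: every neighbor of a site belongs to the opposite color, so each sublattice can be updated in any order, or fully in parallel.

```python
import numpy as np

# Serial sketch of one checkerboard Metropolis sweep for the q-state
# Potts model, H = -J * sum_<ij> delta(s_i, s_j) with J = 1, nearest
# neighbors, periodic boundaries. Sites with (i + j) even and odd form
# the two independent sub-networks.

def equal_neighbors(s, i, j, value):
    """Number of nearest neighbors of site (i, j) equal to `value`."""
    L = s.shape[0]
    nn = (s[(i - 1) % L, j], s[(i + 1) % L, j],
          s[i, (j - 1) % L], s[i, (j + 1) % L])
    return sum(int(v == value) for v in nn)

def checkerboard_sweep(s, q, T, rng):
    """One Monte Carlo sweep: update the white sublattice, then the black."""
    L = s.shape[0]
    for color in (0, 1):
        for i in range(L):
            for j in range(L):
                if (i + j) % 2 != color:
                    continue
                proposal = rng.integers(q)
                # H contributes -1 per satisfied bond, so
                # dE = (old satisfied bonds) - (new satisfied bonds).
                dE = (equal_neighbors(s, i, j, s[i, j])
                      - equal_neighbors(s, i, j, proposal))
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    s[i, j] = proposal

rng = np.random.default_rng(0)
spins = np.zeros((16, 16), dtype=np.int64)   # ordered start, q = 9
checkerboard_sweep(spins, q=9, T=0.01, rng=rng)
# At this low temperature the ordered state is effectively absorbing.
```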
Besides its theoretical interest, the large-$q$ Potts model (or minor variations of it) is widely used for simulating the dynamics of a large variety of systems, such as soap bubbles and foam [@GlWe1992; @SaGl2006], grain growth [@WeGl1992; @ThAlGr2006], gene segregation [@KoAvHaOsNe2010], biological cells [@GrGl1992], tumor migration [@TuSh2002], image segmentation [@Be2010], neural networks [@Kr2008] and social demographics behavior [@Sc2005; @TrBr2009]. The present implementation of the Potts model on GPUs, or easy modifications of it, could prove helpful for some of the above cited applications. The possibility of simulating larger systems and obtaining results faster than usual should be welcomed in the statistical physics community. Our CUDA code is available for download and use under GNU GPL 3.0 at our Group webpage [@potts3site]. #### Acknowledgments We thank C. Bederián for very useful suggestions. We would also like to thank A. Kolton and C. Sánchez for kindly giving access to a GTX 470 and a GTX 480, respectively, for benchmarks and simulations. Fruitful discussions and suggestions from O. Reula and M. Bellone are also acknowledged. This work was partially supported by grants from FONCYT/ANPCYT (Argentina), CONICET (Argentina), SeCyT, Universidad Nacional de Córdoba (Argentina) and the NVIDIA professor partnership program. D. Loison, C. L. Qin, K. D. Schotte and X. F. Jin, Eur. Phys. J. B [**41**]{}, 395–412 (2004). H. Suwa and S. Todo, Phys. Rev.
Lett. [**105**]{}, 120603 (2010). [^1]: These values apply to consumer graphics cards. The Tesla HPC line incorporates up to 6GB of memory (e.g. Tesla C2070), which is configurable as ECC in order to improve reliability. [^2]: For system sizes smaller than $N=512^2$ we use smaller frames, and hence fewer RNGs. But $512\times 512$ is the standard framing choice for most of the work. [^3]: It should be remembered that $J_{Potts} = 2 J_{Ising}$ if we compare our hamiltonian (\[Hamiltonian\]) with the usual Ising hamiltonian, thus giving a $T_c(q=2)$ which is half of the value commonly appearing in Ising model works. [^4]: Using the CC 2.0 ISA does not bring any performance improvement. [^5]: Compiler options [`-O3 -ffast-math -march=native -funroll-loops`]{}. [^6]: It is worth mentioning that in order to compare this value with CPU implementations of the Ising model (e.g., 8ns in [@We2011]), one should take into account that the Potts model update routine requires an extra random number to choose where to flip the spin. In addition, using MWC does not provide the fastest execution times; other RNGs such as LCG-32 give better times, but not completely reliable results [@We2011] due to their short period. For the sake of completeness, we report that eliminating one random number toss and using LCG-32 instead of MWC, we obtain a spin flip time of 14.5ns for our CPU implementation. [^7]: Each block fills the maximum of 1024 threads; we also disable the L1 cache for a (free) slight performance improvement: compiler options [`-Xptxas -dlcm=cg -Xptxas -dlcm=cg`]{}. [^8]: For load instructions it can be [`ca, cg, cs, lu, cv`]{}, and for store instructions [`wb, cg, cs, wt`]{}. The default setting is [`-Xptxas -dlcm=ca -Xptxas -dlcm=wb`]{}.
--- abstract: 'Employing the non-additive Tsallis entropy, $S\sim A^{\beta}$, for large-scale gravitational systems, we disclose that on cosmological scales both the Friedmann equation and the equation of motion of Newtonian cosmology are modified accordingly. We then derive the modified Newton’s law of gravitation, which is valid on large scales. We show that in the relativistic regime, the modified Friedmann equation admits an accelerated expansion, for a universe filled with ordinary matter, without invoking any kind of dark energy, provided the non-extensive parameter is chosen $\beta<1/2$. In the non-relativistic regime, however, the modified Newton’s law of gravitation can explain the flat galactic rotation curves without invoking particle dark matter, provided $\beta \lesssim 1/2$. Our study may be regarded as an alternative explanation for the “dark side of the Universe”, through modification of the gravitational field equations.' address: | Physics Department and Biruni Observatory, Shiraz University, Shiraz 71454, Iran\ Institut für Physik, Universität Oldenburg, Postfach 2503 D-26111 Oldenburg, Germany author: - 'Ahmad Sheykhi[^1]' title: New explanation for the accelerated expansion and flat galactic rotation curves --- Introduction\[Intro\] ===================== It is quite possible to speculate that observed astrophysical and cosmological phenomena such as the late-time acceleration of the Universe’s expansion, the flat rotation curves of spiral galaxies, the observed dynamics of clusters of galaxies, gravitational lensing, etc., which cannot be understood through the standard Newton and Einstein theories of gravitation, simply signal weaknesses of the underlying theory of gravity. If that is true, one may expect such phenomena to be geometrical effects arising from the shortcomings of the theory.
Therefore, one should be capable of explaining the observed cosmological phenomena through modifying the underlying theory of gravity. Many attempts have been made to find possible solutions to the puzzles of accelerated expansion and flat galactic rotation curves from a geometrical perspective. In particular, over the past years, modified theories of gravity have gained considerable attention. [Among them, perhaps the best known is $f(R)$ gravity, which tries to explain the early inflation, the late time acceleration and even the flat galactic rotation curves through modification of the Einstein-Hilbert action (see [@Cap; @Noj; @NOdi; @NO1; @Riazi; @Sobuti; @Cog; @Sot; @Chr; @Sho; @NO2; @Cap2; @NO3; @Jian; @Odi1; @Odi2] and references therein). For a recent comprehensive review of different modified gravity techniques, containing all the necessary information in the context of cosmology and emphasizing inflation, bouncing cosmology and late-time acceleration, we refer to [@NO4].]{} [Another attempt to explain the dark matter puzzle, through a non-relativistic model,]{} is Modified Newtonian Dynamics (MOND) [@Milgrom], which was proposed to address the flat rotation curves of spiral galaxies through modifying Newton’s law of gravitation. Although MOND theory can explain the flat galactic rotation curves, it suffers from several problems. First of all, it is problematic to embed MOND theory within a more comprehensive relativistic theory of gravity, and hence its theoretical origin remains unclear. Second, it predicts that the individual halo associated with a galaxy is infinite in extent, while recent galaxy-galaxy lensing results suggest that galaxy halos may have a maximum extent of about $0.5$ Mpc [@halo]. Some authors have also tried to explain the flat rotation curves through modification of Einstein gravity [@Sobuti1; @Sobuti2; @Mim1; @Myr2; @MimMOND; @ASJ].
In this paper, we would like to propose an alternative perspective for attacking the problems of accelerated expansion as well as flat galactic rotation curves, through modifying the Friedmann equation and Newton’s law of gravitation using thermodynamic arguments. In $1902$ Gibbs pointed out that, in systems where the partition function diverges, the standard Boltzmann-Gibbs theory is not applicable, and large-scale gravitational systems are known to fall within this class. Hence, the usual Boltzmann-Gibbs additive entropy must be generalized to a non-additive (non-extensive) entropy (the entropy of the whole system is not necessarily the sum of the entropies of its sub-systems). In $1988$ Tsallis generalized standard thermodynamics to a non-extensive one, which can be applied in all cases, while still possessing standard Boltzmann-Gibbs theory as a limit [@Tsallis]. Based on this, and using statistical arguments, Tsallis and Cirto argued that the entropy of a black hole does not obey the area law and can be modified as [@Tsa] $$\label{TEnt} S=\gamma A^{\beta},$$ where $A$ is the horizon area, $\gamma$ is an unknown constant, and $\beta$ is known as the non-extensive parameter. On the other hand, the thermodynamical interpretation of gravitational field equations, based on the profound connection between the first law of thermodynamics on the boundary and the gravitational field equations in the bulk, is now an established fact [@Jac; @Eling]. It was argued that, given an entropy expression at hand, in any gravity theory one can rewrite the Friedmann equations of the Friedmann-Robertson-Walker (FRW) universe in the form of the first law of thermodynamics on the apparent horizon and vice versa [@Wang; @Fro; @Pad1; @PPad; @CaiKim; @Cai2; @Cai3; @Cai4; @Cai5; @Shey1; @Shey2; @Pad2; @Shey3; @JSS; @JS].
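The non-additivity of Eq. (\[TEnt\]) is easy to see numerically: for $\beta\neq 1$, the entropy of a horizon of area $2A$ is not twice the entropy of area $A$. A two-line check; $\gamma=1$ and $\beta=0.6$ are purely illustrative values, not fixed by the text (which only requires $\beta<2$, with $\beta=1$ recovering the additive area law).

```python
# Non-additivity of the Tsallis entropy S = gamma * A**beta, Eq. (TEnt).
# gamma = 1 and beta = 0.6 are illustrative choices, not values taken
# from the text.

def tsallis_entropy(A, beta, gamma=1.0):
    return gamma * A ** beta

A, beta = 1.0, 0.6
whole = tsallis_entropy(2 * A, beta)     # entropy of the combined area
parts = 2 * tsallis_entropy(A, beta)     # sum over the two sub-areas
print(whole, parts)   # 2**0.6 ~ 1.516 versus 2.0: sub-additive for beta < 1
```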
[Recently, for gravitational systems whose entropy takes the form of the non-extensive Tsallis entropy, the thermodynamical interpretation of the gravitational field equations, as well as the effects of the non-extensive parameter in the context of cosmology, have attracted a lot of interest. It was shown that the non-extensive parameter changes the strength of the gravitational constant, and consequently the energy density of the dark components of the universe, requiring more (less) dark energy to provide the observed late time universe acceleration [@Barb; @Nunes]. In the context of the non-extensive Kaniadakis statistics [@Kan], the Jeans length was investigated and the results were compared with the Jeans length obtained in the non-extensive Tsallis statistics [@Abero]. Cosmological scenarios based on the non-extensive Tsallis entropy have been explored in [@Em]. It was shown that the Universe exhibits the usual thermal history, with the sequence of matter and dark energy eras, and, depending on the value of the non-extensive parameter, this scenario may exhibit a variety of dark energy models [@Em]. Taking the entropy associated with the apparent horizon of the Friedmann-Robertson-Walker (FRW) Universe in the form of the Tsallis entropy, and assuming the first law of thermodynamics, $dE=T_hdS_h+WdV$, holds on the apparent horizon, the modified Friedmann equations describing the dynamics of the universe with any spatial curvature were extracted [@SheT]. It was argued that with an appropriate choice for the non-extensive parameter, this model is capable of reproducing the late-time cosmic acceleration, as well as the early deceleration, in the absence of dark energy [@SheT]. The studies on Tsallis cosmology were also generalized to the case with a variable non-extensive parameter [@NOS1].
It was shown that the extra terms, arising from the non-extensive entropy, can play the role of an effective dark energy describing the evolution of the universe from the early epoch to the late time acceleration [@NOS1]. More recently, it was shown that it is quite possible to establish a correspondence between non-extensive Tsallis cosmology and cosmology with a fluid with a redefined equation of state, for both constant and variable non-extensive parameter [@NOS2]. In this viewpoint, the effective fluid can successfully drive not only the present acceleration but also the early inflation, without spoiling the correct late-time acceleration [@NOS2].]{} In this paper, we consider the Tsallis entropy given in (\[TEnt\]) as the entropy expression of gravitational systems, and disclose that, in the relativistic regime, it modifies the Friedmann equation, and the resulting equation is capable of providing, naturally, the late time accelerated expansion without invoking any kind of dark energy. In the non-relativistic regime, however, the modification of Newton’s law of gravitation leads to an explanation of the flat rotation curves of galaxies without the need for particle dark matter. [Let us stress the similarities and differences of the present work with respect to alternative theories of gravity, in particular $f(R)$ gravity. First of all, similar to our work, $f(R)$-gravity theories can also explain the dark side of the Universe through modification of the geometrical part of Einstein gravity. For a comprehensive and excellent review of $f(R)$-gravity we refer to [@NO2], where it was shown that $f(R)$ theory can be considered as a unified description of the history of the Universe from the early-time inflation to the late-time acceleration. However, in $f(R)$-gravity, in order to establish a correspondence between the field equations and the first law of thermodynamics, a treatment with nonequilibrium thermodynamics is required [@Eling; @Cai5].
In this case the first law of thermodynamics acquires an additional entropy production term, generated internally due to the non-equilibrium treatment of the system. This is in contrast to Einstein gravity, and also to its modification when the entropy of the system takes the form of the non-extensive Tsallis entropy. It was shown that, in the presence of the non-extensive entropy, the Friedmann equation of FRW cosmology in a nonflat [@Em; @SheT] and flat [@NOS1; @NOS2] universe can be deduced from the first law of thermodynamics, on the apparent/Hubble horizon, in a complete equilibrium situation. This is one of the main differences between our work and $f(R)$ gravity. Besides, to explain the flat galactic rotation curves, we apply the non-relativistic modified Newton’s law of gravity based on the Tsallis entropy, while in $f(R)$-gravity the problem is addressed through modifying General Relativity [@Sobuti; @Chr; @Sho].]{} This paper is outlined as follows. In the next section, we show how the modified Friedmann equation in Tsallis cosmology leads to late time accelerated expansion in a Universe filled with ordinary baryonic matter. In Section III, we derive the modified Newton’s law of gravitation based on the Tsallis entropy. In Section IV, we show that the modified Newton’s law of gravitation can also be extracted from the entropic force scenario. In Section V, we employ the modified Newton’s law of gravity and disclose that it can explain the flat rotation curves of spiral galaxies. In this viewpoint, dark matter is only a geometrical effect originating from the modification of gravity. The last section is devoted to conclusions. Accelerated universe in Tsallis cosmology\[FIRST\] ================================================== Let us start by deriving the modified Friedmann equation based on the non-extensive Tsallis entropy from the first law of thermodynamics.
This problem was already studied in [@Em; @SheT; @NOS1; @NOS2], but since it provides a basis for the next sections, for completeness we briefly review the derivation here. Following [@SheT], we assume the non-extensive Tsallis entropy affects the geometry part of the Friedmann equations, and hence we keep the energy content of the Universe in the form of the standard perfect fluid. It is important to note that a different viewpoint was recently adopted in [@NOS2], by assuming that the non-extensive entropy modifies the energy density, and hence the pressure, of the Universe. As a result, one should redefine the equation of state of the perfect fluid [@NOS2]. Suppose the background spacetime is given by the FRW geometry, $$ds^2={h}_{\mu \nu}dx^{\mu} dx^{\nu}+\tilde{r}^2(d\theta^2+\sin^2\theta d\phi^2).$$ In the above line element $\tilde{r}=a(t)r$, $x^0=t, x^1=r$, and $h_{\mu \nu}$=diag $(-1, a^2/(1-kr^2))$ stands for the metric of the two-dimensional subspace. We also assume our Universe is bounded by an apparent horizon with radius $ \tilde{r}_A={1}/{\sqrt{H^2+k/a^2}}. $ Using the definition of the surface gravity $\kappa$ of the apparent horizon, it is easy to show that the temperature on the apparent horizon is given by [@Cai2] $$\label{T} T=\frac{\kappa}{2\pi}=-\frac{1}{2 \pi \tilde r_A}\left(1-\frac{\dot {\tilde r}_A}{2H\tilde r_A}\right).$$ Suppose the energy-momentum tensor of the Universe is $ T_{\mu\nu}=(\rho+p)u_{\mu}u_{\nu}+pg_{\mu\nu}$; then the conservation equation, $\nabla_{\mu}T^{\mu\nu}=0$, for the FRW geometry implies the continuity equation $\dot{\rho}+3H(\rho+p)=0$. Because our Universe is expanding, as a thermodynamical system, work is done due to the volume change of the system.
The density of this work, on the FRW background, is given by [@Hay2] $$\label{Work2} W=\frac{1}{2}(\rho-p).$$ Finally, we propose that the first law of thermodynamics holds on the apparent horizon, $$\label{FL} dE = T dS + WdV.$$ Note that for a pure de-Sitter space, $\rho=-p$, the first law reduces to $dE = T dS -pdV$. Denoting the total energy of the universe by $E=\rho V$, where $V=\frac{4\pi}{3}\tilde{r}_{A}^{3}$, after differentiating we arrive at $$\label{dE1} dE=4\pi\tilde {r}_{A}^{2}\rho d\tilde {r}_{A}+\frac{4\pi}{3}\tilde{r}_{A}^{3}\dot{\rho} dt.$$ Substituting $\dot{\rho}$ from the continuity equation yields $$\label{dE2} dE=4\pi\tilde {r}_{A}^{2}\rho d\tilde {r}_{A}-4\pi H \tilde{r}_{A}^{3}(\rho+p) dt.$$ Next, we consider the evolution of the Tsallis entropy, which we assume has the form (\[TEnt\]). Taking the differential of the Tsallis entropy, we find $$\label{dS} dS= 8\pi \gamma \beta (4 \pi \tilde{r}_{A}^2 )^{\beta-1} \tilde {r}_{A} d\tilde {r}_{A}.$$ Inserting Eqs. (\[T\]), (\[Work2\]), (\[dE2\]) and (\[dS\]) in the first law (\[FL\]), we obtain $$\label{Fried1} \frac{\gamma \beta}{\pi \tilde {r}_{A}^3} \left(4\pi \tilde {r}_{A}^2\right)^{\beta-1} d\tilde {r}_{A}= H(\rho+p) dt.$$ Using the continuity equation, we get $$\label{Fried2} -\frac{2}{\tilde {r}_{A}^3} \left(4\pi \tilde {r}_{A}^2\right)^{\beta-1} d\tilde {r}_{A} = \frac{2\pi }{3\gamma \beta}d\rho.$$ Integrating yields $$\label{Frie3} \frac{1}{\tilde {r}_{A}^{4-2\beta}}= \frac{2\pi (2-\beta) }{3\gamma \beta} \left(4\pi \right)^{1-\beta} \rho,$$ where the constant of integration is set equal to zero. Finally, we define the constant $\gamma$ as $$\label{gamma} \gamma\equiv\frac{2-\beta }{4\beta L_p^2 } \left(4\pi \right)^{1-\beta},$$ where $L_p=\sqrt{G\hbar/c^3}$ is the Planck length. Since the entropy is positive definite ($\gamma>0$), the above definition also implies $\beta<2$. After using the definition of $\tilde {r}_{A}$, Eq.
(\[Frie3\]) immediately transforms to $$\label{Fried4} \left(H^2+\frac{k}{a^2}\right)^{2-\beta} = \frac{8\pi L_p^2} {3} \rho.$$ In this way we derive the modified Friedmann equation describing the evolution of the Universe in Tsallis cosmology, based on the non-extensive Tsallis entropy. When $\beta=1$, the standard Friedmann equation is recovered. The second Friedmann equation can easily be derived by combining the continuity equation with Eq. (\[Fried4\]). Now we want to study the cosmological consequences of the obtained modified Friedmann equation. It is a matter of calculation to show that the second derivative of the scale factor satisfies the following equation $$\begin{aligned} \frac{\ddot{a}}{a} \left(H^2+\frac{k}{a^2}\right)^{1-\beta}=-\frac{4\pi L_{p}^{2}}{3(2-\beta)} \left[(2\beta-1) \rho +3p\right]. \label{2Fri5}\end{aligned}$$ Therefore, accelerated expansion ($\ddot{a}>0$) can be achieved provided $$\begin{aligned} (2\beta-1) \rho +3p <0, \ \ \Rightarrow \ \ \omega< \frac{1-2\beta}{3}, \label{w1}\end{aligned}$$ where $\omega=p/\rho$ denotes the equation of state parameter. Condition (\[w1\]) has interesting consequences. Let us consider it carefully in two cases. In the first case, where $\beta\geq1/2$, we always need $\omega< 0$ as a condition for an accelerated universe. In the second case, where $\beta<1/2$, it is quite possible to have $\omega\geq0$ while our universe is still accelerating ($\ddot a>0$). This is a very interesting result, which confirms that, in the framework of Tsallis cosmology, the current acceleration of the Universe’s expansion can be understood in the presence of ordinary matter with $\omega\geq 0$. Precisely speaking, we can consider a universe filled with pressureless baryonic matter which still enjoys an accelerated expansion, without invoking any dark companion for its matter/energy content. The above discussion can also be confirmed by looking explicitly at the scale factor.
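Indeed, the spatially flat ($k=0$), pressureless solution discussed below can be checked directly against Eq. (\[Fried4\]): with $\rho\propto a^{-3}$, the power-law ansatz gives $$a(t)\propto t^{(4-2\beta)/3}\ \Longrightarrow\ H=\frac{\dot a}{a}=\frac{4-2\beta}{3t},\qquad \left(H^{2}\right)^{2-\beta}\propto t^{-(4-2\beta)}\propto a^{-3}\propto\rho,$$ so both sides of Eq. (\[Fried4\]) carry the same time dependence, and $\ddot a>0$ holds precisely when the exponent $(4-2\beta)/3$ exceeds unity, i.e. for $\beta<1/2$.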
Assuming a flat FRW universe filled with pressureless matter ($p=0$), the Friedmann equation (\[Fried4\]) admits the solution $a(t)\sim t^{(4-2\beta)/3}$. This implies that $ \ddot{a}(t)\propto (2-\beta)(1-2\beta) \ t^{-(2+2\beta)/3}, $ where the constant of proportionality is also positive definite [@SheT]. Thus, for $\beta<1/2$ we have an accelerated universe ($\ddot a>0$), in accordance with condition (\[w1\]). It was also argued that not only the accelerated expansion but also the early deceleration, as well as the age problem of the Universe, can be accounted for automatically in the context of Tsallis cosmology, without invoking an additional dark component of the energy [@SheT]. [We emphasize here that the authors of [@NOS1; @NOS2] argued that the modified Friedmann equations derived from the Tsallis entropy can reproduce the late-time acceleration provided one takes an effective dark energy [@NOS1] or a redefined fluid with a generalized equation of state [@NOS2]. They also established a correspondence between the modified cosmology based on non-extensive thermodynamics and the holographic dark energy model as well as $f(R)$ gravity [@NOS1]. It is important to note that for the derivation of the Friedmann equations, the authors of [@NOS1; @NOS2] assumed the first law of thermodynamics on the Hubble horizon in the form $dQ=TdS$, where $dQ$ is the heat flux crossing the horizon, and the spacetime is spatially flat. In contrast, here we derived the modified Friedmann equation for any spatial curvature by assuming that $dE=TdS+WdV$ holds on the apparent horizon.
Another difference between our work and [@NOS1; @NOS2] is that we could reproduce the late-time acceleration in the presence of ordinary matter, without needing to redefine the fluid or to invoke an effective dark energy.]{} [Finally, it is worth noting that the correspondence between Tsallis cosmology and the standard Friedmann equation with fluids of redefined equation of state, established in [@NOS2], comes from the fact that the field equations of General Relativity, and hence the Friedmann equations, relate the geometry of spacetime to its energy content. Thus, any modification of the geometry can be translated into a modification of the energy content and vice versa. In Tsallis cosmology based on the non-extensive entropy, it is quite possible to place the modification in the geometry part of the gravitational field equations, and keep the energy content as a standard perfect fluid [@SheT]. This is reasonable, because the definition of the entropy is based on the area (geometry) of the system, and thus any modification of the entropy should affect the geometry part of the field equations and vice versa [@Eling; @Shey1; @Shey2].]{} Modified Newton’s law of gravity\[NL1\] ======================================= In this section we first derive the equation of motion describing the evolution of the universe in Newtonian cosmology. Using this equation, we then derive the modified Newton’s law of gravitation based on the non-extensive Tsallis entropy. We start from the Friedmann equation (\[Fried4\]) and take its time derivative. We arrive at $$\begin{aligned} \label{FrN1} (2-\beta) \frac{\ddot{a}}{a} \left(\dot{a}^2+k\right)^{1-\beta}=-\frac{4\pi L_{p}^{2}}{3} \left[(2\beta-1) \rho +3 p\right]a^{2-2\beta}.
\nonumber \\\end{aligned}$$ When $\beta=1$, this reduces to $$\begin{aligned} \label{FrNb1} \frac{\ddot{a}}{a}=-\frac{4\pi L_{p}^{2}}{3} \left( \rho +3 p\right),\end{aligned}$$ which is the evolution equation for the scale factor in standard cosmology. In Newtonian cosmology we also take the spacetime to be Minkowskian with $k=0$, and work in units where $\hbar=c=1$, so that $L_{p}^{2}=G$. Therefore, Eq. (\[FrN1\]) reduces to $$\begin{aligned} \label{FrN2} (2-\beta) \frac{\ddot{a}}{a}=-\frac{4\pi G}{3} \left[(2\beta-1) \rho +3p\right] \left(\frac{a}{\dot{a}}\right)^{2-2\beta}.\end{aligned}$$ We consider a compact spatial region $V$ with a compact boundary $\mathcal S$, which is a sphere with physical radius $R= a(t)r =H^{-1}$, where $r$ is a dimensionless co-moving coordinate which remains constant for any cosmological object partaking in free cosmic expansion. The active gravitational mass in General Relativity, inside the volume $V$, is defined as [@Cai4] $$\label{ActM} \mathcal M =2 \int_V{dV\left(T_{\mu\nu}-\frac{1}{2}Tg_{\mu\nu}\right)u^{\mu}u^{\nu}}.$$ A simple calculation gives $$\label{ActM2} \mathcal M =(\rho+3p)\frac{4\pi}{3}R^3.$$ In order to pass from General Relativity to Newtonian gravity, we also replace the active gravitational mass $\mathcal M$ with the total mass $M= \rho V=4\pi \rho R^3/3$. This is equivalent to the replacement $\rho+3p \rightarrow \rho$ in Eq. (\[FrN2\]), which gives $$\begin{aligned} \label{FrN3} (2-\beta) \frac{\ddot{a}}{a}=-\frac{4\pi G}{3}\rho(2\beta-1) \left(\frac{a}{\dot{a}}\right)^{2-2\beta}.\end{aligned}$$ This is nothing but the modified dynamical equation describing the evolution of the Universe in Newtonian cosmology. In the limiting case $\beta=1$, we find $$\begin{aligned} \label{FrNb3} \frac{\ddot{a}}{a}=-\frac{4\pi G}{3}\rho,\end{aligned}$$ which is the standard equation of motion in Newtonian cosmology.
On the other hand, the acceleration of a test particle $m$ near the surface $\mathcal S$ can be written as $$\label{FN1} \ddot{R}=\ddot{a} r=F/m,$$ where $F$ is the gravitational force between $m$ and $M$. Equating $\ddot{a}$ in Eqs. (\[FrN3\]) and (\[FN1\]), we find $$\begin{aligned} \label{FrN4} F=- \left(\frac{2\beta-1}{2-\beta}\right)\frac{4\pi G}{3}\rho m R \left(\frac{a}{\dot{a}}\right)^{2-2\beta}.\end{aligned}$$ Using the fact that $R=1/H=a/\dot{a}$ and $\rho=M/V$, the above equation can be rewritten as $$\begin{aligned} \label{FrN5} F=- \left(\frac{2\beta-1}{2-\beta}\right)\frac{G M m}{R^{2 \beta}}.\end{aligned}$$ In this way we derive the modified Newton’s law of gravity based on the non-extensive Tsallis entropy. When $\beta=1$, one recovers the well-known Newton’s law of gravitation. Newton’s law from entropic force\[NLentropic\] ============================================== We now want to employ the idea of entropic gravity proposed by Verlinde [@Ver] and show that Newton’s law of gravity gets modified when the entropy of the system takes the form of the Tsallis entropy. According to Verlinde’s hypothesis, gravity can be regarded as an entropic force caused by changes in the information associated with the positions of material bodies. Using first principles, namely the equipartition law of energy in statistical mechanics together with the holographic principle, he derived Newton’s law of gravitation, the Poisson equation and, in the relativistic regime, the Einstein field equations of General Relativity [@Ver]. Although it was already pointed out by Padmanabhan [@Pad0] that gravity has a statistical origin and, in particular, that one can use the equipartition law of energy to provide a thermodynamic interpretation of gravity, the notion that gravity is not a fundamental force and can be identified as an entropic force was first put forward by Verlinde [@Ver].
According to Verlinde, when a test particle moves away from the holographic screen, it experiences an effective force satisfying $$\label{F} F\triangle x=T \triangle S,$$ where $T$ and $\triangle S$ are, respectively, the temperature and the entropy of the surface, and $\triangle x$ is the displacement of the test particle from the holographic screen. Thus, in order to have a non-vanishing entropic force, we need a non-zero temperature. Suppose the holographic screen is a spherically symmetric surface $\mathcal {S}$ with area $A=4 \pi R^2$ that acts as a storage device for information, and that the holographic principle holds. It is natural to assume that the total number of bits $N$ is proportional to the area/entropy, $N \sim A \sim S$. The total energy $E$ of the system inside the holographic screen is distributed over these bits, and thus the temperature on the surface is given by the equipartition law of energy, $$\label{E} E=\frac{1}{2}Nk_B T \ \Rightarrow \ T=\frac{2E}{N k_B}.$$ We also assume that the total energy of the system can be written as $E=Mc^2$, where $M$ is the total mass, uniformly distributed inside the holographic spherically symmetric screen [@Ver]. The surface $\mathcal {S}$ is located between the test mass $m$ and the mass distribution $M$, and the test mass is assumed to be very close to the surface compared to its reduced Compton wavelength $\lambda_m={\hbar}/{(mc)}$. Finally, we write the number of bits on the holographic surface as $$\label{NA} N=4S=\frac{2-\beta}{\beta G} (4 \pi)^{1-\beta} A^{\beta},$$ where we have assumed $S=\gamma A^{\beta}$ and used definition (\[gamma\]). Following [@Ver], we postulate that the change of the entropy associated with the information on the holographic screen equals $$\label{deltaS} \triangle S =2 \pi k_B \ \ \ \rm {when} \ \ \ |\triangle x|= \eta \lambda_m,$$ where $\eta$ is a constant which will be fixed later.
We also assume that the entropy gradient points radially from the outside of the surface to the inside. Combining relations (\[E\]), (\[NA\]) and (\[deltaS\]) with (\[F\]), and working in units where $\hbar=c=1$, we arrive at $$F= -\frac{\beta}{\eta (2-\beta)} \frac{GMm}{R^{2\beta}}.$$ Finally, we redefine $\eta=\beta/(2\beta-1)$ and rewrite the above equation in the form $$\label{MNL} F= \frac{(1-2\beta)}{(2-\beta)} \frac{GMm}{R^{2\beta}}.$$ This is the modified Newton’s law of gravitation derived from the viewpoint that gravity is an entropic force, under the assumption that the entropy of a gravitational system takes the form of the non-additive Tsallis entropy. It is clear that our result from the entropic force approach coincides with the result obtained in the previous section. Our investigation shows that, with the correction to the area law, Newton’s law of gravitation gets modified accordingly. It is important to note that the modified Newton’s law of gravitation given in (\[MNL\]) is only valid for large-scale gravitational systems, for example in the outskirts of galaxies, where the gravitational field is strong. This is due to the fact that the Tsallis entropy is the modified entropy for non-extensive systems such as large-scale gravitational ones [@Tsa]. Thus, one may expect that in weak gravitational systems, for instance inside a galaxy, on the scale of the solar system and on the Earth, the usual Newton’s law of gravity should be recovered. Therefore, we postulate $\beta=1$ on these scales. As a result, the non-extensive entropy formula no longer holds and the entropy obeys the well-known area law, leading to consequences consistent with experiments and observations. In the next section, we shall use this modified Newton’s law of gravity and show that it is indeed capable of explaining the flat rotation curves of spiral galaxies.
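The agreement between the entropic-force route and the Newtonian-cosmology route of the previous section can be verified symbolically; the following is a minimal sketch of the algebra above, with $\hbar=c=k_B=1$:

```python
# Symbolic check of the entropic-force derivation: S = gamma*A**beta gives
# N = 4S via Eq. (NA), T from equipartition, Delta S = 2*pi, and
# |Delta x| = eta*lambda_m with eta = beta/(2*beta - 1).
import sympy as sp

G, M, m, R, beta = sp.symbols('G M m R beta', positive=True)

A = 4*sp.pi*R**2
N = (2 - beta)/(beta*G) * (4*sp.pi)**(1 - beta) * A**beta   # Eq. (NA)
T = 2*M/N                                                    # equipartition, E = M
dS = 2*sp.pi                                                 # Eq. (deltaS), k_B = 1
eta = beta/(2*beta - 1)
dx = eta/m                                                   # |Delta x| = eta/m

F = sp.simplify(-T*dS/dx)   # minus sign: entropy gradient points inward
target = (1 - 2*beta)/(2 - beta) * G*M*m/R**(2*beta)         # Eq. (MNL)
assert sp.simplify(F - target) == 0
assert sp.simplify(F.subs(beta, 1) + G*M*m/R**2) == 0        # Newtonian limit
```

Both assertions confirm that the entropic force reproduces Eq. (\[MNL\]) and reduces to the inverse-square law at $\beta=1$.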
Explanation of the galaxy rotation curves\[NL1\] ================================================== There is a large body of observational data confirming that the rotational velocity curves of spiral galaxies rise in proportion to the distance from the center, $v\propto r$, inside the galaxy, and usually remain *almost* flat far from galactic centers, typically beyond $30-40$ kpc. Inside the galaxy, these observations can be completely understood via Newtonian gravity. Outside a spiral galaxy, however, Newton’s law of gravitation is not capable of explaining the rotation curves, and there is indeed a contradiction between observation and the prediction of the theory, since Newton’s law of gravitation predicts that objects far from the galactic center have lower velocities, $v \propto {r^{-1/2}}$, while observations imply that the velocity curves flatten out to $v\simeq constant$. It is well established that the baryonic matter of galaxies does not provide sufficient gravitation to explain the observed dynamics of these systems. The most widely adopted way to resolve these difficulties is the dark matter hypothesis, which suggests that the mass of galaxies continues to grow even when there is no luminous component to account for this increase. According to this hypothesis, all visible galaxies are surrounded by massive non-luminous matter. Besides the dark matter proposal, alternative theories of gravitation have also been proposed and debated to account for the flat rotation curves. As we mentioned in the introduction, Milgrom [@Milgrom] tried to explain the flat rotation curves of galaxies by modifying Newton’s law of gravity (MOND), representing them as a geometrical effect. However, the MOND theory suffers from the lack of a firm theoretical origin.
![The speed of a test particle around a spiral galaxy in terms of distance for $\beta=0.40$ and different values of galaxy mass $M_i=\gamma_{i} \times 10^9 M_{\odot}$.[]{data-label="Fig1"}](Fig1.eps){width="7cm"} ![The speed of a test particle around a spiral galaxy in terms of distance for $\beta=0.49$ and different values of galaxy mass $M_i=\gamma_{i} \times 10^9 M_{\odot}$.[]{data-label="Fig2"}](Fig2.eps){width="7cm"} ![The speed of a test particle around a spiral galaxy for a typical galaxy with mass $M=25\times 10^9 M_{\odot}$.[]{data-label="Fig3"}](Fig3.eps){width="7cm"} Here we attack this problem through a modification of Newton’s law of gravity based on the Tsallis entropy. Interestingly enough, we will show that the modified Newton’s law at large scales, given in Eq. (\[MNL\]), is capable of naturally explaining the flat rotation curves of spiral galaxies. We postulate that $\beta=1$ inside galaxies, where the usual Newton’s law holds, and $\beta<1/2$ at large distances, in the galactic outskirts. This implies that inside a galaxy, at distance $r$ from its center, we must have $$\label{MNL1} a=\frac{v^2}{r}= \frac{GM(r)}{r^2} \rightarrow \ v\propto r,$$ where we have used the fact that inside the galaxy the total mass which contributes to the velocity is $M(r)=4\pi r^3 \rho/3$. However, at distances $r$ large enough that there is no luminous galactic component, we have $M(r)=M\simeq constant$, and thus from Eq. (\[MNL\]) we obtain $$\begin{aligned} \label{MNL2} &&\frac{v^2}{r}= \left(\frac{1-2\beta}{2-\beta}\right) \frac{GM}{r^{2\beta}} \nonumber \\ && \Rightarrow v(r) =\sqrt{\left(\frac{1-2\beta}{2-\beta}\right)GM r^{1-2\beta}}. \label{MNL3}\end{aligned}$$ Clearly $\beta\neq 1/2$, and in particular we should have $\beta < 1/2$; for later convenience, we take the values of $\beta$ to be close to $1/2$.
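The near-flatness of Eq. (\[MNL3\]) for $\beta$ slightly below $1/2$ can be seen in a few lines; this is a numerical illustration with assumed inputs (galaxy mass as in Fig. 3, rounded constants), not a fit to data:

```python
# Numerical sketch: v(r) = sqrt(((1-2*beta)/(2-beta)) * G * M * r**(1-2*beta))
# from Eq. (MNL3) is nearly flat for beta slightly below 1/2.
import math

G = 6.674e-11                 # m^3 kg^-1 s^-2
M_sun = 2.0e30                # kg (rounded)
M = 25e9 * M_sun              # illustrative galaxy mass, as in Fig. 3
kpc = 3.086e19                # m

def v_outskirts(r, beta):
    """Orbital speed at large r from Eq. (MNL3), SI units."""
    return math.sqrt((1 - 2*beta)/(2 - beta) * G * M * r**(1 - 2*beta))

beta = 0.49
v30, v60 = v_outskirts(30*kpc, beta), v_outskirts(60*kpc, beta)
# v scales as r**((1-2*beta)/2) = r**0.01: doubling r changes v by 2**0.01,
# i.e. under one percent -- an essentially flat rotation curve
assert abs(v60/v30 - 2**0.01) < 1e-9
```

For $\beta=1/2$ exactly the prefactor $(1-2\beta)$ vanishes, which is why the text requires $\beta$ strictly below $1/2$.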
In order to have a better understanding of the behaviour of $v$ in terms of distance, let us apply the above formula to a typical spiral galaxy. For this purpose, we assume $M=M_i$ is the total mass of the galaxy. We also set the Newtonian gravitational constant $G \simeq 6.674\times 10^{-11}\, \rm{m^3\, kg^{-1}\, s^{-2}}$ and take the mass of the Sun as $M_{\odot} \simeq 10^{30}\, \rm{kg}$. Since the mass of a typical galaxy is of order $\sim 10^9 M_{\odot}$, in these figures we let the mass of the galaxy range over $8\times 10^9 M_{\odot} <M_i < 107 \times 10^9 M_{\odot} $, which are reasonable values, at least for small spiral galaxies. We have plotted Figs. 1-3 for different values of the mass $M$ and the non-extensive parameter $\beta$. In all figures, we observe that the speed of a test particle increases at small distances (inside the galaxy) and tends to an almost constant value at large distances, in the galactic outskirts. From Figs. 1 and 2 we see that for a fixed value of $\beta<1/2$, but close to it, at any distance the orbital speed increases with increasing galaxy mass. Also, from Fig. 3 one can see that for a fixed value of $M$, at any distance, the orbital speed increases with increasing non-extensive parameter $\beta$. These figures are compatible with astrophysical data [@NP; @Man; @Bri]. Note that here we have presented the ideas and have shown how the modified Newton’s law of gravity given in (\[MNL3\]) can, at large distances, explain the flat galactic rotation curves. We leave a detailed fit of this model to observational data for future studies. Conclusions {#Con} =========== Using the non-extensive Tsallis entropy for large-scale gravitational systems, we showed that, in the relativistic cosmological background, the Friedmann equations describing the evolution of the FRW Universe get modified accordingly. Starting from the first law of thermodynamics on the apparent horizon, we derived the modified Friedmann equation.
We observed that when the non-extensive parameter satisfies $\beta<1/2$, the late-time acceleration of the cosmic expansion can be achieved in the presence of ordinary matter. This implies that one may consider a universe filled with baryonic matter and still enjoy an accelerated expansion without invoking any dark companion for its matter/energy content. On the other hand, in the regime of non-relativistic gravity, one is able to reproduce the equation of motion describing the evolution of the universe in Newtonian cosmology. We then derived the modified Newton’s law of gravitation based on the Tsallis entropy from two approaches. The first is an inverse approach starting from Newtonian cosmology, and the second proceeds through the entropic force scenario [@Ver]. We showed that both approaches lead to the same result. Interestingly enough, we observed that flat galactic rotation curves can be explained through the modified Newton’s law of gravitation, provided $\beta \lesssim 1/2$, without the need for particle dark matter. Finally, we would like to stress that in this work, in contrast to $f(R)$ gravity theories which try to explain the flat galactic rotation curves through modifying Einstein gravity, we addressed the problem in the context of a non-relativistic modified Newton’s law of gravity. In a sense our work can be placed in the category of MOND theories; the advantage of our approach, however, is that its theoretical origin is well established. [99]{} S. Capozziello, Int. J. Mod. Phys. D [**11**]{}, 483 (2002). [ S. Nojiri and S. D. Odintsov, Phys. Rev. D [**68**]{}, 123512 (2003), \[arXiv:hep-th/0307288\].]{} S. Nojiri and S. D. Odintsov, Phys. Lett. B [**631**]{}, 1 (2005), \[hep-th/0508049\]. S. Nojiri and S. D. Odintsov, Phys. Rev. D [**74**]{}, 086005 (2006) \[hep-th/0608008\]. R. Zaregonbadi, M. Farhoudi, N. Riazi, Phys. Rev. D [**94**]{}, 084052 (2016). \[arXiv:1608.00469\]. Y. Sobouti, Astron. and Astrophys. [**464**]{}, 921 (2007). [ G.
Cognola, E. Elizalde, S. Nojiri, S. D. Odintsov, L. Sebastiani and S. Zerbini, Phys. Rev. D [**77**]{}, 046009 (2008). \[arXiv:0712.4017\].]{} T. P. Sotiriou and V. Faraoni, Rev. Mod. Phys. [**82**]{}, 451 (2010),\[arXiv:0805.1726\]. C. G. Boehmer, T. Harko, F. S. N. Lobo, Astropart. Phys. [**29**]{}, 386 (2008) \[arXiv:0709.0046\] [ S. Nojiri and S. D. Odintsov, Phys. Rept. [**505**]{}, 59 (2011), \[arXiv:1011.0544\].]{} [ S. Capozziello, M. De Laurentis, Phys. Rept. [**509**]{}, 167 (2011), \[arXiv:1108.6266\].]{} [ S. Nojiri, S.D. Odintsov, Mod. Phys. Lett. A [**29**]{}, No. 40, 1450211 (2014), \[arXiv:1408.3561\].]{} [ F. Shojai, A. Shojai, Gen. Relat. Gravit. [**46**]{}, 4, (2014) 1704, \[arXiv:1404.0299\].]{} J.-h. He, A.J. Hawken, B. Li, L. Guzzo, Phys. Rev. Lett. [**115**]{}, 071306 (2015), \[arXiv:1501.00846\]. [ S. D. Odintsov, V. K. Oikonomou, Phys. Rev. D [**99**]{}, 104070 (2019) \[arXiv:1905.03496\].]{} [ S. D. Odintsov, V. K. Oikonomou, Phys. Rev. D [**99**]{}, 064049 (2019) \[arXiv:1901.05363\].]{} [ S. Nojiri, S.D. Odintsov, V.K. Oikonomou, Phys. Rept. [**692**]{}, 1 (2017), \[arXiv:1705.11098\].]{} M. Milgrom, Astrophys. J. [**270**]{}, 365 (1983);\ M. Milgrom, Astrophys. J. [**270**]{}, 371 (1983). Hoekstra, H., Yee, H.K.C. and Gladders, M.D., astro-ph/0109514 Yousef Sobouti, arXiv:0810.2198 Yousef Sobouti, Dark Matter in Astrophys. Particle Phys. 356 (2009). A. H. Chamseddine and V. Mukhanov, *Mimetic Dark Matter*, JHEP [**1311**]{}, 135 (2013). R. Myrzakulov, et al., Class. Quant. Grav. [**33**]{}, 125005 (2016). arXiv:1510.02284 S. Vagnozzi, Class. Quant. Grav. [**34**]{}, 185006 (2017). arXiv:1708.00603 A. Sheykhi, S. Grunau, \[arXiv:1911.13072\]. C. Tsallis, J. Stat. Phys. [**52**]{}, 479 (1988). C. Tsallis, L. J. L. Cirto, Eur. Phys. J. C [**73**]{}, 2487 (2013). T. Jacobson, Phys. Rev. Lett. [**75**]{}, 1260 (1995). C. Eling, R. Guedens, and T. Jacobson, Phys. Rev. Lett. [**96**]{}, 121301 (2006). B. Wang, E. Abdalla and R. K. Su, Phys.
Lett. B [**503**]{}, 394 (2001);\ B. Wang, E. Abdalla and R. K. Su, Mod. Phys. Lett. A [**17**]{}, 23 (2002);\ R. G. Cai and Y. S. Myung, Phys. Rev. D [**67**]{}, 124021 (2003). [ A. V. Frolov and L. Kofman, JCAP [**0305**]{}, 009 (2003), \[hep-th/0212327\].]{} [ T. Padmanabhan, Phys. Rept. [**406**]{}, 49 (2005).]{} [ A. Paranjape, S. Sarkar and T. Padmanabhan, Phys. Rev. D [**74**]{}, 104015 (2006), hep-th/0607240.]{} R. G. Cai and S. P. Kim, JHEP [**0502**]{}, 050 (2005). M. Akbar and R. G. Cai, Phys. Rev. D [**75**]{}, 084003 (2007). R. G. Cai and L. M. Cao, Phys. Rev. D [**75**]{}, 064008 (2007). R. G. Cai and L. M. Cao, Nucl. Phys. B [**785**]{}, 135 (2007). M. Akbar and R. G. Cai, Phys. Lett. B [**648**]{}, 243 (2007). A. Sheykhi, B. Wang and R. G. Cai, Nucl. Phys. B [**779**]{}, 1 (2007). A. Sheykhi, B. Wang and R. G. Cai, Phys. Rev. D [**76**]{}, 023515 (2007). T. Padmanabhan, Rept. Prog. Phys. [**73**]{}, 046901 (2010). A. Sheykhi, B. Wang, Phys. Lett. B [**678**]{}, 434 (2009);\ A. Sheykhi, Class. Quantum Grav. [**27**]{}, 025007 (2010). [ M. Jamil, E. N. Saridakis and M. R. Setare, Phys. Rev. D [**81**]{}, 023007 (2010), \[arXiv:0910.0822\].]{} [ M. Jamil, E. N. Saridakis and M. R. Setare, JCAP [**1011**]{}, 032 (2010), \[arXiv:1003.0876\].]{} [ E. M. Barboza, Jr., R. d. C. Nunes, E. M. C. Abreu and J. Ananias Neto, Physica A [**436**]{}, 301 (2015).]{} [ R. C. Nunes, E. M. Barboza, E. M. C. Abreu and J. A. Neto, JCAP [**1608**]{} (2016) no. 08, 051.]{} [ G. Kaniadakis, Physica A [**296**]{}, 405 (2001);\ G. Kaniadakis, Phys. Rev. E [**66**]{}, 056125 (2002).]{} [ E. M. C. Abreu, J. A. Neto, E. M. Barboza Jr., R. C. Nunes, Europhys. Lett. [**114**]{}, 55001 (2016).]{} [ A. Lymperis and E. N. Saridakis, Eur. Phys. J. C [**78**]{}, 993 (2018), \[arXiv:1806.04614\].]{} [ A. Sheykhi, Phys. Lett. B [**785**]{}, 118 (2018), \[arXiv:1806.03996\].]{} S. Nojiri, S. D. Odintsov, E. N. Saridakis, Eur. J. Phys. C [**79**]{}, 242 (2019), \[arXiv:1903.03098\]. S.
Nojiri, S. D. Odintsov, E. N. Saridakis, R. Myrzakulov, Nucl. Phys. B [**950**]{}, 114850 (2020), \[arXiv:1911.03606\]. S. A. Hayward, S. Mukohyama, and M. C. Ashworth, Phys. Lett. A [**256**]{}, 347 (1999);\ S. A. Hayward, Class. Quant. Grav. [**15**]{}, 3147 (1998). E. Verlinde, JHEP [**1104**]{}, 029 (2011). T. Padmanabhan, Mod. Phys. Lett. A [**25**]{}, 1129 (2010). N. P. Vogt, et al., The Astronomical J. [**127**]{}, 3273 (2004). P. D. Mannheim, J. O’Brien, Phys. Rev. D [**85**]{}, 124020 (2010). arXiv:1011.3495. J. G. O’Brien and P. D. Mannheim, Mon. Not. Roy. Astron. Soc. [**421**]{}, 1273 (2012). arXiv:1107.5229. [^1]: asheykhi@shirazu.ac.ir
--- author: - | Levent Solmaz[^1]\ Department of Physics, Bal[i]{}kesir University, Bal[i]{}kesir, Turkey, TR10100\ E-mail: title: '**Particle Spectrum in the Minimal Supersymmetric Standard Model with non-universal Higgs masses**' --- Introduction ============ There are a number of motivations for phenomenological studies of Supersymmetric (SUSY) theories, among which the unification of the gauge couplings and the natural suppression of radiative corrections to the Higgs boson masses can be mentioned (see [*i.e.*]{} [@Chung:2003fi] for a comprehensive list of motivations). Among those theories, due to its minimal particle content, the Minimal Supersymmetric Standard Model (MSSM) occupies a special place. In the near future, forthcoming experiments may reveal that the effective theory incorporating the Standard Model (SM) is indeed the MSSM. Indeed, if low energy supersymmetry is realized in Nature, phenomenological studies related to the MSSM and its variants will be important to unravel the underlying model. Since the MSSM has certain problems, like the famous $\mu$ problem [@Kim:1983dt], the flavor problem [@Donoghue:1983mx], and the unknown mechanism of supersymmetry breaking, studies of extensions of the MSSM may be expected to shed light on future measurements, especially if nontrivial data inconsistent with the minimal model occur. In this work, we study the particle spectrum in the MSSM with non-universal Higgs mass terms (NUHM) [@Ellis:2006ix]. We provide the most general semi-analytic solutions of the evolving terms, in terms of high-scale boundary conditions, for a low $\tan\beta$ value. Additionally, different scale and $\tan\beta$ dependencies of the soft (mass)$^2$ terms will be presented numerically. Actually, the exploration of solutions to the renormalization group equations (RGEs) of a supersymmetric model with NUHM is a subject that has been investigated (see e.g.
[@Huang:2000rn] and [@Kazakov:1999pe]), but the novel feature of our analysis is that semi-analytic solutions may facilitate the exploration of the phenomenology of the model (see [@Baer:2005bu] for the phenomenology of the NUHM). As is well known, weak-scale observables and Grand Unified Theory (GUT) scale boundaries are connected via the RGEs in a complicated manner [@Martin:1993zk], and the RGEs can be solved with the help of certain software packages. Although numerical solutions are very accurate, they cannot match the insight offered by analytical ones. As an alternative to the numerical solutions, semi-analytic expressions [@Kazakov:1999pe] and the construction of certain RG-invariant forms are useful for phenomenological analyses of the MSSM and its extensions [@ours]. The possibility of non-universality specific to the Higgs masses was studied in a series of papers [@Codoban:1999np],[@Ellis:2006ix], noting constraints from $b\,\rightarrow\,s\gamma$, cosmology and the anomalous magnetic moment of the muon, and it was stressed that relaxing the scalar-mass universality assumption for the MSSM Higgs multiplets opens up many phenomenological possibilities (see also [@Ellis:2006jy] for $B_s \rightarrow \mu^+\, \mu^-$ and cold dark matter issues related to the NUHM). One of the aims of the present work is to present the full form of the semi-analytical expressions explicitly, so that all weak-scale observables can be expressed in terms of GUT inputs. The analytical form of the results can provide considerable insight into similar issues (we ignore $\mathcal{CP}$-violation in the numerical analysis of the NUHM; however, the full form of our results covers this issue too). Indeed, due to the complicated structure of the renormalization group equations [@Martin:1993zk], it is appealing to handle issues analytically, and the solutions presented in this work can be useful for such an analysis even if they are given at one-loop order.
As we will see, to keep the analysis simple, most of the correction terms are neglected; however, in the low $\tan\beta$ regime they do not affect our conclusions sizably, and they can be added on demand. The outline of the rest of this work is as follows: In Section 2, we introduce our notation and conventions. In Section 3 we present the effects of the non-universal Higgs mass terms on the supersymmetric mass spectra for varying $\tan\beta$ and scale values. A subsection of the same section benchmarks the semi-analytic results. Section 4 is devoted to our conclusions. The full form of the solutions of the RGEs can be found in the Appendix \[lowgen\]. Notation and Conventions ======================== We define the basic parameters of the model as the soft supersymmetry breaking scalar masses $m_0$, gaugino masses $M$, the trilinear couplings $A_0$, the bilinear coupling $B_0$ and the supersymmetric Higgs mass parameter $\mu_0$, at the GUT scale. We assume a third-family-dominance model and solve the RGEs at one-loop order. In this effective approach, by solving the RGEs explicitly, weak-scale predictions are expressed in terms of GUT boundaries. We denote the Bino, Wino and Gluino masses by $M_{1,2,3}$, respectively, with a common initial value $M$. We write the GUT boundary conditions as $$A_{i}=c_{A_i}\,A_0,\,\,\,M_j=c_{M_j}\,M,\,\,\,m_{k}=c_{k}\,m_0\,$$ where $i=t,b,\tau$, $j=1,2,3$, and, for the (mass)$^2$ terms, $k=H_u,H_d,{\tilde t_L},{\tilde t_R}, {\tilde b_R},{\tilde \tau_L},{\tilde \tau_R}$. We will express the weak-scale value of each quantity in terms of the corresponding mSUGRA parameters $m_0,~M,~A_0$, and a positive $\mu$ to be determined by the electroweak symmetry breaking conditions.
From the solutions of the RGEs, weak-scale and GUT-scale values are connected; the most important restriction, in this respect, is the mass of the Z boson: $$\label{MZconstraint} \frac{1}{2}\,M^2_Z=-\mu^2 +\frac{m^2_{H_d}-\tan^2\beta\,m^2_{H_u}}{\tan^2\beta-1}+\Delta$$ where $\tan\beta$ is the ratio of vacuum expectation values ($v_u/v_d$), and $\Delta$ stands for corrections to the Higgs masses. We are interested in the low $\tan\beta$ regime ($\tan\beta=10$), for which the complete list of semi-analytic solutions is given in Appendix \[lowgen\]. In addition, we will present graphical solutions for different $\tan\beta$ values in the next section. Instead of purely numerical values, expressing weak-scale predictions in terms of GUT inputs proves very useful and helps to differentiate the importance of each term. In order to show the relative weight of each term, we will express the evolution of any soft (mass)$^2$ term in the following form $$\rm{(mass)}^2=\gamma_1\, A^2_0 + \gamma_2\, A_0\, M + \gamma_3\, M^2 + \gamma_4\, c^2_{H_d}\, m^2_0 +\gamma_5\, c^2_{H_u}\, m^2_0+ \gamma_6\, m^2_0\,.\\ \label{nota}$$ This decomposition enables one to lay stress upon the effects of non-universal Higgs mass choices. As can be extracted from the above equation, the sensitivity of each term to the initial values of $m_{H_u}$ and $m_{H_d}$ will be different.
Notice that, by using the $M_Z$ constraint given in (\[MZconstraint\]), one can obtain $\mu$, and this can be expressed as $$\begin{aligned} b=\frac{2\,|\mu|^2+m^2_{H_u}+m^2_{H_d}}{\tan\beta+\cot\beta}\,\end{aligned}$$ hence, the tree-level relations for the physical Higgs boson masses can be written as follows [@Martin:1997ns] $$\begin{aligned} m^2_{A_0}&=&2\,b/\sin2\beta\\ m^2_{H^\pm}&=&m^2_{A_0}+m^2_W\\ m^2_{H_0,h_0}&=&\frac{1}{2}\left[m^2_{A_0}+m^2_{Z}\pm\sqrt{\left(m^2_{A_0}+m^2_{Z}\right)^2-4 m^2_{A_0} m^2_{Z}\cos2\beta}~\right].\end{aligned}$$ Those relations will be largely modified by top-stop loop corrections, and $h_0$ is the most affected one. Indeed, since the mass of the lightest $\mathcal{CP}$-even Higgs boson is larger than $114\rm{\,GeV}$ [@LEPHiggs], this correction must be included in the analysis. We will consider this correction and omit others in our effective approach. The price to be paid for this simplification is that the spectra are predicted with certain small errors, as will be shown in the following section. But this does not affect our conclusions, since what matters for the present study is the reaction of the SUSY particles to the non-universal Higgs boundary conditions. The necessary expression for the most important correction is $$\Delta(m^2_{h_0})=\frac{3}{4\,\pi^2}\, v^2\,y^4_t\,\sin^4\beta\,\ln\left(\frac{m_{\tilde{t}_1}\,m_{\tilde{t}_2}}{m^2_t}\right)\,,$$ where $m_{\tilde{t}_{1,2}}$ can be extracted from the following mass matrix $$\begin{aligned} \label{mzzp} {\bold m^2_{\tilde t}}=\left(\begin{array}{cc} {m}^2_t+m^2_{\tilde t_L}+(\frac{1}{2}-\frac{2}{3} s^2_w)\,M^2_Z \cos2\beta & {m}_t\left(A_t-\mu\cot\beta\right)\\\\ {m}_t\left(A_t-\mu\cot\beta\right)& {m}^2_t+m^2_{\tilde t_R}+\frac{2}{3}\,s^2_w\,M^2_Z\,\cos2\beta\end{array}\right).
\end{aligned}$$ This $2\times2$ matrix can easily be diagonalized to obtain the stop mass eigenvalues in terms of GUT inputs; similarly, the same should be done for ${\bold m^2_{\tilde b}}$ and ${\bold m^2_{\tilde \tau}}$, using the solutions presented in the appendices, in order to obtain the full sparticle spectrum as usual ([*i.e.*]{} see [@Martin:1997ns]). Indeed, having such analytic expressions is very useful for visualizing the composition of the sparticle masses and for indirectly probing the allowed range of non-universality of the Higgs bosons. As an example, for $\tan\beta=10$, $$\begin{aligned} \label{mtoptayfa} m^2_{{\tilde t}_{1,2}}&=&-0.0523\, A^2_0 + 0.192\, A_0\, M + 3.74\, M^2 + ( 0.642 -0.0176\, c^2_{H_d}- 0.161\, c^2_{H_u} ) \, m^2_0 + {m}^2_t \nonumber\\ &-& 0.245\, M^2_Z \mp \Omega\end{aligned}$$ where the exact expression of $\Omega$ is a quite lengthy function of all the terms appearing in the first line of (\[mtoptayfa\]). Notice that it can be obtained using the full forms of the solutions given in the Appendix. Now, let us make the simplifying assumptions $\mu_0\sim\,m_0\,\sim\,M\,\sim\,A_0$ and ${\bar m_t}\sim\,2\,m_0$ in the $\Omega$ part of (\[mtoptayfa\]) to approximately predict the composition of the stop masses $$\begin{aligned} m^2_{\tilde t_{1,2}}&\simeq&-0.052\, A^2_0 + 0.19\, A_0\, M + 3.8\, M^2 + ( 0.64 -0.018\, c^2_{H_d}- 0.16\, c^2_{H_u} ) \, m^2_0 + {m}^2_t \nonumber\\ &-& 0.25 \, M^2_Z \mp\,3.45\,m^2_0~.\end{aligned}$$ Using this analytical expression, for instance, one can conclude that the weight of the up-type Higgs field is larger than that of the down-type Higgs field, but both weights are negligible compared to the other soft mass terms. To be specific, we will consider the reference point $\rm{SPS1a^\prime}$ [@Aguilar-Saavedra:2005pw] in the numerical analysis to benchmark the solutions provided.
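The diagonalization mentioned above is a standard $2\times2$ eigenvalue problem; the following minimal sketch carries it out numerically for the tree-level stop mass matrix of Eq. (\[mzzp\]). All numerical inputs here are illustrative placeholders, not SPS1a$^\prime$ outputs, and the off-diagonal entry is taken linear in $m_t$, as in the standard convention:

```python
# Sketch: eigenvalues of the 2x2 stop (mass)^2 matrix of Eq. (mzzp).
# Inputs below (soft masses, A_t, mu, tan(beta)) are illustrative placeholders.
import math

def stop_masses(mt, mtL2, mtR2, At, mu, tanb, MZ=91.19, sw2=0.231):
    """Return (m_stop1, m_stop2) in GeV from the tree-level mass matrix."""
    c2b = (1 - tanb**2)/(1 + tanb**2)                    # cos(2*beta)
    a11 = mt**2 + mtL2 + (0.5 - (2.0/3.0)*sw2)*MZ**2*c2b
    a22 = mt**2 + mtR2 + (2.0/3.0)*sw2*MZ**2*c2b
    a12 = mt*(At - mu/tanb)                              # off-diagonal mixing term
    tr, det = a11 + a22, a11*a22 - a12**2
    disc = math.sqrt(tr**2 - 4*det)                      # = sqrt((a11-a22)^2 + 4*a12^2)
    return math.sqrt((tr - disc)/2), math.sqrt((tr + disc)/2)

m1, m2 = stop_masses(mt=173.0, mtL2=500.0**2, mtR2=450.0**2,
                     At=-500.0, mu=400.0, tanb=10.0)
assert 0 < m1 < m2
```

The same routine, fed with the semi-analytic expressions for $m^2_{\tilde t_L}$ and $m^2_{\tilde t_R}$, yields the stop eigenvalues directly in terms of GUT inputs.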
However, even under the above rough approximation we found $m_{\tilde t_1}=472\,\rm{\,GeV}$ and $m_{\tilde{t}_2}=506\,\rm{\,GeV}$, to be compared with the exact results. For the mass spectra of the SUSY particles, the effects of the up-type Higgs field can be dominant; however, as we will see in the next section, this cannot be generalized to other sectors. Numerical Analysis ================== In this section, the $\tan\beta$ and scale evolutions of the (mass)$^2$ terms will be presented. For this aim, the RGEs are solved with the high scale set equal to $1.9\times10^{16}\rm{\,GeV}$ and the supersymmetry breaking scale chosen as $1\rm{\,TeV}$. With these choices, unification of the gauge couplings is satisfied at the GUT scale as $g_1=g_2=g_3=0.718\pm0.001$. One can obtain the solutions of the RGEs for any $\tan\beta$ value. To be specific, for $\tan\beta=10$, the masses of the heavy SM fermions fix the Yukawa couplings at the same scale as $Y_t=0.551,\, Y_b=0.0547,\, Y_\tau=0.0685$. As a brief summary of the semi-analytic solutions, this specific choice of $\tan\beta$ yields the following equations $$\begin{aligned} \label{lowspecial} m^2_{H_u}&=&-0.102\, A^2_0 + 0.375\, A_0\, M - 1.93\, M^2 - 0.709\, m^2_0 + 0.0331\, c^2_{H_d}\, m^2_0 + 0.612\, c^2_{H_u}\, m^2_0\nonumber\\ m^2_{H_d}&=&-0.0107\, A^2_0 + 0.0309\, A_0\, M + 0.413\, M^2 - 0.0241\, m^2_0 + 0.955\, c^2_{H_d}\, m^2_0 + 0.0333\, c^2_{H_u}\, m^2_0\nonumber\\ m^2_{\tilde t_L}&=&-0.0367\, A^2_0 + 0.134\, A_0\, M + 4.33\, M^2 + 0.757\, m^2_0 + 0.00768\, c^2_{H_d}\, m^2_0 - 0.129\, c^2_{H_u}\, m^2_0\nonumber\\ m^2_{\tilde t_R}&=&-0.068\, A^2_0 + 0.25\, A_0\, M + 3.15\, M^2 + 0.527\, m^2_0 - 0.0429\, c^2_{H_d}\, m^2_0 - 0.194\, c^2_{H_u}\, m^2_0\,\,\\ m^2_{\tilde b_R}&=&-0.00534\, A^2_0 + 0.0192\, A_0\, M + 4.67\, M^2 + 0.988\, m^2_0 + 0.0149\, c^2_{H_d}\, m^2_0 - 0.0211\, c^2_{H_u}\, m^2_0\nonumber\\ m^2_{\tilde \tau_L}&=&-0.00271\, A^2_0 + 0.00216\, A_0\, M + 0.493\, M^2 + 0.994\, m^2_0 - 0.0353\, c^2_{H_d}\, m^2_0 + 0.0325\,
c^2_{H_u}\, m^2_0\nonumber\\ m^2_{\tilde \tau_R}&=&-0.00542\, A^2_0 + 0.00432\, A_0\, M + 0.143\, M^2 + 0.989\, m^2_0 + 0.0595\, c^2_{H_d}\, m^2_0 - 0.065\, c^2_{H_u}\, m^2_0\,.\nonumber \end{aligned}$$ Notice that the analytical expressions given in (\[lowspecial\]) are constrained forms of the solutions presented in the Appendix (here we set $\Phi_{i,j}\rightarrow\,0$ and $c_i\rightarrow\,1$, except for $c_{H_u}$ and $c_{H_d}$), and they can be used at the $\rm{SPS1a^\prime}$ point [@Aguilar-Saavedra:2005pw]. We will benchmark our solutions against this point in the following subsection. Different scale and $\tan\beta$ effects can be extracted from the following figures (Figs. \[fig1\]–\[fig7\]). In Fig. \[fig1\], we show the $\tan\beta$ and scale dependencies of the composition of $m^2_{H_u}$. In general, the mass of the up-type Higgs field receives contributions from all of the 28 terms given in (A.2). When we assume that CP is conserved ($\Phi_{i,j}\rightarrow\,0$) and that universality holds (except for the Higgs fields), the mass of the up-type Higgs field can be decomposed in a compact form as $$m^2_{H_u}=\gamma^{(H_u)}_1\, A^2_0 + \gamma^{(H_u)}_2\, A_0\, M + \gamma^{(H_u)}_3\, M^2 + \gamma^{(H_u)}_4\, c^2_{(H_d)}\, m^2_0 +\gamma^{(H_u)}_5\, c^2_{(H_u)}\, m^2_0+ \gamma^{(H_u)}_6\, m^2_0\,.\\ \label{nota}$$ As can be seen from both panels of the first figure, the largest contribution to the mass of the up-type Higgs field comes from the gaugino sector (dashed blue curves). The contribution of the down-type Higgs field to the up-type Higgs field is negligible; in other words, a deviation of the down-type Higgs from the universal choice cannot yield a detectable effect on the up-type Higgs field. In all of Figs. \[fig1\]–\[fig7\], the solid red (green) curves correspond to the contribution of $m^2_{H_d}$ ($m^2_{H_u}$) to the related (mass)$^2$ terms, which are $m^2_{H_u}$, $m^2_{H_d}$, $m^2_{t_L}$, $m^2_{t_R}$, $m^2_{b_R}$, $m^2_{l_L}$ and, finally, $m^2_{l_R}$, respectively.
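As a concrete illustration of how the expressions in (\[lowspecial\]) are meant to be used, the short script below (our own sketch, not the code used for this work) evaluates them at the $\rm{SPS1a^\prime}$-like inputs $M=250\rm{\,GeV}$, $m_0=70\rm{\,GeV}$, $A_0=-300\rm{\,GeV}$ with universal Higgs coefficients; the coefficient table is copied from (\[lowspecial\]), and the function name and layout are our assumptions:

```python
# Sketch (not from the paper): evaluating the semi-analytic weak-scale
# soft (mass)^2 terms of eq. (lowspecial) at SPS1a'-like inputs,
# tan(beta)=10, M=250 GeV, m0=70 GeV, A0=-300 GeV, c_Hu = c_Hd = 1.
# Coefficient rows: (A0^2, A0*M, M^2, m0^2, c_Hd^2*m0^2, c_Hu^2*m0^2)
COEFFS = {
    "m2_Hu":   (-0.102,   0.375,   -1.93,  -0.709,   0.0331,   0.612),
    "m2_Hd":   (-0.0107,  0.0309,   0.413, -0.0241,  0.955,    0.0333),
    "m2_tL":   (-0.0367,  0.134,    4.33,   0.757,   0.00768, -0.129),
    "m2_tR":   (-0.068,   0.25,     3.15,   0.527,  -0.0429,  -0.194),
    "m2_bR":   (-0.00534, 0.0192,   4.67,   0.988,   0.0149,  -0.0211),
    "m2_tauL": (-0.00271, 0.00216,  0.493,  0.994,  -0.0353,   0.0325),
    "m2_tauR": (-0.00542, 0.00432,  0.143,  0.989,   0.0595,  -0.065),
}

def soft_mass_sq(name, M, m0, A0, c_Hu=1.0, c_Hd=1.0):
    """Weak-scale (mass)^2 in GeV^2 from the semi-analytic coefficients."""
    c1, c2, c3, c4, c5, c6 = COEFFS[name]
    return (c1 * A0**2 + c2 * A0 * M + c3 * M**2
            + c4 * m0**2 + c5 * (c_Hd * m0)**2 + c6 * (c_Hu * m0)**2)

if __name__ == "__main__":
    for name in COEFFS:
        m2 = soft_mass_sq(name, M=250.0, m0=70.0, A0=-300.0)
        print(f"{name:8s} = {m2:12.1f} GeV^2")
```

Note that $m^2_{H_u}$ is driven negative at the weak scale, as required for radiative electroweak symmetry breaking, while the squark and slepton (mass)$^2$ terms stay positive.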
To show the effects of scale variations we fix $\tan\beta=10$ (right panels), while for varying $\tan\beta$ values the scale is fixed around the weak scale (left panels). Figs. \[fig1\]–\[fig7\] show that the gauge/gaugino sector contributions to the evolution of the scalar mass sector increase the scalar mass parameters as we go to the weak scale. It is visible in Figs. \[fig6\] and \[fig7\] that the slepton sector reacts strongly to non-universal Higgs mass terms, and this is true for any $\tan\beta$ value. Notice that this can be expected for the Higgs bosons too (see Figs. \[fig1\]–\[fig2\]). We observe from Figs. \[fig3\]–\[fig4\] that the scalar top quarks are sensitive to NUHM only for very small $\tan\beta$ values $(\sim\,2-3)$. During the numerical analysis we observed that, after the physical Higgs bosons (except the light CP-even Higgs boson), the sleptons are the most sensitive to the NUHM terms. Hence, we present Fig. \[fig8\] to give a bird's-eye view of the reaction of the stau mass eigenvalues to the NUHM parameters. As can be inferred from that figure, the reaction of the sparticles to the mentioned non-universality shifts the mass predictions to some extent. This effect ranges from a few $\rm{GeV}$ to $\sim\,30-40\rm{\,GeV}$ for different sparticles, and it can be detectable since the expected spectrum of the MSSM is well known. See Tab. \[table1\] for the reaction of the particles of the MSSM to NUHM. Benchmark of the solutions -------------------------- The most practical way to test the reliability of our results is to use certain benchmark points. Although a large set of benchmark points and parameter lines in the MSSM parameter space has been established, we will use one of the most studied points (see [@Allanach:2002nj] for Snowmass Points and Slopes).
Since we ignored most of the corrections, except those on the mass of the lightest $\mathcal{CP}$-even Higgs boson, a strict agreement with state-of-the-art programs like ISAJET [@isajet] or SOFTSUSY [@softsusy] should not be expected (see also [@kram] and the web page given there for an online comparison). Nevertheless, the resulting error should not be too large and there should be a visible correlation; we observed that this is indeed the case for our semi-analytic solutions. To be definite, if $\tan\beta=10$, $M=250\rm{\,GeV}$, $m_0=70\rm{\,GeV}$, $A_0=-300\rm{\,GeV}$, $\frac{\mu}{|\mu|}=1$ (which is the $\rm{SPS1a^\prime}$ reference point), then at the weak scale (at $1\rm{\,TeV}$) we end up with Table \[table1\].

  Particle                    $SPS1a^\prime$ \[GeV\]   This Work \[GeV\]   $\rm{\%}$ Difference   $\delta^{\rm{NUHM}}$ \[GeV\]
  --------------------------- ------------------------ ------------------- ---------------------- ------------------------------
  $h^0$                       116.0                    110.3               4.91                   0.2
  $H^0$                       425.0                    425.7               -0.17                  35.1
  $A^0$                       424.9                    425.3               -0.09                  35.1
  $H^{\pm}$                   432.7                    432.8               -0.02                  34.5
  $\tilde{t}_1$               366.5                    374.3               -2.13                  5.1
  $\tilde{t}_2$               585.5                    578.9               1.13                   5.1
  $\tilde{b}_1$               506.3                    502.9               0.67                   2.4
  $\tilde{b}_2$               545.7                    530.7               2.75                   0.9
  $\tilde{\tau}_1$            107.9                    111.4               -3.24                  8.8
  $\tilde{\tau}_2$            194.9                    199.7               -2.46                  2.1
  $\tilde{\chi}^0_{1}$        97.7                     105.3               -7.78                  0.2
  $\tilde{\chi}^0_{2}$        183.9                    194.3               -5.65                  1.2
  $\tilde{\chi}^0_{3}$        400.5                    400.6               -0.02                  15.9
  $\tilde{\chi}^0_{4}$        413.9                    417.3               -0.82                  14.5
  $\tilde{\chi}^{\pm}_{1}$    183.7                    193.7               -5.44                  1.3
  $\tilde{\chi}^{\pm}_{2}$    415.4                    417.9               -0.60                  14.6
  $\tilde{\nu}_{\tau}$        170.5                    176.6               -3.58                  3.8

  : \[table1\] [Numerical values for the masses of some of the supersymmetric particles and Higgs bosons at the reference point $\rm{SPS1a^\prime}$ [@Aguilar-Saavedra:2005pw], and their comparison with our semi-analytical results. The third column is obtained as $(SPS1a^\prime-\rm{our~results})\times 100/SPS1a^\prime$. The fourth column denotes the sensitivity of each particle to the NUHM model parameters.
The difference between the maximal and minimal mass values is obtained by varying $c_{H_u}$ and $c_{H_d}$ in the $[0,2]$ interval, and the emerging difference is called the sensitivity ($\delta^{\rm{NUHM}}$) of each term.]{} Comparison of these results with the reference point shows that the errors in predicting $m_h$ and $m_{{\tilde \tau}_{1,2}}$ are small. For the other mass terms the errors are somewhat larger, especially in predicting the mass of the lightest neutralino $m_{\tilde{\chi}^0_1}$; here the absolute error is $\sim 8 \rm{\%}$, which could be reduced if the calculation were performed at two loops, if corrections were included for all terms, etc. However, the apparent correlation is sufficient for our aim, since we are basically interested in the reaction of these particles to non-universal choices of the Higgs masses. Of course, this is true as long as the corrections do not alter the weight of $c_{H_u}$ and $c_{H_d}$ on the SUSY particles, which we assume to be the case since the mass difference of the worst prediction is less than $\sim 8 \rm{\%}$. Nevertheless, a numerical simulation including all families and all known corrections would be more decisive, which is beyond the scope of this work. Conclusions =========== Using the semi-analytic solutions presented in this work, it is observed that deviations from the universality assumption for the Higgs fields do not induce problems as serious as those for the other soft $(\rm{mass})^2$ terms (especially if $c_{H_u}\sim c_{H_d}\sim1$). This can be inferred from Tab. \[table1\], in which the coefficients of the up- and down-type Higgs fields are varied so that their soft masses range from 0 to $2\,m_0$. Over this range, a striking difference can be observed in the masses of certain supersymmetric particles (like the sleptons), while others (like the lightest neutralino) are insensitive to the mentioned non-universality.
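The $\delta^{\rm{NUHM}}$ scan underlying Table \[table1\] can be sketched in a few lines. The snippet below is our own illustrative reconstruction, using only the $m^2_{H_d}$ row of eq. (\[lowspecial\]) and the $\rm{SPS1a^\prime}$ inputs; it is not the code used for this work, and the tree-level spread it returns is only indicative:

```python
import itertools
import math

# Illustrative sketch of the delta^NUHM sensitivity scan: vary c_Hu and
# c_Hd over [0, 2] (as in Table 1) and record the spread of the soft
# mass obtained from the m^2_{H_d} row of eq. (lowspecial); everything
# else is kept universal at the SPS1a' inputs.  Not the paper's code.
def m2_Hd(c_Hu, c_Hd, M=250.0, m0=70.0, A0=-300.0):
    return (-0.0107 * A0**2 + 0.0309 * A0 * M + 0.413 * M**2
            - 0.0241 * m0**2 + 0.955 * (c_Hd * m0)**2
            + 0.0333 * (c_Hu * m0)**2)

def nuhm_sensitivity(m2_func, n=21):
    """Spread max(m) - min(m) in GeV over an n x n (c_Hu, c_Hd) grid."""
    grid = [i * 2.0 / (n - 1) for i in range(n)]
    masses = [math.sqrt(max(m2_func(cu, cd), 0.0))
              for cu, cd in itertools.product(grid, grid)]
    return max(masses) - min(masses)

print(f"delta_NUHM(m_Hd) ~ {nuhm_sensitivity(m2_Hd):.1f} GeV")
```

The resulting tree-level spread of a few tens of GeV is of the same order as the $\delta^{\rm{NUHM}}$ values quoted for the heavy Higgs bosons in the table.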
The expected discovery of low-energy SUSY at the Large Hadron Collider (LHC) and the International Linear Collider (ILC) [@Weiglein:2004hn] will require the reconstruction of the supersymmetric theory parameters from the experimental data. This is necessary not only for the minimal model but also for the NUHM, especially if experimental data signalling deviations from the minimal supergravity model (mSUGRA) [@mSUGRA] emerge. To this end, precise measurements of the mass of the light stau $m_{\tilde\tau_1}$, which is probably among the first sparticles to be discovered owing to its leptonic nature and light mass, and which is very sensitive to the non-universality of the Higgs bosons, will be very suitable for probing the NUHM parameter space. I am thankful to Durmus A. Demir for reading the manuscript and for useful discussions. This work is partially supported by a post-doctoral fellowship of the Scientific and Technical Research Council of Turkey. \[APP\] Explicit Solutions for **[low]{} $\tan\beta$** {#lowgen} ============================================== In this part we present the explicit form of our semi-analytic solutions, which are obtained by solving the RGEs explicitly to the one-loop order. The GUT scale is $M_{GUT}=1.9 \times 10^{16}\rm{\,GeV}$ and $\tan\beta=10$.
At the GUT scale we found the following results for the gauge and Yukawa couplings $$\begin{aligned} & & g_1=0.7179,\,g_2=0.7187,\, g_3=0.7195\,\nonumber\\ & & Y_t=0.5510,\, Y_b=0.0547,\, Y_\tau=0.0685.\,\nonumber\\\end{aligned}$$ For the weak scale ($\sim\,1\rm{~TeV}$), our results for the soft $(\rm{mass})^2$ terms read $$\begin{aligned} m^2_{H_u}&=&0.000619\, A^2_0\, c^2_{A_b} - 7.8\, \times\, {10}^{-7}\, A^2_0\, c^2_{A_\tau} - 0.103\, A^2_0\, c^2_{A_t} + 0.00473\, c^2_{M_1}\, M^2 \,\nonumber\\ &+& 0.206\, c^2_{M_2}\, M^2 - 1.94\, c^2_{M_3}\, M^2 + 0.0331\, c^2_{H_d}\, m^2_0 + 0.612\, c^2_{H_u}\, m^2_0 - 0.0319\, c^2_{b_R}\, m^2_0 \,\nonumber\\ &+& 0.0325\, c^2_{\tau_L}\, m^2_0 - 0.0325\, c^2_{\tau_R}\, m^2_0 - 0.387\, c^2_{t_L}\, m^2_0 - 0.29\, c^2_{t_R}\, m^2_0 \,\nonumber\\ &-& 0.00572\, c_{M_1}\, c_{M_2}\,M^2\, \cos \Phi_{12} - 0.0252\, c_{M_1}\, c_{M_3}\, M^2\, \cos \Phi_{13} \,\nonumber\\ &-& 0.0000612\, A_0\, c_{A_b}\, c_{M_1}\, M\, \cos \Phi_{1b} + 2.13\, \times\, {10}^{-7}\, A_0\, c_{A_\tau}\, c_{M_1}\,M\, \cos \Phi_{1\tau} \,\nonumber\\ &+& 0.0122\, A_0\, c_{A_t}\, c_{M_1}\, M\, \cos\Phi_{1t}- 0.168\, c_{M_2}\, c_{M_3}\, M^2\, \cos \Phi_{23} \,\nonumber\\ &-& 0.000535\, A_0\, c_{A_b}\, c_{M_2}\, M\, \cos\Phi_{2b} + 1.12\, \times\, {10}^{-6}\, A_0\, c_{A_\tau}\, c_{M_2}\, M\, \cos \Phi_{2\tau} \,\nonumber\\ &+& 0.0726\, A_0\, c_{A_t}\, c_{M_2}\, M\, \cos\Phi_{2t} - 0.00215\, A_0\, c_{A_b}\, c_{M_3}\, M\, \cos \Phi_{3b} \,\nonumber\\ &+& 3.48\, \times\, {10}^{-6}\, A_0\, c_{A_\tau}\, c_{M_3}\, M\, \cos \Phi_{3\tau} + 0.293\, A_0\, c_{A_t}\, c_{M_3}\, M\, \cos\Phi_{3t} \,\nonumber\\ &-& 1.54\, \times\, {10}^{-6}\, A^2_0\, c_{A_b}\, c_{A_\tau}\, \cos \Phi_{b\tau} + 0.000285\, A^2_0\, c_{A_b}\, c_{A_t}\, \cos \Phi_{tb} \,\nonumber\\ &-& 3.29\, \times\, {10}^{-7}\, A^2_0\, c_{A_\tau}\, c_{A_t}\, \cos \Phi_{t\tau}~,\,\end{aligned}$$ $$\begin{aligned} m^2_{H_d}&=&-0.00992\, A^2_0\, c^2_{A_b} - 0.00272\, A^2_0\, c^2_{A_\tau} + 0.000286\, A^2_0\, c^2_{A_t} +
0.0361\, c^2_{M_1}\, M^2 \,\nonumber\\ &+& 0.449\, c^2_{M_2}\, M^2 - 0.0613\, c^2_{M_3}\, M^2 + 0.955\, c^2_{H_d}\, m^2_0 + 0.0333\, c^2_{H_u}\, m^2_0 + 0.0224\, c^2_{b_R}\, m^2_0 \,\nonumber\\ &-& 0.0353\, c^2_{\tau_L}\, m^2_0 + 0.0298\, c^2_{\tau_R}\, m^2_0 + 0.0232\, c^2_{t_L}\, m^2_0 - 0.0642\, c^2_{t_R}\, m^2_0 \,\nonumber\\ &-& 0.000383\, c_{M_1}\, c_{M_2}\, M^2\, \cos \Phi_{12} - 0.000749\, c_{M_1}\, c_{M_3}\, M^2\, \cos \Phi_{13} \,\nonumber\\ &+& 0.000538\, A_0\, c_{A_b}\, c_{M_1}\, M\, \cos \Phi_{1b} + 0.000586\, A_0\, c_{A_\tau}\, c_{M_1}\, M\, \cos \Phi_{1\tau} \,\nonumber\\ &-& 0.0000797\, A_0\, c_{A_t}\, c_{M_1}\, M\, \cos \Phi_{1t} - 0.0097\, c_{M_2}\, c_{M_3}\, M^2\, \cos \Phi_{23} \,\nonumber\\ &+& 0.0064\, A_0\, c_{A_b}\, c_{M_2}\, M\, \cos \Phi_{2b} + 0.0016\, A_0\, c_{A_\tau}\, c_{M_2}\, M\, \cos \Phi_{2\tau} \,\nonumber\\ &-& 0.000762\, A_0\, c_{A_t}\, c_{M_2}\, M\, \cos \Phi_{2t} + 0.0258\, A_0\, c_{A_b}\, c_{M_3}\, M\, \cos \Phi_{3b} \,\nonumber\\ &-& 0.0000721\, A_0\, c_{A_\tau}\, c_{M_3}\, M\, \cos \Phi_{3\tau} - 0.00307\, A_0\, c_{A_t}\, c_{M_3}\, M\, \cos \Phi_{3t} \,\nonumber\\ &+& 0.0000554\, A^2_0\, c_{A_b}\, c_{A_\tau}\, \cos \Phi_{b\tau} + 0.00159\, A^2_0\, c_{A_b}\, c_{A_t}\, \cos \Phi_{tb} \,\nonumber\\ &-& 4.46\, \times\, {10}^{-6}\, A^2_0\, c_{A_\tau}\, c_{A_t}\, \cos \Phi_{t\tau}~,\,\end{aligned}$$ $$\begin{aligned} m^2_{\tilde t_L}&=&-0.0031\, A^2_0\, c^2_{A_b} + 5.35\, \times\, {10}^{-6}\, A^2_0\, c^2_{A_\tau} - 0.0342\, A^2_0\, c^2_{A_t} - 0.00678\, c^2_{M_1}\, M^2 \,\nonumber\\ &+& 0.372\, c^2_{M_2}\, M^2 + 4.04\, c^2_{M_3}\, M^2 + 0.00768\, c^2_{H_d}\, m^2_0 - 0.129\, c^2_{H_u}\, m^2_0 - 0.014\, c^2_{b_R}\, m^2_0 \,\nonumber\\ &+& 0.0108\, c^2_{\tau_L}\, m^2_0 - 0.0108\, c^2_{\tau_R}\, m^2_0 + 0.868\, c^2_{t_L}\, m^2_0 - 0.0965\, c^2_{t_R}\, m^2_0 \,\nonumber\\ &-& 0.00197\, c_{M_1}\, c_{M_2}\, M^2\, \cos \Phi_{12} - 0.00865\, c_{M_1}\, c_{M_3}\, M^2\, \cos \Phi_{13} \,\nonumber\\ &+& 0.00016\, A_0\, c_{A_b}\, c_{M_1}\, M\, 
\cos \Phi_{1b} - 1.29\, \times\, {10}^{-6}\, A_0\, c_{A_\tau}\, c_{M_1}\, M\, \cos \Phi_{1\tau} \,\nonumber\\ &+& 0.00403\, A_0\, c_{A_t}\, c_{M_1}\, M\, \cos \Phi_{1t} - 0.0592\, c_{M_2}\, c_{M_3}\, M^2\, \cos \Phi_{23} \,\nonumber\\ &+& 0.00196\, A_0\, c_{A_b}\, c_{M_2}\, M\, \cos \Phi_{2b} - 6.44\, \times\, {10}^{-6}\, A_0\, c_{A_\tau}\, c_{M_2}\, M\, \cos \Phi_{2\tau} \,\nonumber\\ &+& 0.0239\, A_0\, c_{A_t}\, c_{M_2}\, M\, \cos \Phi_{2t} + 0.00789\, A_0\, c_{A_b}\, c_{M_3}\, M\, \cos \Phi_{3b} \,\nonumber\\ &-& 0.000017\, A_0\, c_{A_\tau}\, c_{M_3}\, M\, \cos \Phi_{3\tau} + 0.0965\, A_0\, c_{A_t}\, c_{M_3}\, M\, \cos \Phi_{3t} \,\nonumber\\ &+& 0.0000106\, A^2_0\, c_{A_b}\, c_{A_\tau}\, \cos\Phi_{b\tau} + 0.000627\, A^2_0\, c_{A_b}\, c_{A_t}\, \cos\Phi_{tb} \,\nonumber\\ &-& 1.17\, \times\, {10}^{-6}\, A^2_0\, c_{A_\tau}\,c_{A_t}\, \cos \Phi_{t\tau}~,\,\end{aligned}$$ $$\begin{aligned} m^2_{\tilde t_R}&=&0.000412\, A^2_0\, c^2_{A_b} - 5.2\, \times\, {10}^{-7}\, A^2_0\, c^2_{A_\tau} - 0.0686\, A^2_0\, c^2_{A_t} + 0.0443\, c^2_{M_1}\, M^2 \,\nonumber\\ &-& 0.168\, c^2_{M_2}\, M^2 + 3.41\, c^2_{M_3}\, M^2 - 0.0429\, c^2_{H_d}\, m^2_0 - 0.194\, c^2_{H_u}\, m^2_0 + 0.0438\, c^2_{b_R}\, m^2_0 \,\nonumber\\ &-& 0.0434\, c^2_{\tau_L}\, m^2_0 + 0.0434\, c^2_{\tau_R}\, m^2_0 - 0.193\, c^2_{t_L}\, m^2_0 + 0.676\, c^2_{t_R}\, m^2_0 \,\nonumber\\ &-& 0.00381\, c_{M_1}\, c_{M_2}\,M^2\, \cos \Phi_{12} - 0.0168\, c_{M_1}\, c_{M_3}\, M^2\, \cos \Phi_{13} \,\nonumber\\ &-& 0.0000408\, A_0\, c_{A_b}\, c_{M_1}\, M\, \cos \Phi_{1b} + 1.42\, \times\, {10}^{-7}\, A_0\, c_{A_\tau}\, c_{M_1}\,M\, \cos \Phi_{1\tau} \,\nonumber\\ &+& 0.0081\, A_0\, c_{A_t}\, c_{M_1}\, M\, \cos \Phi_{1t} - 0.112\, c_{M_2}\, c_{M_3}\, M^2\, \cos \Phi_{23} \,\nonumber\\ &-& 0.000357\, A_0\, c_{A_b}\, c_{M_2}\, M\, \cos\Phi_{2b} + 7.45\, \times\, {10}^{-7}\, A_0\, c_{A_\tau}\, c_{M_2}\,M\, \cos \Phi_{2\tau} \,\nonumber\\ &+& 0.0484\, A_0\, c_{A_t}\, c_{M_2}\, M\, \cos\Phi_{2t} - 0.00144\, A_0\, c_{A_b}\, 
c_{M_3}\, M\, \cos\Phi_{3b} \,\nonumber\\ &+& 2.32\, \times\, {10}^{-6}\, A_0\, c_{A_\tau}\, c_{M_3}\,M\, \cos \Phi_{3\tau} + 0.195\, A_0\, c_{A_t}\, c_{M_3}\, M\, \cos\Phi_{3t} \,\nonumber\\ &-& 1.03\, \times\, {10}^{-6}\, A^2_0\, c_{A_b}\,c_{A_\tau}\, \cos \Phi_{b\tau} + 0.00019\, A^2_0\, c_{A_b}\, c_{A_t}\, \cos\Phi_{tb} \,\nonumber\\ &-& 2.19\, \times\, {10}^{-7}\, A^2_0\, c_{A_\tau}\,c_{A_t}\, \cos \Phi_{t\tau}~,\,\end{aligned}$$ $$\begin{aligned} m^2_{\tilde b_R}&=&-0.00662\, A^2_0\, c^2_{A_b} + 0.0000112\, A^2_0\, c^2_{A_\tau} + 0.000191\, A^2_0\, c^2_{A_t} + 0.0162\, c^2_{M_1}\, M^2 \,\nonumber\\ &-& 0.00483\, c^2_{M_2}\, M^2 + 4.66\, c^2_{M_3}\, M^2 + 0.0149\, c^2_{H_d}\, m^2_0 - 0.0211\, c^2_{H_u}\, m^2_0 + 0.972\, c^2_{b_R}\, m^2_0 \,\nonumber\\ &+& 0.0217\, c^2_{\tau_L}\, m^2_0 - 0.0217\, c^2_{\tau_R}\, m^2_0 - 0.0279\, c^2_{t_L}\, m^2_0 + 0.0439\, c^2_{t_R}\, m^2_0 \,\nonumber\\ &-& 0.000119\, c_{M_1}\, c_{M_2}\, M^2\, \cos \Phi_{12} - 0.000501\, c_{M_1}\, c_{M_3}\, M^2\, \cos \Phi_{13} \,\nonumber\\ &+& 0.000361\, A_0\, c_{A_b}\, c_{M_1}\, M\, \cos \Phi_{1b} - 2.71\, \times\, {10}^{-6}\, A_0\, c_{A_\tau}\, c_{M_1}\,M\, \cos \Phi_{1\tau} \,\nonumber\\ &-& 0.0000533\, A_0\, c_{A_t}\, c_{M_1}\, M\, \cos \Phi_{1t} - 0.00647\, c_{M_2}\, c_{M_3}\, M^2\, \cos \Phi_{23} \,\nonumber\\ &+& 0.00427\, A_0\, c_{A_b}\, c_{M_2}\, M\, \cos\Phi_{2b} - 0.0000136\, A_0\, c_{A_\tau}\, c_{M_2}\, M\, \cos \Phi_{2\tau} \,\nonumber\\ &-& 0.000509\, A_0\, c_{A_t}\, c_{M_2}\, M\, \cos\Phi_{2t} + 0.0172\, A_0\, c_{A_b}\, c_{M_3}\, M\, \cos\Phi_{3b} \,\nonumber\\ &-& 0.0000363\, A_0\, c_{A_\tau}\, c_{M_3}\, M\, \cos \Phi_{3\tau} - 0.00205\, A_0\, c_{A_t}\, c_{M_3}\, M\, \cos\Phi_{3t} \,\nonumber\\ &+& 0.0000222\, A^2_0\, c_{A_b}\, c_{A_\tau}\, \cos \Phi_{b\tau} + 0.00106\, A^2_0\, c_{A_b}\, c_{A_t}\, \cos\Phi_{tb} \,\nonumber\\ &-& 2.11\, \times\, {10}^{-6}\, A^2_0\, c_{A_\tau}\,c_{A_t}\, \cos \Phi_{t\tau}~,\,\end{aligned}$$ $$\begin{aligned} m^2_{\tilde \tau_L}&=&0.000011\,
A^2_0\, c^2_{A_b} - 0.00274\, A^2_0\, c^2_{A_\tau} - 3.11\, \times\, {10}^{-7}\, A^2_0\, c^2_{A_t} + 0.0365\, c^2_{M_1}\, M^2 \,\nonumber\\ &+& 0.457\, c^2_{M_2}\, M^2 + 0.000035\, c^2_{M_3}\, M^2 - 0.0353\, c^2_{H_d}\, m^2_0 + 0.0325\, c^2_{H_u}\, m^2_0 + 0.0325\, c^2_{b_R}\, m^2_0 \,\nonumber\\ &+& 0.965\, c^2_{\tau_L}\, m^2_0 + 0.0297\, c^2_{\tau_R}\, m^2_0 + 0.0325\, c^2_{t_L}\, m^2_0 - 0.065\, c^2_{t_R}\, m^2_0 \,\nonumber\\ &-& 0.000204\, c_{M_1}\, c_{M_2}\,M^2\, \cos \Phi_{12} + 3.\, \times\, {10}^{-6}\, c_{M_1}\, c_{M_3}\,M^2\, \cos \Phi_{13} \,\nonumber\\ &-& 3.46\, \times\, {10}^{-6}\, A_0\, c_{A_b}\, c_{M_1}\,M\, \cos \Phi_{1b} + 0.00059\, A_0\, c_{A_\tau}\, c_{M_1}\, M\, \cos \Phi_{1\tau} \,\nonumber\\ &+& 2.42\, \times\, {10}^{-7}\, A_0\, c_{A_t}\, c_{M_1}\,M\, \cos \Phi_{1t} + 0.0000132\, c_{M_2}\, c_{M_3}\,M^2\, \cos \Phi_{23} \,\nonumber\\ &-& 0.000014\, A_0\, c_{A_b}\, c_{M_2}\, M\, \cos\Phi_{2b} + 0.00162\, A_0\, c_{A_\tau}\, c_{M_2}\, M\, \cos\Phi_{2\tau} \,\nonumber\\ &+& 1.07\, \times\, {10}^{-6}\, A_0\, c_{A_t}\, c_{M_2}\,M\, \cos \Phi_{2t} - 0.0000176\, A_0\, c_{A_b}\, c_{M_3}\, M\, \cos \Phi_{3b} \,\nonumber\\ &-& 0.0000178\, A_0\, c_{A_\tau}\, c_{M_3}\, M\, \cos \Phi_{3\tau} + 1.78\, \times\, {10}^{-6}\, A_0\, c_{A_t}\, c_{M_3}\,M\, \cos \Phi_{3t} \,\nonumber\\ &+& 0.0000222\, A^2_0\, c_{A_b}\, c_{A_\tau}\, \cos \Phi_{b\tau} - 1.28\, \times\, {10}^{-6}\, A^2_0\, c_{A_b}\, c_{A_t}\, \cos \Phi_{tb} \,\nonumber\\ &-& 1.29\, \times\, {10}^{-6}\, A^2_0\, c_{A_\tau}\,c_{A_t}\, \cos \Phi_{t\tau}~,\,\end{aligned}$$ $$\begin{aligned} m^2_{\tilde \tau_R}&=&0.0000221\, A^2_0\, c^2_{A_b} - 0.00548\, A^2_0\, c^2_{A_\tau} - 6.21\, \times\, {10}^{-7}\, A^2_0\, c^2_{A_t} + 0.147\, c^2_{M_1}\, M^2 \,\nonumber\\ &-& 0.00366\, c^2_{M_2}\, M^2 + 0.0000699\, c^2_{M_3}\, M^2 + 0.0595\, c^2_{H_d}\, m^2_0 - 0.065\, c^2_{H_u}\, m^2_0 - 0.065\, c^2_{b_R}\, m^2_0 \,\nonumber\\ &+& 0.0595\, c^2_{\tau_L}\, m^2_0 + 0.929\, c^2_{\tau_R}\, m^2_0 - 0.065\, c^2_{t_L}\, 
m^2_0 + 0.13\, c^2_{t_R}\, m^2_0 \,\nonumber\\ &-& 0.000409\, c_{M_1}\, c_{M_2}\,M^2\, \cos \Phi_{12} + 5.99\, \times\, {10}^{-6}\, c_{M_1}\, c_{M_3}\,M^2\, \cos \Phi_{13} \,\nonumber\\ &-& 6.93\, \times\, {10}^{-6}\, A_0\, c_{A_b}\, c_{M_1}\,M\, \cos \Phi_{1b} + 0.00118\, A_0\, c_{A_\tau}\, c_{M_1}\, M\, \cos\Phi_{1\tau} \,\nonumber\\ &+& 4.83\, \times\, {10}^{-7}\, A_0\, c_{A_t}\, c_{M_1}\,M\, \cos \Phi_{1t} + 0.0000264\, c_{M_2}\, c_{M_3}\,M^2\, \cos \Phi_{23} \,\nonumber\\ &-& 0.000028\, A_0\, c_{A_b}\, c_{M_2}\, M\, \cos\Phi_{2b} + 0.00324\, A_0\, c_{A_\tau}\, c_{M_2}\, M\, \cos\Phi_{2\tau} \,\nonumber\\ &+& 2.13\, \times\, {10}^{-6}\, A_0\, c_{A_t}\, c_{M_2}\,M\, \cos \Phi_{2t} - 0.0000352\, A_0\, c_{A_b}\, c_{M_3}\, M\, \cos \Phi_{3b} \,\nonumber\\ &-& 0.0000355\, A_0\, c_{A_\tau}\, c_{M_3}\, M\, \cos \Phi_{3\tau} + 3.57\, \times\, {10}^{-6}\, A_0\, c_{A_t}\, c_{M_3}\,M\, \cos \Phi_{3t} \,\nonumber\\ &+& 0.0000444\, A^2_0\, c_{A_b}\, c_{A_\tau}\, \cos \Phi_{b\tau} - 2.55\, \times\, {10}^{-6}\, A^2_0\, c_{A_b}\,c_{A_t}\, \cos \Phi_{tb} \,\nonumber\\ &-& 2.58\, \times\, {10}^{-6}\, A^2_0\, c_{A_\tau}\,c_{A_t}\, \cos \Phi_{t\tau}~.\,\,\end{aligned}$$ For the gauginos we found the following $$M_1=0.432\, c_{M_1}\, M,\,\,M_2=0.833\, c_{M_2}\, M,\,\,M_3=2.51\, c_{M_3}\, M\,.$$ Similarly, for the trilinear terms we have $$\begin{aligned} A_t&=&-0.00198\, A_0\, c_{A_b} + 3.81\, \times\, {10}^{-6}\, A_0\, c_{A_\tau} + 0.27\, A_0\, c_{A_t} - 0.0303\, c_{M_1}\, M - 0.231\, c_{M_2}\, M \,\nonumber\\ &-& 1.55\, c_{M_3}\, M\,,\end{aligned}$$ $$\begin{aligned} A_b&=&0.147\, A_0\, c_{A_b} - 0.00041\, A_0\, c_{A_\tau} - 0.0175\, A_0\, c_{A_t} - 0.00484\, c_{M_1}\, M - 0.0675\, c_{M_2}\, M \,\nonumber\\ &-& 0.372\, c_{M_3}\, M\,,\end{aligned}$$ $$\begin{aligned} A_\tau&=&-0.00101\, A_0\, c_{A_b} + 0.0989\, A_0\, c_{A_\tau} + 0.0000811\, A_0\, c_{A_t} - 0.0153\, c_{M_1}\, M - 0.0493\, c_{M_2}\, M \,\nonumber\\ &+& 0.00131\, c_{M_3}\, M\,.\end{aligned}$$ Our expression for $B$ is
$$\begin{aligned} B&=&B_0 - 0.0095\, A_0\, c_{A_b} - 0.00276\, A_0\, c_{A_\tau} - 0.354\, A_0\, c_{A_t} - 0.0301\, c_{M_1}\, M - 0.371\, c_{M_2}\, M \,\nonumber\\ &+& 0.518\, c_{M_3}\, M\end{aligned}$$ and, lastly, for the $\mu$ parameter our result reads $$\mu=0.995\, \mu_0\,.$$ [99]{} D. J. H. Chung, L. L. Everett, G. L. Kane, S. F. King, J. D. Lykken and L. T. Wang, Phys. Rept.  [**407**]{}, 1 (2005) \[arXiv:hep-ph/0312378\]. J. E. Kim and H. P. Nilles, Phys. Lett. B [**138**]{} (1984) 150. J. F. Donoghue, H. P. Nilles and D. Wyler, Phys. Lett. B [**128**]{} (1983) 55; S. Dimopoulos and D. W. Sutter, Nucl. Phys. B [**452**]{} (1995) 496 \[arXiv:hep-ph/9504415\]. J. R. Ellis, S. Heinemeyer, K. A. Olive and G. Weiglein, JHEP [**0605**]{}, 005 (2006) \[arXiv:hep-ph/0602220\]. C. S. Huang, W. Liao, Q. S. Yan and S. H. Zhu, J. Phys. G [**27**]{}, 833 (2001) \[arXiv:hep-ph/0008166\]. D. Kazakov and G. Moultaka, Nucl. Phys. B [**577**]{}, 121 (2000) \[arXiv:hep-ph/9912271\]. H. Baer, A. Mustafayev, S. Profumo, A. Belyaev and X. Tata, JHEP [**0507**]{}, 065 (2005) \[arXiv:hep-ph/0504001\]. S. P. Martin and M. T. Vaughn, Phys. Rev. D [**50**]{}, 2282 (1994) \[arXiv:hep-ph/9311340\]. D. A. Demir, JHEP [**0511**]{}, 003 (2005) \[arXiv:hep-ph/0408043\]; M. A. Cakir, S. Mutlu and L. Solmaz, Phys. Rev. D [**71**]{}, 115005 (2005) \[arXiv:hep-ph/0501286\]. S. Codoban, M. Jurcisin and D. Kazakov, Phys. Lett. B [**477**]{}, 223 (2000) \[arXiv:hep-ph/9912504\]. J. R. Ellis, K. A. Olive, Y. Santoso and V. C. Spanos, JHEP [**0605**]{}, 063 (2006) \[arXiv:hep-ph/0603136\]. S. P. Martin, arXiv:hep-ph/9709356. R. Barate [*et al.*]{} \[LEP Working Group for Higgs boson searches\], Phys. Lett. B [**565**]{}, 61 (2003) \[arXiv:hep-ex/0306033\]. J. A. Aguilar-Saavedra [*et al.*]{}, arXiv:hep-ph/0511344. B. C. Allanach [*et al.*]{}, in [*Proc. of the APS/DPF/DPB Summer Study on the Future of Particle Physics (Snowmass 2001)* ]{} ed. N. Graf, Eur. Phys. J. 
C [**25**]{}, 113 (2002) \[eConf [**C010630**]{}, P125 (2001)\] \[arXiv:hep-ph/0202233\]. F. E. Paige, S. D. Protopopescu, H. Baer and X. Tata, arXiv:hep-ph/0312045. B. C. Allanach, Comput. Phys. Commun.  [**143**]{} (2002) 305 \[arXiv:hep-ph/0104145\]. G. Belanger, S. Kraml and A. Pukhov, Phys. Rev. D [**72**]{}, 015003 (2005) \[arXiv:hep-ph/0502079\]; B. C. Allanach, S. Kraml and W. Porod, JHEP [**0303**]{}, 016 (2003) \[arXiv:hep-ph/0302102\]; See also URL http://cern.ch/kraml/comparison/ G. Weiglein [*et al.*]{} \[LHC/LC Study Group\], arXiv:hep-ph/0410364. H.P. Nilles, Phys. Rep. [**110**]{} (1984) 1; A. Djouadi, M. Drees and J. L. Kneur, arXiv:hep-ph/0602001. [^1]: Department of Physics, Izmir Institute of Technology, IZTECH, Turkey TR35430
--- abstract: | We show that gaseous disks of primordial composition irradiated by an external radiation field can develop a multiphase medium with temperatures between $10^2$ and $10^4$ K due to the formation of molecular hydrogen. For a given column density there is a critical value of the radiation field below which only the cold phase can exist. Due to a time-decreasing quasar background, the gas starts cooling slowly after recombination until the lowest stable temperature in the warm phase is reached at a critical redshift $z=z_{\rm cr}$. Below this redshift the formation of molecular hydrogen promotes a rapid transition towards the cold phase. We find that disks of protogalaxies with $10^{20}\simlt N_{HI}\simlt 10^{21}$ cm$^{-2}$ are gravitationally stable at $T\sim 10^4$ K and can start their star formation history only at $z \simlt z_{\rm cr}\sim 2$, after the gas in the central portion of the disk has cooled to temperatures $T\simlt 300$ K. Such a delayed starburst phase in galaxies of low gas surface density and low dynamical mass can disrupt the disks and cause them to fade away. These objects could contribute significantly to the faint blue galaxy population. author: - 'Edvige Corbelli, Daniele Galli and Francesco Palla' title: 'STAR FORMATION IN DISK GALAXIES DRIVEN BY PRIMORDIAL H$_2$' --- Introduction ============ One of the most interesting and controversial discoveries in the field of galaxy formation and evolution is related to the excess number counts of Faint Blue Objects (see for example Koo 1996 or Ellis 1997 for a review). Various evolutionary models favour a scenario in which massive galaxies form stars at high redshifts ($z\sim $ 2–3), while low-mass systems (dwarfs) experience an initial and disruptive starburst much later and contribute significantly to the Faint Blue Object counts (Corbelli $\&$ Salpeter 1995, hereafter CS; Ferguson & McGaugh 1995; Babul & Ferguson 1996; Gwyn $\&$ Hartwick 1996; Cowie et al.
1997; Guzman et al. 1997). The delayed star formation in dwarf galaxies has been explained as a consequence of the delayed recombination of ionized hydrogen (Babul & Rees 1992; Efstathiou 1992) and of the late onset of the gravitational instability which, in low-mass systems, takes place only when the temperature drops well below $10^4$ K (Babul & Rees 1993, CS). Cooling below $10^4$ K can be caused by a slow build-up of metals, due to some sporadic star formation events before the onset of the large-scale starburst (CS). However, since the metal abundance required for efficient cooling increases as the column density of protodisks decreases, it is likely that additional processes must regulate the cooling in systems which are at the lower end of the gas surface density distribution. In this Letter we examine whether the formation of molecular hydrogen can provide such cooling in gaseous disks. The role of molecular coolants has been investigated in various cosmological contexts, such as the collapse and fragmentation of Jeans-unstable clouds, pregalactic shocks, and galaxy formation models (Shapiro & Kang 1987; Vietri & Pesce 1995; Ostriker & Gnedin 1996; Haiman et al. 1995, 1996; Anninos & Norman 1996; Tegmark et al. 1997; Kepner et al. 1997). All these studies have emphasized the importance of molecular hydrogen as the major agent driving the temperature of the gas well below $10^4$ K. The ability of H$_2$ to form in sufficient amounts depends critically on the shielding of the gas from the background ionizing and dissociating radiation. Here, we study the evolution of the central regions of a protogalactic disk which at some redshift are warm and in thermal equilibrium with an external radiation field. We show that, as the radiation field decreases with time, the formation of a small concentration of H$_2$ drives the gas out of equilibrium, causing a rapid transition of the gas from the warm to the cold phase at a critical redshift $z_{\rm cr}$.
The consequent reduction of thermal support can cause a gravitational instability of the disk, and we examine which are the most favorable conditions for this to happen. Physical Background =================== We consider low gas surface density proto-disks (LSPs) embedded in a spherical dark matter halo and rotationally supported against gravitational collapse. Similarly to the present-day disks of low-surface brightness galaxies (de Blok et al. 1996), LSPs are expected to have slowly rising rotation curves, with lower asymptotic values, $V_{\infty}$, and shallower gas radial distributions compared to disks with higher gas surface density. We concentrate on the central regions of LSPs at $R\simlt R_0$, with $R_0$ in the range 3–8 kpc, where we can consider the dark matter density and the gas column density as radially constant. Here the column density of neutral hydrogen (which is by far the dominant species) is typically in the range $10^{20}\simlt N_{HI}\simlt 10^{21}$ cm$^{-2}$, and the corresponding gas masses inside $R_0$ are between $10^7$–$10^9$ [$M_\odot$]{}. The dark matter determines the value of $V_{\infty}$; for $V_{\infty}=30$–150 km s$^{-1}$ the total mass inside $R_0$ is $10^8$–$10^{10} $ [$M_\odot$]{}. Disks with such characteristics are expected to exist already at $z\simeq 2$, as shown by the theoretical calculations of Kauffmann (1996) and by observations of damped Ly$\alpha$ systems (Wolfe et al. 1994). After recombination the vertical gas distribution is nearly isothermal, in hydrostatic equilibrium at each radius. It will be shown in the next Section that the gas stays quite uniform in temperature for most of the time, and therefore in this Letter we do not consider explicitly its vertical stratification.
We approximate the total gas pressure with its value at midplane as $${P\over k} \simeq 40\Bigl({N_{HI}\over 10^{20}{\hbox{cm}}^{-2}}\Bigr)^2 + 2 \Bigl({N_{HI}\over 10^{20}{\hbox{cm}}^{-2}}\Bigr) \Bigl({c_s\over {\hbox{km s}}^{-1}}\Bigr) \Bigl({V_{\infty}\over {\hbox{km s}}^{-1}}\Bigr) \Bigl({R_0\over {\hbox{kpc}}}\Bigr)^{-1} \qquad {\hbox{cm}}^{-3} ~{\hbox{K}} \eqno (2.1)$$ where $k$ is the Boltzmann constant, and $c_s$ is the isothermal sound speed. The two terms in eq. (2.1) represent the contributions of the gas self-gravity and the dark matter, respectively. The pressure is radially constant in the central region and changes with time as the slab cools due to the temperature dependence of $c_s$, which we follow explicitly. The chemistry of H$_2$ depends upon the density of free electrons and therefore on the ionizing flux at energies $E>100$ eV, able to penetrate column densities $\simgt 10^{20}$ cm$^{-2}$, as well as on the UV flux below the Lyman limit. At redshifts $z\simlt 2$, the quasar background decreases with time and accounts for most of the ionizing flux (Irwin, McMahon, & Hazard 1991). We shall use the following fit to the results of Haardt & Madau (1996) at $z=1$ $$J(E,z)= \Bigl({1+ z\over 2}\Bigr)^3 \times \cases {A\times 1.7\times 10^{-21} \Bigl({E \over 4.96~} \Bigr)^{-0.3} &for $E<$ 4.96 \cr A\times 1.5\times 10^{-22} \Bigl({E \over 13.6~} \Bigr)^{-2.4} &for 4.96$\le E < $13.6 \cr B\times 5.0\times 10^{-24} \Bigl({E \over 13.6~} \Bigr)^{-0.3} &for 13.6$\le E <$ 413 \cr B\times 1.8\times 10^{-24} \Bigl({E \over 413~} \Bigr)^{-1.5} &for $E \ge $413 \cr} \eqno (2.2)$$ where $E$ is in eV, and $J(E,z)$ in erg s$^{-1}$ cm$^{-2}$ sr$^{-1}$ Hz$^{-1}$. For the Haardt & Madau (1996) spectrum, $A=B=1$ which defines our [*standard*]{} flux spectrum. 
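For reference, the piecewise fit of eq. (2.2) translates directly into code. The function below is our own transcription of that fit ($E$ in eV, $J$ in erg s$^{-1}$ cm$^{-2}$ sr$^{-1}$ Hz$^{-1}$), with the same $A$, $B$ normalisations; the function name is an assumption of this sketch:

```python
# Our transcription of the fit to the Haardt & Madau (1996) background,
# eq. (2.2): E in eV, J in erg s^-1 cm^-2 sr^-1 Hz^-1.  A and B are the
# normalisation factors of the text (A = B = 1 is the standard spectrum);
# the overall ((1+z)/2)^3 factor scales the z=1 fit to other redshifts.
def J_background(E, z, A=1.0, B=1.0):
    scale = ((1.0 + z) / 2.0) ** 3
    if E < 4.96:
        return scale * A * 1.7e-21 * (E / 4.96) ** -0.3
    elif E < 13.6:
        return scale * A * 1.5e-22 * (E / 13.6) ** -2.4
    elif E < 413.0:
        return scale * B * 5.0e-24 * (E / 13.6) ** -0.3
    else:
        return scale * B * 1.8e-24 * (E / 413.0) ** -1.5
```

Note that the fit is nearly continuous at 4.96 eV but drops sharply across the Lyman limit at 13.6 eV, as expected from absorption by intervening neutral hydrogen.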
We consider variations of $A$ and $B$ to account for uncertainties in the intrinsic emission spectrum of quasars, in the attenuation by intervening clouds, and in additional UV radiation from nearby bright star-forming galaxies or from sporadic internal star formation. We have followed the evolution of $e$, H, H$^+$, H$^-$ and H$_2$ by means of a reduced chemical network which includes the most relevant reactions listed in Table 1. We have also considered the H$_2^+$ channel for H$_2$ formation, but its contribution was found to be negligible. Since temperatures of interest in this paper are below $10^4$ K, secondary electrons from H and He have been included following the prescription of Shull & van Steenberg (1985). These electrons tend to level the difference between He and H fractional ionization deriving from direct photoionization alone. Thus, the residual abundance of He$^+$ is set equal to that of H$^+$. For the photoionization rates we have integrated the cross sections given in the references listed in Table 1 over the background spectrum. We account for the attenuation of the flux as it penetrates the slab to the midplane assuming that the density of the chemical species is uniform. The recombination coefficient for hydrogen includes all the recombinations to $n\ge 2$ (on the spot approximation) and we assume that all the recombinations of He produce one ionizing photon for H. The line-shielding of H$_2$ molecules in the ground level has been computed using the prescriptions of Federman et al. (1979) in the absence of dust grains. The photodissociation probabilities and oscillator strengths for Lyman and Werner bands are from Dalgarno & Stephens (1970) and Abgrall & Roueff (1989). The thermal evolution of the gas is computed by solving the energy equation together with the chemical network (we have used a Hubble constant of 75 km s$^{-1}$ Mpc$^{-1}$). Heating of the gas is provided by photoionization of atomic H and He (Shull $\&$ van Steenberg 1985). 
Cooling is due to electron-impact excitation of atomic H (Spitzer 1978) and to H-impact excitation of the roto-vibrational levels of H$_2$ (Hollenbach & McKee 1989, but with the correct sign as in Flower et al. 1986). We have neglected the contribution of H$_2$–H$_2$ collisions to the cooling because of the small molecular abundance. The two-phase equilibrium and the fast gas cooling ================================================== For a given value of $V_{\infty}/R_0$ the gas temperature of a slab in thermal equilibrium depends on the intensity of the background field and on the gas column density of the slab. For the standard background spectrum, Fig. 1a (curves 1 and 2) shows the possible equilibrium temperatures, after the gas has recombined, as a function of redshift for two representative values of $N_{HI}$. We see from Fig. 1a that conditions for a multiphase medium, in which gas at different temperatures coexists in pressure balance, can be achieved over a significant range of redshift. This is analogous to what happens in the present-day interstellar medium (Field et al. 1969), where metal-line radiation, rather than H$_2$, dominates the cooling. In general, a multiphase medium is possible if, for a given pressure, the cooling per atom is a non-monotonic function of $T$. In the present context, at $T\simgt 7000$ K the cooling rate is an increasing function of $T$, since the dominant process is e–H impact excitation. Below 7000 K, H–H$_2$ impact excitation becomes the relevant cooling process: H$_2$ is formed efficiently by reaction (3) and is destroyed by reaction (5). The strong temperature dependence of reaction (5), proportional to $e^{-21300/T}$, allows a rapid rise of the H$_2$ fractional abundance as the temperature diminishes. Owing to the weak temperature dependence of the H$_2$ cooling function in this region, the cooling per particle becomes a decreasing function of $T$ and the equilibrium curve has an inflection.
This is a thermally unstable regime which ends at $T\simlt 2000$ K, when two-step photodissociation takes over from reaction (5) (direct dissociation by hard photons, reaction 8, never contributes significantly to the H$_2$ balance). Under some circumstances it may not be straightforward to determine the dominant thermal phase if, for a given ionizing flux, three equilibrium temperatures are possible. For the warm slab considered here the temperature evolution is much simpler: as the background flux declines with time, the temperature decreases monotonically from $\sim 10^4$ K, moving along the equilibrium curve towards the right-hand side of Fig. 1a. The non-monotonic relation $T(z)$ implies that at a certain redshift $z_{\rm cr}$ the gas reaches the lowest stable equilibrium temperature in the warm phase: we shall call this the [*transition point*]{} (marked by an asterisk in Fig. 1). For a slab of typical column density $N_{HI}=10^{20.4}$ cm$^{-2}$, $z_{\rm cr}$ is $\simeq 0.5$. The gas is thereafter forced out of equilibrium. The dashed tracks of Figure 1a show the subsequent time evolution of $T$ computed for a parcel of gas close to the midplane. We can see that the gas cools quickly toward the cold stable phase, spending a very short interval of time at temperatures between 7000 K and 200 K. The transition towards the cold phase propagates rapidly in the vertical direction, since the upper layers lose thermal support and settle towards the center, increasing their density and cooling further. Owing to the rapid cooling, shortly after the cold core forms most of the gas will be at similar temperatures, and the warm atmosphere left above contains little mass. During the transition the fractional abundance of H$_2$, $f$(H$_2$), increases from $\sim 3 \times10^{-5}$ to $\sim 10^{-3}$ and then levels off (Fig. 1b). The final value of $f$(H$_2$) is not very sensitive to either the background flux or the gas column density.
LSPs with column density $N_{HI}$ between 10$^{20.1}$ and 10$^{20.8}$ cm$^{-2}$ make the transition to the cold phase in the redshift interval $z=2$ to $z=0$. The value of $z_{\rm cr}$ can vary appreciably with the intensity of the background flux. From curve 3 of Fig. 1a we can see that, as $A$ increases, cooling of the slab is delayed until a much lower $z$. We have examined cases with $0.1<A<10$ and $0.3<B<3$, and the important point is that there is always a range of column densities between $\sim 10^{20}$ and $\sim 10^{21}$ cm$^{-2}$ which makes the transition to the cold phase in the redshift interval $z=0$–2. This conclusion holds even if a small amount of metals (up to 0.01 solar) is present in the gas mixture as contamination from an early generation of stars. Therefore, for the central regions of LSPs, molecular hydrogen plays a fundamental role in promoting and driving the thermal evolution of the gas. Gravitational instability and the star formation phase ====================================================== Protodisks exposed to a time-varying external radiation field follow the evolution described in the previous section if they are still gaseous and gravitationally stable in the warm phase. We examine the global gravitational stability of the simple hydrostatic model described in Sect. 2 using the Toomre criterion (Toomre 1964; see also CS). For $R<R_0$ the parameter $Q$ can be written as $$Q\simeq 0.5 \Bigl({T\over 7000{\hbox{K}}}\Bigr)^{1/2} \Biggl\lbrack\Bigl({N_{HI}\over 10^{20}{\hbox{cm}}^{-2}}\Bigr)^{-2} \Bigl({V_{\infty}\over {\hbox{km s}}^{-1}}\Bigr)^2 \Bigl({R_0\over {\hbox{kpc}}}\Bigr)^{-2}+ 12\Bigl({N_{HI}\over 10^{20}{\hbox{cm}}^{-2}}\Bigr)^{-1} \Biggr\rbrack^{1/2}\eqno (4.1)$$ To evaluate the contribution of the self-gravity term we have used eq. (25) of Toomre (1963), assuming that the protogalaxy has a total gas mass $M_g\simeq 4\times 10^8 (N_{HI}/ 10^{20})$ M$_\odot$.
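A minimal numerical sketch of eq. (4.1) (illustrative only; the variable names and example parameter values are ours, not from the Letter):

```python
from math import sqrt

def toomre_q(T, N20, V_over_R0):
    """Eq. (4.1): T in K, N20 = N_HI / 1e20 cm^-2,
    V_over_R0 = V_inf/R0 in km s^-1 kpc^-1."""
    return 0.5 * sqrt(T / 7000.0) * sqrt(N20 ** -2 * V_over_R0 ** 2 + 12.0 / N20)

# Cooling from 7000 K to a few hundred K lowers Q by sqrt(T_cold / T_warm),
# which can push a warm-stable slab (Q > 1) below the Q = 1 threshold.
q_warm = toomre_q(7000.0, 3.0, 20.0)  # warm phase: stable
q_cold = toomre_q(200.0, 3.0, 20.0)   # after H2 cooling: unstable
```

Since $Q \propto \sqrt{T}$ at fixed column density and rotation, the H$_2$-driven drop in temperature alone can move a model from the stable to the unstable region of Fig. 2.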
This guarantees a radially constant gas surface density for $R<R_0$ and a slowly rising rotation curve, with $V_\infty/R_0$ determined by the amount of dark matter. Therefore, for any $N_{HI}$, we can derive $Q$ as a function of redshift. We are interested in all the disk models which are stable in the warm phase but become unstable ($Q<1$) only at $z<z_{\rm cr}$, due to the decrease of the isothermal sound speed driven by H$_2$ cooling. These galaxies will experience a delay in the onset of large-scale star formation. A comprehensive view of the stability and dominant thermal phases of the gas in the plane $N_{HI}-V_{\infty}/R_0$ for the standard background spectrum is given in Figure 2. In the region labelled [*cold $\&$ unstable*]{}, slabs which are stable in the warm phase become gravitationally unstable at $z<z_{\rm cr}$, before $z=0$. LSPs in this region span a considerable range of column densities below $10^{21}$ cm$^{-2}$. Models in the [*warm $\&$ unstable*]{} region have $Q<1$ for $T=7000$ K, and the collapse starts as soon as the gas settles into the disk. The shaded regions in the left part of Fig. 2 show stable models which can be either cold or warm but cannot form stars. For some of these slabs cooling of the gas did actually start, but the temperature was not low enough to trigger the instability before $z=0$ ([*cold $\&$ stable*]{}). Since cold disks have not been observed, it is likely that LSPs were never formed with these initial conditions (very low $N_{HI}$ and high $V_{\infty}/R_0$ values). Finally, in the [*warm $\&$ stable*]{} region the gas remains warm up to $z=0$; here the parameters of the slabs are more appropriate for the outer regions of disk galaxies (Corbelli & Salpeter 1993).
Outer regions of LSPs, and their central regions prior to $z_{\rm cr}$, might also contribute significantly to QSO absorption lines (see also Linder 1997), because the time the gas spends in the warm phase at $T \simgt 7000$ K is long compared to the transition time toward the cold and disruptive phase. The dotted lines in the [*cold $\&$ unstable*]{} region of Fig. 2 represent the loci of $Q=1$ and are labelled with the corresponding redshift (the $z=0$ line coincides with the heavy solid line at the left-hand side of this region). Systems of increasingly lower column density start to cool and become unstable at lower and lower redshifts. For values of $A$ and $B$ higher than the standard case, the left border of the [*cold $\&$ unstable*]{} region and the dotted lines shift to the right. If at some redshift slabs of higher $N_{HI}$ had already become cold and unstable but the background flux intensity received an extra input, the whole cooling process might stop. This can explain the lack of a large population of faint star-forming disks of very low column density at $z=0$. On the other hand, the observed deficit could also be due to a truncation of the distribution function of protodisks of low column densities. How does the theoretical scheme of Fig. 2 compare with observations? A precise answer to this question is difficult, since the parameters $N_{HI}$ and $V_{\infty}/R_0$ of protogalactic disks are unknown. However, as an illustration, we can use the results of de Blok et al. (1996) on Low Surface Brightness galaxies (LSBs), which are reminiscent of our models of LSPs. We have estimated $N_{HI}$ and $V_{\infty}/R_0$ for ten galaxies of their sample with slowly rising rotation curves, excluding LSBs with steeply declining gas column density profiles in the inner regions. The selected objects correspond to LSBs with $10<R_{25}<30$ arcsec, and the results are shown by the filled triangles of Fig. 2.
The data fall within the boundaries of the [*cold $\&$ unstable*]{} region and indicate that, if LSPs were similar to LSBs, they had adequate parameters to follow the evolution described in this Letter. Within this region, systems of low column density and dynamical mass are those for which the ensuing starburst easily disrupts the gaseous central region and considerably reduces the gas mass content. Thus, further star formation is inhibited and the systems experience a fading-away phase (CS). The identification of LSPs with the progenitors of LSBs is quite appealing, since some of the properties of the blue galaxies observed at moderate $z$ can be explained if these are edge-on manifestations of LSBs (Dalcanton & Shectman 1996). Present-day LSBs might descend from those LSPs which, owing to environmental effects or some internal feature (higher dark matter or gas content), had a slower and less violent star formation phase. We are grateful to E. E. Salpeter and to the referee for their comments, and to R. Baglioni for his help with the figures. This work was supported in part by ASI-95-RS-120.
Abgrall, H., & Roueff, E. 1989, A&AS, 79, 313
Anninos, P., & Norman, M. L. 1996, ApJ, 460, 556
Babul, A., & Rees, M. J. 1992, MNRAS, 255, 346
Babul, A., & Rees, M. J. 1993, in The Evolution of Galaxies and Their Environment, NATO Conf. Ser., ed. D. Hollenbach, H. Thronson, & M. J. Shull (NASA: Moffett Field), p. 80
Babul, A., & Ferguson, H. C. 1996, ApJ, 458, 100
Bieniek, R. J. 1980, J. Phys. B, 13, 4405
Cen, R. 1992, ApJS, 78, 341
Corbelli, E., & Salpeter, E. E. 1993, ApJ, 419, 94
Corbelli, E., & Salpeter, E. E. 1995, ApJ, 450, 32 (CS)
Cowie, L. L., Hu, E. M., Songaila, A., & Egami, E. 1997, ApJ, 481, L9
Dalcanton, J. J., & Shectman, S. A. 1996, ApJ, 465, L9
Dalgarno, A., & Stephens, T. L. 1970, ApJ, 160, L107
de Blok, W. J. G., McGaugh, S. S., & van der Hulst, J. M. 1996, MNRAS, 283, 18
de Jong, T. 1972, A&A, 20, 263
Efstathiou, G. 1992, MNRAS, 256, 43P
Ellis, R. S.
1997, ARA&A, 35, in press
Federman, S. R., Glassgold, A. E., & Kwan, J. 1979, ApJ, 227, 466
Ferguson, H. C., & McGaugh, S. S. 1995, ApJ, 440, 470
Field, G. B., Goldsmith, D. W., & Habing, H. J. 1969, ApJ, 155, L149
Flower, D. R., Pineau-des-Forêts, G., & Hartquist, T. W. 1986, MNRAS, 218, 729
Guzman, R., Phillips, A. C., Gallego, J., Koo, D. C., & Lowenthal, J. D. 1997, ApJ, in press
Gwyn, S. D. J., & Hartwick, F. D. A. 1996, ApJ, 468, L77
Haardt, F., & Madau, P. 1996, ApJ, 461, 20
Haiman, Z., Thoul, A. A., & Loeb, A. 1996, ApJ, 464, 523
Haiman, Z., Rees, M., & Loeb, A. 1996, ApJ, 467, 522
Hollenbach, D., & McKee, C. F. 1989, ApJ, 342, 306
Irwin, M., McMahon, R. G., & Hazard, C. 1991, in Space Distribution of Quasars, ed. D. Crampton (San Francisco: ASP), 183
Kauffmann, G. 1996, MNRAS, 281, 475
Kepner, J. V., Babul, A., & Spergel, D. N. 1997, ApJ, in press
Koo, D. C. 1996, in IAU Symp. 168, ed. M. Kafatos (Dordrecht: Kluwer), 201
Linder, S. M. 1997, in The Young Galaxies and QSO Absorption-line Systems, in press
O’Neil, S. V., & Reinhardt, W. P. 1978, J. Chem. Phys., 69, 2126
Ostriker, J. P., & Gnedin, N. Y. 1996, ApJ, 472, L63
Peterson, J. A., Aberth, W. H., Moseley, J. T., & Sheridan, J. R. 1971, Phys. Rev. A, 3, 1651
Seaton, M. J. 1959, MNRAS, 119, 81
Shapiro, P. R., & Kang, H. 1987, ApJ, 318, 32
Shull, J. M., & van Steenberg, M. E. 1985, ApJ, 298, 268
Spitzer, L., Jr. 1978, Physical Processes in the Interstellar Medium (New York: Wiley)
Tegmark, M., Silk, J., Rees, M. J., Blanchard, A., Abel, T., & Palla, F. 1997, ApJ, 474, 1
Toomre, A. 1963, ApJ, 138, 385
Toomre, A. 1964, ApJ, 139, 1217
Vietri, M., & Pesce, E. 1995, ApJ, 442, 618
Wishart, A. W. 1979, MNRAS, 187, 59P
Wolfe, A. M., Fan, X. M., Tytler, D., Vogt, S. S., Keane, M. J., & Lanzetta, K. M.
1994, ApJ, 435, L101

  \#   Reaction                                         Reference
  ---- ------------------------------------------------ -------------------------
  1    H$^+$ $+$ $e$ $\rightarrow$ H $+$ $h\nu$         Seaton 1959
  2    H $+$ $e$ $\rightarrow$ H$^-$ $+$ $h\nu$         de Jong 1972
  3    H$^-$ $+$ H $\rightarrow$ H$_2$ $+$ $e$          Bieniek 1980
  4    H$^-$ $+$ H$^+$ $\rightarrow$ 2H                 Peterson et al. 1971
  5    H$_2$ $+$ H$^+$ $\rightarrow$ H$_2^+$ $+$ H      Hollenbach & McKee 1989
  6    H $+$ $h\nu$ $\rightarrow$ H$^+$ $+$ $e$         Cen 1992
  7    H$^-$ $+$ $h\nu$ $\rightarrow$ H $+$ $e$         Wishart 1979
  8    H$_2$ $+$ $h\nu$ $\rightarrow$ H$_2^+$ $+$ $e$   O’Neil & Reinhardt 1978
  9    H$_2$ $+$ $h\nu$ $\rightarrow$ 2H                see text

[**FIGURE CAPTIONS**]{}
--- abstract: 'Searches for continuous-wave (CW) gravitational waves (GWs) call for computationally intensive methods. Low signal-to-noise ratio signals need templated searches with long coherent integration times, and thus fine parameter-space resolution; longer integration increases sensitivity. Low-Mass X-ray Binaries (LMXBs) such as Scorpius X-1 (Sco X-1) may emit accretion-driven CWs at strains reachable by current ground-based observatories, with the binary orbital parameters inducing phase modulation. This paper describes how resampling corrects for binary and detector motion, yielding source-frame time series used for cross-correlation. Compared to the previous, detector-frame, templated cross-correlation method, used for Sco X-1 on data from the first Advanced LIGO observing run (O1), resampling is about $20\times$ faster in the costliest, most-sensitive frequency bands. Speed-up factors depend on integration time and search set-up. The speed-up could be reinvested into longer integration, with a forecast median sensitivity gain of approximately $51\%$ from $20$ to $125$ Hz, or $11\%$ from $20$ to $250$ Hz, given the same per-band cost and set-up. This paper’s timing model enables future set-up optimization. Resampling scales well with longer integration and, at $10\times$ unoptimized cost, could reach $2.83\times$ and $2.75\times$ those median sensitivities respectively, limited by spin-wandering. An O1 search at that depth could yield a marginalized-polarization upper limit reaching torque-balance at 100 Hz. Frequencies from 40 to 140 Hz might be probed in equal observing time with $2\times$ improved detectors.' author: - 'G.D. Meadors' - 'B. Krishnan' - 'M.A. Papa' - 'John T. Whelan' - Yuanhao Zhang bibliography: - 'bibliography.bib' title: 'Resampling to accelerate cross-correlation searches for continuous gravitational waves from binary systems' --- Introduction\[introduction\] ============================ New gravitational-wave (GW) source types await sensitive analyses.
Transient signals such as GW150914 [@GW150914LIGO] can reach strain amplitudes $h_0$ of approximately $10^{-21}$. Yet-unseen continuous-wave (CW) signals, from sources such as non-axisymmetric neutron stars (NSs) [@Brady1998], are constrained to be significantly weaker: for Scorpius X-1 (Sco X-1), the brightest Low Mass X-ray Binary (LMXB), the best $95\%$-confidence marginalized-polarization upper limit reaches $2.3\times10^{-25}$ [@ScoX1CrossCorr2017ApJO1]. Accretion-driven torque-balance could drive GW emission from LMXBs: infalling matter’s angular momentum is predicted to be balanced by that radiated gravitationally [@PapaloizouPringle1978; @Wagoner1984]. Sco X-1 attracts attention [@Bildsten1998] as the brightest persistent X-ray source [@Giacconi1962]. Emission might be expected at a GW frequency $f_0$ equal to $2 \nu$, for an NS spin frequency $\nu$, assuming that the compact object in the system is an NS radiating *via* the $l=m=2$ mass quadrupole moment. An NS could also emit *via* $r$-mode (Rossby) oscillations [@Shawhan2010; @Owen2010], depending on the equation of state and dissipative mechanisms [@Andersson1998; @Friedman1998; @Owen1998]. Its spin frequency is unknown, so an $f_0$ range must be searched. In this paper, we discuss how to accelerate and increase the sensitivity of a broadband search for Sco X-1. CW analyses are computationally demanding. Long coherent integration times $T_\mathrm{coh}$, for low signal-to-noise ratio (SNR) signals, induce a steep metric [@Brady1998] on the parameter space, increasing the matched-filtering template density. While an optimal statistic [@Jaranowski1998] can maximize out *amplitude parameters*, the *Doppler parameters* need explicit templating. Sensitivity, which grows from longer integration, conflicts with computational cost, which grows faster. Semicoherent methods [@HierarchicalBrady2000] tune this balance: an observing run of data is subdivided into coherent segments. 
Summing statistics from segments increases total sensitivity, while the metric depends mainly on the coherent-segment length. Sensitivity benefits from both total observing time $T_\mathrm{obs}$ and $T_\mathrm{coh}$. Whole observation runs are typically used, with coherent segments as long as resources permit. Speed frees resources to be invested in coherent integration time. *Resampling* [@Jaranowski1998; @Patel:2009qe] techniques can accelerate the cross-correlation methods (CrossCorr) [@Dhurandhar2008; @Chung2011; @ScoX1CrossCorr2015PRD] that have to date shown the most sensitive results for Sco X-$1$ in simulation [@ScoX1MDC2015PRD] and Advanced LIGO data [@ScoX1CrossCorr2017ApJO1]. After over a decade of GW investigations into Sco X-$1$ [@AbbottScoX12007; @AbadieStoch2011; @GoetzTwoSpectResults2014; @Sammut2014PRD; @Sideband2015; @MeadorsS6ScoX1PRD2017; @O1Radiometer2017; @O1Sideband2017], the nominal torque-balance level is near. Discovery may yield new astrophysics. Detection becomes more likely as the GW strain amplitude $h_0$ sensitivity approaches torque-balance (TB). LMXB accretion torque could recycle NSs to higher $\nu$ [@PapaloizouPringle1978]. If spin-up torque balances GW spin-down [@Wagoner1984], the apparent speed limit on millisecond pulsars slightly over 700 Hz [@Chakrabarty2003] may be explained. Sco X-1 and similar LMXBs could radiate GWs from NS asymmetries. By Bildsten Equation 4 [@Bildsten1998], with characteristic strain $h_c$ related by $h_c/h_0 = 2.9/4.0$, for an LMXB with flux $\mathcal{F}_\mathrm{X-ray}$, $$h_c \approx 4\times10^{-27}\left[\frac{300\mathrm{~Hz}}{\nu_s} \times \frac{\mathcal{F}_\mathrm{X-ray}}{10^{-8}\mathrm{~erg~cm}^{-2}\mathrm{~s}^{-1}}\right]^{1/2}. 
\label{torque_bal_eq}$$ High X-ray flux ($3.9 \times 10^{-7}$ erg cm$^{-2}$ s$^{-1}$ [@Watts2008]), assuming a nominal 1.4 solar mass, 10 km radius NS of unknown spin frequency, implies Sco X-1’s $h_0$: $$h_0 \approx 3.5\times 10^{-26} [600~\mathrm{Hz}]^{1/2} f_0^{-1/2}.$$ Advanced LIGO Observing Run 1 (O1) data was searched [@ScoX1CrossCorr2017ApJO1] with the cross-correlation method [@ScoX1CrossCorr2015PRD], setting a 95%-confidence marginalized-polarization upper limit at 175 Hz of $2.3\times 10^{-25}$, or $8.0\times 10^{-26}$ assuming optimal, circular polarization, respectively $3.5\times$ and $1.2\times$ above torque-balance. This analysis spanned 25 to 2000 Hz, with the detector noise curve and computational cost reducing the depth of the upper limits. As the Advanced LIGO [@ALIGOStandardRef], Advanced Virgo [@AVirgoStandardRef], and KAGRA [@KAGRAStandardRef] observatories improve, sensitivity varies linearly with the noise amplitude spectral density (ASD), $S_H^{1/2}(f)$, for fixed $T_\mathrm{obs}$. *Sensitivity depth* $D^C(f_0)$ [@BehnkeGalacticCenter2015; @LeaciPrixDirectedFStatPRD] factors away this noise floor, to characterize analyses: $$D^C(f_0) \equiv S_H^{1/2}(f_0)[h_0^C(f_0)]^{-1}.$$ Depth should be specified at a confidence $C$, such as $D^{95\%}(f_0)$, based on a strain upper limit $h_0^C$. *Coherent SNR* is proportional to $h_0\sqrt{T_\mathrm{obs} / S_H}$; deeper methods find lower-SNR signals. Methods vary [@Riles2013] for finding CWs from NSs in binary systems. Isolated-CW techniques [@Jaranowski1998; @HoughTransformKrishnan2004; @LSCPulsarS4; @LSCPowerFlux2009; @PowerFluxMethod2010; @PowerFluxAllSky2012] inform searches at unknown sky location, as well as for known ephemerides [@DupuisWoan2005; @AasiPulsarInitialResults2014], and also for the *directed* case of known sky location but uncertain ephemerides. Sco X-1 searches are directed (Table \[scox1\_table\_params\]). Five Doppler parameters arise from the binary orbit.
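For orientation, the torque-balance amplitude and sensitivity depth above can be evaluated numerically (a sketch; the depth example uses placeholder numbers, not a measured O1 noise floor):

```python
from math import sqrt

def h0_torque_balance(f0):
    """Torque-balance strain for Sco X-1: h0 ~ 3.5e-26 * sqrt(600 Hz / f0)."""
    return 3.5e-26 * sqrt(600.0 / f0)

def sensitivity_depth(asd, h0_upper_limit):
    """D^C(f0) = S_H^(1/2)(f0) / h0^C(f0), in 1/sqrt(Hz)."""
    return asd / h0_upper_limit

# The O1 marginalized-polarization upper limit at 175 Hz, 2.3e-25,
# sits roughly 3.5x above the torque-balance level quoted in the text.
ratio = 2.3e-25 / h0_torque_balance(175.0)
```

The $f_0^{-1/2}$ scaling means the torque-balance target is easiest to reach at low frequency, which is why the projected gains quoted above concentrate on the 20 to 250 Hz bands.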
New techniques address these parameters’ computational cost [@Messenger2007CQG; @Sammut2014PRD; @SidebandMarkovModelSuvorova2016; @GoetzTwoSpectMethods2011; @MeadorsDirectedMethods2016; @Ballmer2006CQG; @AbadieStoch2011; @2010JPhCS.228a2005V; @Dhurandhar2008; @ScoX1CrossCorr2015PRD], with more in development [@LeaciPrixDirectedFStatPRD]. The cross-correlation method recovered all simulated signals in a 2015 Mock Data Challenge (MDC) [@ScoX1MDC2015PRD] and sets O1 upper limits 3 to 4 times more stringent than others [@O1Radiometer2017; @O1Sideband2017]. The *resampled* cross-correlation method could surpass these limits. Resampling was proposed [@Jaranowski1998] and detailed [@Patel:2009qe] for isolated-star $\mathcal{F}$-statistic calculations. Strain $h_0(t)$ is interpolated from the detector frame, where Earth and source motion introduce phase modulation, to the source frame. In the source frame, the statistic simplifies (with normalization factors determined by the detector antenna functions) to frequency bin power. Although interpolation is costly, subsequent computations can be faster than interpolating across time-varying frequency bins. We adapt the cross-correlation method for resampling. Speed-up and sensitivity performance projections are estimated from implemented code tested with simulated data. Deeper, resampled cross-correlation methods could bring CW analyses of Sco X-1 and similar LMXBs to the brink of detection. Section \[crosscorr\_method\] details the cross-correlation method, Section \[resampling\] explains resampling, and Section \[resamp\_cost\] measures the cost and benefits. Figures \[projected-ul\] and \[projected-sens-depth\] show predicted astrophysical reach. Section \[conclusion\] concludes.

  Sco X-1 parameter                                     Ref.                                                   Value                    Uncertainty    Units
  ----------------------------------------------------- ------------------------------------------------------ ------------------------ -------------- ------------------------
  Right ascension ($\alpha$)                            [@2mass06]                                             16:19:55.067             $\pm 0.06''$   —
  Declination ($\delta$)                                [@2mass06]                                             $-15^\circ 38'25.02''$   $\pm 0.06''$   —
  Distance ($d$)                                        [@Bradshaw1999]                                        $2.8$                    $\pm 0.3$      kpc
  X-ray flux at Earth ($\mathcal{F}_\mathrm{X-ray}$)    [@Watts2008]                                           $3.9\times10^{-7}$       —              erg cm$^{-2}$ s$^{-1}$
  Orbital eccentricity ($e$)                            [@ScoX1MDC2015PRD; @Galloway2014]                      $< 0.068$                $(3\sigma)$    —
  Orbital period ($P$)                                  [@Galloway2014]                                        $68023.70$               $\pm 0.04$     s
  Orbital projected semi-major axis ($a_p$)             [@WangSteeghsGalloway2016; @ScoX1CrossCorr2017ApJO1]   $1.805$                  $\pm 1.445$    s
  Compact object time of ascension ($T_\mathrm{asc}$)   [@Galloway2014; @ScoX1MDC2015PRD]                      $897753994$              $\pm 100$      s
  Companion mass ($M_2$)                                [@2002ApJ...568..273S]                                 $0.42$                   —              $M_\mathrm{sol}$

Cross-Correlation Method\[crosscorr\_method\] ============================================= Detecting a sinusoid should be simple. Low-SNR, amplitude- and phase-modulated sinusoids are hard. The ‘CrossCorr’ cross-correlation method [@Dhurandhar2008; @ScoX1CrossCorr2015PRD] intersects two paths to this problem: the stochastic radiometer [@Allen1999; @Ballmer2006CQG] and the multi-detector $\mathcal{F}$-statistic [@Jaranowski1998; @CutlerMulti2005; @BStatPrix2009]. This cross-correlation method computes a statistic, $\rho$, which approaches the others in limiting cases. We summarize $\rho$ to clarify it, and to explain how resampling [@Jaranowski1998; @Patel:2009qe], designed for the $\mathcal{F}$-statistic, is transferable. The principle remains the same: a semicoherent matched filter using a signal model for continuous, modulated GWs, followed by a frequentist statistic proportional to the power, $(h_0)^2$. Signal model\[signal\_model\] ----------------------------- Continuous waves from NSs in binary systems are defined by a signal model in amplitude and Doppler parameters.
Amplitude parameters $\bar{\mathcal{A}}^i$ [@Jaranowski1998] are factored out: reference phase $\Phi_0$, polarization angle $\psi$, NS inclination angle $\iota$ (with respect to the line of sight), and strain amplitude $h_0$. Sky location is in right ascension $\alpha$ and declination $\delta$. The Doppler parameters $\lambda$ for an isolated system include frequency $f_0$ and higher-order Taylor-expanded *spindown* (or *spinup*) terms $f^{(1)}$, $f^{(2)}$, *etc*. Assuming an NS source spinning at frequency $\nu$, the GW $f^{(k)} \equiv (\sigma \, d^k \nu(\tau)/d\tau^k \,|\, \tau = t_\mathrm{ref})$, with emission time in the source frame $\tau$, evaluated at an arbitrary reference time $t_\mathrm{ref}$ (conventions follow [@LeaciPrixDirectedFStatPRD]). For quadrupole emission, $\sigma = 2$. Assuming torque-balance, LMXB searches have set spindown terms to zero and instead consider *spin-wandering* [@MukherjeeSpinWandering2016], an unmodeled stochastic drift about $f_0$. For an isolated system without spindown, the measured frequency in the solar system barycenter (SSB) will be constant. For a binary system, the $\lambda$ parameters further include $(a_p,P,T_\mathrm{asc},T_\mathrm{p},e)$. The orbital projected semi-major axis, in time units, is $a_p \equiv (a \sin i) / c$, with $a$ the semi-major axis and $i$ the orbital inclination. The orbital period is $P$. The time of ascension $T_\mathrm{asc}$ is when the compact object crosses the ascending node, heading away from an SSB observer. Because only the companion’s inferior conjunction time $T_0$ [@Galloway2014] is known, the compact object $T_\mathrm{asc} = T_0 - P/4$ [@ScoX1MDC2015PRD] (stated in SSB GPS seconds). The time of periapsis passage is $T_\mathrm{p}$. The orbital eccentricity is $e$. When $e = 0$, $T_\mathrm{p} = T_\mathrm{asc}$ by convention. For Sco X-1, $\alpha$ and $\delta$ are precise enough that one point covers the uncertainty.
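Since electromagnetic observations constrain the companion's inferior conjunction time $T_0$ rather than the compact object's time of ascension, the conversion above is a one-liner (a sketch using the orbital period from Table \[scox1\_table\_params\]; the $T_0$ value here is hypothetical):

```python
P = 68023.70  # Sco X-1 orbital period in seconds (Galloway et al. 2014)

def t_asc_from_t0(t0, period=P):
    """Compact-object time of ascension: T_asc = T0 - P/4, in SSB GPS seconds."""
    return t0 - period / 4.0

t0_example = 897771000.0              # hypothetical SSB GPS inferior conjunction time
t_asc = t_asc_from_t0(t0_example)     # a quarter period earlier than T0
```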
### Strain and amplitude parameters Strain amplitude $h(t)$ is measured; GW phase $\Phi(t;\lambda)$ is key to its signal model: $$\begin{aligned} h(t) &=& \left[F_+ (t; \alpha, \delta), F_\times (t; \alpha, \delta) \right] \left[ \begin{array}{c} A_+ \cos \Phi(t;\lambda) \\ A_\times \sin \Phi(t;\lambda) \end{array}\right],\end{aligned}$$ where $t$ is detector GPS time measured; $F_+$ and $F_\times$ are called *beam-pattern functions*. The amplitude model factors loosely depend on time and sky location *via* the detector response in the *antenna functions* $a(t; \alpha, \delta)$ and $b(t; \alpha, \delta)$ [@Jaranowski1998; @Dhurandhar2008]. Since we discuss known sky location targets, $(\alpha,\delta)$ will be implicit in $a(t)$, $b(t)$: $$\begin{aligned} \left[ \begin{array}{c} F_+ (t) \\ F_\times (t) \end{array} \right] &=& \left[ \begin{array}{c} a(t) \cos 2\psi + b(t) \sin 2 \psi \\ b(t) \cos 2 \psi - a(t) \sin 2\psi \end{array} \right],\\ \left[ \begin{array}{c} A_+ \\ A_\times \end{array} \right] &=& h_0 \left[ \begin{array}{c} \frac{1+\cos^2 \iota}{2} \\ \cos \iota \end{array} \right].\end{aligned}$$ Amplitude parameters can be projected into four new coordinates which affect the waveform linearly [@Jaranowski1998; @CutlerMulti2005; @PrixMultiMetric2007; @BStatPrix2009; @WhelanNewAmplitude2014CQG]. These *canonical* coordinates $\mathcal{A}^\mu$ satisfy, for basis functions $h_\mu(t; \lambda)$, $$\begin{aligned} h (t;\lambda) &=& \sum_{\mu=1}^{4} \mathcal{A}^\mu h_\mu (t;\lambda). \label{decomposition-projection-f}\end{aligned}$$ Maximization, or marginalization, over $\mathcal{A}^\mu$ leads to approximately Neyman-Pearson optimal statistics (respectively $\mathcal{F}$, $\mathcal{B}$) [@BStatPrix2009]. The cross-correlation method obtains a similar statistic $\rho$ with different motives [@Dhurandhar2008] (see Section \[crosscorr-stat\]). 
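The beam-pattern and amplitude relations above amount to a few lines of arithmetic; here is a sketch (scalar antenna-function values $a$ and $b$ stand in for the time-dependent $a(t)$, $b(t)$):

```python
from math import cos, sin, pi

def beam_patterns(a, b, psi):
    """F+ and Fx from antenna functions a, b and polarization angle psi."""
    f_plus = a * cos(2 * psi) + b * sin(2 * psi)
    f_cross = b * cos(2 * psi) - a * sin(2 * psi)
    return f_plus, f_cross

def amplitudes(h0, iota):
    """A+ = h0 (1 + cos^2 iota) / 2 and Ax = h0 cos iota."""
    return h0 * (1 + cos(iota) ** 2) / 2.0, h0 * cos(iota)

def strain(a, b, psi, h0, iota, phase):
    """h = F+ A+ cos(Phi) + Fx Ax sin(Phi), as in the signal model above."""
    f_plus, f_cross = beam_patterns(a, b, psi)
    a_plus, a_cross = amplitudes(h0, iota)
    return f_plus * a_plus * cos(phase) + f_cross * a_cross * sin(phase)

# A face-on source (iota = 0) is circularly polarized, with A+ = Ax = h0;
# an edge-on source (iota = pi/2) is linearly polarized, with Ax = 0.
```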
### Doppler parameters The $\Phi(t;\lambda)$ model is defined with source time $\tau$ as a function of $t$, *via* SSB time $t_\mathrm{SSB}$. In the spindown-free source frame, $\Phi(\tau) = 2\pi f_0 \tau$, so $\Phi(t;\lambda) = 2\pi f_0 \tau(t_\mathrm{SSB}(t;\alpha, \delta);\lambda)$. One can find the barycentric time $t_\mathrm{SSB}$ from $(\alpha, \delta)$, *via* the vector $\vec r(t)$ pointing from the SSB to the detector and the unit vector $\vec n$ pointing from the SSB to the source. The latter vector is defined as $\vec n(\alpha, \delta) = (\cos \alpha \cos \delta, \sin \alpha \cos \delta, \sin \delta)$ [@LeaciPrixDirectedFStatPRD]. With GW unit wavevector $\vec k = -\vec n$ in the far-field approximation, $$t_\mathrm{SSB}(t; \alpha,\delta) = t + \frac{\vec r(t) \cdot \vec n(\alpha, \delta) }{c}. \label{t-ssb-equation}$$ The relativistic $t_\mathrm{SSB}$ is corrected for Shapiro and Einstein delays, in addition to the Earth orbital and rotational Roemer delays encoded by $\vec r$ [@LeaciPrixDirectedFStatPRD]. The binary orbital Roemer delay comes from the projected radial distance $R$ along the line of sight. Following conventions [@BlandfordBinary1976; @LeaciPrixDirectedFStatPRD], $$\tau(t_\mathrm{SSB};\lambda) = t_\mathrm{SSB} - \frac{d}{c} - \frac{R(t_\mathrm{SSB};\lambda)}{c}, \label{tau-equation}$$ wherein larger $R$ signifies greater distance from the binary barycenter (BB) along the line of sight, away from the observer. Source distance will affect $h_0$ and cause an overall time shift $d/c$ equivalent to changing $\Phi_0$, and inertial motion effects an overall constant Doppler shift to $f_0$. As $d$ would also affect electromagnetic observations and is indistinguishable from other parameters, we now drop $(d/c)$, in effect equating the SSB with the BB. 
Kepler’s equations involve a constant argument of periapse $\omega$ (the angle from the ascending node to periapsis in the direction of motion, dependent on $T_\mathrm{p}$ and $T_\mathrm{asc}$) and a time-varying eccentric anomaly $E$ (implicit in $\tau$ [@LeaciPrixDirectedFStatPRD]). These equations describe the system dynamics: $$\begin{aligned} \tau &=& T_\mathrm{p} + \frac{P}{2 \pi} (E - e \sin E),\label{solve-for-E}\\ \frac{R}{c} &=& a_p \left[ \sin \omega (\cos E - e) + \cos \omega \sin E \sqrt{1-e^2}\right].\label{Kepler-equation}\end{aligned}$$ Sco X-1’s orbit is near-circular ($e < 0.068$ at $3\sigma$), so we will focus on $e=0$, though resampling can handle elliptical orbits. Sco X-1’s $a_p$ is four orders of magnitude less than $P$, so we approximate $E(\tau) = E(t)$. Let $\Omega \equiv 2 \pi/P$. In this circular case [@ScoX1CrossCorr2015PRD], $$\begin{aligned} \frac{R(t;\lambda)}{c} &=& a_p \sin \left(\Omega [t - T_\mathrm{asc}] \right),\\ \phi(t;\lambda) &=& 2 \pi f_0 \left[ t_\mathrm{SSB}(t;\alpha, \delta) - \frac{R(t;\lambda)}{c} \right],\label{time-varying-phase-eq}\\ \Phi (t; \lambda) &=& \Phi_0 + \phi(t;\lambda). \label{phase-model-eq}\end{aligned}$$ Phase modulation induces an effective frequency modulation depth, $\Delta f_\mathrm{obs}$. This modulation adds to the Doppler shift from the detector velocity, $\vec v = d \vec r/dt$ (dominated by Earth’s orbital velocity $v_\mathrm{Earth}$), when calculating the total physical frequency bandwidth $\Delta f_\mathrm{drift}$ through which the signal can drift: $$\begin{aligned} \Delta f_\mathrm{drift} &=& 2\times\left(\frac{\max{(\vec v \cdot \vec n)}}{c} f_0 + \Delta f_\mathrm{obs} \right), \label{f-drift-eq}\\ \Delta f_\mathrm{obs} &=& a_p \Omega f_0.\label{delta-f-obs}\end{aligned}$$ With $|\vec v| \approx v_\mathrm{Earth}$, $\mathrm{max}(v / c) \approx 10^{-4}$ (lower off-ecliptic). For an unmodulated signal, $d\Phi/dt = 2\pi f_0$, reducing to a Fourier transform [@Brady1998].
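Plugging the Table \[scox1\_table\_params\] values into the modulation-depth relation gives a feel for the drift band (a sketch; we take the detector Doppler term as $f_0 \max(\vec v \cdot \vec n)/c$ with the ecliptic maximum $10^{-4}$ quoted above):

```python
from math import pi

def modulation_depth(a_p, period, f0):
    """Binary frequency-modulation depth: delta_f_obs = a_p * Omega * f0."""
    return a_p * (2.0 * pi / period) * f0

def drift_band(a_p, period, f0, v_over_c=1e-4):
    """Total band the signal can drift through: detector Doppler plus binary modulation."""
    return 2.0 * (v_over_c * f0 + modulation_depth(a_p, period, f0))

# Sco X-1 at f0 = 100 Hz, with a_p = 1.805 s and P = 68023.70 s:
df_obs = modulation_depth(1.805, 68023.70, 100.0)  # binary term, ~0.017 Hz
band = drift_band(1.805, 68023.70, 100.0)          # total band, ~0.053 Hz
```

Both terms scale linearly with $f_0$, so the drift band, and with it the per-band template count, grows toward high frequency.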
For modulated signals, $\phi(t)$ must be tracked to maintain coherence. Given Equation \[phase-model-eq\], the cross-correlation method tracks a CW signal as the signal changes instantaneous frequency. Mismatch in Doppler parameters can lead to false dismissal. The phase mismatch metric [@Brady1998] ([@ScoX1CrossCorr2015PRD] for the cross-correlation method) sets the parameter-space density required for Doppler parameters. A mismatch in the phase-model Roemer delay of about a half-cycle of $f_0$ between the beginning and end of each integration time $T_\mathrm{coh}$ will lose the signal. (A $100$ Hz signal accumulates $\mathcal{O}(10^2)$ cycles over $a_p$ of Sco X-1 and $\mathcal{O}(10^5)$ cycles over 2 AU). The computational cost stems from the parameter-space density needed for the long $T_\mathrm{coh}$ that low-SNR signals require. We define detection statistics for these signals. This paper will show that resampling is a more efficient way to compute the cross-correlation method’s $\rho$ statistic. Cross-correlation statistic\[crosscorr-stat\] --------------------------------------------- The goal is to calculate the statistic, $\rho$, as efficiently as possible. See Figure \[cost\_per\_template\_figure\] for a cost per template comparison of the previous ‘demodulation’ and resampled methods. Let us define $\rho$ as in Whelan *et al* [@ScoX1CrossCorr2015PRD]. (In Appendix \[relationships-to-other-optimal-statistics\] we compare $\rho$, like Dhurandhar *et al* [@Dhurandhar2008], with the $\mathcal{F}$-statistic [@Jaranowski1998] and radiometer [@Allen1999; @Ballmer2006CQG]). Data start by being parcelled into short Fourier transforms (SFTs) [@AllenMendellSFT2004], each of duration $T_\mathrm{sft}$. The total data set spanning $T_\mathrm{obs}$ for $Q$ detectors may contain up to $N_\mathrm{sft} \leq Q T_\mathrm{obs}/T_\mathrm{sft}$ SFTs. 
The *cross-correlation* in our method is made between pairs of SFTs: the first component of the pair is indexed by $K$, the second component by $L$. In the Whelan *et al* construction, $K$ and $L$ span all detectors, meaning they both can range from $0$ up to $N_\mathrm{sft}$. (Particulars are discussed in Section \[pair-selection-for-resampling\], where $K$ and $L$ are redefined). The sets of SFTs $\{K\}$ and $\{L\}$ are defined by an allowable lag-time, $T_\mathrm{max}$, the difference between start times of given SFTs $K$ and $L$. It is common to require $K \neq L$ (to avoid auto-correlation). SFT pairs $KL$ in the set $\mathcal{P}$ are *cross-correlated.* A time series $x_K(t) = h_K(t) + n_K(t)$, signal $h$ and noise $n$, has one-sided power spectral density (PSD) $S_K$. Analyze the Fourier transform $\tilde{x}$ (using Equation 2.1 [@ScoX1CrossCorr2015PRD] conventions), with sampling time $\delta t$, SFT mid-time $t_K$: $$\begin{aligned} \tilde{x}_{Km} = \sum_{j=0}^{N-1} x_K (t_K - T_\mathrm{sft}/2 + j \delta t) e^{-\mathrm{i} 2\pi j m \delta t / T_\mathrm{sft}} \delta t, \label{fourier-transform-def}\end{aligned}$$ so normalized data $z_{K m}$ in frequency bin $m$ ($k$ in [@ScoX1CrossCorr2015PRD] and our appendices) is, $$z_{K m} = \tilde{x}_{K m} \sqrt{\frac{2}{T_\mathrm{sft} S_K}}. \label{normalized-z-bin}$$ SFT bin frequency is $f_m$, but the signal instantaneous frequency is $f_K$; these must not be confused. The discrepancy, to the nearest bin from the instantaneous frequency, is $\kappa_{K m}$, $$\begin{aligned} f_m &=& \frac{m}{T_\mathrm{sft}},\\ \kappa_{K m} &=& m - f_K T_\mathrm{sft}.\end{aligned}$$ Multiple bins in a set $\mathcal{K}_K$ are part of the *Dirichlet kernel*, discussed around Equation 6.5 of [@Allen2002]. The signal contribution to each bin is found by the normalized *sinc* function, $\mathrm{sinc} \alpha = \frac{\sin{\pi \alpha}}{\pi \alpha}$. The total data vector $\textbf{z}$ has elements $z_K$, which are the Fourier-transformed data. 
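The bin offset $\kappa_{Km}$ and its sinc weighting can be sketched numerically (illustrative $T_\mathrm{sft}$ and $f_K$; `np.sinc` is exactly the normalized sinc used above):

```python
import numpy as np

T_sft = 900.0          # SFT length in seconds (illustrative)
f_K = 100.0003         # instantaneous signal frequency in SFT K, Hz (illustrative)

# A small Dirichlet-kernel set of bin indices m around the signal frequency
m_center = int(round(f_K * T_sft))
bins = np.arange(m_center - 2, m_center + 3)

# kappa_{Km} = m - f_K * T_sft, and the sinc leakage weight of each bin
kappa = bins - f_K * T_sft
weights = np.sinc(kappa)   # normalized sinc: sin(pi x)/(pi x)

# The bin nearest the instantaneous frequency carries the largest weight;
# neighbors fall off roughly as 1/kappa with alternating sign.
```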
Each element is summed from all bins that could contain a signal at a frequency $f$ (implicitly specified by the set $\mathcal{K}_K$ and the $f_K$ model in $\kappa_{K m}$), then indexed by SFT $K$, $$\begin{aligned} \Xi_K &\equiv& \sqrt{\sum_{m'\in \mathcal{K}_K} \mathrm{sinc}^2(\kappa_{K m'})},\\ z_K &=& \frac{1}{\Xi_K} \sum_{m\in\mathcal{K}_K} (-1)^m \mathrm{sinc} (\kappa_{K m}) z_{K m} . \label{total-data-eq}\end{aligned}$$ The cross-correlation method constructs $\rho$ with a *filter*, Hermitian weighting matrix $\textbf{W}$. It uses the conjugate transpose $\dag$. With matrix entries $KL$ that correlate elements $K$ from the SFT vector $\textbf{z}$, $$\rho = \textbf{z}^\dag \textbf{W} \textbf{z}, \label{abstract-rho-matrix}$$ or in explicit notation, $\rho = \sum_K \left(\sum_L z_L^* W_{KL}\right) z_K$. Equation \[abstract-rho-matrix\] depends, *via* $\textbf{W}$, on the point in $\lambda$ parameter space (including frequency $f$) of the signal model. A near-optimal $\textbf{W}$ is the geometrical factor $\hat \Gamma^\mathrm{ave}_{KL}$ (chosen for $\psi$-independence, Whelan *et al* Equation 2.33 [@ScoX1CrossCorr2015PRD]). Let a hat symbol indicate noise-weighted normalization, *e.g.,* $\hat a^K \equiv \sqrt{2 T_\mathrm{sft}/S_K} a^K$. Taking $a^K$ ($a(t)$ at the (mid-)time of SFT $K$) and likewise $a^L$, $b^K$, $b^L$, we can find $\hat \Gamma^\mathrm{ave}_{KL}$. With overall normalization $N$ (*ibid.* Equation 3.6), $$\begin{aligned} \hat \Gamma^\mathrm{ave}_{KL} &=& \frac{1}{10}\left(\hat a^K \hat a^L + \hat b^K \hat b^L \right), \label{geometric-filter}\\ N &=& \left(2 \sum_{KL\in\mathcal{P}} \Xi_K^2 \Xi_L^2 (\hat \Gamma_{KL}^\mathrm{ave})^2\right)^{-1/2}.\label{norm-cc}\end{aligned}$$ Another weight, $\hat \Gamma^\mathrm{circ}_{KL} = \frac{1}{10}(\hat a^K \hat b^L - \hat b^K \hat a^L)$, is also $\psi$-independent. Combining $\hat \Gamma^\mathrm{ave}$ and $\hat \Gamma^\mathrm{circ}$ can fix $\iota$. 
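As a concreteness check on Equations \[abstract-rho-matrix\] and \[geometric-filter\], here is a minimal numpy sketch (random illustrative data, not the production code) of the quadratic form $\rho = \textbf{z}^\dag \textbf{W} \textbf{z}$ with the $\psi$-independent weighting:

```python
import numpy as np

rng = np.random.default_rng(0)
N_sft = 6

# Noise-weighted antenna-pattern samples at each SFT mid-time (illustrative values)
a_hat = rng.normal(size=N_sft)
b_hat = rng.normal(size=N_sft)

# Geometric filter, Eq. (geometric-filter): psi-independent average weighting
Gamma = (np.outer(a_hat, a_hat) + np.outer(b_hat, b_hat)) / 10.0
np.fill_diagonal(Gamma, 0.0)   # exclude auto-correlation pairs (K = L)

# Normalized SFT data vector z (pure noise here, for illustration)
z = (rng.normal(size=N_sft) + 1j * rng.normal(size=N_sft)) / np.sqrt(2.0)

# rho = z^dag W z; real-valued because Gamma is real-symmetric (hence Hermitian)
rho = float(np.real(np.conj(z) @ Gamma @ z))
```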
To obtain $\rho$, Equation 2.36 of [@ScoX1CrossCorr2015PRD] (analogous to Equation 4.11 of [@Dhurandhar2008]), we cross-correlate with the paired data in SFTs $L$ indexed by bin $n$. We unite Fourier bins using the filter, complex conjugation $*$, and the signal model phase difference between SFTs, $\Delta \Phi_{KL} = \Phi_K - \Phi_L$: $$\begin{aligned} \rho = &N& \sum_{KL\in\mathcal{P}} \hat \Gamma^\mathrm{ave}_{KL} \sum_{m\in\mathcal{K}_K} \sum_{n\in\mathcal{K}_L} (-1)^{m-n} \label{textbook-cc-rho}\\ &\times& \mathrm{sinc}(\kappa_{K m}) \mathrm{sinc}(\kappa_{L n}) \nonumber\\ &\times& (e^{\mathrm{i}\Delta\Phi_{KL}} z^*_{K m} z_{L n} + e^{-\mathrm{i}\Delta\Phi_{KL}} z_{K m} z^*_{L n} ). \nonumber\end{aligned}$$ Implicit in $\Phi_K$ is $f_K$, hence all Doppler parameters: $\rho$ must be calculated for each $\lambda$ template. Since billions [@ScoX1CrossCorr2017ApJO1] of templates are common, efficiency is paramount. Thanks to links between $\rho$ and the $\mathcal{F}$-statistic that are explored in Appendix \[relationships-to-other-optimal-statistics\], resampling speeds the search. Resampling\[resampling\] ======================== Many signals can be resampled into the source frame. (Compact binary coalescences were contemplated first [@SchutzCBCResamp2017]). This paper focuses on CWs [@Jaranowski1998] and adheres in notation to code documentation [@LALAppsRepo; @PrixTimingModel2017]. Resampling abstractly moves phase demodulation from **W** onto **z**. Delay causes phase modulation: Equation \[time-varying-phase-eq\] is Roemer-delayed by Earth and source binary motion. We want to sample $\phi(t;\lambda) = 2\pi f_0 \tau$, but in equally-spaced $\tau$ (source frame) instead of equally-spaced $t$ (detector frame). 
Our $\tau$ corresponds to $t_b$ in [@Patel:2009qe], although they consider spindown rather than binary parameters. Calculating Equation \[t-ssb-equation\] and Equation \[tau-equation\] (with numerical solutions to Equations \[solve-for-E\] and \[Kepler-equation\]) yields $\tau(t;\lambda)$. Because $x(t)$ is discrete, sampling $x(\tau)$ requires interpolation. The sinc function interpolates between time-domain samples, paralleling frequency-domain use [@Allen2002]. As it is computationally-prudent to analyze small frequency bands $f_\mathrm{band}$ independently, data are heterodyned, by selecting the band of interest from a Fourier transform, then inverse Fourier transforming into a downsampled, complex time series, then interpolating. Since this procedure differs from [@Patel:2009qe], we describe it. A time series $x(t)$ sampled at $\delta t$ has a Nyquist frequency of $f_N = 1/(2\delta t)$. Each SFT $K$ contains its own set of time indices $j$ ranging from $0$ to $N-1$, so $j$ implicitly refers to $K$. With respect to an arbitrary reference time, $t = t_K - T_\mathrm{sft}/2 + j \delta t$. Given a set of $M = T_\mathrm{obs}/T_\mathrm{sft}$ SFTs indexed by $K$, each with frequency bins $k$, spaced by $\delta f = 1/T_\mathrm{sft}$, with $N = T_\mathrm{sft}/(\delta t)$ samples, $x(t)$ can be reconstructed by the inverse FFT: $$\begin{aligned} x_K(t_K - T_\mathrm{sft}/2 + j\delta t) &=& \sum_{k=0}^{N-1} z_{Kk} e^{i 2 \pi j k \delta t / T_\mathrm{sft}} \delta f. \label{inverse-fft-eq}\end{aligned}$$ Time series segments and frequency bands can be selected by indices. Equation \[inverse-fft-eq\] can be simplified by using the index $q_K \equiv (t_K - T_\mathrm{sft}/2)/(\delta t) + j$ in its argument. The $q_K$ is the index with respect to the start of $T_\mathrm{obs}$. A new sampling interval $\delta t'$ and corresponding index $q_K'$ for some time $t = q_K' \delta t'$ can define a downsampled time series.
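The band-selection heterodyne described above can be sketched with numpy (illustrative sample rate and band; the actual implementation, per Appendix \[downsampling-and-heterodyning\], differs in detail):

```python
import numpy as np

fs, T = 1024.0, 4.0               # sample rate (Hz) and duration (s), illustrative
N = int(fs * T)
t = np.arange(N) / fs
f_sig = 101.5                      # a tone inside the band of interest
x = np.cos(2.0 * np.pi * f_sig * t)

# Select a narrow band [100, 104) Hz out of the Fourier transform...
X = np.fft.fft(x)
f_h = 100.0                        # heterodyne frequency = lower band edge
k_lo = int(f_h * T)
N_band = int(4.0 * T)              # 4 Hz of bandwidth -> 16 bins
X_band = X[k_lo:k_lo + N_band]

# ...then inverse transform into a downsampled, complex time series x'(q' dt'),
# now sampled at dt' = T / N_band = 0.25 s instead of ~1 ms
x_prime = np.fft.ifft(X_band)

# The tone now sits at f_sig - f_h = 1.5 Hz in the heterodyned series
k_peak = int(np.argmax(np.abs(np.fft.fft(x_prime))))
```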
This time series (heterodyne frequency $f_h$) is $x' (q' \delta t')$ and is produced as in Appendix \[downsampling-and-heterodyning\]. Resampling theory\[resampling-theory\] -------------------------------------- ### Interpolation When the data are unaliased and approximately stationary during each SFT, $x'(q'\delta t')$ is a complete representation. Sinc-interpolation allows us to interpolate $x'(\tau)$. The Shannon formula as implemented [@LALAppsRepo] states that for integer $D$ Dirichlet elements, integer index $j$ and $j^* \equiv \mathrm{round}(t/(\delta t))$, $j_0 \equiv j^* - D$, and a window $w_j$ (here, Hamming with length $2D + 1$), $$\begin{aligned} \delta_j &\equiv& \frac{t-t_j}{\delta t},\\ x(t) &\approx& \sum_{j = j^* - D}^{j^* + D} x_j w_j \frac{\sin\left(\pi \delta_j\right)}{\pi \delta_j} = \frac{\sin{\left( \pi \delta_{j_0} \right)}}{\pi} \sum_{j=j^* - D}^{j^* + D} (-1)^{(j-j_0)} \frac{x_j w_j}{\delta_j},\label{shannon-interp-formula}\end{aligned}$$ converging when $D\rightarrow \infty$. A typical choice is $D=8$, which minimizes the combined cost of sinc-interpolation (linear in $D$) and the subsequent FFTs (linear in Appendix \[downsampling-and-heterodyning\]’s $\Delta f_\mathrm{load}$). ### Resampling into the source frame Let our source-frame time series be indexed by $r$ with constant spacing $\delta t'$: $\tau = r \delta t'$. We use the function $t(\tau;\lambda)$, the functional inverse of the function $\tau(t;\lambda)$ from Equation \[tau-equation\]. Over timescales $T_\mathrm{sft}$ when the signal stays in one frequency bin, $(d^2\tau/dt^2) T_\mathrm{sft}^2 f_0 \ll 1$, Taylor approximation is valid around $t_0$: $$\begin{aligned} \tau(t;\lambda) &\approx& \tau(t_0;\lambda) + \left[\frac{d\tau(t)}{dt}\Big|_{t=t_0}\right] t,\\ t(\tau;\lambda) &\approx& t(\tau_0;\lambda) + \left[\frac{d\tau(t)}{dt}\Big|_{t=t_0}\right]^{-1} \tau,\end{aligned}$$ making computations practical.
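Equation \[shannon-interp-formula\] can be prototyped directly. The sketch below (our own illustrative test tone and tolerances, not the exact LALSuite routine) applies the windowed form with a Hamming window and $D=8$:

```python
import numpy as np

def sinc_interpolate(x, t, dt, D=8):
    """Windowed-sinc (Shannon) interpolation of samples x[j] = x(j*dt) at time t,
    using a Hamming window of length 2D+1 centered on the nearest sample."""
    j_star = int(round(t / dt))
    j = np.arange(j_star - D, j_star + D + 1)
    delta_j = t / dt - j                      # (t - t_j)/dt
    w = 0.54 + 0.46 * np.cos(np.pi * (j - j_star) / D)   # Hamming taper
    return float(np.sum(x[j] * w * np.sinc(delta_j)))

# Check on an oversampled tone: recover an off-grid value
dt = 1e-3
j_all = np.arange(4096)
x = np.sin(2.0 * np.pi * 20.0 * j_all * dt)    # 20 Hz tone, 1 kHz sampling
t_query = 0.50025                               # between samples (offset 0.25 dt)
estimate = sinc_interpolate(x, t_query, dt)
exact = np.sin(2.0 * np.pi * 20.0 * t_query)
```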
Translating from detector time to source time introduces a timeshift $\Delta t^* = r \delta t' - t(r \delta t';\lambda)$ to $x(t)$. The discrete source-frame time series is $x'(r\delta t') = \exp{(-i2\pi f_h \Delta t^*)} x'(t(r \delta t';\lambda))$; with $q_0' \equiv {r}^* - D$, $$\begin{aligned} \delta_{q'} &\equiv& \frac{t(r\delta t';\lambda)-q' \delta t'}{\delta t'},\label{delta-q-eq}\\ {r}^* &\equiv& \mathrm{round}\left(\frac{t(r\delta t';\lambda)}{\delta t'}\right),\label{q-prime-eq}\\ x'(r \delta t') &\approx& \frac{\sin{\left( \pi \delta_{q_0'} \right)}}{\pi}\, e^{-\mathrm{i}2\pi f_h [r \delta t' - t(r \delta t';\lambda)]} \nonumber\\ &&\times \sum_{q'={r}^* - D}^{{r}^* + D} (-1)^{(q'-q_0')} \frac{x_{q'}' w_{q'}}{\delta_{q'}}. \label{resampled-time-series-q-eq}\end{aligned}$$ Then $x_r' \equiv x'(r\delta t')$ is the complex, heterodyned, downsampled, discrete time series that equally samples the source frame $x(\tau)$. Roemer delays vanish in $x(\tau)$, if the Doppler parameters $\lambda$ are accurate. Mismatch results in residual phase modulation. No finite lattice of $\lambda$ can perfectly sample the space. The required resolution is determined by the phase mismatch metric $g$ [@Brady1998]. Derivatives $d/d\lambda$ for $\lambda \in (f, a_p, T_\mathrm{asc}, P)$ have been calculated for the cross-correlation method’s metric [@ScoX1CrossCorr2015PRD]. In the similar $\mathcal{F}$-statistic metric [@LeaciPrixDirectedFStatPRD], $e$ and $T_\mathrm{p}$ are discussed.
The metric is computed in software over the phase mismatch $\Delta \Phi_{\alpha,i}$ for the cross-correlation method’s pairs indexed by $\alpha = KL$ and Doppler parameters indexed by $i$, $$\begin{aligned} g_{ij} &\approx& \frac{1}{2} \langle \Delta \Phi_{\alpha, i} \Delta \Phi_{\alpha,j} \rangle_\alpha \nonumber\\ &=& \frac{1}{2} \left\langle \left(\frac{\partial (\Phi_K - \Phi_L)}{\partial \lambda^i} \right)\left( \frac{\partial(\Phi_K - \Phi_L)}{\partial \lambda^j} \right) \right\rangle_\alpha, \label{phase-derivs-metric}\end{aligned}$$ extending to any Doppler parameters in the phase model. (Metric vielbeins represent the natural units of distance for a parameter-space vector). Given the metric, a lattice is calculated with the spacing in each dimension set by the allowed mismatch, $\mu_\lambda$. Mismatch is a tunable choice about the statistic’s acceptable *fractional loss:* $\mu_\lambda = (\textrm{max}(\rho) - \rho)/\textrm{max}(\rho)$. A simple cubic lattice grid for a diagonal metric has spacings $\delta \lambda_i$, $$\delta \lambda_i = \sqrt{\frac{\mu_{\lambda_i} }{g_{ii}} }.\label{metric-spacing-eq}$$ However, the metric is only a local approximation [@Brady1998]. The total derivative $d\tau$ contains many approximate degeneracies, for example when frequency mismatch $df$ equals modulation depth mismatch $d\Delta f_\mathrm{obs}$ arising from an offset in $a_p$ or $T_\mathrm{asc}$ (see Appendix \[stat-interp-rho\]). Mismatch studies are thus needed to verify the loss and choose spacings. Each lattice point in orbital parameter space must have its own resampled $x(\tau)$. Resampling interpolation yields $x(\tau)$ so that a putative signal is concentrated at a single frequency $f_0$. Next, taking the Fourier transform [@Jaranowski1998; @Patel:2009qe] generates $\rho$.
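Equation \[metric-spacing-eq\] in a short sketch (the diagonal metric magnitudes here are purely illustrative placeholders):

```python
import numpy as np

# Illustrative diagonal phase-metric entries g_ii for (f, a_p, T_asc);
# real values come from the software-computed metric, Eq. (phase-derivs-metric)
g_diag = {"f": 1.0e9, "a_p": 4.0e4, "T_asc": 2.5e3}
mu_per_dim = 0.1          # allowed mismatch per dimension (a tunable choice)

# Cubic-lattice spacings: delta(lambda_i) = sqrt(mu_i / g_ii)
spacing = {name: np.sqrt(mu_per_dim / g) for name, g in g_diag.items()}

# Consistency: stepping one lattice spacing costs exactly the allowed mismatch
mu_check = g_diag["f"] * spacing["f"] ** 2
```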
Resampled cross-correlation method implementation\[implementation\] ------------------------------------------------------------------- Source-frame $x(\tau)$ speeds Section \[crosscorr-stat\]’s $\rho$ calculation. Supplied with $T_\mathrm{obs}$, we divide data into semicoherent segments with a shortest timescale of $T_\mathrm{short}$, replacing $T_\mathrm{sft}$. This $T_\mathrm{short}$ is the duration we will take from each $K$ side of a pair of the cross-correlation method. The $L$ side of the pair will be composed of all other $T_\mathrm{short}$ intervals with start times up to a *maximum lag-time* $T_\mathrm{max}$ before or after. A total, cross-detector, coherent integration duration of $T_\mathrm{coh}$ includes a central $T_\mathrm{short}$ plus $T_\mathrm{max}$ on both sides: $$\begin{aligned} |\tau_K - \tau_L| &\leq& T_\mathrm{max},\label{lag-time-constraint}\\ T_\mathrm{coh} &=& 2 T_\mathrm{max}+T_\mathrm{short}.\end{aligned}$$ For same-detector correlations, only $T_\mathrm{max}$ on one side is typically used, to avoid auto-correlation and double-counting, but we preserve the above definition of $T_\mathrm{coh}$ to keep frequency resolution the same. Times $\tau_K$ and $\tau_L$ evenly divide the resampled time series if calculated in the source frame, though this means that slightly unequal amounts of detector data go into $T_\mathrm{short}$. As $|\tau_K - t_K| \leq |\vec r \cdot \vec n /c + a_p|$, the difference between an interval start time in detector and source frame is bounded by the Roemer delay. We neglect these effects because the relative inequality from one interval to the next is of order $|d\tau/dt - 1| \leq 2\times 10^{-4}$. Based on prior experience [@LeaciPrixDirectedFStatPRD], these delays do not affect the metric estimation.
For the cross-correlation method’s metric [@ScoX1CrossCorr2015PRD], the goal is to constrain the (pair-averaged) phase mismatch over $T_\mathrm{max}$ from an offset $\delta \lambda_i$; this mismatch grows linearly with $T_\mathrm{max}$, so it differs negligibly from the phase mismatch over $(d\tau/dt)\,T_\mathrm{max}$. Nor are average noise weightings affected much by resampling, because the normalization $N$ is a sum over $T_\mathrm{obs}$. However, weightings are based on average noise per SFT. To find the weights, we average noise for each $T_\mathrm{short}$ interval by interpolating with Equation \[shannon-interp-formula\]. Terms $T_\mathrm{sft}$ in Section \[crosscorr-stat\] are replaced with $T_\mathrm{short}$. The current implementation zero-pads gaps instead of skipping them. These gaps contribute nothing to $\rho$, and, because the noise-weighted antenna functions $\hat a(t)$ and $\hat b (t)$ give gaps zero weight, they contribute nothing to $N$. Compared to the non-resampling cross-correlation method [@ScoX1CrossCorr2015PRD], resampling yields two benefits. First, $T_\mathrm{short}$ supersedes $T_\mathrm{sft}$, the latter being limited by modulation moving the signal out of a single frequency bin. Increasing $T_\mathrm{short}$ reduces the number of (new) pairs, $N_\mathrm{pairs} \approx N_\mathrm{det}^2 T_\mathrm{max} T_\mathrm{obs} T_\mathrm{short}^{-2}$ (replacing $T_\mathrm{sft}$ from Equation $3.27$ in [@ScoX1CrossCorr2015PRD]). Because sensitivity depends, to zeroth order, only on the combination $N_\mathrm{det}^{2} T_\mathrm{obs} T_\mathrm{max}$, independent of $T_\mathrm{sft}$, while cost is linearly proportional to the number of templates times the number of pairs, it is optimal to minimize the number of pairs by maximizing $T_\mathrm{short}$. Second, the number of frequency templates required is automatically supplied by an FFT. An FFT over a time period $T_\mathrm{coh}$ is spaced at $1/T_\mathrm{coh} \propto T_\mathrm{max}^{-1}$.
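The pair-count scaling above can be illustrated numerically (the example durations are ours, chosen only for illustration):

```python
# Pair-count scaling, N_pairs ~ N_det^2 * T_max * T_obs / T_short^2
def n_pairs(n_det, t_obs, t_max, t_short):
    return n_det ** 2 * t_max * t_obs / t_short ** 2

t_obs = 365.25 * 86400.0       # one year of data
t_max = 25920.0                # a lag-time of a few hours

n_sft_like = n_pairs(2, t_obs, t_max, 900.0)     # T_short kept at T_sft = 900 s
n_resamp = n_pairs(2, t_obs, t_max, t_max)       # T_short raised to T_max

# Raising T_short from 900 s to T_max cuts the pair count by (T_max/900)^2
ratio = n_sft_like / n_resamp
```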
This scaling comes from the metric element $g_{ff}$ for that lag-time, independent of $T_\mathrm{sft}$ and resampling. Rather than needing to repeat this fine frequency grid for every SFT, resampling allows all the data to be gathered into one FFT of duration $T_\mathrm{FFT} \geq T_\mathrm{coh}$. (For finer sampling, the FFT can be zero-padded; for coarser, its output can be decimated). ### Pair selection for resampled statistic\[pair-selection-for-resampling\] Resampled $x'(\tau)$ as given by Equation \[resampled-time-series-q-eq\] must be divided into pairs to calculate the $\rho$ statistic. The set of pairs $\mathcal{P}$ must be constructed. Taking $Q$ detectors, they are indexed by $X$ for the first component of the cross-correlation method’s pair and $Y$ for the second component. These $X,Y$ indices range from $0$ to $Q-1$. An option exists to exclude same-detector correlations, as in the stochastic radiometer. Here, we allow same-detector correlations, except same-detector same-time correlations, that is, the auto-correlation. We reuse indices $K$ and $L$ from previous sections but restrict the range of each to a single detector. Indexing of $T_\mathrm{short}$ intervals is marked by $K$ for detector $X$ and $L$ for detector $Y$. Indices $K,L$ range from $0$ to $M = T_\mathrm{obs}/T_\mathrm{short}$, regardless of any gaps. Approximating Equation \[lag-time-constraint\] in the detector frame gives $$\begin{aligned} \{L|K\} &:& |K T_\mathrm{short} - L T_\mathrm{short}| \leq T_\mathrm{max},\\ \implies && \{K - T_\mathrm{max}/T_\mathrm{short},\ldots,K + T_\mathrm{max}/T_\mathrm{short}\},\nonumber\end{aligned}$$ which is straightforward when $T_\mathrm{max}$ is an integer multiple $R$ of $T_\mathrm{short}$. (Performance is best in practice when $T_\mathrm{short} = T_\mathrm{max}$). This set $\{L|K\}$ contains $M_L = 2R+1$ elements for cross-detector correlations and $R$ for same-detector correlations, to avoid double-counting.
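The set $\{L|K\}$ can be sketched as follows (illustrative indices; the element counts match the $M_L = 2R+1$ cross-detector and $R$ same-detector cases above):

```python
# Pair selection with T_max = R * T_short, indices K, L in units of T_short
def allowed_L(K, R, M, same_detector):
    """Set {L|K}: |K - L| <= R, truncated to the valid range [0, M)."""
    if same_detector:
        # one-sided, excluding L = K, to avoid auto-correlation and double-counting
        candidates = range(K + 1, K + R + 1)
    else:
        candidates = range(K - R, K + R + 1)
    return [L for L in candidates if 0 <= L < M]

R, M = 3, 100
cross = allowed_L(50, R, M, same_detector=False)   # 2R + 1 = 7 elements
same = allowed_L(50, R, M, same_detector=True)     # R = 3 elements
```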
Detector-time pairing is predictable, and it is acceptable because the fractional difference between intervals $K$ and $K+1$ is of order $|d\tau/dt - 1| \approx 2\times 10^{-4}$. Yet the resampled time series do not start at precisely the same source frame time. Let $(\tau_X = \tau_K|K=0)$, $(\tau_Y = \tau_L|L=0)$. They can differ by $(\vec r_X - \vec r_Y)\cdot \vec n/c$, which for ground-based detectors is of order $10$ ms at most. This difference, $\Delta \tau_{XY}$, is still a full cycle at $100$ Hz, and it must be accounted for by timeshifting the resampled time series to the same starting epoch; the timeshift must be applied at the physical frequency $f_0$. Differences $\tau_K - \tau_L$ require a further timeshift at the heterodyned frequency, $f_0 - f_\mathrm{het}$, as they are internal to the resampled time series. ### Fourier transform size and phase shift The above definitions separate $\mathcal{P}$ pairs into intervals and detectors. To construct $\rho$ from resampled data in these pairs using an FFT, we require the number of FFT samples, $N_\mathrm{FFT}$. The metric resolution answers this question. Then we will substitute the pair definition into $\rho$ to make an explicit quadruple sum. The metric spacing $\delta \lambda_f$ will be achieved by an FFT of duration $1/(\delta \lambda_f)$. For typical mismatch $\mu_f$, Equation \[metric-spacing-eq\] and Equation 4.31a of [@ScoX1CrossCorr2015PRD] yield $\delta \lambda_f < 1/T_\mathrm{coh}$. Specifically, Equation 4.33 [@ScoX1CrossCorr2015PRD] becomes $(3/4)T_\mathrm{max}^2$ on the right-hand side in the case $T_\mathrm{max} = T_\mathrm{short}$, $$\delta \lambda_f = \sqrt{\frac{6\mu_f}{\pi}}\frac{1}{T_\mathrm{coh}},$$ which provides $\delta \lambda_f T_\mathrm{coh} < 1$ up to $\mu_f \approx 0.52$. This is a high value of mismatch. Any FFT with that mismatch or finer frequency spacing is automatically long enough to include all the data in $T_\mathrm{coh}$.
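A numerical check of the spacing formula (illustrative $T_\mathrm{coh}$, corresponding to $T_\mathrm{max} = T_\mathrm{short} = 25920$ s):

```python
import math

def delta_lambda_f(mu_f, t_coh):
    """Frequency lattice spacing for T_max = T_short: sqrt(6 mu_f / pi) / T_coh."""
    return math.sqrt(6.0 * mu_f / math.pi) / t_coh

t_coh = 77760.0    # 2*T_max + T_short with T_max = T_short = 25920 s (illustrative)

# At mu_f ~ 0.52 the spacing just reaches the natural FFT resolution 1/T_coh...
boundary = delta_lambda_f(0.52, t_coh) * t_coh      # just under 1

# ...while a finer mismatch mu_f = 0.1 demands an FFT ~2.3x longer than T_coh
stretch = 1.0 / (delta_lambda_f(0.1, t_coh) * t_coh)
```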
(For coarser mismatch, decimation by a ratio $\nu_D \equiv \mathrm{ceil}(\delta \lambda_f \times T_\mathrm{coh})$ after the FFT can select the frequencies of interest). Conversely, if $\mu_f = 0.1$, $\delta \lambda_f$ implies FFT duration $\geq 2.3 T_\mathrm{coh}$. Dirichlet frequency interpolation is replaced by zero-padding to the metric resolution. The recovered fraction of spectral power is known from Equation 3.18 [@ScoX1CrossCorr2015PRD], $\langle \Xi^2\rangle$ (to which $\rho$ is linearly proportional): for Dirichlet interpolation with $m$ bins, $$\langle \Xi^2 \rangle = 2 \int_0^{m/2} \mathrm{sinc}^2 \kappa d \kappa.$$ In that paper, $m = 2$ was recommended to capture $0.903$ of $\rho$. The function $\delta_{T_\mathrm{sft}}(f-f')$ is a continuous function determined by data; only $\kappa_{Kk}$ are discrete. Zero-padding from $T_\mathrm{coh}$ to $T_\mathrm{FFT}$ (and taking only $1$ bin of the FFT, so $m=1$) gives, $$\langle \Xi^2 \rangle_\mathrm{resamp} = 2 \int_0^{1/2} \mathrm{sinc} \left(\frac{T_\mathrm{coh}}{T_\mathrm{FFT}} \kappa\right) \mathrm{sinc} \left(\frac{T_\mathrm{short}}{T_\mathrm{FFT}} \kappa\right) d \kappa.$$ Hence ($T_\mathrm{coh} = 3 T_\mathrm{short}$), $\langle \Xi^2\rangle \approx 0.861$ when $T_\mathrm{FFT} = T_\mathrm{coh}$, the minimal possible by design. More typically, $\langle \Xi^2\rangle \approx 0.963$ when $T_\mathrm{FFT} = 2 T_\mathrm{coh}$, or $\approx 0.983$ when $T_\mathrm{FFT} = 3 T_\mathrm{coh}$. This is sufficient to forego the cost of Dirichlet interpolation in the frequency domain. Any desired improvement in $\langle \Xi^2 \rangle_\mathrm{resamp}$ can be obtained by requesting smaller $\mu_f$. Practical considerations mean that FFT speed is most predictable when $N_\mathrm{FFT}$ is an integer power of $2$. Our resampled time series has a fixed $\delta t'$, so the only way to increase the number of samples is to zero-pad further in time. 
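The quoted $\langle \Xi^2\rangle_\mathrm{resamp}$ values can be verified by direct quadrature (a numpy sketch of the integral above; `np.sinc` is the normalized sinc):

```python
import numpy as np

def xi_sq_resamp(t_coh_over_fft, t_short_over_fft, n=20001):
    """Recovered spectral-power fraction: 2 * int_0^{1/2} sinc(a k) sinc(b k) dk."""
    kappa = np.linspace(0.0, 0.5, n)
    integrand = np.sinc(t_coh_over_fft * kappa) * np.sinc(t_short_over_fft * kappa)
    return 2.0 * np.trapz(integrand, kappa)

# T_coh = 3 * T_short; vary the zero-padding factor T_FFT / T_coh
minimal = xi_sq_resamp(1.0, 1.0 / 3.0)        # T_FFT = T_coh
doubled = xi_sq_resamp(0.5, 1.0 / 6.0)        # T_FFT = 2 T_coh
tripled = xi_sq_resamp(1.0 / 3.0, 1.0 / 9.0)  # T_FFT = 3 T_coh
```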
Starting with the required $N_\mathrm{FFT0}$, $$\begin{aligned} N_\mathrm{FFT0} &=& \frac{\Delta f_\mathrm{load}}{\delta \lambda_f} \mathrm{ceil}(\delta \lambda_f \times T_\mathrm{coh}),\\ N_\mathrm{FFT} &=& 2^{\mathrm{ceil}\left( \log_2 N_\mathrm{FFT0} \right)}. \label{n-fft-final}\end{aligned}$$ In time, $T_\mathrm{FFT} = \delta t' N_\mathrm{FFT}$. The extension from $N_\mathrm{FFT0}$ to $N_\mathrm{FFT}$ causes over-sampling in the frequency domain. From this we decimate by rounding down to the nearest bin with a real-valued ratio $\nu_R$, $$\begin{aligned} \nu_R = (\delta \lambda_f)(\delta t') N_\mathrm{FFT}.\end{aligned}$$ To maximize recovered power, we use bin-centered frequency. Bin offset ($f_h \approx \bar f_h$ in Equation \[heterodyne-shift-approx\]) is solved with a shift $f_r^*$ to the nearest FFT bin: $$\begin{aligned} \mathrm{remainder}(a,b) &\equiv& a - \frac{a}{|a|}\,|b|\,\mathrm{floor}\left(\frac{|a|}{|b|}\right),\\ f_r^* &=& \mathrm{remainder}\left( -f_\mathrm{band}/2 , T_\mathrm{FFT}^{-1} \right).\end{aligned}$$ We will multiply $a_r$ and $b_r$ each by $\exp{(-i2\pi f_r^* \tau)}$. Preceding time shifts using $f_h$ remain valid. The smallest FFT frequency $f_\mathrm{FFT}$, at bin $k_0$, causes the smallest output frequency $f_\mathrm{min} = f_h - f_\mathrm{band}/2$ to be found at bin $k_0$: $$\begin{aligned} f_\mathrm{FFT} &=& f_h + f_r^* - \frac{1}{2} f_\mathrm{band} T_\mathrm{FFT},\\ k_0 &=& \mathrm{lround}\left(\frac{f_h - f_\mathrm{band}/2 - f_\mathrm{FFT} }{ T_\mathrm{FFT}^{-1} }\right),\end{aligned}$$ where $\mathrm{lround}$ rounds to the nearest integer. ### Antenna function weighting \[antenna-function-weighting\] Equation \[resampled-time-series-q-eq\] expresses a discrete time series $x_r' = x'(r\delta t')$ of $x'$ in $\tau = r \delta t'$. Time series accounting for amplitude modulation by antenna functions $a$ and $b$ are returned.
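Equation \[n-fft-final\] and the decimation ratio $\nu_R$ in a short sketch (the loaded band, spacing, and $\delta t'$ are illustrative placeholders):

```python
import math

def fft_sizing(delta_f_load, delta_lambda_f, t_coh, dt_prime):
    """N_FFT0, power-of-2 N_FFT, and decimation ratio nu_R, per the text above."""
    n_fft0 = (delta_f_load / delta_lambda_f) * math.ceil(delta_lambda_f * t_coh)
    n_fft = 2 ** math.ceil(math.log2(n_fft0))
    nu_r = delta_lambda_f * dt_prime * n_fft
    return n_fft0, n_fft, nu_r

# Illustrative numbers: 0.2 Hz loaded band (Nyquist of dt' = 2.5 s),
# ~4.3e-6 Hz frequency spacing, T_coh ~ 77760 s
n_fft0, n_fft, nu_r = fft_sizing(0.2, 4.3e-6, 77760.0, 2.5)
```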
The noise-weighted $\sqrt{2/(T_\mathrm{sft}S_h)} x_r'$ are multiplied by the noise-normalized $\hat a$,$\hat b$ antenna function time series. This hat symbol equals multiplication by $\sqrt{2T_\mathrm{sft}/S_h}$. In the following paragraphs, let us outline some practical considerations, because the implementation may otherwise be ambiguous. When computed, elements $a_r$, $b_r$ should be normed to order unity for numerical stability [@PrixFStatModel2011]. A noise-normalization $\mathcal{S}_a = \sqrt{2T_\mathrm{sft}/S_h}\langle a\rangle$ should be used. Multiplication by $\mathcal{S}_a$ can subsequently restore $\hat a$, $\hat b$ for the resampled time series. An error-prone point is that we must use a factor of $\mathcal{S}_a \sqrt{T_\mathrm{short}/T_\mathrm{sft}}$ in the implementation of $\hat \Gamma^\mathrm{ave}_{KL}$ (because we have written new indices $K$,$L$ in terms of $T_\mathrm{short}$). As the statistic contains factors of $a^2$, $b^2$, we track the ratio $T_\mathrm{short}/T_\mathrm{sft}$. This choice preserves the correct normalization factor and ensures numerical stability at each stage. (A clean-slate code implementation could be more straightforward). The physically-meaningful values $a$, $b$ remain unchanged throughout. The product of the normalizations equals $2/S_h$ (for $S_h$ approximated by the nearest SFT). The kernel timestep is $\delta t'$ (in implementation, after the FFT). Multiplication by the requisite frequency shift $f_r^*$ obtains $a_r$,$b_r$: $$\begin{aligned} a_r &\equiv& \frac{2 \delta t' }{S_h} a(r \delta t') x'(r \delta t') e^{-\mathrm{i}2\pi f_r^* \tau}, \label{norm-a-eq}\\ b_r &\equiv& \frac{2 \delta t' }{S_h} b(r \delta t') x'(r \delta t') e^{-\mathrm{i}2\pi f_r^* \tau}.\end{aligned}$$ Here $a(t)$ and $b(t)$ are real-valued amplitude modulations with period of one sidereal day. They are not heterodyned. 
(Their period is also greater than the maximum Roemer delay, giving $a(r \delta t') \approx a(t(r \delta t'))$, $b(r \delta t') \approx b(t(r \delta t'))$). Antenna functions are effectively constant over $T_\mathrm{sft}$. Multiplying $a(t)$ and $b(t)$ by $x(t)$ prepares the optimal filter for the $\mathcal{F}$-statistic [@Jaranowski1998] as well as for our inner product. ### Phase shifts after Fourier transform Subsequent shifts are labeled $\Phi_\mathrm{out}$ and $\Phi_\mathrm{in}$. $\Phi_{\mathrm{out}_K}$ is the shift at the physical frequency of bin $k$, $f = f_h - f_\mathrm{band}/2 + k(\delta \lambda_f)$, due to the start time (epoch) of that detector’s ($X$ for $K$, $Y$ for $L$) resampled time series. $\Phi_{\mathrm{in}_K}$ is the shift at the heterodyned frequency of bin $k$, $[k_0 + \mathrm{floor}(\nu_R k)] T_\mathrm{FFT}^{-1}$, from different start times $K T_\mathrm{short}$ within the resampled time series. $$\begin{aligned} \Phi_{\mathrm{out}_K}(k) &=& 2\pi [f_h - f_\mathrm{band}/2 + k (\delta \lambda_f)] (\tau_X), \label{phi-out-eq}\\ \Phi_{\mathrm{in}_K}(k) &=& 2\pi [k_0 + \mathrm{floor}(\nu_R k)] T_\mathrm{FFT}^{-1} K T_\mathrm{short}.\end{aligned}$$ Considerations include the antenna-weighted, phase-model corrected frequency-domain data $\hat a^K \zeta_K$. The $\zeta_K$ term equals the product $\Xi_K z_K \exp{(-\mathrm{i} \Phi_K)}$. This term is explored in Appendix \[relationships-to-other-optimal-statistics\], Equation \[from-here-resamp\]; in contrast with Equation \[fourier-transform-def\], $k$ refers to a frequency bin, instead of $m$. In the Appendix, the index $m \equiv j - T_\mathrm{sft}/(2 \delta t)$ is introduced for a time-domain sample. Let us now reconstruct $\hat a^K \zeta_K$ with resampling: $\delta t$ becomes $\delta t'$, $T_\mathrm{sft}$ becomes $T_\mathrm{short}$. The index $m$ increases with $r$.
Precisely, $t_m = t_K + m\delta t$ is the overall time, analogous to $r \delta t'$. So $m$ becomes $r - t_K/(\delta t')$. We look at the time-domain limits of the data $\hat a^K \zeta_K$ as defined in Equation \[from-here-resamp\]. The lower limit, $m = -T_\mathrm{short}/(2\delta t')$, becomes $r = (t_K - T_\mathrm{short}/2)/(\delta t')$. The upper limit becomes $r = (t_K + T_\mathrm{short}/2)/(\delta t')$. We call them (non-integer) $r_{B,K}$ and $r_{U,K}$. The discrete sum must round them. No samples are missed when $r_{U,K} = r_{B,K+1}$. As long as the ideal sample number, $N'_\mathrm{ideal} = T_\mathrm{short}/(\delta t')$, is $N'_\mathrm{ideal} \gg 1$, rounding is tolerable. We will soon replace $r_{U,K}$ with the zero-padded $r_{B,K} + N_\mathrm{FFT}$. The term $a^K x_K(t_m)$ contains $t_m = r \delta t'$. Allowing $r_K \equiv \mathrm{round}(t_K/(\delta t'))$, then $r = m + t_K/(\delta t')$ is simply $r = m + r_K$. So $a^K x_K(t_m)$ translates to $a_{m + r_K}$. This is the $a_r$ weighted in Equation \[norm-a-eq\]. Substitute the above into $\hat a^K \zeta_K$: $$\begin{aligned} \hat a^K \zeta_K &=& \sum_{r=r_{B,K}}^{r_{U,K}} a_r e^{-\mathrm{i} (2\pi f_K [r \delta t' - t_K] + \Phi_K)} \nonumber\\ &=& \sum_{r=r_{B,K}}^{r_{U,K}} a_r e^{-\mathrm{i} 2\pi f_K r \delta t'},\label{fft-inklings}\end{aligned}$$ observing that $\Phi_K = 2\pi f_K t_K$ (the source-frame frequency is constant). Equation \[fft-inklings\] foretells a Fourier transform from $r$ into $k$. Heterodyning has $f_K = f_0 - f_h$, discretely indexed as $k = f_K T_\mathrm{FFT}$. Raise $r_{U,K}$ to $r_{B,K} + N_\mathrm{FFT}$.
Zero-padding (mathematically, using the Heaviside step function $H$) keeps the sum constant: $$\begin{aligned} \hat a^K \zeta_K &=& \sum_{r=r_{B,K}}^{r_{B,K} + N_\mathrm{FFT}} \frac{H(r_{U,K} - r) a_r}{ \exp{(\mathrm{i} 2\pi k r \delta t'/ T_\mathrm{FFT})} } \nonumber\\ &=& \sum_{r=r_{B,K}}^{r_{B,K} + N_\mathrm{FFT}} \frac{H(r_{U,K} - r) a_r}{ \exp{(\mathrm{i} 2\pi k r / N_\mathrm{FFT})} }.\end{aligned}$$ In practice, an FFT starts at $s = r - r_{B,K}$. Re-indexing, $$\begin{aligned} \hat a^K \zeta_K &=& \sum_{s=0}^{N_\mathrm{FFT}} \frac{H(r_{U,K} - r_{B,K} - s) a_{s +r_{B,K}}}{ \exp{(\mathrm{i} 2\pi k [s + r_{B,K}] / N_\mathrm{FFT})} },\end{aligned}$$ wherein $r_{B,K}$ factors in the kernel: $$\begin{aligned} \frac{2\pi k r_{B,K} }{ N_\mathrm{FFT} } &=& 2\pi (f_0 - f_h)\, t_{B,K} \nonumber\\ &=& 2\pi \frac{k_0 + (k-k_0)}{T_\mathrm{FFT}} K T_\mathrm{short},\end{aligned}$$ expressing $k$ in terms of distance from a minimum $k_0$. If we pick bins $\bar k$ above $k_0$ at a continuous decimation rate $\nu_R$, $$\begin{aligned} \frac{2\pi \bar k r_{B,K} }{N _\mathrm{FFT}} &=& 2\pi \frac{ k_0 + \mathrm{floor}(\nu_R \bar k ) }{ T_\mathrm{FFT} } K T_\mathrm{short} \nonumber\\ &=& \Phi_{\mathrm{in}_K}(\bar k).\end{aligned}$$ Finally, as in Equation \[phi-out-eq\], $\Phi_{\mathrm{out}_K}$ corrects an overall time shift in the resampling epoch, $\tau_X$.
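The factoring of $r_{B,K}$ into a pure phase is the discrete shift theorem, which can be checked numerically (illustrative sizes; random complex data standing in for $a_r$):

```python
import numpy as np

# Re-indexing s = r - r_B moves the start of the data block to zero; the kernel
# factor exp(-2 pi i k r_B / N_FFT) then reappears as a phase (the Phi_in role).
rng = np.random.default_rng(1)
N_fft = 256
r_B = 37                                   # block start index (illustrative)
a = rng.normal(size=64) + 1j * rng.normal(size=64)

# Direct sum over r starting at r_B, zero-padded to N_fft samples
k = np.arange(N_fft)
direct = np.array([np.sum(a * np.exp(-2j * np.pi * kk * (np.arange(64) + r_B) / N_fft))
                   for kk in k])

# FFT of the zero-padded, re-indexed block, times the shift-theorem phase factor
padded = np.zeros(N_fft, dtype=complex)
padded[:64] = a
via_fft = np.fft.fft(padded) * np.exp(-2j * np.pi * k * r_B / N_fft)
```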
When the heterodyning starts at epoch $\tau_X$ after reference time $\tau_0$, $$\begin{aligned} \frac{ 2\pi k [s + r_{B,K}]}{N_\mathrm{FFT}} &=& 2\pi \Phi(\tau-\tau_0),\\ \Phi(\tau-\tau_0) &=& f_0 \tau H([\tau_0+\tau_X] - \tau) \\ && + (f_0 - f_h) \tau H(\tau - [\tau_0+\tau_X]),\nonumber\end{aligned}$$ expanding the first Heaviside function into a Boxcar $B$, $$\begin{aligned} %B(\tau_0, \tau_0 + \tau_X) &=& H(\tau_0 - \tau) - H([\tau_0 + \tau_X] - \tau),\\ H(\tau_0 + \tau_X - \tau) &=& H(\tau_0 - \tau) + B(\tau_0, \tau_0 + \tau_X),\end{aligned}$$ so during $\tau_0$ to $\tau_0 + \tau_X$, $f_0 \tau_X$ cycles are accumulated, justifying $\Phi_{\mathrm{out}_K}$. (The second Heaviside function is null, because $r \delta t'$ starts at $\tau_X$). ### Frequencies returned from Fourier transform With a discrete Fourier transform (DFT) from time samples $s$ into frequencies $k$ being the operation $\mathcal{F}^s_k$, $$\begin{aligned} \mathcal{F}^s_k y_s &=& \sum_{s=0}^{N_\mathrm{FFT}} e^{-(\mathrm{i} 2\pi k s / N_\mathrm{FFT})} y_s,\\ (a^K \zeta_K)_k &=& e^{-\mathrm{i}2\pi[\Phi_{\mathrm{in}_K} + \Phi_{\mathrm{out}_K}](k) },\\ && \times \mathcal{F}^s_k \left(H(r_{U,K} - r_{B,K} - s) a_{s +r_{B,K}} \right),\nonumber\end{aligned}$$ DFTs return a frequency *vector* indexed by $k$, rather than a scalar as in the previous demodulation search [@ScoX1CrossCorr2015PRD]. We select the set of frequencies $\bar k$. Mathematically, we represent this as a selection function $\delta_{\bar k}^k$ that reduces to the Kronecker delta function when $\nu_R = 1$, so $$(a^K \zeta_K)_{\bar k } = \delta_{\bar k}^k (a^K \zeta_K)_k.$$ In the case of $\{L|K\}$, where $M_L$ multiple, often consecutive, $T_\mathrm{short}$ intervals are present at a single detector, we can do one Fourier transform, because $T_\mathrm{FFT}\geq T_\mathrm{coh}$.
Call the sum $S^{(L|K)}_{\bar k}$: $$\begin{aligned} S^{(L|K)}_{\bar k} %&=& \delta_{\bar k}^{k} \sum_{L=L_0}^{L_0+M_L} (a^L \zeta_L)_{k},\\ % &=& \delta_{\bar k}^{k} \sum_{L=L_0}^{L_0 + M_L}e^{-\mathrm{i}2\pi[\Phi_{\mathrm{in}_L} + \Phi_{\mathrm{out}_L}](k) }, \nonumber\\ % && \times \mathcal{F}^s_{k} \left(H(r_{U,L} - r_{B,L} - s) a_{s +r_{B,L}} \right),\nonumber\\ &=& \delta_{\bar k}^{k} e^{-\mathrm{i}2\pi[\Phi_{\mathrm{in}_{L_0}} + \Phi_{\mathrm{out}_{L_0}}](k) }\\ && \times \mathcal{F}^s_k \left(H(r_{U,(L_0 + M_L)} - r_{B,L_0} - s) a_{s +r_{B,L_0}} \right),\nonumber\end{aligned}$$ so the whole sum can be done in a single FFT. This is because $\Phi_{\mathrm{out}_L}$ depends only on its detector's time epoch ($\tau_Y$), not on $L$, while $\Phi_{\mathrm{in}_L}$ is proportional to $L T_\mathrm{short}$, which is absorbed into the Fourier transform kernel. If $S^{(L|K)}_{\bar k}$ skips some term, *e.g.,* the auto-correlation where $L=K$ in the same detector, this is handled both in theory (by subtracting a Boxcar function) and in practice (by skipping that time and putting the next $T_\mathrm{short}$ at the following place in the zero-padded time series). Segment $L$ depends implicitly on its detector $Y$. ### Statistic in resampled data and physical meaning Taking a look at $\rho$ from Equation \[textbook-cc-rho\], we see it can be phrased in terms of $\hat a^K \zeta_{K}$ in the Appendix \[relationships-to-other-optimal-statistics\], Equation \[staged-to-turn-to-fafb\]. We will break it into explicit pairs over $Q$ detectors (first cross-correlation pair component indexed by $X$, second by $Y$), each of which has $M \equiv T_\mathrm{obs}/T_\mathrm{short}$ (zero-padded gapless) data segments, as in Section \[pair-selection-for-resampling\]. A data segment index $K$ for the first component of a pair is matched by $M_L$ terms of the second component, starting from $L_0$.
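The single-FFT claim for $M_L$ consecutive segments can also be checked numerically: summing the phase-corrected per-segment transforms agrees with one transform over the whole stretch, provided all transforms share the same $N_\mathrm{FFT}$. A sketch with arbitrary data:

```python
import numpy as np

rng = np.random.default_rng(1)
N_fft = 256          # shared FFT length (T_FFT >= T_coh)
n_seg = 32           # samples per T_short segment
M_L = 4              # number of consecutive segments at one detector
r_B0 = 64            # starting sample of the first segment L_0
a = rng.normal(size=r_B0 + N_fft)
k = 7

def seg_fft(r_B):
    """Phase-corrected zero-padded FFT of one segment starting at r_B."""
    seg = np.zeros(N_fft, dtype=complex)
    seg[:n_seg] = a[r_B:r_B + n_seg]
    return np.exp(-2j * np.pi * k * r_B / N_fft) * np.fft.fft(seg)[k]

# Per-segment sum over M_L consecutive segments ...
per_segment = sum(seg_fft(r_B0 + L * n_seg) for L in range(M_L))

# ... equals one FFT over the whole stretch, because each segment's
# start-phase is absorbed into the common kernel.
big = np.zeros(N_fft, dtype=complex)
big[:M_L * n_seg] = a[r_B0:r_B0 + M_L * n_seg]
single = np.exp(-2j * np.pi * k * r_B0 / N_fft) * np.fft.fft(big)[k]

assert np.isclose(per_segment, single)
```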
Eliding $b$ terms, $$\begin{aligned} \rho &=& \frac{N}{5} \Re \sum_{X=0}^Q \sum_{K=0}^M \hat a^K \zeta_{K}^* \sum_{Y=0}^Q \sum_{L=L_0}^{L_0+M_L} \hat a^L \zeta_{L} + \ldots,\end{aligned}$$ so we insert the Fourier transforms to get the vector $\rho_{\bar k}$, $$\begin{aligned} \rho_{\bar k} &=& \frac{N}{5} \Re \sum_{X=0}^Q \sum_{K=0}^M (\hat a^K \zeta_{K})_{\bar k}^* \sum_{Y=0}^Q S^{(L|K)}_{\bar k}+ \ldots,\end{aligned}$$ A commonly-used projection of the in-phase data onto $a(t)$, the sub-interval integral $F_{a_I}$ (as in Appendix \[relationships-to-other-optimal-statistics\], Equation \[the-grand-id\]), motivates us to name the key quantities: $$\begin{aligned} \bar F_{a_{K,\bar k}} &=& (\hat a^K \zeta_{K})_{\bar k},\\ \bar F_{a_{L,\bar k}} &=& S^{(L|K)}_{\bar k}.\end{aligned}$$ Note: unlike terms $F_a$ and $F_b$ in Appendix \[relationships-to-other-optimal-statistics\], the above quantities include noise normalization. Overall normalizations $\hat A^2_\mathcal{P}$, $\hat B^2_\mathcal{P}$, $\hat C^2_\mathcal{P}$ are the sums over pairs of $(\hat a^K \hat a^L)^2$, $(\hat b^K \hat b^L)^2$, and $(\hat a^K \hat b^L \hat a^L \hat b^K)$, respectively. Then the resampled $\rho$ statistic parallels Equation \[well-formatted-rho\]: $$\begin{aligned} \rho_{\bar k} &=& \frac{\sqrt{2} }{\sqrt{ \hat A_\mathcal{P}^2 + 2 \hat C^2_\mathcal{P}+ \hat B_\mathcal{P}^2 }} \times \ldots \label{final-formatted-rho} \\ && \Re \sum_{X=0}^Q \sum_{K=0}^M \sum_{Y=0}^Q \left[ \bar F^*_{a_{K,\bar k}} \bar F_{a_{L,\bar k}} + \bar F^*_{b_{K,\bar k}} \bar F_{b_{L,\bar k}} \right] \nonumber .\end{aligned}$$ Equation \[final-formatted-rho\] holds in any reference frame. Dependence on detector and source motion has been absorbed by resampling, so the remaining formula is manifestly invariant. This formula for $\rho$ is a semicoherent matched filter assuming a sinusoidal waveform.
In the (non-physical) case of zero Roemer delay, frequency is constant and no resampling is needed, so Equation \[final-formatted-rho\] exactly equals Equation \[well-formatted-rho\]. Resampling is elegantly interpreted as a shift to a frame with zero Roemer delay, where the frequency is effectively constant (up to the accuracy of the resampling parameters and numerical precision). It is unsurprising but reassuring that the result is independent of the original frame. ### Summary of resampling implementation Resampling has been ported from the $\mathcal{F}$-statistic computation into the cross-correlation method. The implementation differs in that $\mathcal{F}$ needs no concept of $T_\mathrm{short}$: its coherence time is the FFT time, and because each segment is resampled individually without being subdivided into pairs, $\Phi_\mathrm{in} = 0$. The $\mathcal{F}$-statistic includes auto-correlation, and there is no extra overlap. (Some inefficiency in recalculating the same overlapping pairs in the cross-correlation method could be reduced by caching partial terms $F_a$, $F_b$ from Appendix \[relationships-to-other-optimal-statistics\]). For the $\mathcal{F}$-statistic, resampling has already accelerated long $T_\mathrm{coh}$ searches. Resampling should also speed up the cross-correlation method. Considering Equation \[abstract-rho-matrix\], we have offloaded phase-correction from the $\textbf{W}$ matrix onto the $\textbf{z}$ vector, turning a quadratic operation into a linear one. That the remaining matrix can be evaluated by an FFT is a further improvement. In the next section, we measure computational speed and sensitivity. Computational cost and sensitivity\[resamp\_cost\] ================================================== We now measure the computational speed and cost of resampling for the cross-correlation method.
A first comparison (Figure \[cost\_per\_template\_figure\]) takes overall run times of the demodulation and resampling techniques for a given number of templates. The relative speed-up, in Figure \[ratio\_of\_runtimes\_figure\], governs how much can be re-invested in search depth. Deeper understanding helps predict the computational cost in time required for conceivable use cases: the *timing model*. Demodulation timing model ------------------------- First, define the timing model for the demodulation search. Let each dimension have spacing $\delta \lambda$ determined by the metric, requiring $N_\lambda$ templates be searched in each dimension to cover a range $\Delta \lambda = N_\lambda \delta \lambda$. Using a simple cubic lattice, $$N_\mathrm{template} = \prod_{\lambda} \frac{\Delta \lambda}{\delta \lambda}.$$ Take a test case for a single point in orbital parameter space. With $n_\mathrm{bin} = 2$ Dirichlet interpolation bins, $N_\mathrm{template} = 55488$, $T_\mathrm{max} = 22800$ s, $T_\mathrm{obs} = 3.0\times 10^{6}$ s, $T_\mathrm{sft} = 1440$ s, and $N_\mathrm{det} = 2$, this case is measured to take a total time of $T_\mathrm{demod} = 159.80$ s (single-threaded, without SIMD instructions, on an Intel Core i7-4980HQ at 2.8 GHz). Normalizing these parameters into a single timing constant $\tau_\mathrm{demod}$ for two detectors, and with scalings taken from [@ScoX1CrossCorr2015PRD], we have a timing function, $$\begin{aligned} N_\mathrm{pairs} &\approx& \frac{N_\mathrm{det}(N_\mathrm{det}+1)}{2} T_\mathrm{max} T_\mathrm{obs} T_\mathrm{sft}^{-2},\\ T_\mathrm{demod} &=& \tau_\mathrm{demod} n_\mathrm{bin} N_\mathrm{template} N_\mathrm{pairs}. \label{demod-timing-eq}\end{aligned}$$ Using this measurement, $\tau_\mathrm{demod}$ is about $1.5 \times 10^{-8}$ s. Note that this single measurement is based on gapless data. In the presence of gaps, the demodulation search can easily skip to the next SFT (at present, resampling cannot skip gaps).
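As a check of Equation \[demod-timing-eq\], the quoted test-case numbers reproduce the timing constant:

```python
# Solving Equation (demod-timing-eq) for tau_demod with the measured
# test case quoted in the text.
N_det = 2
T_max, T_obs, T_sft = 22800.0, 3.0e6, 1440.0
n_bin, N_template = 2, 55488
T_demod = 159.80          # measured wall time [s]

N_pairs = N_det * (N_det + 1) / 2 * T_max * T_obs / T_sft**2
tau_demod = T_demod / (n_bin * N_template * N_pairs)
assert abs(tau_demod - 1.5e-8) / 1.5e-8 < 0.05   # ~1.5e-8 s, as quoted
```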
Template count $N_\mathrm{template}$ depends on every parameter’s $\delta \lambda$. Because $\delta \lambda$ depends on $T_\mathrm{max}$ for all four Doppler parameters, the computational cost increases with longer lag-time. Each $\delta \lambda$ is proportional to the inverse square root of the corresponding metric element $g_{\lambda \lambda}$ as in Equation \[metric-spacing-eq\]. Whelan *et al.* [@ScoX1CrossCorr2015PRD] note that the metric element $g_{ff}$ increases with $T_\mathrm{max}^2$, while the orbital parameter elements also increase as $T_\mathrm{max}^2$ for $T_\mathrm{max} \ll P_\mathrm{orb}$ before asymptoting as $T_\mathrm{max}$ approaches $P_\mathrm{orb}$. Uncertainty in $P_\mathrm{orb}$ is low enough that a single template is enough to cover it for short $T_\mathrm{max}$, but not generally at high $T_\mathrm{max}$. So the computational cost scaling for demodulation has $1+2+1 = 4$ powers of $T_\mathrm{max}$ (one from the frequency spacing, two from the orbital parameter spacings, and one from $N_\mathrm{pairs}$): it is $T_\mathrm{demod} \propto T_\mathrm{max}^4$ for short lag-time. After the orbital period resolves and also asymptotes for long lag-time, the scaling is $\propto T_\mathrm{max}^2$, with a larger coefficient. Contrast this case with resampling. Resampling timing model ----------------------- [r | l l | l l]{} Coefficient & Low $N_\mathrm{FFT}$ value \[s\] & Low $N_\mathrm{FFT}$ uncertainty \[s\] & High $N_\mathrm{FFT}$ value \[s\] & High $N_\mathrm{FFT}$ uncertainty \[s\]\ \ $\tau_{0,\mathrm{CCbin}}$ & $1.01 \times 10^{-7}$ & $\pm1.10 \times 10^{-8}$ & $1.34 \times 10^{-7}$ & $\pm2.25 \times 10^{-8}$\ $\tau_{0,\mathrm{bary}}$ & $1.62 \times 10^{-7}$ & $\pm7.48 \times 10^{-10}$ & $1.62 \times 10^{-7}$ & $\pm3.20 \times 10^{-9}$\ $\tau_{0,\mathrm{FFT}}$ & $5.27 \times 10^{-10}$ & $\pm5.19 \times 10^{-11}$ & $1.40 \times 10^{-9}$ & $\pm6.00 \times 10^{-11}$ Better *scaling* is sought from the resampling timing function. Longer lag-times are theoretically easier to achieve with resampling.
It is the measurements of the coefficients that determine whether the overall computational cost is affordable. The resampling timing function is complicated: it involves three timing constants. Table \[timing-coefficient-table\] lists these constants. First is the timing constant $\tau_{0,\mathrm{CCbin}}$ for per-template (per-bin) operations, such as multiplying, adding, copying, and phase-shifting results to and from the FFT. Second is the timing constant $\tau_{0,\mathrm{bary}}$: the cost of barycentering for each point in orbital parameter space. Third and last is the timing constant $\tau_{0,\mathrm{FFT}}$: the cost of the FFT operation (using the *FFTW* library) for each template. This division into three parts is motivated by a pre-existing timing model for the $\mathcal{F}$-statistic [@PrixTimingModel2017]. The $\tau$ constants are measured using Atlas, the cluster at AEI Hannover, Germany. A typical cluster node uses an Intel Xeon E3-1220v3 at 3.1 GHz; a smaller set of E3-1231v3 (3.4 GHz) and E5-1650v2 (3.5 GHz) CPUs is also in use. Approximately 120 configurations, varying frequency band ($f_\mathrm{band}$), observing time ($T_\mathrm{obs}$), lag-time ($T_\mathrm{max}$), number of observatories ($N_\mathrm{det}$), starting frequency ($f_h - f_\mathrm{band}/2$), projected semi-major axis ($a_p$), allowed frequency mismatch ($\mu_f$) in the statistic, and number of Dirichlet kernel terms ($D$), are tested and fit for the three timing constants. This fit minimizes the discrepancy between predicted and measured time, as shown in Figure \[timing-constants-resamp\]. Time $T_\mathrm{resamp}$ is predicted as follows. It is most efficient to take $T_\mathrm{max} = T_\mathrm{short}$. We divide the analysis into bands of $\Delta f_\mathrm{load}$ (Equation \[delta-f-load\]). We next separate $N_\mathrm{template} = N_\mathrm{orb} N_f$ into orbital ($N_\mathrm{orb}$) and frequency ($N_f$) template counts. The FFT size is $N_\mathrm{FFT}$ (by Equation \[n-fft-final\]).
A ‘triangular’ function accounts for detector pairings, $$\mathrm{triang}(N) = 1 + \frac{N+1}{2}.$$ Taking a prefactor of $5$ for the FFT logarithmic term is based on [@PrixTimingModel2017], from which the basic scheme of our model is motivated. It is efficient to absorb a typical number of Dirichlet kernel terms, $D=8$, into $\tau_{0,\mathrm{bary}}$. The total time is then $T_\mathrm{resamp}$: $$\begin{aligned} T_\mathrm{resamp} &=& N_\mathrm{orb} N_\mathrm{det} (T_\mathrm{obs} / T_\mathrm{max}) [\ldots \\ && \tau_{0,\mathrm{CCbin}} N_f \mathrm{triang}(N_\mathrm{det}) +\ldots \nonumber \\ && \tau_{0,\mathrm{bary}} \left(2 \Delta f_\mathrm{load} \times T_\mathrm{max} \times (D/8) \right) + \ldots \nonumber \\ && \tau_{0,\mathrm{FFT}} N_\mathrm{FFT} \times 5 \log_2(N_\mathrm{FFT}) \times \mathrm{triang}(N_\mathrm{det}) \nonumber ] \label{resampTimingModelEq}\end{aligned}$$ Observe that $N_\mathrm{FFT}$ is proportional, albeit through power-of-two steps, to $N_f$, and $N_f$ is proportional to $T_\mathrm{max}$ as before. At low lag-time, $N_\mathrm{orb} \propto T_\mathrm{max}^2$, so the resampling time scales as $T_\mathrm{resamp} \propto T_\mathrm{max}^2 \log T_\mathrm{max}$. At high lag-time, after the number of orbital templates has asymptoted and the period dimension resolved, it is, with a larger coefficient, $T_\mathrm{resamp} \propto \log T_\mathrm{max}$. The improvement stems from two parts of the new code: the ‘SFT gain’ by reducing the number of pairs saves a factor of $T_\mathrm{max}$, and the ‘FFT gain’ by converting the $\mathbf{W}$ weights matrix into an FFT operator effects $T_\mathrm{max}^2 \rightarrow T_\mathrm{max} \log T_\mathrm{max}$. *Caveats:* timing of the *FFTW* library functions reveals a $3\times$ increase in $\tau_\mathrm{0,FFT}$ for $N_\mathrm{FFT}$ above about $2^{18}$. This behavior is observed and is why Table \[timing-coefficient-table\] is divided into low and high $N_\mathrm{FFT}$ sections.
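Equation \[resampTimingModelEq\] can be transcribed directly. The sketch below uses the low-$N_\mathrm{FFT}$ coefficients from Table \[timing-coefficient-table\] as defaults; the example arguments in the final call are hypothetical, chosen only to exercise the formula:

```python
import math

def triang(N):
    # 'Triangular' detector-pairing factor from the text.
    return 1 + (N + 1) / 2

def T_resamp(N_orb, N_f, N_det, T_obs, T_max, delta_f_load, N_fft, D=8,
             tau_ccbin=1.01e-7, tau_bary=1.62e-7, tau_fft=5.27e-10):
    """Resampling timing model (Equation resampTimingModelEq), with the
    low-N_FFT coefficients of Table (timing-coefficient-table) as defaults."""
    per_seg = (tau_ccbin * N_f * triang(N_det)
               + tau_bary * (2 * delta_f_load * T_max * (D / 8))
               + tau_fft * N_fft * 5 * math.log2(N_fft) * triang(N_det))
    return N_orb * N_det * (T_obs / T_max) * per_seg

# Example with hypothetical arguments (not a configuration from the paper).
cost = T_resamp(N_orb=2, N_f=1000, N_det=2, T_obs=3.0e6,
                T_max=22800.0, delta_f_load=0.05, N_fft=2**14)
assert cost > 0.0
```

Note the model is linear in $N_\mathrm{orb}$, so doubling the orbital template count doubles the predicted cost.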
Our prediction for $T_\mathrm{resamp}$ applies a factor-of-$3$ multiplier when $N_\mathrm{FFT}$ is predicted to be in this slow regime. A key *caveat* is that the precise $N_\mathrm{FFT}$ is difficult to calculate *a priori*. (The *post hoc* $N_\mathrm{FFT}$ is used to make $\tau$ estimates more accurate). This difficulty comes from the metric calculation depending on the true phase derivatives instead of a simpler diagonal approximation (as explained in [@ScoX1CrossCorr2015PRD]). Slight misprediction in metric-derived spacing can be amplified by power-of-2 rounding in $N_\mathrm{FFT}$. Future improvement in $T_\mathrm{resamp}$ estimation can be expected from reusing the exact code used for metric calculation in the timing predictor. Sensitivity of optimized set-up\[sensitivity\_gain\] ---------------------------------------------------- [r r | r r | r r ]{} $\min f_0$ \[Hz\] & $\max f_0$ \[Hz\] & $\max T_\mathrm{max}$ \[s\] & $\min T_\mathrm{max}$ \[s\] & $f_\mathrm{band}$ \[Hz\] & $T_\mathrm{sft}$ \[s\]\ \ 25 & 50 & 25920 & 10080 & 0.050 & 1440\ 50 & 100 & 19380 & 8160 & 0.050 & 1080\ 100 & 150 & 15120 & 6720 & 0.050 & 720\ 150 & 200 & 11520 & 5040 & 0.050 & 720\ 200 & 300 & 6600 & 2400 & 0.050 & 540\ 300 & 400 & 4080 & 1530 & 0.050 & 540\ 400 & 600 & 1800 & 360 & 0.050 & 360\ 600 & 800 & 720 & 360 & 0.050 & 360\ 800 & 1200 & 300 & 300 & 0.050 & 300\ 1200 & 2000 & 240 & 240 & 0.050 & 240\ Sensitivity depth $D^C$ for the semi-coherent cross-correlation method search scales as $T_\mathrm{max}^{1/4}$ [@ScoX1CrossCorr2015PRD], up to an uncertain time where spin-wandering makes longer integration incoherent. The demodulation technique gives an effective scaling of $D^C \propto (T_\mathrm{demod})^{1/16}$ for lag-time $T_\mathrm{max}$ short compared to $P_\mathrm{orb}$, or $\propto (T_\mathrm{demod})^{1/8}$ for high lag-time. Resampling, dropping the logarithmic term, offers $D^C \propto T_\mathrm{resamp}^{1/8}$ for low lag-time or $D^C \approx \mathrm{(constant)}$ for high.
Once the computational cost reaches the orbital parameter metric plateau and asymptotes, additional sensitivity is nearly cost-free with resampling. Surprisingly, in the frequency dimension, the number of templates continues to increase $\propto T_\mathrm{max}$, but because $T_\mathrm{short} = T_\mathrm{max}$, the number of semicoherent segments decreases linearly as $T_\mathrm{max}$ increases, so there are longer but fewer FFTs to do. Small cost increases do continue, in the logarithmic FFT term. Two caveats: the number of period templates still depends on $T_\mathrm{obs}$, and power-law scalings assume a large number of semicoherent segments. The conceivable case of $T_\mathrm{max} = 10$ days, $T_\mathrm{obs} = 3$ months may be close to the limit where this approximation holds, and excluding the auto-correlation means that ratios of $T_\mathrm{obs}/T_\mathrm{max} < 5$ (approximately) may leave some data unused. (The latter is partly solvable by decoupling $T_\mathrm{short}$ from $T_\mathrm{max}$). Nevertheless, the ease of high $T_\mathrm{max}$ with resampling helps both future searches and follow-ups. Gains in search sensitivity depend on the measured timing constants. We iteratively estimate the maximum $T_\mathrm{max}$ possible with the resampling code given the same computing resources, in a given band, as were made available to the demodulation O1 search [@ScoX1CrossCorr2017ApJO1]. For future searches, the distribution across bands can be re-allocated to maximize detection probability. For now, we consult Figures \[03-days-spin-wander-gains\] and \[10-days-spin-wander-gains\]. These figures show, using the $D^C \propto T_\mathrm{max}^{1/4}$ assumption, the forecast sensitivity gain from resampling’s speed-up relative to demodulation. The exact same test set-ups are run for both resampling and demodulation and the run time is measured.
Then Equation \[resampTimingModelEq\] is used to predict the run time of resampling with longer $T_\mathrm{max}$, iteratively increasing $T_\mathrm{max}$ by $1\%$ until the original demodulation cost is predicted to be reached. The fourth root of the ratio of the final to the original $T_\mathrm{max}$ is taken as the forecast gain. (If resampling already takes at least as long as demodulation for a given set-up, this gain defaults to unity). Gains depend on the test bands’ set-ups (Table \[search-setup-table\]). Figure \[03-days-spin-wander-gains\]’s right side contrasts empirical sensitivity gains with predictions. The actual relative gain, given by the square root of the ratio of the $\rho$ statistic at the given $T_\mathrm{max}$, tends to be less than the power-law prediction. Sensitivity forecasts in Figures \[projected-ul\] and \[projected-sens-depth\] should thus be read as cautiously optimistic. Long lag-times benefit the most from resampling. Figure \[ratio\_of\_runtimes\_figure\] illustrates that resampling is only faster than demodulation for bands with $T_\mathrm{max} \gtrsim 2000$ to $5000$ s, which Table \[search-setup-table\] shows to be frequency bands below roughly $200$ to $400$ Hz in the O1 set-up. These $T_\mathrm{max}$ allocations [@ScoX1CrossCorr2017ApJO1] were designed to maximize detection probability by investing integration time in high-probability regions of orbital parameter space and frequencies near the torque balance level. Where $T_\mathrm{max}$ is already large, resampling offers more acceleration, thus more computing to be reinvested, and Figures \[03-days-spin-wander-gains\] and \[10-days-spin-wander-gains\] show bigger gains. In principle, the cost allocation is a global problem: we want to maximize the detection probability of the entire search, not one band. This problem has been addressed not only in [@ScoX1CrossCorr2017ApJO1] but also [@MingSetup2015].
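The gain-forecast iteration can be sketched as follows; `cost_resamp` here is an illustrative stand-in for the full timing model, using only the low-lag-time scaling $T_\mathrm{resamp} \propto T_\mathrm{max}^2 \log T_\mathrm{max}$, and the starting values are hypothetical:

```python
import math

def cost_resamp(T_max):
    # Stand-in for Equation (resampTimingModelEq): low-lag-time scaling
    # T_max^2 log(T_max) with an arbitrary coefficient, for illustration.
    return 1e-4 * T_max**2 * math.log(T_max)

def forecast_gain(T_max0, budget):
    """Grow T_max by 1% until the resampling cost model reaches the
    demodulation budget; the forecast gain is the fourth root of the
    T_max ratio (defaulting to unity if resampling is already slower)."""
    if cost_resamp(T_max0) >= budget:
        return 1.0
    T_max = T_max0
    while cost_resamp(T_max * 1.01) < budget:
        T_max *= 1.01
    return (T_max / T_max0) ** 0.25

gain = forecast_gain(T_max0=2000.0, budget=1e9)
assert gain >= 1.0
```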
In the future, these methods can be turned to the complicated task of re-optimizing the resampling cost allocation to maximize detection probability. For this paper, forecasts are based on the O1 allocation. Also note that we assume that the sensitivity gains $\propto T_\mathrm{max}^{1/4}$ will uniformly scale the detection efficiency curves that set upper limits. Taking this product of averages is only approximate: the true sensitivity is an average constructed from the products of gains in each band. As the $\rho$ statistic ratio from long $T_\mathrm{max}$ is less than predicted, a systematic study of the sensitivity gain from computational reinvestment is needed. In the future, we expect our assumptions to be tested by a second Mock Data Challenge (following [@ScoX1MDC2015PRD]). At present, results are suggestive. Figure \[projected-ul\] shows the projected upper limits that are forecast based on O1 results [@ScoX1CrossCorr2017ApJO1], divided by the sensitivity gain estimated for each band. Figure \[projected-sens-depth\] shows these upper limits divided by the noise ASD of the detector, to show sensitivity depth $D^C$, which is easier to compare with other methods. Both figures refer to results marginalized over $\cos \iota$, as the inclination angle of Sco X-1 is unknown. Long $T_\mathrm{max}$ bands at low frequencies can potentially double to triple in sensitivity. Given equal cost allowance and the assumption of $T_\mathrm{max}$ limited to $3$ days by spin-wandering, the gain is limited: from $20$ to $125$ Hz, the median gain is $51\%$, and from $20$ to $250$ Hz, it is only $11\%$, with minimal benefit at higher frequencies. The sensitivity depth varies between the mid-$30$s and mid-$60$s Hz$^{-1/2}$, depending on position in orbital parameter space. Given tenfold resources and the assumption of $T_\mathrm{max}$ limited to $10$ days, the gains are respectively $2.83\times$ and $2.75\times$ over O1. This sensitivity depth is approximately $100$ Hz$^{-1/2}$.
Given O1 noise, the latter scenario would just touch the torque-balance level at $100$ Hz. Given twofold detector improvement, the upper limits would scale linearly, and resampling could potentially reach below torque balance from approximately $40$ to $140$ Hz. Longer observing runs should improve sensitivity with the usual $T_\mathrm{obs}^{1/4}$ scaling [@ScoX1CrossCorr2015PRD]. Future computational enhancements in the cross-correlation method, such as GPU acceleration for the barycentering and FFT operations, may make the tenfold gain in cost allowance realistic, as may access to larger computing resources. For example, one *Einstein@home* Month (EM) of computing power assumes 12 thousand cores [@MingSetup2015], or roughly $8.64$ million CPU hours. Depending on CPU performance compared to the Atlas cluster, multi-EM allocations could extend the cross-correlation method’s depth. It may be possible to use Bessel functions, as in [@SidebandBessel2017], or a loosely-coherent approach [@PowerFluxMethod2010], to accelerate moving through the orbital parameter space: the phase modulation can in principle be ‘resampled’ in the frequency domain as well as in our time-domain approach, and some fusion of the two may be faster. Even now, resampling can accelerate longer lag-time follow-ups (progressive $4\times$ increases in $T_\mathrm{max}$ for search candidates [@ScoX1CrossCorr2017ApJO1]) and improve the low-frequency search. The ‘CrossCorr’ cross-correlation method is not the only method that may reach such performance. The ‘Sideband’ method [@SidebandMarkovModelSuvorova2016] is under active development, and a binary-oriented, resampled $\mathcal{F}$-statistic code [@LeaciPrixDirectedFStatPRD] has offered even greater sensitivity depth. The latter predicts that torque-balance could be reached up to $160$ to $200$ Hz for conservative assumptions about eccentricity, or $500$ to $600$ Hz if eccentricity is assumed to be well-constrained.
(By assuming the orbit to be circular, our result of $140$ Hz is comparable to the latter case). Predictions are highly sensitive to the timing function and cost allowances of the final code, as well as to assumptions about spin-wandering. Here we have presented our estimates based on working search code and extrapolations from the finished O1 search using the cross-correlation method. Conclusions\[conclusion\] ========================= Resampling accelerates the deepest current search for Sco X-1 and similar LMXBs, the cross-correlation method [@ScoX1CrossCorr2017ApJO1]. By calculating the cross-correlation method’s $\rho$ statistic using barycentric interpolation to the source frame, followed by an FFT, speed-up is possible for long coherent integration lag-times. Because of the plateauing of the binary orbital parameter space, this acceleration can drive the cross-correlation method’s forecast sensitivity to the torque-balance level in conceivable scenarios. In the most optimistic case with O1-like data, it may graze this level at 100 Hz; with a detector twice as sensitive (closer to Advanced LIGO design sensitivity), this range may extend from 40 to 140 Hz. Re-optimization of the computational cost distribution across parameter space [@MingSetup2015] can focus resources where detection is most probable. Reaching torque-balance might then be possible without large increases in computing power. Future improvement may allow it to compete up to higher frequencies, as might other proposed methods [@LeaciPrixDirectedFStatPRD]. The cross-correlation method with resampling works already. This success is possible thanks to the deep similarity between the $\mathcal{F}$-statistic and $\rho$-statistic and the shared codebase of the *LIGO Applications Library*, which allowed the importation of large portions of the resampling algorithm, once the mathematics were understood. Future improvements to any of this family of methods might be transplanted to benefit all.
Many unknowns remain in Sco X-1. The depicted torque-balance level assumes a $10$-km radius and a $1.4$ solar-mass NS, which itself has not been confirmed in the system; the level varies with the object’s moment of inertia. Expectation has held that Sco X-1’s luminosity makes it a promising target. Other systems may prove promising alternative targets, particularly if they have a known spin frequency. Known frequency, or much more precise orbital parameters, could reduce the cost of the cross-correlation method and similar semicoherent searches by many orders of magnitude. Then a sensitivity limited only by spin-wandering might be easily reached, regardless of frequency. Until then, computational optimizations will play a pivotal role in broadband searches. We see potential in applying this proven method to Advanced LIGO searches – gravitational waves from Sco X-1 have never been closer to detection. This work was partly funded by the Max-Planck-Institut. JTW and YZ were supported by NSF grants No. PHY-1207010 and No. PHY-1505629. JTW acknowledges the hospitality of the Max Planck Institute for Gravitational Physics (Albert Einstein Institute) in Hannover. These investigations use data and computing resources from the LIGO Scientific Collaboration. Further thanks to the Albert-Einstein-Institut Hannover and the Leibniz Universität Hannover for support of the Atlas cluster, on which most of the computing for this project was done. Many people offered helpful comments, especially R. Prix for extensive knowledge on the resampling code implementation, K. Wette for familiarity with the LIGO Applications Library, along with V. Dergachev, A. Mukherjee, K. Riles, S. Walsh, S. Zhu, E. Goetz, M. Cabero-Müller, C. Messenger, C. Aulbert, H. Fehrmann, C. Beer, O. Bock, H.-B. Eggenstein and B. Maschenschalk, L. Sun, E. Thrane, A. Melatos, B. Allen, B. Schutz, and all members of the AEI and LIGO Scientific Collaboration-Virgo continuous waves (CW) groups.
We also thank our referee for helpful reading and comments. This document bears LIGO Document Number DCC-P1600327. Relationships to other optimal statistics\[relationships-to-other-optimal-statistics\] ====================================================================================== Terms called $F_a$ and $F_b$ [@Dhurandhar2008] relate $\rho$ to the $\mathcal{F}$-statistic, already amenable to resampling [@Patel:2009qe]. These $F_a$ and $F_b$ are the components of the statistic that are respectively projections of data along the $a$ and $b$ time series. To investigate these components, we will look at the phase-model corrected frequency-domain data, $\zeta_K$. (Precisely, $\zeta_K = \Xi_K z_K \exp{(-i\Phi_K)}$ for $z_K$, $\Xi_K$ from Equation \[total-data-eq\]). We can arrange the data $z_{Kk}$, indexed by frequency bin $k$, to include phase shift $\exp{(-\mathrm{i}\Phi_K)}$, $$\zeta_{K} \equiv \sum_{k\in\mathcal{K}_K}(\mathrm{i})^{2k} \mathrm{sinc}(\kappa_{Kk}) z_{Kk} e^{-\mathrm{i}\Phi_K},$$ and likewise $\zeta_{L}$, substituting (real-valued) Equation \[geometric-filter\] into $\rho$ ($\Re$ denoting the real part) and grouping terms: $$\begin{aligned} \rho &=& \frac{N}{5} \Re \sum_{KL\in\mathcal{P}} \left[(\hat a^K \zeta_{K})^* \hat a^L \zeta_{L} + (\hat b^K \zeta_{K})^* \hat b^L \zeta_{L} \right], \label{staged-to-turn-to-fafb}\end{aligned}$$ which merits inspection of $\hat a^K \zeta_K$. 
Insert $\hat a^K$ and Equation \[fourier-transform-def\], noting $(-1)^{k-l} = (-1)^{l-k}$, $(\mathrm{i})^{2k} = \exp(\mathrm{i} \pi k)$: $$\begin{aligned} \hat a^K \zeta_K %= &&\hat a^K \sum_{k\in\mathcal{K}_K}(\mathrm{i})^{2k} \mathrm{sinc}(\kappa_{Kk}) \\ % &\times& \tilde{x}_{Kk} \sqrt{\frac{2}{T_\mathrm{sft} S_K}} e^{-\mathrm{i}\Phi_K}, \nonumber\\ %= &&\sqrt{\frac{2 T_\mathrm{sft}}{S_K}} a^K \sum_{k\in\mathcal{K}_K}(\mathrm{i})^{2k} \mathrm{sinc}(\kappa_{Kk}) \nonumber\\ % &\times& \sum_{j=0}^{N-1} x_K (t_K - T_\mathrm{sft}/2 + j \delta t) \nonumber \\ % &\times& e^{-\mathrm{i} 2\pi j k \delta t / T_\mathrm{sft}} \delta t \sqrt{\frac{2}{T_\mathrm{sft} S_K}} e^{-\mathrm{i}\Phi_K},\nonumber\\ = &&\frac{2}{S_K} a^K \sum_{k\in\mathcal{K}_K} \mathrm{sinc}(\kappa_{Kk}) \\ &\times& \sum_{j=0}^{N-1} x_K (t_K - T_\mathrm{sft}/2 + j \delta t) \nonumber \\ &\times& e^{-\mathrm{i} 2\pi k (j \delta t - T_\mathrm{sft}/2) / T_\mathrm{sft}} \delta t e^{-\mathrm{i}\Phi_K},\nonumber\end{aligned}$$ The $a(t)$, $b(t)$ amplitude modulations have a period on the order of a sidereal day, as in Section \[antenna-function-weighting\], so the $a^K$, $S_K$ terms vary much more slowly than $f_K$. We therefore take $m \equiv j - T_\mathrm{sft}/(2\delta t)$, $t_j \equiv t_K - T_\mathrm{sft}/2 + j \delta t$, $t_m \equiv t_K + m \delta t$, moving the antenna functions inside the sum over $m$, $$\begin{aligned} \hat a^K \zeta_K = && \sum_{m=-T_\mathrm{sft}/(2\delta t)}^{m=T_\mathrm{sft}/(2\delta t)-1} \delta t \frac{2}{ S_K} a^K x_K (t_m) \label{slowly-varying-a-zeta}\\ &\times& \sum_{k\in\mathcal{K}_K}\mathrm{sinc}(\kappa_{Kk}) e^{-\mathrm{i} (2\pi k m \delta t / T_\mathrm{sft} + \Phi_K)}.\nonumber\end{aligned}$$ Instead of including all frequency bins $k$ for $z_K$ of Equation \[total-data-eq\], the SFT signal-resolution can be zero-padded (see Equation 9.3 of [@Allen2002]).
Zero-padding $k$ brings the nearest bin closer to $f_K$, approaching $\kappa_{Kk}\approx 0$, $$\begin{aligned} \hat a^K \zeta_K \approx \sum_{m=-T_\mathrm{sft}/(2\delta t)}^{m=T_\mathrm{sft}/(2\delta t)-1} && \delta t \frac{2}{S_K} a^K x_K (t_m) \label{from-here-resamp}\\ &\times& e^{-\mathrm{i} (2\pi (f_K T_\mathrm{sft}) m \delta t / T_\mathrm{sft} + \Phi_K)}.\nonumber\end{aligned}$$ Statistic in conventional quantities ------------------------------------ Proceeding to continuous time, Equation 5.10 of [@Dhurandhar2008] has sub-interval integral $F_{a_I}$, $$\begin{aligned} F_{a_I} &=& \int_{T_I - \Delta T/2}^{T_I + \Delta T/2} a(t) x(t) e^{-\mathrm{i} \varphi (t)} dt,\\ \label{crosscorr-fa-i} F_{a} &=& \sum_I F_{a_I}. \label{f-stat-fa}\end{aligned}$$ Treating sums as integrals, we let $\delta t \rightarrow dt$, $T_\mathrm{sft} \rightarrow \Delta T$, $a_K \rightarrow a(t)$, $x_K (t_m) \rightarrow x(t)$, and $\exp{(-\mathrm{i} [2\pi (f_K T_\mathrm{sft}) m \delta t / T_\mathrm{sft} + \Phi_K] )} \rightarrow \exp{(-\mathrm{i} \varphi(t))}$. Observe that with $2 \pi f_K = (d\Phi_K / dt | t= t^K)$, $t = m\delta t$, $\varphi$ is a Taylor approximation of $\Phi$: $$\begin{aligned} \varphi(t) &=& 2 \pi f_K m \delta t + \Phi_K \nonumber\\ &=& \left(\frac{d\Phi_K}{ dt} \Big| t= t^K \right) t + \Phi(t^K) \approx \Phi(t), \nonumber \\ \hat a^K \zeta_K &=& \frac{2}{S_K} F_{a_K}.\label{the-grand-id}\end{aligned}$$ Taking $T_0 \rightarrow T_\mathrm{sft}$ in Equation 42 of [@Jaranowski1998], we write an inner product, $$(x || y) \equiv \frac{2}{T_0}\int_{-T_0 / 2}^{T_0/2} x(t) y(t) dt, \label{jks-double-inner-product}$$ so $F_{a_K} = [T_\mathrm{sft} / 2] ( a\cdot x || \exp{(-\mathrm{i}\varphi)} )$ can be viewed as a projection of the amplitude-modulated data onto the phase-model basis. A reader may wonder whether this is not a Fourier transform. Not quite: $\varphi(t)$ is phase-modulated and does not increase linearly in evenly-sampled detector time $m \delta t$.
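To make the projection concrete, here is a small numpy sketch (a toy model with made-up numbers, not the pipeline): for noise-free data $x(t)=\cos\varphi(t)$ and a slowly varying $a(t)$, the discrete analogue of $F_{a}$ picks out $\tfrac{1}{2}\int a\,dt$, because $\cos\varphi\, e^{-\mathrm{i}\varphi} = \tfrac12 + \tfrac12 e^{-2\mathrm{i}\varphi}$ and the oscillatory part averages away.

```python
import numpy as np

# Toy discrete F_a (made-up slow modulation and phase model; not the
# pipeline): for noise-free x(t) = cos(phi(t)), the projection
# F_a = sum a*x*exp(-i*phi)*dt picks out (1/2) * integral of a.
dt = 1e-3
t = np.arange(0, 10.0, dt)
a = 1.0 + 0.1 * np.sin(2 * np.pi * t / 100.0)  # slowly varying a(t)
phi = 2 * np.pi * (50.0 * t + 0.05 * t ** 2)   # phase model with drift
x = np.cos(phi)                                # noise-free 'data'

# Discrete version of F_a = integral of a(t) x(t) exp(-i phi(t)) dt
F_a = np.sum(a * x * np.exp(-1j * phi)) * dt

# cos(phi)*exp(-i*phi) = 1/2 + (1/2)*exp(-2i*phi); the oscillatory part
# averages away, leaving F_a close to (1/2) * integral of a.
expected = 0.5 * np.sum(a) * dt
```

With a matched phase model the projection recovers the slowly varying amplitude envelope; a mismatched $\varphi$ would instead leave an oscillatory integrand that averages toward zero.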
Before addressing this problem with resampling (Section \[resampling\]), we connect $\rho$ to related statistics. Using $F_{a_K}$ in Equation \[staged-to-turn-to-fafb\], $$\begin{aligned} \rho &=& \frac{N}{5} 4\Re \sum_{KL\in\mathcal{P}} \left[ \frac{F_{a_K}^*}{S_K} \frac{F_{a_L}}{S_L} + \frac{F_{b_K}^*}{S_K} \frac{F_{b_L}}{S_L} \right].\end{aligned}$$ In the bin-centered limit (Equation 3.18 in [@ScoX1CrossCorr2015PRD]), $\langle \Xi^2 \rangle \approx 1$. To establish $N$ without $\Xi$, define $$\begin{aligned} \hat A^2_\mathcal{P} &\equiv& \sum_{KL\in\mathcal{P}} (\hat a^K \hat a^L)^2, \\ \hat B^2_\mathcal{P} &\equiv& \sum_{KL\in\mathcal{P}} (\hat b^K \hat b^L)^2, \\ \hat C^2_\mathcal{P} &\equiv& \sum_{KL\in\mathcal{P}} (\hat a^K \hat b^L \hat a^L \hat b^K),\end{aligned}$$ from which we obtain, $$\begin{aligned} N &=& \frac{10}{\sqrt{2}} \left[ \hat A_\mathcal{P}^2 + 2 \hat C_\mathcal{P}^2+ \hat B_\mathcal{P}^2\right]^{-1/2},\end{aligned}$$ $$\begin{aligned} \rho &=& \frac{4\sqrt{2}\Re \sum_{KL\in\mathcal{P}} \left[ \frac{F_{a_K}^*}{S_K} \frac{F_{a_L}}{S_L} + \frac{F_{b_K}^*}{S_K} \frac{F_{b_L}}{S_L} \right] }{\sqrt{ \hat A_\mathcal{P}^2 + 2 \hat C^2_\mathcal{P}+ \hat B_\mathcal{P}^2 }} . \label{well-formatted-rho}\end{aligned}$$ Now compare $\rho$ to the $\mathcal{F}$-statistic in a specific case. Take $Q$ detectors indexed by $X$, $Y$, each with $M\equiv T_\mathrm{obs}/T_\mathrm{sft}$ SFTs. Assume a frequency-dependent, stationary noise PSD $S_h(f)$. Allow all pairs $\mathcal{P}$, so the pair sum expands into products of double sums of the form seen in Equation \[f-stat-fa\], $$\begin{aligned} \rho = && 4\sqrt{2} \left( S_h^2(f) \sqrt{\hat A_\mathcal{P}^2 + 2 \hat C^2_\mathcal{P}+ \hat B_\mathcal{P}^2 } \right)^{-1} \label{explicit-detector-rho-comparison}\\ &\times& \Re \left[ \sum_X^Q \sum^M_{K(X)} F_{a_{K(X)}}^* \sum_Y^Q \sum^M_{L(Y)} F_{a_{L(Y)}} \ldots \right. \nonumber \\ && \left.
+ \sum_X^Q \sum^M_{K(X)} F_{b_{K(X)}}^* \sum_Y^Q \sum^M_{L(Y)} F_{b_{L(Y)}} \right] ,\nonumber\end{aligned}$$ As the index $I$ in Equation \[f-stat-fa\] is detector independent, $$\begin{aligned} \sum_X^Q \sum^M_{K(X)} F_{a_{K(X)}}^* &=& F_a^*, \end{aligned}$$ and likewise for the $L$ index and the $b$ terms, allowing (self-) *auto-correlations.* Normally, the cross-correlation method does not allow auto-correlations [@ScoX1CrossCorr2015PRD], but it can [@Dhurandhar2008], such that $$\begin{aligned} \rho &=& 4\sqrt{2}\frac{|F_a|^2 + |F_b|^2 }{S_h^2(f) \sqrt{\hat A_\mathcal{P}^2 + 2 \hat C^2_\mathcal{P}+ \hat B_\mathcal{P}^2 }} .\end{aligned}$$ Simplifying the denominator, $$\begin{aligned} \hat A^2_\mathcal{P} &=& \left(\sum_I^{QM} (\hat a^I)^2 \right)^2, \\ \hat A_\mathcal{P} &=& \frac{1}{T_\mathrm{sft}} \sum_I^{QM} \hat a^I \hat a^I T_\mathrm{sft},\end{aligned}$$ which Riemann integrates for $a(t)$, $b(t)$ that vary slowly compared to $T_\mathrm{sft}$ (but fast compared to $T_\mathrm{obs}$, so an overall shift is negligible and $T_0 \rightarrow T_\mathrm{obs}$ in Equation \[jks-double-inner-product\]), $$\begin{aligned} \hat A_\mathcal{P} &\approx & \frac{1}{T_\mathrm{sft}} \sum_I^Q \int_{0}^{T_\mathrm{obs}} \hat a^I(t) \hat a^I(t)\, dt \\ &\approx & \frac{1}{T_\mathrm{sft}} \frac{2 T_\mathrm{sft}}{S_h (f)}\, Q \int_{0}^{T_\mathrm{obs}} a(t) a(t)\, dt = \frac{Q T_\mathrm{obs}}{S_h (f)} (a || a). \nonumber\end{aligned}$$ Forming norms $A\equiv (a||a)$, $B\equiv (b||b)$, $C\equiv(a||b)$ [@Jaranowski1998]: $$\begin{aligned} \rho = 4 \sqrt{2} \frac{|F_a|^2 + |F_b|^2}{S_h(f) Q T_\mathrm{obs} \sqrt{A^2 + 2C^2 + B^2}}.\end{aligned}$$ Comparison to the $\mathcal{F}$-statistic ----------------------------------------- The $\mathcal{F}$-statistic is a maximum-likelihood
(ML) estimator. Values of $\mathcal{A}^\mu$ are chosen where the likelihood ratio is a maximum, $\Lambda_\mathrm{ML}$. Composing frequency-integrated projections $x_\mu$ onto the basis $h^\mu$ in Equation \[decomposition-projection-f\], with $\mathcal{M}^{\mu\nu}$ formed from the projections of $h^\mu$ onto $h^\nu$ [@Jaranowski1998; @CutlerMulti2005; @BStatPrix2009; @Patel:2009qe; @WhelanNewAmplitude2014CQG]: $$\begin{aligned} \Lambda_\mathrm{ML} &=& e^\mathcal{F},\label{abstract-fstat}\\ \mathcal{F} &\equiv& \frac{1}{2} x_\mu \mathcal{M}^{\mu\nu} x_\nu,\\ &=& \frac{4}{S_h(f) T_0}\frac{B|F_a|^2 + A|F_b|^2 - 2C\Re(F_a F_b^*)}{A\cdot B - C^2}. \nonumber \label{the-fstat}\end{aligned}$$ Both $\rho$ and $\mathcal{F}$ are dimensionless. As after Equation 5.15 in [@Dhurandhar2008], $\rho$ and $\mathcal{F}$ are proportional when $A\approx B$, $C \ll A,B$: $$\begin{aligned} \rho &\approx& 4\sqrt{2} \frac{|F_a|^2 + |F_b|^2}{S_h(f) Q T_\mathrm{obs} \sqrt{2 A^2}},\end{aligned}$$ equating $\mathcal{F}$ with $T_0 = Q T_\mathrm{obs}$, $$\begin{aligned} \mathcal{F} &\approx& \frac{4}{S_h(f) T_0}\frac{|F_a|^2 + |F_b|^2}{A}.\end{aligned}$$ Even for multiple detectors, (all-pairs) $\rho$ can converge to the (fully-coherent) $\mathcal{F}$-statistic. We can now illustrate the crossover. Dhurandhar *et al* [@Dhurandhar2008] introduce the cross-correlation method starting from two data streams, like the stochastic radiometer [@Ballmer2006CQG], instead of the multi-detector $\mathcal{F}$-statistic [@CutlerMulti2005]. The weight matrix $\textbf{W}$ of Whelan *et al* [@ScoX1CrossCorr2015PRD] can merge these viewpoints. Any SFT, from any detector, is a dimension in $\textbf{z}$ (‘flattening’ SFTs over the Greek indices, also represented as boldface in [@CutlerMulti2005], to represent different detectors).
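The symmetric limit just described is easy to verify numerically. The sketch below (hypothetical values for $F_a$, $F_b$ and the norms $A$, $B$, $C$; $S_h$ and $T_0$ set to unity; none of this is from a real search) evaluates the $\mathcal{F}$-statistic expression and checks that for $A = B$, $C = 0$ it collapses to the simplified form.

```python
import numpy as np

# Numerical check of the symmetric limit (hypothetical values for F_a,
# F_b, A, B, C; S_h and T_0 set to 1; none of this is from a real search).
S_h, T_0 = 1.0, 1.0
F_a, F_b = 3.0 + 1.0j, 1.0 - 2.0j

def fstat(A, B, C):
    """F-statistic in terms of F_a, F_b and the amplitude-modulation norms."""
    num = B * abs(F_a) ** 2 + A * abs(F_b) ** 2 - 2 * C * (F_a * np.conj(F_b)).real
    return (4.0 / (S_h * T_0)) * num / (A * B - C ** 2)

general = fstat(A=2.0, B=1.5, C=0.3)      # generic norms
symmetric = fstat(A=2.0, B=2.0, C=0.0)    # A = B, C = 0
simplified = (4.0 / (S_h * T_0)) * (abs(F_a) ** 2 + abs(F_b) ** 2) / 2.0
```

In the symmetric limit the full expression collapses to $(4/S_h T_0)(|F_a|^2+|F_b|^2)/A$, matching the proportionality to $\rho$ claimed above.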
Cutler & Schutz Equation 3.8 [@CutlerMulti2005] has $2\mathcal{F} = \sum_{a,d}(\Gamma^{-1})^{ad} (\mathbf{x}|\mathbf{h}_\mathbf{a})(\mathbf{x}|\mathbf{h}_\mathbf{d})$: $a,d$ are the waveform components $\mu,\nu$ in our Equation \[abstract-fstat\]. Their inner products of $\mathbf{x}$ with the waveforms $\mathbf{h}_{\mathbf{a}},\mathbf{h}_\mathbf{d}$ are scalar-valued vectors indexed by $a$ and $d$, equivalent to summing $F_a$ or $F_b$ from multiple detectors. Only then is $\mathcal{F}$ computed. The sum of fully-coherent single-detector $\mathcal{F}$ does not equal the *fully*-coherent multiple-detector $\mathcal{F}$, which takes into account the cross-detector terms and converges with the ideal cross-correlation method. Divergence can occur with *semi*-coherent methods [@CutlerSemi2005; @PrixShaltev2012]. Semicoherent calculations with $T_\mathrm{coh} < T_\mathrm{obs}$ are more efficient, having higher sensitivity at fixed computational cost, than fully-coherent methods [@HierarchicalBrady2000; @CutlerSemi2005; @PrixShaltev2012]. The sum of $\mathcal{F}$-statistics over $T_\mathrm{obs}/T_\mathrm{coh}$ segments is computed, albeit with reduced sensitivity compared to the much more expensive fully-coherent search. Joint- and single-detector $\mathcal{F}$ can both be computed for each $T_\mathrm{coh}$. (Comparison between joint and single is the basis of the $\mathcal{F}$-statistic consistency veto [@EinsteinHomeS52013]). The main difference between the cross-correlation method and the semicoherent $\mathcal{F}$-statistic is that the former, by distinguishing $K$ and $L$, can exclude auto-correlations. Examine the optimal amplitude parameters in $\mathcal{M}^{\mu\nu}$ and weights $\textbf{W}$. Despite Equation \[abstract-rho-matrix\]’s resemblance to Equation \[the-fstat\], $\textbf{W}$ and $\mathcal{M}^{\mu\nu}$ are matrices over different spaces.
$\textbf{W}$ (implicit indices) is of SFTs, whereas $\mathcal{M}^{\mu\nu}$ (explicit indices) is of four amplitude parameters. The amplitude-parameter space metric is $\mathcal{M}^{\mu\nu}$, so $2\mathcal{F} = x^\mu x_\mu$ [@BStatPrix2009]. In principle, $\rho_\mathrm{ideal}$ might not use $\hat \Gamma^\mathrm{ave}_{KL}$ (chosen to avoid specifying $\cos \iota$ and $\psi$ [@ScoX1CrossCorr2015PRD]), but instead $\Gamma$ based on maximization or marginalization [@BStatPrix2009] of $\mathcal{A}^\mu$. A start would be projections, $\textbf{z}_\mu$, of $\textbf{z}$ onto the $h_\mu$ basis. Each $\textbf{z}$ (a data vector, implicitly indexed, *e.g.*, by SFTs) can be projected to extract the components along the $4$ amplitude-parameter space dimensions, producing the $N_\mathrm{sft}\times 4$ matrix, $\textbf{z}_\mu$. Schematically, $$\rho_\mathrm{ideal} = \frac{1}{2} \mathcal{M}^{\mu\nu}\textbf{z}_\mu^\dag \textbf{W} \textbf{z}_\nu.$$ Hence the $\textbf{z}_\mu$ absorb $\Phi$ and can be thought of as Fourier transforms of source-frame data. The matrix $\mathcal{M}$ absorbs $\Gamma$ from $\textbf{W}$, leaving $\textbf{W}$ a binary-valued index of which SFTs to pair. Such a statistic would echo the likelihood ratio mentioned in Section V of [@Dhurandhar2008]. Recall, $\rho$ and $\mathcal{F}$ converge when $A\approx B$ and $C \ll A,B$, *i.e.,* when $\mathcal{M}$ is proportional to the identity matrix. So, when amplitude space is flat and auto-correlations are included, $\rho \approx \mathcal{F}$. One impetus for cross-detector correlation is that only signal should be coherent. This reasoning underlies stochastic searches: GW strains between detectors are related, but the noise is statistically independent (see Section III of [@Allen1999]). Appendix \[glitch-merits-of-crosscorr\] will revisit the robustness and merits of pairing choices for $\textbf{W}$.
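The binary-valued pairing role of $\textbf{W}$ can be sketched in a few lines of numpy (toy SFT start times for two detectors; not the pipeline's pairing code): self-pairs are dropped for the cross-correlation choice and kept in the $\mathcal{F}$-statistic-like limit.

```python
import numpy as np

# Toy pairing matrix W: flattened SFT start times from two detectors
# (three SFTs each; all numbers invented). Pair SFTs with lag <= T_max;
# drop self-pairs for the cross-correlation choice, keep them for the
# F-statistic-like limit that allows auto-correlations.
t_sft = np.array([0.0, 1800.0, 3600.0,    # detector 1
                  0.0, 1800.0, 3600.0])   # detector 2
T_max = 2000.0                            # maximum allowed lag (s)

lag = np.abs(t_sft[:, None] - t_sft[None, :])
W_full = lag <= T_max                               # auto-correlations allowed
W_cross = W_full & ~np.eye(len(t_sft), dtype=bool)  # auto-correlations excluded
```

Both matrices are symmetric indices over the flattened SFT list; only the diagonal treatment differs between the two limits discussed above.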
Comparison to the stochastic search ----------------------------------- The stochastic search [@Allen1999; @Ballmer2006CQG; @ThraneStochastic2009] is built on same-time cross-correlation of multiple detectors [@ChristensenStochastic1992], whereby sensitivity depends on an overlap reduction function [@FlanaganStochastic1993]. In stochastic literature, $S$ often indicates detector strains and $P$ indicates noise PSDs. To keep consistency in this paper, $S$ will be replaced by $X$ for strain and $P$ by $S$ for noise. For the stochastic radiometer $Y$-statistic [@FlanaganStochastic1993; @Ballmer2006CQG], $$Y = \int_{-\infty}^{+\infty} d f \int_{-\infty}^{+\infty} df' \delta_T (f - f') X_1^* (f) Q(f') X_2 (f'),$$ where $\delta_T$ is a finite-time Dirac delta function approximation, $X_1$, $X_2$ are Fourier-transformed detector strains, and $Q$ is an optimal filter. For sky direction $\hat \Omega ' = \hat \Omega$, $$Y_{\hat \Omega '} = (\lambda T) \int_{-\infty}^{+\infty} df \frac{\gamma^*_{\hat \Omega '} H}{S_1 S_2} X_1^* X_2. \label{radiometer-directed-y}$$ Expanding, with normalization factor $\lambda$, measurement duration $T$, and $S_1$ and $S_2$ the noise PSDs, as well as $H(f)$ the strain power of the stochastic background, with overlap reduction function $\gamma_{\hat \Omega '}$, polarizations $A\in \{+,\times\}$, and detector separation vector $\Delta \vec x$: $$\begin{aligned} \gamma_{\hat \Omega '} &=& \frac{1}{2} \sum_A e^{\mathrm{i} 2\pi f \hat \Omega \cdot \frac{\Delta \vec x}{c}} F_1^A (\hat \Omega)F_2^A (\hat \Omega ).\end{aligned}$$ This $Y$ is effectively a case of the simultaneous cross-correlation method’s $\rho$ restricted to different detectors [@ScoX1CrossCorr2015PRD]. Notationally, $\hat \Omega = \vec n$. The radiometer separation $\Delta \vec x$ is $c$ times the detector arrival-time difference $\Delta d_{KL} \equiv (\vec r_K(t) - \vec r_L(t)) \cdot \vec n /c $, stemming from Equation \[t-ssb-equation\].
Then, the radiometer phase difference $2\pi f \hat \Omega \cdot (\Delta \vec x /c)$ equals $\Delta \Phi_{KL}$ in Equation \[textbook-cc-rho\] and is $2\pi f_0 \Delta d_{KL}$. Because $10 \Gamma^\mathrm{ave}_{KL} = F_+^K F_+^L + F_\times^K F_\times^L$ [@ScoX1CrossCorr2015PRD], $$\gamma_{\hat \Omega'} = 5 \frac{\sqrt{S_K S_L}}{2 T_\mathrm{sft}} e^{i \Delta \Phi_{KL}} \hat \Gamma_{KL}^\mathrm{ave}.$$ To be exact [@MitraRadiometer2008], where $\tilde Q(\hat \Omega, t, f; H) = Q(f')$, $\Delta t = T_\mathrm{sft}$ is the time segment length, and $\gamma^*(\Omega, t, f) = \gamma_{\hat \Omega'}$, $$\begin{aligned} \tilde{Q}(\hat \Omega, t, f; H) &=& \lambda(\Omega, t) \frac{H(f) \gamma^* (\Omega, t,f)}{S_1(t; |f|) S_2(t;|f|)},\end{aligned}$$ and $\lambda(\Omega,t) = \lambda T$, absorbing $\lambda$ in Equation \[radiometer-directed-y\]. Absent a $\Phi$ model, the radiometer must sum frequency-bin contributions with equal weight over $\Delta f \geq \Delta f_\mathrm{obs}$ (Equation \[delta-f-obs\]). This width means $\Xi \approx 1$ and $X_1 = \sum_K \sum_k \tilde{x}_{Kk}$, $X_2 = \sum_L \sum_l \tilde{x}_{Ll}$ (referring to Equation \[normalized-z-bin\]; this is imprecise when the radiometer uses overlapping, windowed bins [@ThraneStochastic2009] and the cross-correlation method uses non-overlapping rectangular bins). Moreover, $S_1 = S_K$, $S_2 = S_L$, $T = \Delta t$. This corresponds to looking for an isolated point source with no other sources; refer to the discussion following Equation 3.36 of [@MitraRadiometer2008].
If the stochastic background is taken as constant in frequency, $H^2(f)=1$, $\lambda$ simplifies (integrating over frequency and substituting Equation 3.34 of [@MitraRadiometer2008] as directed for network power $P_{NW}^2$), $$\begin{aligned} \lambda(t) &\approx& [\Delta t P^2_{NW}(t)]^{-1},\end{aligned}$$ $$\begin{aligned} [\lambda(\Omega, t)\Delta t]^{-1} &\approx&\frac{5}{T_\mathrm{sft}} \frac{1}{\sqrt{S_K S_L}} \hat{\Gamma}^\mathrm{ave}_{KL}, \\ \lambda(\Omega, t) &=& \frac{\sqrt{S_K S_L}}{5}\frac{1}{\hat \Gamma^\mathrm{ave}_{KL}},\\ N &=& \frac{5}{\sqrt{2}} \bar \lambda(\Omega),\end{aligned}$$ where $N$ is the cross-correlation method’s normalization and $\bar \lambda (\Omega)$ is the harmonic-root-mean-square radiometer normalization. In that case, after all substitutions, and considering the cross-correlation method’s $\rho$ evaluated over all bins and only between the same SFT pairs as the radiometer, $$\begin{aligned} \rho &\approx& 4\sqrt{2} \Re \left(Y_{\Omega '}\right),\end{aligned}$$ by taking sums over cross-correlation method indices $K$ and $L$ to produce radiometer $S_1$ and $S_2$. Exact equality results for a single pair, such as the fully-coherent, cross-detector-only $\rho$. This conclusion bolsters Whelan *et al* [@ScoX1CrossCorr2015PRD] (notably Section III.D), who state that the cross-correlation method is similar to the radiometer, with a phase model added to allow different-time correlations. The cross-correlation method, the radiometer, and the $\mathcal{F}$-statistic, all described as near-*optimal* under different conditions, do converge in certain limits. Understanding these intersections of the cross-correlation method aids both theory and practice. In theory, viewing $\mathcal{F}$ as approximating the Bayesian $\mathcal{B}$-statistic [@BStatPrix2009] informs $\rho$ as an approximate function of the likelihood ratio [@Dhurandhar2008].
This perspective might facilitate Bayesian model selection for vetoes using alternative line hypotheses to compare against the signal hypothesis [@KeitelRobust2014]. It may also link search set-up optimization for detection probability to rigorous statements about posterior probability [@MingSetup2015]. Radiometric techniques might generate a deconvolved sky map of future detections [@MitraRadiometer2008; @ThraneStochastic2009]. This paper should resolve confusion about the cross-correlation method. It does not use cross-detector data as its template. The cross-correlation method is a matched-filter-based semicoherent search, with the template corresponding to the signal model with chosen amplitude parameters, searched over the Doppler parameters. It differs in which filtered data are conjugated in the real-valued statistic. In practice, at present, ties between the statistics help bring resampling from the $\mathcal{F}$-statistic into the cross-correlation method. Resampling solves the problem that $\Phi(t)$, particularly the time-varying $\phi(t)$, is not increasing at the uniform frequency $f_0$, because of the time-varying Doppler shifts. If the Doppler modulation were constant, then a Fourier transform could supply $F_a$ (or the cross-correlation method’s components $F_{a_K}$ and $F_{a_L}$) and $F_b$, providing an entire frequency band at once. The data must be moved into the *source frame*, in which velocity with respect to the source is constant. Downsampling and heterodyning\[downsampling-and-heterodyning\] ============================================================== The resampling of Section \[resampling\] is done with downsampled data, heterodyned downward in frequency by $f_h$. Consider a bandpass-limited sample (subscript $p$) of Short Fourier Transform data for SFT $K$, equivalent to a rectangular frequency-domain window with starting bin $k_a$ and ending bin $k_b$. Gaps in the set of SFTs are zero-padded to yield $M = T_\mathrm{obs}/T_\mathrm{sft}$ SFTs.
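Before detailing the bookkeeping, the move into the source frame that motivates this appendix can be illustrated with a toy numpy resampling (all parameters invented, and simple linear interpolation standing in for the pipeline's interpolation): a Doppler-modulated tone, interpolated onto times uniform in the source frame, collapses back into a single Fourier bin.

```python
import numpy as np

# Toy source-frame resampling (invented parameters; linear interpolation
# stands in for the pipeline's interpolation). A tone at f0 whose
# arrival time tau(t) is Doppler-modulated spreads over sidebands;
# interpolating onto uniform source-frame times restores a single bin.
fs = 256.0
t_det = np.arange(0, 64.0, 1.0 / fs)        # detector-frame times
f0 = 20.0
tau = t_det + 0.02 * np.sin(2 * np.pi * t_det / 16.0)  # source-frame time
x = np.cos(2 * np.pi * f0 * tau)            # Doppler-modulated data

# Invert tau(t) (monotonic here) and interpolate onto uniform tau.
tau_uniform = np.arange(0, 64.0, 1.0 / fs)
t_of_tau = np.interp(tau_uniform, tau, t_det)
x_resamp = np.interp(t_of_tau, t_det, x)

def peak_power_fraction(y):
    """Fraction of spectral power in the strongest bin."""
    power = np.abs(np.fft.rfft(y)) ** 2
    return power.max() / power.sum()

frac_before = peak_power_fraction(x)        # spread over sidebands
frac_after = peak_power_fraction(x_resamp)  # concentrated at f0
```

In the source frame the velocity with respect to the source is constant, so a single Fourier transform then supplies an entire frequency band at once.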
Equation \[inverse-fft-eq\] says that the time index with respect to the SFT start time is $j$, and with respect to the observation run is $q_K$. The $q_K$ are non-overlapping integers from $0$ to $MN -1$, whereas $j$ (implicitly depending on $K$) ranges from $0$ to $N-1$. Start times and SFT durations are integer multiples of the sampling time, $t_K - T_\mathrm{sft}/2 \equiv K T_\mathrm{sft}$, and $j \delta t = q_K \delta t - t_K + T_\mathrm{sft}/2$. The ideal *bandpassed* data $x_{K,p}$ from an inverse Fourier transform of the whole $T_\mathrm{obs}$ would be, $$\begin{aligned} x_{K,p}(q_K\delta t) &\equiv& \sum_{k=k_a}^{k = k_b} e^{\mathrm{i} 2\pi q_K \delta t \frac{k}{T_\mathrm{sft} } }\frac{z_{Kk}}{T_\mathrm{sft}},\\ & = & \sum_{k=k_a}^{k = k_b} e^{\mathrm{i} 2\pi j \delta t \frac{k}{T_\mathrm{sft} } }\frac{z_{Kk}}{T_\mathrm{sft}},\nonumber\end{aligned}$$ because $t_K - T_\mathrm{sft}/2$ is an integer multiple of $T_\mathrm{sft}$. When $k_a = 0$, $k_b = N-1$, $x_{K,p}$ is equivalent to $x_K$ in Equation \[inverse-fft-eq\]. Yet we want not simply bandpassed data, but downsampled, heterodyned data. Since $M>1$, we handle the sum over $K$. The difficulty is keeping phase coherence between inverse Fourier transforms. The *heterodyne frequency* $f_h$ is in the center of the band, near the central bin $k_h \equiv (k_a + k_b)/2$. For discrete bins, the nearest frequency is $\bar f_h \equiv k_h T_\mathrm{sft}^{-1}$. That is, $\bar f_h = f_h - f_r$ is the frequency of the integer bin nearest the ideal heterodyne frequency $f_h$, where $f_r$ is the remainder.
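The claim that the inverse-transform kernel is unchanged when $t_K - T_\mathrm{sft}/2$ is an integer multiple of $T_\mathrm{sft}$ can be checked directly; a minimal numpy sketch with toy sizes (not the pipeline's):

```python
import numpy as np

# Check: with t_K - T_sft/2 = K*T_sft, the inverse-FFT kernel indexed by
# the global sample index q equals the kernel indexed by the per-SFT
# index j, since the extra factor exp(2*pi*i*K*k) is unity. Toy sizes.
N = 16                       # samples per SFT
T_sft = 1.0                  # s
dt = T_sft / N
K = 3                        # the K-th SFT, starting at K*T_sft
j = np.arange(N)             # per-SFT sample index
q = K * N + j                # global sample index
k = np.arange(N)[:, None]    # frequency bin index

kern_global = np.exp(2j * np.pi * (q * dt) * k / T_sft)
kern_local = np.exp(2j * np.pi * (j * dt) * k / T_sft)
max_err = np.abs(kern_global - kern_local).max()
```

The two kernels agree to floating-point precision, which is exactly why the equality of the two sums above holds.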
Let $l \equiv k - k_h$, so $k = l + k_h$: $$\begin{aligned} x_{K,p}(q_K\delta t) &=& \sum_{(l+k_h)=k_a}^{(l+k_h) = k_b} e^{\mathrm{i} 2\pi j \delta t \frac{(l+k_h)}{T_\mathrm{sft} } }\frac{z_{K(l+k_h)}}{T_\mathrm{sft}} \label{bandpassed-data-eq}\\ %&=& \sum_{l=k_a-k_h}^{l = k_b - k_h} e^{\mathrm{i} 2\pi j \delta t \frac{(l+k_h)}{T_\mathrm{sft} } }\frac{z_{K(l+k_h)}}{T_\mathrm{sft}} \nonumber \\ &=& e^{\mathrm{i} 2\pi j \delta t \frac{k_h}{T_\mathrm{sft} } } \sum_{l= k_a - k_h}^{l = k_b - k_h} e^{\mathrm{i} 2\pi j \delta t \frac{l}{T_\mathrm{sft} } }\frac{z_{K(l+k_h)}}{T_\mathrm{sft}}\nonumber.\end{aligned}$$ The sum contains all information on $[k_a,k_b]$. Call it $x^h_K$: $$\begin{aligned} x^h_K (q_K \delta t) &\equiv& \sum_{l= k _a - k_h}^{l = k_b - k_h } e^{\mathrm{i} 2\pi j \delta t \frac{l}{T_\mathrm{sft} } }\frac{z_{K(l+k_h)}}{T_\mathrm{sft}}, \label{freq-shifted-eq}\\ x_{K,p}(q_K\delta t) &=& e^{i 2\pi j \delta t \frac{k_h}{T_\mathrm{sft}}} x^h_K(q_K\delta t),\label{up-to-phase-het-corr} %x_{K,p}(q_K\delta t) &=& e^{i 2\pi f_h j \delta t} x_K'(q_K\delta t),\label{up-to-phase-het-corr}\end{aligned}$$ expressing bandpassed $x_{K,p}$ in terms of the desired, frequency-shifted $x^h_K$. In continuous time, Equation \[up-to-phase-het-corr\] is the expression $x_{K,p}(t) = \exp{(i2\pi f_h t)} x^h_K(t)$, where $x_{K,p}$ is the bandpassed data (frequency content at $f$) and $x^h_K$ is the heterodyned data (frequency content at $f-f_h$). Many derivations stop here, but we need the phase corrections for heterodyning a set of SFTs. To represent *complex, downsampled* data in a frequency band $f_\mathrm{band}$ without aliasing, we need a total bandwidth of $\Delta f_\mathrm{load}$. Note that $\Delta f_\mathrm{load}$ must cover not only all frequencies of interest but also frequency modulation’s Doppler wings, $\Delta f_\mathrm{drift}$, with additional bins to account for spectral leakage, including $D$ ‘Dirichlet terms’. 
The total width $\Delta f_\mathrm{load}$ is [@PrixTimingModel2017], $$\Delta f_\mathrm{load} = \left(1+\frac{4}{2 D +1} \right)\left(f_\mathrm{band} + \Delta f_\mathrm{drift} + \frac{16}{T_\mathrm{sft}}\right). \label{delta-f-load}$$ Then we find the new sampling time interval is not $\delta t$ but rather $\delta t' = 1/\Delta f_\mathrm{load}$. The old number of samples in an SFT is $N = T_\mathrm{sft}/(\delta t)$ and the new number is $N' = \Delta f_\mathrm{load} T_\mathrm{sft}$; $N' \leq N$. ($N'$ can be rounded up to ensure it is an integer). Create the new coordinate $q_K'$, so $t = q_K' \delta t'$. The Fourier transform kernel must contain an integer, and $q_K' \approx (t_K - T_\mathrm{sft}/2 + j \delta t)/(\delta t')$ is not generally an integer. Additional phase corrections thus arise. Note, $$\begin{aligned} %j' &=& 2 f_\mathrm{band} j \delta t,\\ %j' k / N' &=& (j \delta t) k / T_\mathrm{sft}, q_K' &=& \Delta f_\mathrm{load} q_K \delta t,\\ q_K' k / N' &=& (q_K \delta t) k / T_\mathrm{sft},\end{aligned}$$ Meanwhile we can choose $k_a$ and $k_b$ with a difference $k_b - k_a = \Delta f_\mathrm{load} T_\mathrm{sft}$, ergo $k_b - k_a = N'$: $$\begin{aligned} k_a &=& \left(f_h - \frac{1}{2} \Delta f_\mathrm{load}\right) T_\mathrm{sft},\\ k_b &=& \left(f_h + \frac{1}{2} \Delta f_\mathrm{load}\right) T_\mathrm{sft}.\end{aligned}$$ In practice, we will use the minimum frequency of interest, $f_a = f_\mathrm{min}$, to choose a heterodyne frequency $f_h = f_\mathrm{min} + \frac{1}{2} f_\mathrm{band}$. As $t = q_K \delta t$, we can substitute $q_K' \delta t'$ into the argument of $x_{K,p}(q_K \delta t)$ as defined in Equation \[bandpassed-data-eq\]. 
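For orientation, Equation \[delta-f-load\] can be evaluated with illustrative numbers (hypothetical, not from a production search) to see the resulting downsampled rate:

```python
import numpy as np

# Equation [delta-f-load] with illustrative (hypothetical) parameters.
T_sft = 1800.0     # s, SFT duration
f_band = 0.1       # Hz, search band
df_drift = 0.2     # Hz, Doppler-drift allowance
D = 8              # Dirichlet terms kept against spectral leakage

df_load = (1.0 + 4.0 / (2 * D + 1)) * (f_band + df_drift + 16.0 / T_sft)
dt_new = 1.0 / df_load                  # downsampled sampling interval (s)
N_new = int(np.ceil(df_load * T_sft))   # samples per SFT, rounded up
```

With these toy numbers the downsampled series needs only 687 samples per 1800 s SFT, far fewer than at the full detector sampling rate.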
Using $N'$, $$\begin{aligned} x_{K,p}(q_K' \delta t') &=& \sum_{l=k_a - k_h}^{l=k_b - k_h} e^{\mathrm{i} 2\pi q_K' \frac{l+k_h}{\Delta f_\mathrm{load} T_\mathrm{sft} } }\frac{z_{K(l + k_h)}}{T_\mathrm{sft}}, \label{first-shift-approx}\\ &=& e^{\mathrm{i} 2\pi q_K' k_h / N' } \sum_{l=k_a - k_h}^{l=k_b - k_h} e^{\mathrm{i} 2\pi q_K' l / N' } \frac{z_{K(l+k_h)}}{T_\mathrm{sft}}, \nonumber\end{aligned}$$ We need to break apart $q_K'$ in the sum: $$\begin{aligned} q_K' /N' &=& \frac{t_K - T_\mathrm{sft}/2}{ T_\mathrm{sft}} + \frac{j \delta t}{ T_\mathrm{sft} },\end{aligned}$$ where again, because $(t_K - T_\mathrm{sft}/2)/T_\mathrm{sft}$ is always an integer, its product with the integer $l$ leaves the first term's exponential equal to unity in the sum. In $q_K' k_h/N'$, however, although $k_h$ is also an integer, we keep the term so that the effect of approximating $\bar f_h$ remains visible. We find, $$\begin{aligned} x_{K,p}(q_K' \delta t') &=& e^{\mathrm{i} 2\pi q_K \delta t \frac{k_h}{T_\mathrm{sft} }} x_K^h(q_K \delta t). \label{eq-with-broken-sum}\end{aligned}$$ This result concords with Equation \[up-to-phase-het-corr\].
Considering $\bar f_h$, $$\begin{aligned} x_{K,p}(q_K' \delta t') &=& e^{\mathrm{i} 2\pi q_K \delta t (f_h - f_r)} x_K^h(q_K \delta t),\end{aligned}$$ where an approximation is used for this Appendix, $$\begin{aligned} x_{K,p}(q_K' \delta t') &\approx& e^{\mathrm{i} 2\pi q_K \delta t f_h } x_K^h(q_K \delta t). \label{heterodyne-shift-approx}\end{aligned}$$ Generally the code will have access to $f_h$ but not $k_h$; the remainder $f_r$ is fixed later by rounding to the nearest bin (in the paper body, $f_r^*$). Next, we seek $x_K^h$ in downsampled time. Our goal is $x^h$ covering all the observing time, but we must go through $x_{K,p}$ to preserve the phase shifts between SFTs. For the single-SFT case, we could just substitute $q_K' \delta t'$ into the argument for $x_K^h$ and be done. Notice that $k_a - k_h = -N'/2$, $k_b - k_h = +N'/2 - 1$ (for an even number of samples including 0). For any point in time, comparison with Equation \[freq-shifted-eq\] shows, $$\begin{aligned} x_K^h (q_K \delta t) &=& \sum_{l=-N'/2}^{l=N'/2 - 1} e^{\mathrm{i} 2\pi j (\delta t/\delta t') \frac{l}{N'} } \frac{z_{K(l+k_h)}}{T_\mathrm{sft}}, \label{sum-for-heterodyne}\end{aligned}$$ where we could define the generally non-integer $j' = j (\delta t/\delta t')$; fortunately, $j'/N' = j/N$. The exponent is then $\exp{(\mathrm{i} 2 \pi j l /N)}$. For a detour, note that $x_K^h$ is almost fit for a Fourier transform, but it requires an index shift. Periodicity in the Fourier transform means that any substitution $j k \rightarrow j k + Q N$ for a transform with time steps $j$, frequency steps $k$, and number of samples $N$, by integer $Q$, leaves the result invariant.
For half-integer $Q$, the substitution moves positive frequencies into negative frequencies (increasing in the same direction as before) and vice versa. Choose the new index $m \equiv l + N'/2$, so $l = m - N'/2$: $$\begin{aligned} x_K^h (q_K \delta t) &=& (-1)^{-j'} \sum_{m=0}^{m=N - 1} B(0,N') \nonumber \\ && \times e^{\mathrm{i} 2\pi j m / N } \frac{z_{K(m+k_h-N'/2)}}{T_\mathrm{sft}},\label{shifted-for-fft}\end{aligned}$$ where, for illustration, $B(0, N')$ is the Boxcar function, acting as a bandpass. This $x^h_K$ is at the full sampling rate and is only theoretical. The sum term is a straightforward inverse Fourier transform, from $m$ to $j$, of the $z_K$ data from frequency bins $k_h - N'/2$ to $k_h + N'/2 -1$. In practice, the $(-1)^{-j'}$ factor (the move from positive to negative frequencies) depends on the conventions of Fast Fourier Transform programs. Care is required to ensure the right convention. For us, the interface with the *FFTW* library absorbs this factor. We will use this Fourier transform after constructing the time series.
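The index shift just described is the familiar FFT-shift; the following numpy sketch (toy length, random bins) checks the $(-1)^j$ time-domain factor against numpy's `ifft` convention:

```python
import numpy as np

# The relabelling m = l + N'/2 rotates the frequency bins by half the
# transform length; in the time domain that multiplies sample j by
# (-1)**j. A minimal check against numpy's ifft convention:
N = 8
rng = np.random.default_rng(1)
z = rng.standard_normal(N) + 1j * rng.standard_normal(N)

x = np.fft.ifft(z)                         # time series from original bins
x_shift = np.fft.ifft(np.roll(z, N // 2))  # bins rotated by N/2
j = np.arange(N)
sign_flip_ok = np.allclose(x_shift, (-1.0) ** j * x)
```

Other FFT libraries may place the rotated bins differently, which is exactly the convention dependence noted above.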
To construct the full time-series for the entire observing run, use the time-shift Equation \[heterodyne-shift-approx\] for $x_{K,p}$ and Equation \[sum-for-heterodyne\] for $x_K^h$, noting that $\bar f_h \approx f_h$: $$\begin{aligned} %\exp{(i2\pi f_h j' \delta t')} &=& e^{i2\pi f_h [t_K - T_\mathrm{sft}/2 + j'\delta t']},\\ \exp{(i2\pi f_h q_K \delta t)} %&=& e^{i2\pi f_h [t_K - T_\mathrm{sft}/2 + j\delta t]},\\ &\approx& e^{i2\pi f_h [t_K - T_\mathrm{sft}/2 + j' k_h/N']}, %\nonumber %x_{K,p}(q_K'\delta t') &=& e^{i2\pi f_h[t_K - T_\mathrm{sft}/2]} \\ %x_{K,p}(q_K' \delta t') &=& e^{i2\pi f_h[t_K - T_\mathrm{sft}/2]} \sum_{l=-N'2/2}^{l=N'/2 - 1} \\ %&& \times e^{\mathrm{i} 2\pi (j' l / N' + f_h j \delta t) } \frac{z_{K(l+k_h)}}{T_\mathrm{sft}},\nonumber\end{aligned}$$ whereby the *frequency-shifted, heterodyned, downsampled* $x_K^h(q_K' \delta t')$ has the SFT start time phase shift with respect to $x_K^h(q_K \delta t)$: $$\begin{aligned} x_K^h(q_K' \delta t') = e^{-i2\pi f_h [t_K - \frac{T_\mathrm{sft}}{2} + \frac{j' k_h}{N'}]} x_{K,p}(q'_K \delta t').\end{aligned}$$ The $j' k_h/N'$ can be absorbed into the bandpassing by a change of index, providing a quantity amenable to an FFT. 
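As a sanity check on the heterodyne phase bookkeeping, a toy numpy example (illustrative parameters only) confirms that multiplying by $\exp{(-2\pi \mathrm{i} f_h t)}$ moves a tone from $f_0$ to $f_0 - f_h$:

```python
import numpy as np

# Heterodyne sanity check (illustrative parameters): multiplying by
# exp(-2*pi*i*f_h*t) moves a tone from f0 to f0 - f_h in the spectrum.
fs = 128.0                        # Hz, sampling rate
t = np.arange(0, 8.0, 1.0 / fs)   # 8 s of data, 1/8 Hz resolution
f0, f_h = 30.0, 25.0              # tone and heterodyne frequencies
x = np.exp(2j * np.pi * f0 * t)
x_het = x * np.exp(-2j * np.pi * f_h * t)

freqs = np.fft.fftfreq(len(t), d=1.0 / fs)
peak_before = freqs[np.argmax(np.abs(np.fft.fft(x)))]
peak_after = freqs[np.argmax(np.abs(np.fft.fft(x_het)))]
```

The peak moves from 30 Hz to 5 Hz, the difference $f_0 - f_h$, which is the shift the per-SFT phase corrections must preserve coherently across the whole set of SFTs.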
Returning to Equation \[bandpassed-data-eq\] for $x_{K,p}(q_K \delta t)$, which equals $x_{K,p}(q_K' \delta t')$ at equal times $t$: $$\begin{aligned} x_K^h(q_K \delta t) &=& e^{-i2\pi f_h [t_K - \frac{T_\mathrm{sft}}{2}]} \\ &&\times\sum_{l=-N'/2}^{N'/2-1} e^{\mathrm{i} 2\pi \left(\frac{j' (l+k_h)}{N'} - \frac{j' k_h}{N'}\right)} \frac{z_{K(l+k_h)}}{T_\mathrm{sft}}.\nonumber\end{aligned}$$ Invoking Equation \[shifted-for-fft\], $$\begin{aligned} x_K^h(q_K' \delta t') &=& e^{-i2\pi f_h [t_K - \frac{T_\mathrm{sft}}{2}]} (-1)^{-j'} \\ && \times \sum_{m=0}^{N'-1} e^{\mathrm{i} 2\pi j' m / N'} \frac{z_{K(m+k_h - N'/2)} }{T_\mathrm{sft}}. \nonumber\end{aligned}$$ While for arbitrary $q_K$, $j'$ is not an integer, the downsampled time-series $q_K'$ is specifically chosen for times where it is. Then the sum is indeed an inverse discrete Fourier transform of bandpassed data (which by itself is $x_{K,p}$), but also shifted by $k_h$. Including the negative-frequency sign convention with $(-1)^{-j'}$, call this $x_{K,s}$: $$\begin{aligned} x_{K,s}(q'_K \delta t') &\equiv& (-1)^{-j'} \\ && \times \sum_{m=0}^{N'-1} e^{\mathrm{i} 2\pi j' m / N'} \frac{z_{K(m+k_h - N'/2)} }{T_\mathrm{sft}}, \nonumber\\ x_K^h(q_K' \delta t') &=& e^{-i2\pi f_h [t_K - \frac{T_\mathrm{sft}}{2}]} x_{K,s}(q_K' \delta t').\end{aligned}$$ This result for the exponent depends on the heterodyne frequency $f_h$ and the SFT mid-time $t_K$, but not the index $q_K$. In comparison with Equation \[heterodyne-shift-approx\], the index $j \delta t$ has been absorbed. So it is generally true of any time $t = q_K \delta t$, including $t = q_K' \delta t'$.
Carefully note, however, that $x_K^h$ is *still* heterodyned in the sense that a Fourier transform will yield the spectrum shifted by $f_h$. All the correction has done is shift the phase so that different SFTs are in phase. We now use this alignment to construct the complete time series from the SFTs. Since $q_K$ is distinct for the entire time series, that series of $x_{q'}^h\equiv x^h(q'\delta t')$ is the sum (neglecting windowing), $$\begin{aligned} x^h(q' \delta t') &=& \sum_{K=0}^M e^{-\mathrm{i}2\pi f_h [t_K - T_\mathrm{sft}/2]} x_{K,s}(q_K' \delta t'). \label{long-time-series-het-corrected}\end{aligned}$$ In practice, the quantity $x_{K,s}(q_K' \delta t')$ is computed from the inverse Fourier transform of a band of data centered around $f_h$ with bandwidth $f_\mathrm{band}$, so Equation \[long-time-series-het-corrected\] is the simplest construction of the complete downsampled time series. Again, any signal at frequency $f_0$ in $x$ is at $f_0 - f_h$ in $x'$. Downsampling also reduces the computational cost of interpolating into the BB frame. Interpretation and degeneracies of the statistic\[stat-interp-rho\] =============================================================== Several properties of the $\rho$ statistic that do not fit neatly into the main text should be noted. In the fully-coherent limit, just as the $\mathcal{F}$-statistic is proportional to the log-likelihood ratio of a sinusoidal waveform hypothesis compared to Gaussian noise [@Jaranowski1998], so too should the $\rho$ statistic be interpreted. In this limit, the set of output $\rho(f_0,\lambda)$ from a search constitutes a sampling of the likelihood surface. This likelihood surface is amenable to composite hypothesis testing, as well as Bayesian interpretation [@BStatPrix2009]. Locally, the ‘likelihood surface’ of $\rho$ is well-described by the metric approximation [@ScoX1CrossCorr2015PRD]. Globally, long-range degeneracies appear.
Degeneracies stem mainly from surfaces of $d\Phi = 0$ in the phase model, Equation \[phase-model-eq\]. In the $(f_0, a_p, T_\mathrm{asc})$ space, these degeneracies form a cone along the $f_0$ axis, with the vertex at the maximum $\rho$. The surface of the cone arises from the largest component of the set of sidebands from residual phase modulation when $(a_p, T_\mathrm{asc})$ are offset from their true values. This surface has been noted elsewhere in cross-section as a 2-dimensional $X$ shape, for example in the $(f_0, a_p)$ plane [@SidebandMarkovModelSuvorova2016; @MeadorsDirectedMethods2016]. Because this extended surface correlates neighboring templates, naïve division by a trials factor equal to the number of templates (Bonferroni correction) may yield an overly conservative $p$-value. The metric may also be too conservative for high values of mismatch [@WettePRD2016]. Semicoherent statistics such as $\rho$ grow proportionally to $(T_\mathrm{obs} T_\mathrm{max})^{1/4}$, and they also grow proportionally to $h_0^2$. This is in contrast to fully-coherent statistics, which take $T_\mathrm{max} = T_\mathrm{obs}$ and therefore grow proportionally to $T_\mathrm{obs}^{1/2}$. However, another class of power-based statistics, such as the ‘TwoSpect’ method [@GoetzTwoSpectMethods2011], also grows as $T_\mathrm{obs}^{1/4}$ but, differently from the semicoherent case, as $h_0^4$. GW phase coherence is not used over timescales longer than one SFT in these power-based statistics, and the final statistic depends on the power of a second FFT, over the orbital cycle. The cross-correlation method’s code must calculate $\rho$ as efficiently as possible in a sample of the likelihood surface that does not miss its peak. Viewed as a semicoherent choice of the weights matrix $\textbf{W}$, the goal is to calculate the largest number of elements of the weights matrix for the lowest cost.
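The growth rates quoted above can be made concrete in a few lines; the observing time and coherence length below are hypothetical and serve only to illustrate the scalings.

```python
# Hypothetical numbers illustrating the quoted growth scalings: at fixed
# T_max, a semicoherent statistic ~ (T_obs * T_max)**(1/4) grows by 2**(1/4)
# when T_obs doubles, while a fully-coherent statistic ~ T_obs**(1/2)
# (i.e. T_max = T_obs) grows by 2**(1/2).
T_max = 900.0                      # coherence time (s), assumed
T_obs = [1.0e7, 2.0e7]             # observing times (s), assumed

semi = [(T * T_max) ** 0.25 for T in T_obs]
coh = [T ** 0.5 for T in T_obs]

print(semi[1] / semi[0])           # 2**(1/4), about 1.19
print(coh[1] / coh[0])             # 2**(1/2), about 1.41
```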
Skipping the auto-correlation in our code comes at the cost of the statistic contribution from that element. Avoidance of auto-correlation is natural from the standpoint of the *Radiometer*, which only permits same-time correlations and has no signal model. For the radiometer, auto-correlation would contaminate the search with the noise of the detector. From the standpoint of the $\mathcal{F}$-statistic, it is conversely natural to include the auto-correlation, because it fits in the middle of an FFT. Capturing the adjacent elements of the weights matrix from the cross-correlation method with an FFT requires additional overlap of a factor of $T_\mathrm{coh}/T_\mathrm{short} \geq 3$. It should be determined whether the cost of this overlap is worth the exclusion of noise (and signal) contributions from the auto-correlation. Merits of the cross-correlation method in noisy data\[glitch-merits-of-crosscorr\] ================================================================================== The cross-correlation method, unlike the $\mathcal{F}$-statistic but like the *Radiometer* method, avoids auto-correlation by default. Consider the presence of some sine-Gaussian glitch in the data that might justify this avoidance: $$g(t) = A e^{-(t-t_0)^2/(2\sigma^2)} \sin{(\omega t - \phi_0)}.$$ In the Fourier domain in which the cross-correlation method computes its statistic, the Fourier transform of $g(t)$, $\tilde{g}(f)$, is the convolution of the Fourier transforms of the Gaussian and sinusoidal terms, which are respectively also Gaussian and a Dirac delta function. The glitch does contribute noise in a Gaussian frequency distribution around the frequency $\omega$, with amplitude proportional to $A$. With the auto-correlation removed, such glitches never correlate with themselves. Assuming that $\omega$ and $t_0$ are randomly distributed, glitches at different times are also unlikely to correlate with one another.
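The spectral localization of such a glitch is easy to verify numerically; the glitch parameters below are hypothetical.

```python
import numpy as np

# Sketch (hypothetical glitch parameters): the magnitude spectrum of a
# sine-Gaussian is a Gaussian of width ~ 1/(2*pi*sigma) centred on its
# oscillation frequency, so its power stays in a narrow band.
fs, T = 4096.0, 4.0
t = np.arange(int(fs * T)) / fs
A, t0, sigma, phi0 = 1.0, 2.0, 0.05, 0.3
f_g = 300.0                                  # omega / (2*pi)
g = A * np.exp(-(t - t0) ** 2 / (2 * sigma ** 2)) \
      * np.sin(2 * np.pi * f_g * t - phi0)

G = np.fft.rfft(g)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
f_peak = freqs[np.argmax(np.abs(G))]

assert abs(f_peak - f_g) < 1.0               # peak sits at the glitch frequency
print("spectral peak near", f_peak, "Hz; width ~", 1 / (2 * np.pi * sigma), "Hz")
```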
Therefore, the noise background of the cross-correlation method could conceivably be lower. Empirically, values of $\rho$ and $\mathcal{F}$ appear similar for comparable noise and signal strength. Whether the theoretically lower background of the cross-correlation method holds in real data is an important test. If the two statistics recover signals comparably well for the same coherent integration time, then whichever calculates a given coherence time most efficiently is best. This paper has established a path between the two methods.
--- abstract: 'A method is presented in which matrix elements for some processes are calculated recursively. This recursive calculational technique is based on the method of basis spinors.' --- [**Recursive technique for evaluation of Feynman diagrams** ]{}\ **V. V. Andreev** [^1]\ Gomel State University, Physics Department,\ Gomel, Belarus Introduction ============ The possibility of investigating higher and higher energies at present and future colliders entails the necessity of predicting and calculating increasingly complicated processes with high precision. When the number of final particles is large, it becomes hard even to calculate the corresponding tree-level Feynman diagrams, and the final expression for the cross section is often an intricate function of several variables, inadequate for practical use. This has made it necessary to abandon the standard methods of perturbative calculation and to use new, more effective ones instead. The standard method to obtain a cross section with fermions in perturbative quantum field theories is to reduce the squared amplitude to a trace of products of $\gamma$-matrices. An alternative approach is to calculate the Feynman amplitudes directly. The idea of calculating amplitudes directly has a long history. In 1949 it was suggested in Ref.[@Powell] to calculate a matrix element by means of the explicit form of $\gamma$-matrices and Dirac spinors (a more detailed bibliography on the problem can be found in Refs. [@Galynski; @Bondarev]). Various methods of calculating reaction amplitudes with fermions have been proposed and successfully applied in recent years. In general, the methods of matrix element calculation can be classified into two basic types. The first type includes methods of direct numerical calculation of the Feynman diagrams. The second type includes methods of analytical calculation of amplitudes with subsequent numerical calculation of cross sections.
Notice that there are also methods of calculating cross sections without Feynman diagrams [@Caravaglios; @Helac]. Analytical methods of calculating the Feynman amplitudes can be divided into two basic groups. The first group involves the analytical methods that reduce the calculation of an $S$-matrix element to a trace calculation. The reduction of a matrix element to a calculation of traces of products of $\gamma$-matrices underlies a large number of methods (see Refs.[@Galynski; @Bondarev] and Refs. [@Bellomo]-[@Vega] etc.). In these methods the matrix element is expressed as an algebraic function of scalar products of four-vectors and of their contractions with the Levi-Civita tensor. The second group involves the analytical methods that make practically no use of traces of products of $\gamma$-matrices. The most famous among them is the method of the CALCUL group [@Berendz]-[@Giele], which was used for calculations of reactions with massless fermions. The basic idea behind the CALCUL method is to express the $S$-matrix element through spinor products of bispinors and to use the fact that the expressions $\bar{u}_{\lambda}\left(p\right)u_{-\lambda}\left(k\right)$ are simple scalar functions of the momenta $p,k$ and the helicity $\lambda$. However, the operation of matrix element reduction is not as simple as the calculation of traces. It requires the use of Chisholm spinor identities (see [@Kleiss]). It also requires representing the contraction $ \slash{p} = p^\mu \gamma_\mu$ with a four-momentum $p^\mu$, as well as the polarization vectors of external photons, through bispinors. For massive gauge bosons additional mathematical constructions are needed [@Kleiss]. There are generalizations of the CALCUL method to massive Dirac particles, both for special choices of the fermion polarization ([@Kleiss],[@Berendz3]-[@Hagiwara]) and for an arbitrary fermion polarization [@Gongora; @Andreev].
We refer to the fermion polarization states of Refs.[@Berendz3; @Kleiss] as Berends-Daverveldt-Kleiss-Stirling or $BDKS$ states. Notice that for massless fermions we can obtain the amplitude in terms of the scalar products of four-momentum vectors and current-like constructions of the type $J^{\mu} \sim \bar{u}_{\lambda}\left(p\right) \gamma^{\mu} u_ {\lambda} \left (k\right)$. The components of $J^{\mu}$ are calculated by means of the momentum components of $p,k$ (the so-called $E$-vector formalism; see Ref. [@Papadopoulos]). For $BDKS$ states Ref.[@Ballestrero] presents an iterative calculational scheme that reduces the expression for the fermion chain $\bar{u}_{\lambda} \left(p\right)Q\; u_{\lambda} \left(k\right) $ to a combination of the spinor products $\bar{u}_{\lambda}\left(p\right)u_{\lambda}\left(k\right)$ and (or) $\bar{u}_{\lambda}\left(p\right)\gamma^{\mu}\left(g_V+g_A \gamma_5 \right) u_{\lambda} \left(k\right)$ by inserting a complete set of non-physical bispinor states (with $p^2<0$) into the fermion chain. In all the above-mentioned methods the spinor products and current constructions were calculated by means of traces and then used as scalar functions of the momenta and helicities (similarly to scalar products of four-vectors). Owing to their easy implementation, methods of matrix element calculation have become the basis of modern programs for evaluating cross sections of various processes. Examples of such programs are the generators `HELAS` [@Murayama], `GRACE` [@Grace], `MadGraph` [@MadGraph], `O’MEGA` [@omega], `FeynArts/FormCalc4` [@FeynArts; @Hahn] and `CompHEP` [@Comphep] (matrix element calculation is planned [@Ilyin]). There is a large number of more specialized programs, such as `AMEGIC++` [@Amegic], `ALPGEN` [@Alpgen], `WPHACT` [@wphact], `LUSIFER` [@lusifer], and others. A detailed list of such programs can be found in Ref.[@Harlander].
In this paper we describe an approach to Feynman diagrams which is based on the use of an isotropic tetrad in Minkowski space and the basis spinors connected with it (see [@Andreev3b],[@Andreev1]). Here we use neither an explicit form of the Dirac spinors and $\gamma$-matrices nor the operation of trace calculation. The method is based on the active use of massless basis spinors connected with the isotropic tetrad vectors, and we will call it the Method of Basis Spinors (`MBS`). In this method, as in the trace methods, the matrix element of a Feynman amplitude is reduced to a combination of scalar products of momenta and polarization vectors. Unlike the spinor techniques in their different variants [@Berendz]-[@Kleiss1], this method uses neither Chisholm identities nor the representation of the contraction $\slash{p}$ with a four-vector $p$, or of the polarization vectors of bosons, through bispinors. Unlike the `WvD` spinor technique [@Giele],[@Dittmaier], `MBS` does not use special Feynman rules for calculating the matrix elements. We propose to use recursion relations as a technique to evaluate the Feynman amplitudes of processes. The advantage of the recursive technique is that the calculation of an $(n+1)$-particle matrix element can reuse the calculation of the $n$-particle process. This is an asset for both analytic and numerical evaluation. Method of Basis Spinors ======================= When evaluating a Feynman amplitude involving fermions, the amplitude is expressed as a sum of terms of the form $$\begin{aligned} && \mathcal{M}_{\lambda _p,\lambda _k}\left(p,s_p\;,\; k,\;s_k\; ;Q\right)= \nonumber\\ && =\mathcal{M}_{\lambda _p,\lambda _k}\left(\left[p\right], \left[k\right] ;Q\right)=\bar{u}_{\lambda _p}\left( p,s_p\right) Q ~u_{\lambda _k}\left(k,s_k\right)\;, \label{anpic1}\end{aligned}$$ where $\lambda_{p}$ and $\lambda_{k}$ are the polarizations of the external particles with four-momenta $p,k$ and arbitrary polarization vectors $s_p,s_k$.
The operator $Q$ is a sum of products of Dirac $\gamma$-matrices. The matrix element (\[anpic1\]) with Dirac spinors is a scalar function. Thus, it should be expressible in terms of scalar functions formed from the spin and momentum four-vectors of the fermions, including $p, s_p, k, s_k$ and the operator $Q$. We will now show that in our approach this matrix element (\[anpic1\]) can be represented as a linear combination of products of lower-order matrix elements. Isotropic tetrad ---------------- We use the metric and matrix conventions of the book by Bjorken and Drell [@Bjorken1], i.e. the Levi-Civita tensor is defined by $\epsilon_{0 1 2 3}=1$ and the matrix $ \gamma_5=i \gamma ^{0} \gamma ^{1} \gamma ^{2} \gamma ^{3}$. Let us introduce the orthonormal four-vector basis in Minkowski space which satisfies the relations: $$\label{pic1} l_0^{\mu} \cdot l_0^{\nu} -l_1^{\mu} \cdot l_1^{\nu} -l_2^{\mu} \cdot l_2^{\nu}-l_3^{\mu} \cdot l_3^{\nu} = g^{\mu \nu}, ~~\left(l_{A} \cdot l_{B}\right)=g_{A B},$$ where $g$ is the Lorentz metric tensor. With the help of the basis vectors $l_{A}\left(A=0,1,2,3\right)$ we can define lightlike vectors, which form an isotropic tetrad in Minkowski space (see [@Borodulin]) $$b_\rho =\frac{l_0+\rho l_3}{2}\;,\; n_\lambda =\frac{\lambda\;l_1+\mathrm{i} l_2}{2}\;, ( \rho ,\lambda =\pm 1)\;. \label{pic2}$$ From Eqs. (\[pic1\]), (\[pic2\]) it follows that $$(b_\rho \cdot b_{-\lambda })=\frac{\delta _{\lambda,\;\rho }}{2}~,~~(n_\lambda \cdot n_{-\rho })=\frac{\delta _{\lambda,\;\rho }}{2}~,~~\left(b_\rho \cdot n_\lambda \right) =0\;, \label{pic3}$$ $$g^{\mu \nu}=2\sum_{\lambda =-1}^1\left[ b_\lambda ^\mu \cdot b_{-\lambda}^\nu +n_\lambda ^\mu \cdot n_{-\lambda }^\nu \right]\;.
\label{pic4}$$ It is always possible to construct the basis of an isotropic tetrad (\[pic2\]) from numerical four-vectors $$\label{pic4a} \left(b_{\pm 1}\right)_{\mu}=\left(1/2\right)\left\{1, 0, 0, \pm 1\right\}\;,\; \left(n_{\pm 1}\right)_{\mu}=\left(1/2\right)\left\{0, \pm 1, \mathrm{i}, 0\right\}$$ or by means of the physical vectors of the reaction. For practical applications it is convenient to introduce the additional four-vectors $$\tilde{b}_\rho =2\; b_\rho ,\; \tilde{n}_\lambda=2\; n_{\lambda}\;. \label{add1}$$ By means of the isotropic tetrad vectors we can determine the polarization vectors of massless (and also massive) vector bosons. For photons with momentum $k^\mu$ and helicity $\lambda=\pm 1$ we use the following definition of the polarizations in the axial gauge $$\begin{aligned} && \varepsilon _{\lambda }\left( k\right) =\frac{\left(k \cdot \tilde{n}_{-\lambda}\right) \tilde{b}_{-1}}{\sqrt{2}\;\left(k \cdot \tilde{b}_{-1}\right)}-\frac{\tilde{n}_{-\lambda}}{\sqrt{2}} \label{picc1b}\end{aligned}$$ provided that the four-vectors $k, b_1, b_{-1}$ are linearly independent. Massless basis spinors {#sec:level2} ---------------------- By means of the isotropic tetrad vectors (\[pic2\]) we define [*massless basis spinors*]{} $u_{\lambda}\left(b_{-1}\right)$ and $u_{\lambda}\left(b_{1}\right)$ by $$\slash{b}_{-1} u_\lambda \left( b_{-1}\right) =0\;, ~~u_\lambda \left(b_{1}\right) \equiv \slash{b}_{1}u_{-\lambda}\left(b_{-1}\right)\;,\label{pic7}$$ $$\omega _\lambda u_\lambda \left(b_{\pm 1}\right)= u_\lambda \left(b_{\pm 1}\right) \label{pic8}$$ with the matrix $\omega _{\lambda} = 1/2 \hskip 1pt \left( 1+\lambda \gamma_5\right)$ and the normalization condition $$\label{pic8a} u_\lambda \left( b_{\pm 1}\right) \bar{u}_\lambda \left(b_{\pm 1}\right) =\omega _\lambda \slash{b}_{\pm 1}.$$ The relative phase between basis spinors with different helicity is fixed by $$\slash{n}_\lambda u_{-\nu }\left( b_{-1}\right) =\delta_{\lambda, \nu} u_\lambda \left( b_{-1}\right).
\label{pic9}$$ An important property of the basis spinors (\[pic7\]) is the completeness relation $$\sum_{\lambda,A=-1}^{1} u_\lambda \left( b_A\right) \bar{u}_{-\lambda} \left(b_{-A}\right)= I\;, \label{pic10}$$ which follows from Eqs.(\[pic7\]), (\[pic9\]). Thus, an arbitrary Dirac spinor can be decomposed in terms of the basis spinors $u_{\lambda}\left(b_{A} \right)$. Dirac spinors and basis spinors ------------------------------- An arbitrary Dirac spinor can be expressed through the basis spinors (\[pic7\]) with the help of the projection operators $u_{\lambda_{p}} \left(p,s_p\right)\bar{u}_{\lambda_{p}} \left(p,s_p\right)$. The Dirac spinors $w^A_\lambda \left(p,s_p\right)$ for a massive fermion and antifermion with four-momentum $p\;( p^2=m_p^2 )$, arbitrary polarization vector $s_p$ and spin number $\lambda= \pm 1$ can be obtained from the basis spinors by means of the equation: $$\begin{aligned} && w^A_\lambda \left(p,s_p\right) =\frac{\left(\slash{p}+A m_p\right) \left(1+ \lambda \gamma_{5}\slash{s}_{p}\right)}{2\sqrt{\left(b_{-1}\cdot \left( p+m_p s_p \right)\right)}} u_{-A \times \lambda} \left(b_{-1} \right)\nonumber\\ && =\frac{ \left[ \slash{\xi}^{p}_{1}+A\; \slash{\xi}^{p}_{-1}\slash{\xi}^{p}_{1}/ m_p \right] {u}_{-A\times \lambda_{p}} \left(b_{-1}\right)}{\sqrt{\left( \tilde{b}_{-1}\cdot \xi^{p}_{1}\right)}}\nonumber\\ && =T_{\lambda}\left(p,s_p\right) u_{-A \times \lambda} \left(b_{-1} \right)\;. \label{anpic8}\end{aligned}$$ The notation $w^{A}_{\lambda _p}\left(p,s_p\right)$ stands for either $u_{\lambda _p}\left( p,s_p\right)$ (bispinor of a fermion; $A=+1$) or $\upsilon_{\lambda _p}\left( p,s_p\right)$ (bispinor of an antifermion; $A=-1$). Here we have introduced the abbreviations $$\xi _{\pm 1}^p =\frac{p \pm m_p s_p}{2}\;.
\label{anpic11}$$ The bispinors $u_\lambda \left( p,s_p\right)$ and $\upsilon_\lambda \left( p,s_p\right) $ satisfy the Dirac equations and spin conditions for a massive fermion and antifermion $$\begin{aligned} \slash{p} \hskip 2pt u_\lambda \left(p,s_p\right)& =&m_p \hskip 2pt u_{\lambda }\left(p,s_p\right), \hskip 14pt \slash{p} \hskip 2pt \upsilon_\lambda \left(p,s_p\right) = - m_p \hskip 2pt \upsilon_{\lambda }\left(p,s_p\right) , \nonumber\\ \gamma_5 \slash{s}_{p} \; u_\lambda \left(p,s_p\right) &=& \hskip 2pt \lambda \; u_{\lambda }\left(p,s_p\right),~ \gamma_5 \slash{s}_{p} \hskip 2pt \upsilon_\lambda \left(p,s_p\right) = \lambda \hskip 2pt \upsilon_{\lambda }\left(p,s_p\right). \label{anpic8a}\end{aligned}$$ We also find that the Dirac spinors of fermions and antifermions are related by $$\upsilon_\lambda \left( p,s_p\right)= -\lambda \gamma _5\;u_{-\lambda }\left( p,s_p\right), \hskip 7pt \bar{\upsilon}_\lambda \left( p,s_p\right) =\bar{u}_{-\lambda }\left( p,s_p\right) \lambda \hskip 2pt \gamma_5\;. \label{stp31}$$ Let us consider a particular case of Eq.(\[anpic8\]): the $BDKS$ polarization states of fermions, as they are the most frequently used in calculations of matrix elements. The polarization vector of the $BDKS$ states is defined as follows [@Kleiss; @Berendz3; @Andreev; @Ballestrero]: $$s_{KS} \equiv s_p= \frac{p}{m_p}-\frac{m_p\;b_{-1}}{\left(p \cdot b_{-1}\right)}\; . \label{stp33aa}$$ Performing the calculation of Eqs.(\[pic7\]),(\[anpic8\]) with Eq.(\[stp33aa\]), we find a simple result for the massive Dirac spinor [@Kleiss; @Ballestrero; @Andreev]: $$\label{stp34aa} w_{\lambda}^{A} \left(p,s_{KS}\right) = \frac{\left(\slash{p}+A\; m_p \right)\;u_{-A\times\lambda }\left(b_{-1}\right)}{\sqrt{\left(p \cdot \tilde{b}_{-1}\right)}}$$ Notice that in Ref.[@Ballestrero] the relation between the Dirac spinor of a fermion and that of an antifermion differs from Eq.(\[stp31\]).
The Dirac spinor $u_\lambda \left(p\right)$ of a massless fermion with momentum $p$ ($p^2=0, \left(p \cdot b_{-1}\right) \not = 0$) and helicity $\lambda$ is defined by (see, for example, Ref.[@Kleiss]) $$u_\lambda \left(p\right) = \frac{\slash{p}\; u_{-\lambda }\left(b_{-1}\right)}{\sqrt{\left(p \cdot \tilde{b}_{-1}\right) }}\; . \label{anpic6}$$ Main equations of `MBS` ----------------------- The spinor products of the massless basis spinors (\[pic7\]) are given by $$\bar{u}_\lambda \left( b_C\right) u_{\rho}\left( b_A\right) = \delta_{\lambda, -\rho}\; \delta_{C, -A},\;~\;C,A=\pm 1,\; ~\lambda,\rho=\pm 1\;. \label{stp18}$$ With the help of Eq.(\[pic4\]) the Dirac matrix $\gamma^\mu $ can be rewritten as $$\gamma ^\mu =\sum_{\lambda =-1}^1\left[\slash{b}_{-\lambda} \tilde{b}_\lambda ^\mu +\slash{n}_{-\lambda } \tilde{n}_\lambda ^\mu \right]\;. \label{pic5}$$ Using Eqs.(\[pic7\]),(\[pic8\]) and (\[pic9\]) we obtain $$\gamma^\mu u_\lambda \left( b_A\right) =\tilde{b}_A^\mu \;u_{-\lambda }\left(b_{-A}\right) -A \;{\tilde{n}}_{-A \times \lambda }^\mu \; u_{-\lambda }\left(b_A\right) \label{pic11}$$ and $$\label{pic13a} \gamma_{5}\; u_\rho \left(b_A\right)= \rho \;u_\rho \left(b_A\right)\;.$$ Eqs.(\[pic11\])-(\[pic13a\]) and Eq.(\[stp18\]) underlie the method of basis spinors (`MBS`) [@Andreev3b; @Andreev1].
By means of Eq.(\[pic11\]) one finds that the product of two $\gamma$-matrices can be represented as $$\gamma ^\mu \gamma ^\nu u_\lambda \left( b_A\right)=Y_{A,\;\lambda }^{\mu,\; \nu}\;u_{\lambda }\left(b_{A}\right) -A \; {X}_{A,\;\lambda }^{\mu, \nu}\; u_{\lambda }\left(b_{-A}\right)\;, \label{pic14a}$$ where $X^{\mu, \nu}, Y^{\mu, \nu}$ are Lorentz tensors: $$\begin{aligned} && X_{A,\;\lambda}^{\mu,\; \nu}=\tilde{b}_{A}^{\mu}\cdot \tilde{n}_{-A\times \lambda}^{\nu}- \tilde{n}_{-A\times \lambda}^{\mu}\cdot \tilde{b}_{A}^{\nu}\;, \label{anpic12new1}\\ && Y_{A,\;\lambda}^{\mu,\; \nu}=\tilde{b}_{-A}^{\mu} \cdot \tilde{b}_{A}^{\nu}+ \tilde{n}_{A\times \lambda}^{\mu} \cdot \tilde{n}_{-A\times\lambda}^{\nu}\;. \label{anpic12new2}\end{aligned}$$ The product $\mathcal{S}^{n}=\gamma ^{\mu_{1}} \gamma ^{\mu_{2}}\ldots \gamma ^{\mu_{n}}$ can be written as $$\begin{aligned} &&\mathcal{S}^{n}\;u_\lambda \left( b_A\right) \nonumber\\ && =\mathcal{B}_{A,\;\lambda}^{\left\{\mu_{1},\ldots \mu_{n}\right\}}\;u_{\lambda_{n}^{\prime} }\left(b_{A_{n}^{\prime}}\right) -A\; \mathcal{N}_{A,\;\lambda}^{\left\{\mu_{1},\ldots \mu_{n}\right\}}\; u_{\lambda_{n}^{\prime} }\left(b_{-A_{n}^{\prime}}\right)\;, \label{pic14ab}\end{aligned}$$ where $$\lambda_{n}^{\prime}=\left(-1\right)^{n}\lambda\;,\; A_{n}^{\prime}=\left(-1\right)^{n} A \label{al}$$ and $\mathcal{B}_{A,\;\lambda}^{\left\{\mu_{1},\ldots \mu_{n}\right\}}$, $\mathcal{N}_{A,\;\lambda}^{\left\{\mu_{1},\ldots \mu_{n}\right\}}$ are Lorentz tensors related to the isotropic tetrad vectors (\[add1\]).
From Eqs.(\[pic11\]),(\[pic14a\]) we have, in particular: $$\begin{aligned} && \mathcal{B}_{A,\;\lambda}^{\left\{\mu_{1}\right\}}=\tilde{b}_{A}^{\mu_{1}},\; \mathcal{N}_{A,\;\lambda}^{\left\{\mu_{1}\right\}}=\tilde{n}_{-A\times\lambda}^{\mu_{1}}\;, \label{bn1}\\ && \mathcal{B}_{A,\;\lambda}^{\left\{\mu_{1},\; \mu_{2}\right\}}=Y_{A,\;\lambda}^{\mu_{1},\; \mu_{2}} \;,\; \mathcal{N}_{A,\;\lambda}^{\left\{\mu_{1},\;\mu_{2}\right\}}=X_{A,\;\lambda}^{\mu_{1},\; \mu_{2}} \;. \label{bn2}\end{aligned}$$ Recursion relations for the matrix elements =========================================== The **basic idea of the Method of Basis Spinors** is to replace the Dirac spinors in Eq.(\[anpic1\]) by massless basis spinors (\[pic7\]) and to use Eq.(\[stp18\]) and Eqs.(\[pic11\])-(\[pic13a\]) to calculate the matrix element (\[anpic1\]) in terms of the scalar functions $\mathcal{B},\mathcal{N}$. With the help of Eq.(\[anpic8\]) the matrix element (\[anpic1\]) transforms into a fermion “string” with massless basis spinors $u_{\lambda}\left(b_{A}\right)$, i.e. $$\begin{aligned} && \mathcal{M}_{\lambda _p,\lambda _k}\left(p,s_p\;,\; k,s_k\; ;Q\right)= \nonumber\\ && =\bar{u}_{-\lambda _p} \left(b_{-1}\right) T_{\lambda_{p}}\left(p,s_p\right)Q\; T_{\lambda_{k}}\left(k,s_k\right)u_{-\lambda _k}\left(b_{-1}\right)=\nonumber\\ && =\mathcal{M}_{-\lambda _p,-\lambda _k}\left(b_{-1},\;b_{-1}\; ;T_{\lambda_{p}}\left(p,s_p\right)Q\; T_{\lambda_{k}}\left(k,s_k\right)\right)\;, \label{anpic9a}\end{aligned}$$ where the operator $T_{\lambda}$ is defined in Eq.(\[anpic8\]). Let us consider special variants of the matrix element. Basic matrix element -------------------- Let us consider an important type of matrix element (\[anpic1\]), when $p=b_{-C}$ and $k=b_A$, i.e. $$\begin{aligned} && \mathcal{M}_{-\sigma,\rho}\left(b_{-C}\;,\;b_A\; ; Q\right)\equiv \Gamma^{C,\;A}_{\sigma ,\rho}\left[Q\right] \nonumber\\ && = \bar{u}_{-\sigma} \left( b_{-C}\right) Q \;u_\rho \left( b_A\right)\;.
\label{pic13}\end{aligned}$$ We call this type of matrix element as **basic matrix element**. Note that, the matrix element (\[anpic1\]) is a particular case of basic matrix element i.e. $$\mathcal{M}_{\lambda _p,\lambda _k}\left(p,s_p,\; k,s_k\; ;Q\right)=\Gamma^{1,-1}_{\lambda _p ,-\lambda _k}\left[T_{\lambda_{p}}\left(p,s_p\right)Q\; T_{\lambda_{k}}\left(k,s_k\right)\right] \;.\label{anpic9ab}$$ With the help of the completeness relation (\[pic10\]) we can obtain the recursion formula for $\Gamma^{C,A}_{ \sigma ,\rho}\left[Q_1 Q_2\right]$ $$\label{pic14} \Gamma^{C,\;A}_{\sigma ,\rho}\left[Q_1 Q_2\right]= \sum_{D,\lambda=-1}^{1}\Gamma^{C,\;D}_{\sigma, \lambda }\left[Q_1\right] \Gamma^{D,\;A}_{\lambda, \rho}\left[Q_2\right].$$ By means of the relations (\[pic11\]),(\[pic14a\]),(\[pic14ab\]) and Eq.(\[stp18\]) it is easy to calculate $\Gamma^{C,A}_{\sigma, \rho}$ in terms of the isotropic tetrad vectors. For instance, $$\begin{aligned} && \Gamma^{C,\;A}_{\sigma,\;\rho }\left[\gamma^{\mu}\right]=\delta_{\sigma,\; -\rho} \left(\delta_{C,\;-A}\; \tilde{b}_{A}^{\mu}-A\;\delta_{C,A} \; \tilde{n}_{-A \times \rho }^{\mu}\right)\;, \label{pic16}\\ && \Gamma^{C,\;A}_{\sigma,\; \rho}\left[\gamma^\mu\;\gamma^\nu\right]= \delta_{\sigma , \rho} \left(\delta_{C,A} Y_{A,\;\rho }^{\mu,\; \nu}-A\;\delta_{C,-A} {X}_{A,\;\rho }^{\mu,\; \nu} \right)\; \label{pic16a}\end{aligned}$$ and $$\begin{aligned} && \Gamma^{C,\;A}_{\sigma,\;\rho}\left[\gamma ^{\mu_{1}} \gamma ^{\mu_{2}}\ldots \gamma ^{\mu_{n}}\right]= \Gamma^{C,\;A}_{\sigma,\; \rho}\left[\mathcal{S}^{n}\right] \nonumber\\ && =\delta_{\sigma,\;\rho_{n}^{\prime} } \left(\delta_{C,\; A_{n}^{\prime}}\; \mathcal{B}_{A, \;\rho}^{\left\{\mu_{1},\ldots \mu_{n}\right\}} -A\;\delta_{C, -A_{n}^{\prime}}\; \mathcal{N}_{A,\;\rho}^{\left\{\mu_{1},\ldots \mu_{n}\right\}}\right)\;. \label{pic16ab}\end{aligned}$$ With the help of the Eqs. 
(\[pic14\]) and (\[pic16ab\]) we obtain recursion relations for $\mathcal{B}_{A, \lambda}^{\left\{\mu_{1},\ldots \mu_{n}\right\}}$ and $\mathcal{N}_{A, \lambda}^{\left\{\mu_{1},\ldots \mu_{n}\right\}}$ $$\begin{aligned} && \mathcal{B}_{A,\; \lambda}^{\left\{\mu_{1},\ldots \mu_{n}\right\}}=\mathcal{B}_{A_{n-k}^{\prime},\; \lambda_{n-k}^{\prime}}^{\left\{\mu_{1},\ldots \mu_{k}\right\}}\mathcal{B}_{A, \;\lambda}^{\left\{\mu_{k+1},\ldots \mu_{n}\right\}}+\left(-1\right)^{n-k+1}\mathcal{N}_{-A_{n-k}^{\prime},\; \lambda_{n-k}^{\prime}}^{\left\{\mu_{1},\ldots \mu_{k}\right\}}\mathcal{N}_{A,\; \lambda}^{\left\{\mu_{k+1},\ldots \mu_{n}\right\}}\;, \label{recurbn1}\\ && \mathcal{N}_{A,\; \lambda}^{\left\{\mu_{1},\ldots \mu_{n}\right\}}=\mathcal{B}_{-A_{n-k}^{\prime},\; \lambda_{n-k}^{\prime}}^{\left\{\mu_{1},\ldots \mu_{k}\right\}}\mathcal{N}_{A,\; \lambda}^{\left\{\mu_{k+1},\ldots \mu_{n}\right\}}+\left(-1\right)^{n-k}\mathcal{N}_{A_{n-k}^{\prime},\; \lambda_{n-k}^{\prime}}^{\left\{\mu_{1},\ldots \mu_{k}\right\}}\mathcal{B}_{A,\; \lambda}^{\left\{\mu_{k+1},\ldots \mu_{n}\right\}} \;.\label{recurbn2}\end{aligned}$$ Starting from Eq.(\[bn1\]) or Eq.(\[bn2\]), the recursion relations (\[recurbn1\])-(\[recurbn2\]) allow one to reduce the scalar functions $\mathcal{B},\mathcal{N}$ to expressions in terms of the isotropic tetrad vectors.
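Because the basic matrix elements (\[pic16\])-(\[pic16ab\]) and the recursion (\[pic14\]) involve only the tetrad vectors, they can be checked numerically without any explicit spinors. In the sketch below (the packing of index pairs $(C,\sigma)$ into matrix indices is our own convention, not from the paper), $\Gamma^{C,A}_{\sigma,\rho}\left[\gamma^{\mu}\right]$ is stored as a $4\times 4$ transfer matrix; the recursion (\[pic14\]) then becomes an ordinary matrix product, and for two $\gamma$-matrices it reproduces the closed form (\[pic16a\]).

```python
import numpy as np
from itertools import product

# Numerical sketch (our own index packing): Gamma[gamma^mu] of Eq. (pic16)
# as a 4x4 transfer matrix over the packed pair (C, sigma); the recursion
# (pic14) is then a matrix product, checked against Eq. (pic16a) with the
# tensors X, Y of Eqs. (anpic12new1)-(anpic12new2).
bt = {+1: np.array([1, 0, 0, +1], dtype=complex),   # \tilde{b}_{\pm 1}
      -1: np.array([1, 0, 0, -1], dtype=complex)}
nt = {+1: np.array([0, +1, 1j, 0]),                 # \tilde{n}_{\pm 1}
      -1: np.array([0, -1, 1j, 0])}
signs = (+1, -1)
idx = {(C, s): i for i, (C, s) in enumerate(product(signs, signs))}

def G(mu):
    """Gamma^{C,A}_{sigma,rho}[gamma^mu], Eq. (pic16), rows (C,sigma), cols (A,rho)."""
    out = np.zeros((4, 4), dtype=complex)
    for (C, s), i in idx.items():
        for (A, r), j in idx.items():
            if s == -r:
                out[i, j] = (C == -A) * bt[A][mu] - A * (C == A) * nt[-A * r][mu]
    return out

for mu, nu in product(range(4), repeat=2):
    prod2 = G(mu) @ G(nu)                           # recursion (pic14)
    for (C, s), i in idx.items():
        for (A, r), j in idx.items():
            Y = bt[-A][mu] * bt[A][nu] + nt[A * r][mu] * nt[-A * r][nu]
            X = bt[A][mu] * nt[-A * r][nu] - nt[-A * r][mu] * bt[A][nu]
            direct = (s == r) * ((C == A) * Y - A * (C == -A) * X)  # (pic16a)
            assert np.isclose(prod2[i, j], direct)
print("recursion (pic14) reproduces Eq. (pic16a) for all index choices")
```

A longer string $\gamma^{\mu_1}\cdots\gamma^{\mu_n}$ is evaluated the same way, as a product of $n$ such matrices.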
For example, the constructions $ q_1^{\mu_{1}} q_2^{\mu_{2}} q_3^{\mu_{3}} \mathcal{B}_{A,\; \lambda}^{\left\{\mu_{1},\;\mu_{2},\;\mu_{3}\right\}}=\mathcal{B}_{A,\; \lambda}^{\left\{q_1,\;q_2,\;q_3\right\}}$ and $q_1^{\mu_{1}} q_2^{\mu_{2}} q_3^{\mu_{3}} \mathcal{N}_{A, \lambda}^{\left\{\mu_{1},\;\mu_{2},\;\mu_{3}\right\}}=\mathcal{N}_{A,\; \lambda}^{\left\{q_1, \;q_2,\; q_3\right\}}$ can be represented as a combination of the scalar functions $X,\;Y$ and scalar products of the isotropic tetrad vectors: $$\begin{aligned} && q_1^{\mu_{1}} q_2^{\mu_{2}} q_3^{\mu_{3}} \mathcal{B}_{A,\; \lambda}^{\left\{\mu_{1},\;\mu_{2},\;\mu_{3}\right\}}=\mathcal{B}_{A,\; \lambda}^{\left\{q_1,\; q_2,\; q_3\right\}} \nonumber\\ &&=\left(q_3 \cdot \tilde{b}_{A}\right)Y_{-A,\;-\lambda}^{q_{1},\; q_{2}}+ \left(q_3 \cdot \tilde{n}_{-A \times \lambda}\right)X_{A,\;-\lambda}^{q_{1},\; q_{2}}\;, \label{bn3}\end{aligned}$$ $$\begin{aligned} && q_1^{\mu_{1}} q_2^{\mu_{2}} q_3^{\mu_{3}} \mathcal{N}_{A,\; \lambda}^{\left\{\mu_{1},\;\mu_{2},\;\mu_{3}\right\}}=\mathcal{N}_{A,\; \lambda}^{\left\{q_1,\; q_2,\; q_3\right\}} \nonumber\\ &&=\left(q_3 \cdot \tilde{n}_{-A \times\lambda}\right)Y_{A,\;-\lambda}^{q_{1},\; q_{2}}- \left(q_3 \cdot \tilde{b}_{A}\right)X_{-A,\;-\lambda}^{q_{1},\; q_{2}}\;. \label{bn4}\end{aligned}$$ Decomposition coefficients -------------------------- The next type of lower-order matrix element (\[anpic1\]) is $$\begin{aligned} && \mathcal{M}_{\rho,\;\lambda _p}\left(b_{A}\;, \left[p\right];I\right)\equiv \mathcal{M}_{\rho,\;\lambda _p}\left(b_{A}\;,\left[p\right]\right)= \nonumber\\ &&= \bar{u}_{\rho} \left(b_{A}\right)u_{\lambda_{p}} \left(p,s_p\right)\;. \label{anpic3}\end{aligned}$$ The matrix element (\[anpic3\]) is determined by the decomposition coefficients of an arbitrary Dirac spinor $u_{\lambda_{p}} \left(p,s_p\right)$ on basis spinors (\[pic7\]). 
With the help of Eqs.(\[anpic8\]),(\[anpic6\]) the matrix element (\[anpic3\]) transforms to $$\begin{aligned} && \mathcal{M}_{\rho,\;\lambda _p}\left(b_A,\left[p\right]\right)= \nonumber\\ && =\frac{\bar{u}_{\rho} \left(b_{A}\right)\left[ \slash{\xi}^{p}_{1}+ \slash{\xi}^{p}_{-1}\slash{\xi}^{p}_{1}/ m_p \right] u_{-\lambda_{p}} \left(b_{-1}\right)}{\sqrt{\left( \tilde{b}_{-1}\cdot \xi^{p}_{1}\right)}} \label{anpic1aa}\end{aligned}$$ for massive fermions with an arbitrary polarization vector, and to $$\begin{aligned} && \mathcal{M}_{\rho,\;\lambda _p}\left(b_A,p\right)=\frac{\bar{u}_{\rho}\left(b_{A}\right) \slash{p} \;u_{-\lambda _p}\left(b_{-1}\right)}{\sqrt{\left(\tilde{b}_{-1}\cdot p\right)}} \label{anpic1aa1}\end{aligned}$$ for massless fermions. Using Eqs.(\[pic11\])-(\[pic13a\]) and (\[pic14a\]) the matrix element (\[anpic3\]) is reduced to an algebraic expression in terms of scalar products of isotropic tetrad vectors and physical vectors, or in terms of components of four-vectors. Let us consider massless fermions. Using Eqs.(\[stp18\]),(\[pic11\]) we obtain $$\mathcal{M}_{\rho,\;\lambda}\left(b_A,p\right)=\delta_{\lambda,-\rho} \left(\delta_{A,-1}\sqrt{\left(p \cdot \tilde{b}_{-1}\right)}+ \delta_{A,1}\frac{\left(p \cdot \tilde{n}_{-\lambda}\right)}{\sqrt{\left(p \cdot \tilde{b}_{-1}\right)}}\right)\;. \label{anpic7}$$ For numerical calculations, as in the case of the spinor techniques, it is convenient to express (\[anpic7\]) through the momentum components $p =\left (p^0\right.$, $\;p^x = p^0 \sin\theta_p\cos\varphi_p$, $p^y= p^0 \sin\theta_p\sin\varphi_p$, $\left.
p^z=p^0 \cos\theta_p\right)$ $$\begin{aligned} && \mathcal{M}_{\rho,\;\lambda}\left(b_A,\;p\right)= \delta_{\lambda,-\rho}\left[ \delta_{A,-1}\sqrt{ p^{-}}- \delta_{A,1}\;\lambda \exp\left(-\mathrm{i} \lambda \varphi_{p}\right)\sqrt{p^{+}}\right]= \nonumber\\ &&=\delta_{\lambda,-\rho}\sqrt{2 p_0}\left[ \delta_{A,-1}\sin \frac{\theta_p}{2}- \delta_{A,1}\;\lambda\; \cos \frac{\theta_p}{2}\exp\left(-\mathrm{i} \lambda \varphi_{p}\right) \right], \label{stp24}\end{aligned}$$ where $$p^{\pm}=p^0\pm p^z, ~~ p^x+\mathrm{i} \lambda p^y=\sqrt{\left(p^x \right)^2+\left(p^y \right)^2} \exp\left(\mathrm{i} \lambda \varphi_{p}\right).$$ Let us consider massive Dirac particles with an arbitrary polarization vector $s_p$. After some algebra we obtain that the decomposition coefficients for a massive fermion with momentum $p$, an arbitrary polarization vector $s_p$ and mass $m_p$ can be written through scalar products of tetrad and physical vectors: $$\begin{aligned} && \mathcal{M}_{\rho,\;\lambda _p}\left(b_A,p,s_p\right)= \frac 1{\sqrt{\left(\tilde{b}_{-1} \cdot \xi _1^p \right)}}\left.\Bigg[ \delta _{\lambda,\; -\rho} \left\{ \delta_{A,-1} \left(\tilde{b}_{-1} \cdot \xi _1^p\right)+\delta_{A,1}\left(\tilde{n}_{-\lambda _p} \cdot \xi _1^p\right) \right\} \right.+ \nonumber\\ &&\left. + \delta _{\lambda,\; \rho }\left\{\delta_{A,1}Y_{-1,\;-\lambda_{p} }^{\xi _{-1}^{p},\;\xi _{1}^{p}}+\frac{\delta_{A,-1}}{m_p} X_{-1,-\lambda_{p} }^{\xi _{-1}^{p},\;\xi _{1}^{p}} \right\} \right]\; , \label{anpic9}\end{aligned}$$ where the scalar functions $Y^{p,q}, X^{p,q}$ are determined by Eqs.(\[anpic12new1\])-(\[anpic12new2\]). 
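As a numerical cross-check (in Python, purely illustrative; the function names are ours), the two printed forms of Eq. (\[stp24\]) — through the light-cone combinations $p^{\pm}$ and through half-angles — can be compared directly. Here $\varphi_{p}$ is treated as the phase variable defined by $p^x+\mathrm{i}\lambda p^y=\sqrt{(p^x)^2+(p^y)^2}\,\exp(\mathrm{i}\lambda\varphi_p)$ and is taken as given:

```python
import cmath
import math

def M_halfangle(rho, lam, A, p0, theta, phi):
    """Eq. (stp24), half-angle form, for a massless fermion."""
    if lam != -rho:                      # overall factor delta_{lambda,-rho}
        return 0j
    if A == -1:
        return math.sqrt(2 * p0) * math.sin(theta / 2) + 0j
    return -math.sqrt(2 * p0) * lam * math.cos(theta / 2) * cmath.exp(-1j * lam * phi)

def M_lightcone(rho, lam, A, p0, theta, phi):
    """Eq. (stp24), first form, through p^{+-} = p0*(1 +- cos(theta))."""
    if lam != -rho:
        return 0j
    p_minus = p0 * (1 - math.cos(theta))
    p_plus = p0 * (1 + math.cos(theta))
    if A == -1:
        return math.sqrt(p_minus) + 0j
    return -lam * cmath.exp(-1j * lam * phi) * math.sqrt(p_plus)
```

Both forms agree, and for $\lambda=-\rho$ the squared moduli summed over $A=\pm1$ add up to $2p^{0}$, as the half-angle form makes explicit.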
For $BDKS$ polarization states with the polarization vector (\[stp33aa\]) the matrix element (\[anpic9\]) takes the compact form $$\begin{aligned} &&\mathcal{M}_{\rho,\;\lambda}\left(b_A,\;p,\;s_{KS}\right)= \delta_{\lambda,\;-\rho}\left[ \delta_{A,-1}\sqrt{\left(p \cdot \tilde{b}_{-1}\right) }+ \delta_{A,1}\frac{\left(p \cdot \tilde{n}_{-\lambda}\right)}{\sqrt{\left(p \cdot \tilde{b}_{-1}\right)}}\right]+ \nonumber\\ && +\delta_{\lambda,\rho}\;\delta_{A,1}\frac{m_p} {\sqrt{\left(p \cdot \tilde{b}_{-1}\right)}}\;. \label{dc1}\end{aligned}$$ The matrix element (\[anpic3\]) with the antifermion can be easily obtained with the help of Eq.(\[stp31\]): $$\begin{aligned} &&\widetilde{\mathcal{M}}_{\rho,\;\lambda _p}\left(b_{A}\;,\left[p\right]\right)= \bar{u}_{\rho} \left(b_{A}\right)\upsilon_{\lambda_{p}} \left(p,\;s_p\right)= \nonumber\\ && =\rho \lambda_{p}\;\mathcal{M}_{\rho,\;-\lambda _p}\left(b_{A}\;,\left[p\right]\right)\;. \label{anpic3af}\end{aligned}$$ Recursion relation ------------------ With the help of the completeness relation (\[pic10\]) the amplitude (\[anpic1\]) with $Q=Q_2 Q_1$ is expressed as a combination of lower-order matrix elements $$\begin{aligned} && \mathcal{M}_{\lambda _p,\;\lambda _k}\left( \left[p\right],\left[k\right];Q_2 Q_1\right)= \nonumber\\ && =\sum_{\sigma ,A =-1}^1 \mathcal{M}_{\lambda _p,\;\sigma}\left(\left[p\right],b_{A};Q_2\right) \mathcal{M}_{-\sigma,\; \lambda _k } \left(b_{-A},\left[k\right];Q_1\right)\;. \label{anpic2}\end{aligned}$$ This insertion allows us to “cut” the fermion chain into pieces of fermion chains with basis spinors $u_{\lambda}\left(b_{A}\right)$. Hence our formalism enables us to calculate the blocks of the Feynman diagrams and then to use them in the calculation as scalar functions. All possible Feynman amplitudes can be built up from a set of “building” blocks. Let us consider the matrix element (\[anpic1\]) with an operator $$\label{anpic13} Z^{\left(n\right)}=Q_{n} Q_{n-1}\cdots Q_1 Q_0$$ with $Q_{0}=I$. 
In Eq.(\[anpic13\]) all operators $Q_j$ have identical mathematical expressions. Using Eq.(\[anpic2\]) we find that $$\begin{aligned} && \mathcal{M}_{\lambda _p,\;\lambda _k}\left( \left[p\right],\left[k\right];Z^{\left(n\right)}\right)\equiv \mathcal{M}_{\lambda _p,\;\lambda _k}^{\left(n\right)}\left( \left[p\right],\left[k\right]\right)= \nonumber\\ && =\sum_{\sigma ,A =-1}^1 \mathcal{M}_{\lambda _p,\;\sigma}\left(\left[p\right],b_{A}\right) \mathcal{M}_{-\sigma,\; \lambda _k }^{\left(n\right)} \left(b_{-A},\left[k\right]\right)\;, \label{anpic14}\end{aligned}$$ where the matrix element $\mathcal{M}_{-\sigma, \;\lambda _k }^{\left(n\right)} \left(b_{-A},\left[k\right]\right)$ can be calculated with the help of the recursion relation $$\begin{aligned} && \mathcal{M}_{-\sigma,\; \lambda _k }^{\left(n\right)} \left(b_{-A},\left[k\right]\right)= \nonumber\\ &&=\sum_{\rho ,C =-1}^1 \Gamma^{A,\;C}_{\sigma,\;\rho}\left[Q_n \right] \mathcal{M}_{-\rho,\; \lambda _k }^{\left(n-1\right)} \left(b_{-C},\left[k\right]\right)\;. \label{anpic15a}\end{aligned}$$ Once the scalar functions $\mathcal{M}_{\rho,\; \lambda _k } \left(b_{A},\left[k\right]\right)$ and $\Gamma^{A,C}_{\sigma,\rho}\left[Q_j \right]$ are known (see Eqs.(\[anpic7\]), (\[anpic9\])-(\[dc1\]) and Eqs.(\[pic16\])-(\[pic16a\])), it is possible to evaluate the higher-order $\mathcal{M}_{-\sigma, \lambda _k }^{\left(n\right)} \left(b_{-A},\left[k\right]\right)$ with the help of the recursion relation (\[anpic15a\]). Examples ======== Consider the “toy” example $$\begin{aligned} &&\mathcal{M}_{\lambda _{p},\;\lambda _{k}}^{\left( n\right) }\left( p,\;s_{p},\;k,\;s_{k}\right)= \nonumber \\ &&=\bar{u}_{\lambda _{p}}\left(p,s_{p}\right) \slash{q}_{n} \slash{q}_{n-1}\ldots \slash{q}_{1}u_{\lambda _{k}}\left( k,s_{k}\right),\label{anpic25}\end{aligned}$$ where $q_j$ are some arbitrary four-vectors. Therefore, we have that $$Z^{\left( n\right) }= \slash{q}_{n} \slash{q}_{n-1}\ldots \slash{q}_{1}\;. 
\label{anpic26}$$ Using the Eqs.(\[pic16\])-(\[pic16a\]), (\[anpic15a\]) we find that $$\begin{aligned} &&\mathcal{M}_{-\rho,\; \lambda _k }^{\left(j\right)} \left(b_{-C},\left[k\right]\right) =\left(q_{j}\cdot \tilde{b}_{-C}\right) \mathcal{M}_{\rho,\; \lambda _k }^{\left(j-1\right)} \left(b_{C},\left[k\right]\right)- \nonumber\\ && -C\;\left(q_{j}\cdot \tilde{n}_{C \rho }\right)\mathcal{M}_{\rho,\; \lambda _k }^{\left(j-1\right)} \left(b_{-C},\left[k\right]\right)\; \label{anpic28}\end{aligned}$$ and $$\begin{aligned} &&\mathcal{M}_{-\rho,\; \lambda _k }^{\left(j\right)} \left(b_{-C},\left[k\right]\right) =\mathcal{B}_{C_{j-k}^{\prime},\rho_{j-k}^{\prime}}\left[q_{j},\ldots q_{j-k}\right] \mathcal{M}_{-\rho_{j-k}^{\prime},\; \lambda _k }^{\left(j-k\right)} \left(b_{-C_{j-k}^{\prime}},\left[k\right]\right)+ \nonumber\\ && +C_{j-k}^{\prime}\mathcal{N}_{-C_{j-k}^{\prime},\; \rho_{j-k}^{\prime}}\left[q_{j},\ldots q_{j-k}\right]\mathcal{M}_{-\rho_{j-k}^{\prime},\; \lambda _k }^{\left(j-k\right)} \left(b_{C_{j-k}^{\prime}},\left[k\right]\right)\;, \label{anpic28a}\end{aligned}$$ where $C_{j-k}^{\prime}=\left(-1\right)^{j-k} C, \rho_{j-k}^{\prime}=\left(-1\right)^{j-k}\rho, k<j$. 
With the help of Eq.(\[anpic14\]) we can obtain the recursion formulas for calculating the matrix element (\[anpic25\]): $$\begin{aligned} && \mathcal{M}_{\lambda _{p},\;\lambda _{k}}^{\left(j\right) }\left(\left[p\right],\left[k\right]\right)= \nonumber\\ && =\sum_{\rho ,C =-1}^1 \mathcal{M}_{\lambda _p,\;\rho}\left(\left[p\right],b_{C}\right) \left[\left(q_{j}\cdot \tilde{b}_{-C}\right) \mathcal{M}_{\rho, \;\lambda _k }^{\left(j-1\right)} \left(b_{C},\left[k\right]\right)\right.- \nonumber\\ && \left.-C\;\left(q_{j}\cdot \tilde{n}_{C \rho }\right)\mathcal{M}_{\rho,\; \lambda _k }^{\left(j-1\right)} \left(b_{-C},\left[k\right]\right) \right]\;, \label{anpic27}\end{aligned}$$ and $$\begin{aligned} && \mathcal{M}_{\lambda _{p},\;\lambda _{k}}^{\left(j\right) }\left(\left[p\right],\left[k\right]\right)= \nonumber\\ && =\sum_{\rho ,C =-1}^1 \mathcal{M}_{\lambda _p,\;\rho}\left(\left[p\right],\;b_{C}\right)\left[\right. \mathcal{B}_{C_{j-k}^{\prime},\rho_{j-k}^{\prime}}\left[q_{j},\ldots q_{j-k}\right] \mathcal{M}_{-\rho_{j-k}^{\prime},\; \lambda _k }^{\left(j-k\right)} \left(b_{-C_{j-k}^{\prime}},\left[k\right]\right)+ \nonumber\\ && +C_{j-k}^{\prime}\mathcal{N}_{-C_{j-k}^{\prime},\; \rho_{j-k}^{\prime}}\left[q_{j},\ldots q_{j-k}\right]\mathcal{M}_{-\rho_{j-k}^{\prime},\; \lambda _k }^{\left(j-k\right)} \left(b_{C_{j-k}^{\prime}},\left[k\right]\right)\left.\right]\;. \label{anpic27a}\end{aligned}$$ We have obtained that the matrix element (\[anpic25\]) can be represented as a combination of the scalar functions $\mathcal{B},\mathcal{N}$, the decomposition coefficients $\mathcal{M}_{\lambda _p,\rho}\left(\left[p\right],b_{C}\right)$ and lower-order matrix elements. 
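The one-step recursion (\[anpic28\]) is straightforward to realize programmatically, as anticipated for symbolic systems. The sketch below (Python, purely illustrative) stores the four components $\mathcal{M}^{(j)}_{-\rho}(b_{-C})$ in a dictionary keyed by $(\rho, C)$; the scalar products $(q_j\cdot\tilde{b}_{A})$ and $(q_j\cdot\tilde{n}_{m})$ must be supplied externally (their evaluation uses the tetrad definitions of the preceding sections), so placeholder complex numbers stand in for them here. A built-in consistency check: applying the step twice with the same $q$ multiplies every component by one and the same scalar, as expected from $\slash{q}\slash{q}=q^{2}$.

```python
import itertools

def step(M, qb, qn):
    """One application of Eq. (anpic28).

    M  : dict keyed by (rho, C), holding M^{(j-1)}_{-rho}(b_{-C})
    qb : dict, qb[A] = (q_j . b~_A) for A = +-1   (supplied externally)
    qn : dict, qn[m] = (q_j . n~_m) for m = +-1   (supplied externally)
    """
    return {(rho, C): qb[-C] * M[(-rho, -C)] - C * qn[C * rho] * M[(-rho, C)]
            for rho, C in itertools.product((-1, 1), repeat=2)}

def chain(M0, factors):
    """Apply Eq. (anpic28) once per slashed factor, innermost (q_1) first."""
    M = M0
    for qb, qn in factors:
        M = step(M, qb, qn)
    return M
```

With arbitrary placeholder values, two successive applications with the same $q$ return $(q\cdot\tilde{b}_{1})(q\cdot\tilde{b}_{-1})+(q\cdot\tilde{n}_{1})(q\cdot\tilde{n}_{-1})$ times the input, component by component.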
Photon emission --------------- Let us consider the matrix element in which a photon with momentum $k$ and helicity $\sigma=\pm 1$ is emitted from the incoming electron $$\begin{aligned} && \mathcal{M}_{\lambda _2,\;\lambda _1} \left( \left[p_2\right],\left[p_1\right]; Q\;Z_{k} \right) = \bar{u}_{\lambda _{2}}\left( p_2,\;s_{p_2}\right) Q\;Z_{k} \;u_{\lambda _1}\left(p_1,\;s_{p_1}\right) ~~\mbox{with} \label{fot1}\\ &&Z_k=e\; \frac{\left(\slash{p}_{1}-\slash{k}+m\right)\slash{\varepsilon }_{\sigma}\left(k\right)}{\left(p_{1}-k\right)^{2}-m^{2}} \;.\label{fot2}\end{aligned}$$ Using the $\gamma$-matrix algebra and the Dirac equation, the operator $Z_k$ can be rewritten as $$\label{fot3} Z_k=-e\;\left\{\frac{\left(p_1\;\varepsilon_{\sigma}\left(k\right)\right)}{\left(p_1\;k \right)}-\frac{\slash{k}\slash{\varepsilon }_{\sigma}\left(k\right)}{2\left(p_1\;k \right)}\right\}\;.$$ Now we get $$\begin{aligned} && \mathcal{M}_{\lambda _2,\;\lambda _1} \left( \left[p_2\right],\left[p_1\right]; Q\;Z_{k} \right) =-e\;\frac{\left(p_1\;\varepsilon_{\sigma}\left(k\right)\right)}{\left(p_1\;k \right)}\mathcal{M}_{\lambda _2,\;\lambda _1} \left( \left[p_2\right],\left[p_1\right]; Q \right)+ \nonumber\\ &&+\frac{e}{2\left(p_1\;k \right)}\mathcal{M}_{\lambda _2,\;\lambda _1}\left( \left[p_2\right],\left[p_1\right]; Q \;\slash{k}\slash{\varepsilon }_{\sigma}\left(k\right)\right)\;. 
\label{fot4}\end{aligned}$$ The recursion technique (see Eq.(\[anpic2\])) implies $$\begin{aligned} && \mathcal{M}_{\lambda _2,\;\lambda _1}\left( \left[p_2\right],\left[p_1\right]; Q \;\slash{k}\slash{\varepsilon }_{\sigma}\left(k\right)\right)= \nonumber\\ &&=\sum_{\rho ,C =-1}^1 \mathcal{M}_{\lambda _2,\;\rho}\left(\left[p_2\right],b_{C};Q\right) \mathcal{M}_{-\rho, \lambda _1 } \left(b_{-C},\left[p_1\right]; \slash{k}\slash{\varepsilon }_{\sigma}\left(k\right)\right)= \nonumber\\ &&=\sum_{\rho ,C =-1}^1 \mathcal{M}_{\lambda _2,\;\rho}\left(\left[p_2\right],b_{C};Q\right) \Gamma_{\rho,-\lambda _1}^{C,-1} \left[\slash{k},\slash{\varepsilon}_{\sigma}\left(k\right),T_{\lambda_1}\left(p_1, s_{p_1}\right)\right]\;,\label{fot5}\end{aligned}$$ where $$\label{toper1} T_{\lambda}\left(p, s_{p}\right) =\frac{ \slash{\xi}^{p}_{1}+ \slash{\xi}^{p}_{-1}\slash{\xi}^{p}_{1}/ m_p}{\sqrt{\left( \tilde{b}_{-1}\cdot \xi^{p}_{1}\right)}}\;$$ for a fermion with an arbitrary polarization vector (see Eq.(\[anpic8\])), $$\label{toper2} T_{\lambda}\left(p, s_{p}\right) =\frac{ \slash{p}+m_p}{\sqrt{\left(p \cdot \tilde{b}_{-1}\right)}}\;$$ for a fermion with the $BDKS$ polarization vector (\[stp34aa\]), and $$\label{toper3} T_{\lambda}\left(p, s_{p}\right)=\frac{ \slash{p}}{\sqrt{\left(p \cdot \tilde{b}_{-1}\right)}} \;$$ for a massless fermion (see Eq.(\[anpic6\])). Let us consider an initial massive fermion with the $BDKS$ polarization vector in expression (\[fot5\]). 
After calculations, for the matrix element (\[fot1\]) with the $BDKS$ polarization state of the initial fermion and helicity $\sigma$ of the photon, we have the exact formula in terms of lower-order matrix elements with operator $Q$ and scalar functions $\mathcal{B},\mathcal{N}$: $$\begin{aligned} && \mathcal{M}_{\lambda _2,\;\lambda _1}\left( \left[p_2\right],\left[p_1\right]; Q \;Z_k\right)= -e\;\frac{\left(p_1\;\varepsilon_{\sigma}\left(k\right)\right)}{\left(p_1\;k \right)}\mathcal{M}_{\lambda _2,\;\lambda _1} \left( \left[p_2\right],\left[p_1\right]; Q \right)+ \nonumber\\ &&+\frac{e}{2\left(p_1\;k \right)\sqrt{\left(p_{1}\;\tilde{b}_{-1}\right)}} \bigg[ \mathcal{M}_{\lambda_2,\;\lambda_1} \left(\left[p_2\right],b_{-1};Q\right) \mathcal{N}^{~k,~\varepsilon_{\sigma}\left(k\right),\;p_{1}}_{-1,-\lambda_{1}}+ \nonumber\\ &&+ \mathcal{M}_{\lambda _2,\;\lambda_1}\left(\left[p_2\right],b_{1};Q\right) \mathcal{B}^{~k,~\varepsilon_{\sigma}\left(k\right),\;p_{1}}_{-1,-\lambda_{1}} + m_{p_1}\left\{ \right. \mathcal{M}_{\lambda_2,\;-\lambda_1} \left(\left[p_2\right],b_{-1};Q\right) Y^{~k,~\varepsilon_{\sigma}\left(k\right)}_{-1,-\lambda_{1}} \nonumber\\ && +\mathcal{M}_{\lambda _2,\;-\lambda_1}\left(\left[p_2\right],b_{1};Q\right) X^{~k,~\varepsilon_{\sigma}\left(k\right)}_{-1,-\lambda_{1}} \left.\right\}\bigg] \;. \label{fot5a}\end{aligned}$$ The scalar functions $\mathcal{B}^{~k,~\varepsilon_{\sigma}\left(k\right),p_{1}}_{A,\;\lambda}$, $\mathcal{N}^{~k,~\varepsilon_{\sigma}\left(k\right),p_{1}}_{A,\;\lambda}$ are determined in terms of scalar products with the help of Eqs.(\[bn3\])-(\[bn4\]), and the scalar functions $X^{~k,~\varepsilon_{\sigma}\left(k\right)}_{-1,-\lambda_{1}}$, $Y^{~k,~\varepsilon_{\sigma}\left(k\right)}_{-1,-\lambda_{1}}$ are determined by Eqs.(\[anpic12new1\])-(\[anpic12new2\]). These scalar functions can be easily calculated in terms of physical vector components. 
Using the photon polarization vector of Eq.(\[picc1b\]) we obtain the simple result: $$\begin{aligned} && \mathcal{B}^{~k,~\varepsilon_{\sigma}\left(k\right),\;p}_{-1,\;-\lambda_1}= \sqrt{2}\;\sigma\left(p^{-}k_{T,\;\lambda_{1}} \left[\delta_{\lambda_{1},\;\sigma}-1\right]+k^{-}p_{T,\;-\sigma}\right)\;, \nonumber\\ && \mathcal{N}^{~k,~\varepsilon_{\sigma}\left(k\right),\;p}_{-1,\;-\lambda_1}= \frac{\sqrt{2}}{k^{-}}\left[p^{-} k_{T,-\lambda_{1}}k_{T,-\sigma} +k^{-}\left(\delta_{\lambda_{1},\;-\sigma}k^{+}p^{-}- \delta_{\lambda_{1},\;\sigma} p_{T,\;-\sigma} k_{T,\;-\sigma}\right) \right]\;, \nonumber\\ &&Y^{~k,~\varepsilon_{\sigma}\left(k\right)}_{-1,\;-\lambda_{1}}= -\sqrt{2}\;\lambda_{1}\;\delta_{\lambda_{1},\;-\sigma} k_{T,\;-\sigma}\;,\;~~X^{~k,~\varepsilon_{\sigma}\left(k\right)}_{-1,\;-\lambda_{1}}= -\sqrt{2}\;\delta_{\lambda_{1},\;-\sigma}\;k^{-}\;, \label{scalarfunct}\end{aligned}$$ where $$p^{\pm }=p^0 \pm p^z\;,\;~ p_{T,\;\lambda}=p^x+\mathrm{i}\;\lambda p^y\;. \label{index1}$$ The process $e^{+} e^{-} \to n \gamma $ --------------------------------------- Consider the process $$\label{anpic29} e^{+}\left(p_{2}, \sigma_{2}\right)+ e^{-}\left(p_{1}, \sigma_{1}\right) \to \gamma \left(k_1,\lambda_{1}\right)+\gamma \left(k_2,\lambda_{2}\right)+ \cdots +\gamma \left(k_n,\lambda_{n}\right),$$ where the momenta and spin quantum numbers of the particles are given in parentheses. 
The Feynman diagrams of the processes (\[anpic29\]) contain the matrix element $$\begin{aligned} &&M_{\sigma _{2},\;\sigma _{1}}^{\left( \lambda _{1},\;\lambda _{2}, \ldots \lambda _{n}\right) }\left( p_{2},s_{p_{2}},p_{1},s_{p_{1}};k_{1},k_{2},\ldots k_{n}\right)= \nonumber \\ &&={M}_{\sigma _{2},\;\sigma _{1}}^{\left(n\right) }\left( \left[p_2\right],\left[p_1\right]\right)= \bar{\upsilon }_{\sigma _{2}}\left( p_{2},s_{p_{2}}\right) \slash{ \varepsilon }_{\lambda _{n}}\left( k_{n}\right) \cdots \nonumber \\ && \cdots \slash{\varepsilon } _{\lambda _{3}}\left( k_{3}\right) \frac{\slash{Q}_{2}+m}{Q_{2}^{2}-m^{2}} \slash{\varepsilon }_{\lambda _{2}}\left( k_{2}\right) \frac{\slash{Q} _{1}+m}{Q_{1}^{2}-m^{2}}\slash{\varepsilon }_{\lambda _{1}}\left( k_{1}\right) u_{\sigma _{1}}\left( p_{1},s_{p_{1}}\right)+\nonumber \\ &&+\left(n!-1 \right)\mbox{other permutations of} \left(1,2,\ldots, n\right)\;, \label{anpic20}\end{aligned}$$ where $$Q_{j}=p_{1}-\sum_{i=1}^{j}k_{i}\; . \label{anpic21}$$ Hence, we have that $$Z^{\left( n\right)}=\frac{\slash{Q}_{n}+m}{Q_{n}^{2}-m^{2}}\slash{ \varepsilon }_{\lambda _{n}}\left( k_{n}\right) \cdots \frac{\slash{Q} _{1}+m}{Q_{1}^{2}-m^{2}}\slash{\varepsilon }_{\lambda _{1}}\left( k_{1}\right) \label{anpic22}$$ and $$\begin{aligned} && {M}_{\sigma _{2},\sigma _{1}}^{\left(n\right) }\left(\left[p_2\right],\left[p_1\right] \right) \nonumber\\ && =\sum_{\rho ,C =-1}^1 \widetilde{\mathcal{M}}_{\sigma _{2},\rho}\left(\left[p_{2} \right],b_C; \slash{\varepsilon}_{\lambda _{n}}\left(k_{n}\right)\right) \mathcal{M}_{-\rho,\sigma _{1}}\left(b_{-C},\left[p_{1}\right]; Z^{\left(n-1\right)} \right) \label{anpic23}\;.\end{aligned}$$ Here $$\widetilde{\mathcal{M}}_{\sigma _{2},\rho}\left(\left[p_{2} \right],b_C; \slash{\varepsilon}_{\lambda _{n}}\left(k_{n}\right)\right) =\bar{\upsilon }_{\sigma _{2}}\left( p_{2},s_{p_{2}}\right)\slash{\varepsilon}_{\lambda _{n}}\left(k_{n}\right) u_{\rho}\left(b_{C}\right) \label{anpic23a}$$ and 
$$\mathcal{M}^{\left(n-1\right)}_{-\rho,\sigma _{1}}\left(b_{-C},\left[p_{1}\right]\right)=\mathcal{M}_{-\rho,\sigma _{1}}\left(b_{-C},\left[p_{1}\right]; Z^{\left(n-1\right)} \right) \;. \label{anpic23abc}$$ Using the expressions (\[pic16a\]) and (\[picc1b\]) we obtain that (\[anpic23a\]) for the $BDKS$ massive Dirac spinor (\[stp34aa\]) is given by $$\begin{aligned} &&\widetilde{\mathcal{M}}_{\sigma _{2},\rho}\left(\left[p_{2} \right],b_C; \slash{\varepsilon}_{\lambda _{n}}\left(k_{n}\right)\right)= \nonumber\\ && =1/\sqrt{\left(p_2 \cdot \tilde{b}_{-1} \right)} \left.\bigg[\delta_{\sigma_{2},\;\rho} \left(\delta_{C,-1}X_{-1,\;\sigma_{2}}^{p_2,\;\varepsilon_{n}}+ \delta_{C,1}Y_{1,\;\sigma_{2}}^{p_2,\;\varepsilon_{n}}\right)\right. \nonumber\\ &&\left. -m\; \delta_{\sigma_{2},\;-\rho}\left(\delta_{C,-1}\left(\tilde{b}_{-1} \cdot \varepsilon_{n}\right)-\delta_{C,1}\left(\tilde{n}_{\sigma_{2}} \cdot \varepsilon_{n}\right)\right)\right] \label{lastmn}\end{aligned}$$ with $\varepsilon_{n}=\varepsilon_{\lambda _{n}}\left( k_{n}\right)$. 
The final recursion relation of the process (\[anpic29\]) with arbitrary helicities of the photons and $BDKS$ polarization states of the positron is written as $$\begin{aligned} && {M}_{\sigma _{2},\sigma _{1}}^{\left(n\right) }\left(\left[p_2\right],\left[p_1\right] \right) =1/\sqrt{\left(p_2 \cdot \tilde{b}_{-1} \right)} \nonumber\\ && \left[X_{-1,\;\sigma_{2}}^{p_2,\;\varepsilon_{n}} \mathcal{M}_{-\sigma_{2},\sigma _{1}}^{\left(n-1\right)}\left(b_{1}, \left[p_{1}\right]\right)+ Y_{1,\;\sigma_{2}}^{p_2,\;\varepsilon_{n}}\mathcal{M}_{-\sigma_{2},\sigma _{1}}^{\left(n-1\right)}\left(b_{-1}, \left[p_{1}\right]\right) \right.\nonumber\\ && \left.+m\;\left(\left(\tilde{n}_{\sigma_{2}} \cdot \varepsilon_{n}\right)\mathcal{M}_{\sigma_{2},\sigma _{1}}^{\left(n-1\right)}\left(b_{-1}, \left[p_{1}\right]\right)+ \left(\tilde{b}_{-1} \cdot \varepsilon_{n}\right)\mathcal{M}_{\sigma_{2},\sigma _{1}}^{\left(n-1\right)}\left(b_{1}, \left[p_{1}\right]\right)\right)\right] \;, \label{finalres}\end{aligned}$$ where (see Eq.(\[anpic27\]) and Eq.(\[anpic27a\])) the matrix element $\mathcal{M}_{-\rho ,\sigma _{1}}^{\left( j\right) }\left( b_{-C},\left[ p_{1}\right] \right)$ with an arbitrary polarization vector of the electron is calculated by means of the recursion formula $$\begin{aligned} &&\mathcal{M}_{-\rho ,\sigma _{1}}^{\left( j\right) }\left( b_{-C},\left[ p_{1}\right] \right) =\frac{1}{Q_{j}^{2}-m^{2}}\left\{ {}\right. \nonumber \\ &&m\left[ {}\right. \left( \varepsilon _{j}\cdot b_{-C}\right) \mathcal{M} _{-\rho ,\sigma _{1}}^{\left( j-1\right) }\left( b_{-C},\left[ p_{1}\right] \right) -C\;\left( \varepsilon _{j}\cdot n_{C\rho }\right) \mathcal{M} _{-\rho ,\sigma _{1}}^{\left( j-1\right) }\left( b_{C},\left[ p_{1}\right] \right) \left. 
{}\right] + \nonumber \\ &&+Y_{C,\rho }^{Q_{j},\varepsilon _{j}}\mathcal{M}_{\rho ,\sigma _{1}}^{\left( j-1\right) }\left( b_{C},\left[ p_{1}\right] \right) +C\;X_{-C,\rho }^{Q_{j},\;\varepsilon _{j}}\mathcal{M}_{\rho ,\sigma _{1}}^{\left( j-1\right) }\left( b_{-C},\left[ p_{1}\right] \right) \left. {}\right\} \;. \label{anpic24x}\end{aligned}$$ Summary and Acknowledgements ============================ In the present paper we have formulated a new effective method for calculating Feynman amplitudes for various processes with fermions of arbitrary polarizations. In our method it is much easier to keep track of partial results and to set up recursive schemes of evaluation, which compute and store for later use subdiagrams of increasing size and complexity. In our approach to the matrix element calculation:

1 : we do not use an explicit form of the Dirac spinors and $\gamma$–matrices;

2 : we do not use the calculation of traces;

3 : as in the trace methods, the matrix element of a Feynman amplitude is reduced to a combination of scalar products of momenta and polarization vectors;

4 : unlike the spinor technique in its different variants [@Berendz]-[@Zhan], we use neither Chisholm identities nor the representation of the contraction $\slash{p}$ of a four-vector $p$, or of the boson polarization vectors, through Dirac spinors;

5 : unlike the `WvD` technique [@Giele],[@Dittmaier], we do not use special Feynman rules for calculating the matrix elements;

6 : the expression for the matrix element $\mathcal{M}_{\lambda _p,\lambda _k} \left(p,s_p,\; k,s_k\; ;Q\right)$ (\[anpic1\]) is evaluated for all values of $\lambda _p, \lambda _k$ simultaneously.

The recursive algorithms can be easily realized in the various systems of symbolic calculation (Mathematica, Maple, Reduce, Form) and in such packages as `FeynArts` [@FeynArts], `FeynCalc` [@FeynCalc], `HIP` [@hip], and so on. I would like to thank the organizers for their warm and kind hospitality throughout the Conference. 
Also I want to thank A.L. Bondarev for his useful remarks and discussion. [99]{} J.L. Powell, Phys. Rev. **75**, 32 (1949). M.V. Galynskii, S.M. Sikach, Phys. Part. Nucl. **29**, 469 (1998); Fiz. Elem. Chast. Atom. Yadra **29**, 1133 (1998) \[arXiv:hep-ph/9910284\]. A.L. Bondarev, arXiv:hep-ph/9710398. F. Caravaglios, M. Moretti, Phys. Lett. **B358**, 332 (1995). A. Kanaki and C.G. Papadopoulos, Comput. Phys. Commun. **132**, 306 (2000). E. Bellomo, Il Nuovo Cimento. Ser.X **21**, 730 (1961). A.A. Bogush, F.I. Fedorov, Vesti AN BSSR, ser.fiz.-m.n. **N 2**, 26 (1962) (in Russian). A.A. Bogush, Vesti AN BSSR, ser.fiz.-m.n. **N 2**, 29 (1964) (in Russian). J.D. Bjorken and M.C. Chen, Phys. Rev. **154**, 1335 (1966). H.W. Fearing, R.R. Silbar, Phys. Rev. **D6**, 471 (1972). F.I. Fedorov, Izvestiya Vyzov. Fizika **N 2**, 32 (1980) (in Russian). M. Caffo, E. Remiddi, Helvetica Phys. Acta **55**, 339 (1982). R. Vega and J. Wudka, Phys. Rev. **D53**, 5286 (1996); *erratum* Phys. Rev. **D56**, 6037 (1997). F.A. Berends, P. De Causmaecker, R. Gastmans, R. Kleiss, W. Troost and Tai Tsun Wu, Phys. Lett. **B103**, 124 (1981). G.R. Farrar and F. Neri, Phys. Lett. **130B**, 109 (1983). R. Kleiss, W.J. Stirling, Nucl. Phys. **B262**, 235 (1985). Zhan Xu, Da-Hua Zhang, and Lee Chang, Nucl. Phys. **B291**, 392 (1987). F.A. Berends and W.T. Giele, Nucl. Phys. **B294**, 700 (1987). F.A. Berends, P.H. Daverveldt, and R. Kleiss, Nucl. Phys. **B253**, 441 (1985). R. Kleiss, Z. Phys. C **33**, 433 (1987). S. Dittmaier, Phys. Rev. [**D59**]{}, 016007 (1999). K. Hagiwara, D. Zeppenfeld, Nucl. Phys. **B274**, 1 (1986). T. A.-Gongora, R.G. Stuart, Z. Phys. **C42**, 617 (1989). V.V. Andreev, Phys. Rev. **D62**, 014029 (2000). E.N. Argyres, C.G. Papadopoulos, Phys. Lett. **B263**, 298 (1991). A. Ballestrero and E. Maina, Phys. Lett. **B350**, 225 (1995). H. Murayama, I. Watanabe and K. Hagiwara, KEK-91-11. H. Tanaka, Comput. Phys. Commun. **58**, 153 (1990). T. Stelzer and W.F. Long, Comput. 
Phys. Commun. **81**, 357 (1994). M. Moretti, T. Ohl and J. Reuter, arXiv:hep-ph/0102195. J. Küblbeck, M. Böhm, and A. Denner, Comp. Phys. Commun. **60**, 165 (1990). T. Hahn, arXiv:hep-ph/0406028. A.E. Pukhov et al., Preprint INP MSU 98-41/542 (1998); arXiv:hep-ph/9908288. P.S. Cherzor, V.A. Ilyin, A.E. Pukhov, arXiv:hep-ph/0101265. F. Krauss, R. Kuhn and G. Soff, JHEP [**0202**]{}, 044 (2002); \[arXiv:hep-ph/0109036\]. M. L. Mangano, M. Moretti, F. Piccinini, R. Pittau and A. D. Polosa, JHEP **0307**, 001 (2003); \[arXiv:hep-ph/0206293\]. E. Accomando, A. Ballestrero and E. Maina, Comput. Phys. Commun. **150**, 166 (2003); \[arXiv:hep-ph/0204052\]. S. Dittmaier and M. Roth, *Lusifer: a lucid approach to six-fermion production*; \[arXiv:hep-ph/0206070\]. R. Harlander and M. Steinhauser, Preprint TTP98-41, BUTP-98/28 (1998); \[arXiv:hep-ph/9812357\]. V.V. Andreev, Phys. At. Nucl. **66**, 383 (2003). V.V. Andreev, Nucl. Instr. Meth. Phys. Res. **A502**, 607 (2003). J.D. Bjorken and S.D. Drell, *Relativistic Quantum Mechanics*, McGraw-Hill, New York (1964). V.I. Borodulin, R.N. Rogalyov, and S.R. Slabospitsky, Preprint IHEP 95-90 (1995); \[arXiv:hep-ph/9507456\]. R. Mertig, M. Böhm, and A. Denner, Comp. Phys. Commun. **64**, 345 (1991). E. Yehudai, Preprint Fermi-Lab-Pub-92/22-T (1992). [^1]: **ANDREEV@GSU.UNIBEL.BY**
--- abstract: 'In this short note we use a simple model to describe the dynamical effects of break-up processes in subbarrier fusion involving weakly bound nuclei. We model two similar cases involving either a neutron or a proton halo nucleus, both schematically coupled to the break-up channels. We find that the decrease of the Coulomb barrier in the proton break-up channel leads, [*ceteris paribus*]{}, to a larger enhancement of the subbarrier fusion probabilities with respect to the neutron-halo case.' author: - Raj Kumar - 'J. A. Lay' - 'A. Vitturi' bibliography: - 'fusion.bib' title: Enhanced subbarrier fusion for proton halo nuclei --- Subbarrier heavy-ion fusion processes have been over the last decades an interesting issue for the low-energy nuclear physics community because of the natural link they provide between structure and dynamics. It has in fact been recognized that the basic feature characterizing the subbarrier behavior is the dynamical coupling to the internal degrees of freedom of the two fusing partners [@Bal98; @Das83b; @Hag12]. A proper description of a fusion process therefore essentially requires singling out the relevant coupled channels involved and determining the associated diagonal and coupling potentials. This makes the situation with weakly bound nuclei more complex, due to the non-trivial inclusion of the strongly coupled continuum break-up channels and the consequent opening of final three-body (or four-body, in the case of two-particle halo nuclei) channels. This has led, from the theoretical point of view, to diverging results on the enhancement/suppression of the fusion probabilities, and to extremely difficult experimental measurements to determine (and separate) the different fusion and reaction channels [@Agu11; @Agu09; @Scu11; @Hag00; @Vin13; @Gom11; @Nak04]. 
Given the complexity of the situation, every case behaves differently and has to be treated specifically, with particular ion-ion potentials, associated heights of the Coulomb barrier, coupling form factors, and specific relevant transfer channels and $Q$-values. For this reason it is not easy, in a fully treated coupled-channel description, to single out the role of specific issues. One of these is the possible role of the charged break-up channels in proton-halo nuclei with respect to the more common neutron break-up channels in neutron-halo nuclei. For this reason we introduce here a very simplified two-channel model, the first channel being the entrance channel and the second representing the full set of continuum break-up channels. In the latter channel we neglect the ejected particle (neutron or proton) and properly rescale energies and ion-ion potential. Our model has been applied, as representative cases of neutron or proton haloes, to the fusion with $^{58}$Ni of either $^{11}$Be or $^{8}$B. To single out just the dynamical effects due to the neutron/proton nature of the two halo nuclei, the potentials in the different channels have been constructed using the simple parameterization of Broglia and Winther [@BW] and an equal strength for the coupling between the entrance channels and the “break-up” ones. In Fig. \[pot\] we display the resulting ion-ion potentials for the $^{8}$B+$^{58}$Ni (left frame) and $^{11}$Be+$^{58}$Ni (right frame) reactions. For comparison, in the same figures we also display the corresponding ion-ion potentials in our “break-up” channels, i.e. for the $^{7}$Be+$^{58}$Ni and $^{10}$Be+$^{58}$Ni cases. As a guide, we also show in the figure as a line one energy $E$ in the incoming channel (20 MeV in the case of $^{8}$B and 17 MeV in the case of $^{11}$Be) and the corresponding energy in the break-up channel. 
This energy can be estimated by subtracting the energy needed for break-up and the average excitation energy, $\langle E^* \rangle$, in the core-nucleon relative motion, and then sharing the energy between them according to a distant break-up scenario. In this way, we consider $E_{bu}=(E-S_{1N}-\langle E^* \rangle) \cdot \frac{A-1}{A}$. Here $S_{1N}$ stands for the one-neutron or one-proton separation energy, i.e. $S_{1p}=0.136$ MeV for $^8$B and $S_{1n}=0.504$ MeV for $^{11}$Be. $\langle E^* \rangle$ is approximated by the peak energy of the dipole electromagnetic transition probabilities, $\langle E^* \rangle=0.5$ MeV for $^8$B and $\langle E^* \rangle=0.4$ MeV for $^{11}$Be. It is evident from the figure that while in the neutron case the barriers in the incoming and break-up channels are similar (whereas the energy at disposal in the latter is smaller), in the proton case the reduction in energy in the break-up channel is more than compensated by the lower Coulomb barrier due to the reduced charge of the projectile. ![Ion-ion potentials for $^{8}$B+$^{58}$Ni in the left frame and $^{11}$Be+$^{58}$Ni in the right frame (solid lines). The dashed lines correspond to the break-up channels, i.e. for the $^{7}$Be+$^{58}$Ni and $^{10}$Be+$^{58}$Ni cases respectively. The nuclear part of the potential is computed according to the proximity potential of Broglia and Winther [@BW]. []{data-label="pot"}](Fig1.pdf){width="1.\columnwidth"} Fusion probabilities are calculated by solving the corresponding coupled-channel equations under ingoing-wave boundary conditions (IWBC). The coupled-channel formalism for direct reaction processes given by Austern [@Aus87] expands the total wave function in terms of the wavefunctions $\phi_{\beta}$ for the internal states of the projectile and the radial wave functions $\chi_{\beta}$ that account for the relative motion between projectile and target: $$\Psi^{(+)}=\Sigma_{\beta}\frac{\chi_{\beta}(R)}{R}\phi_{\beta}. 
\label{eq.1}$$ This leads to a set of coupled equations for the radial wave functions: $$\frac{d^{2}\chi_{\beta}}{dR^{2}}+\frac{2\mu_{\beta}}{\hbar^{2}}[E_{\beta}-V_{\beta}^{eff}(R)]\chi_{\beta} =\frac{2\mu_{\beta}}{\hbar^{2}}\Sigma_{\alpha\ne\beta}V_{\beta\alpha}^{coup}(R)\chi_{\alpha} \label{eq.3}$$ In these equations $V$ is the interaction potential while, for a given channel $\beta$, $\mu_{\beta}$ is the reduced mass and $E_{\beta}$ is the relative energy. In our model case, we will only consider two channels: the incoming channel and one channel representative of break-up and subsequent fusion without the ejected particle. The two-channel problem in one spatial dimension $R$ is given by: $$\begin{aligned} &&\frac{d^{2}\chi_{1}}{dR^{2}}+\frac{2\mu_{1}}{\hbar^{2}}[E_{1}-V_{1}]\chi_{1} =\frac{2\mu_{1}}{\hbar^{2}}V_{coup}\chi_{2},\nonumber\\ &&\frac{d^{2}\chi_{2}}{dR^{2}}+\frac{2\mu_{2}}{\hbar^{2}}[E_{2}-V_{2}]\chi_{2} =\frac{2\mu_{2}}{\hbar^{2}}V_{coup}\chi_{1}, \label{eq.5}\end{aligned}$$ where, in our case, $E_{1}=E$, the incoming energy, and $E_{2}=E_{bu}$, the energy in the break-up channel. The total potential $V_{1,2}(R)$ for each channel is given by the sum of the Coulomb potential and the nuclear proximity potential of the Broglia and Winther [@BW] parameterization. The coupling potential $V_{coup}$ is taken as a Woods-Saxon derivative form with the same radius and diffuseness as the proximity potential of the incoming channel. Its strength is set to 10% of the strength of the same proximity potential. The coupled-channel equations are solved by imposing the boundary conditions that there are only incoming waves at R=$R_{min}$, i.e. the minimum position of the Coulomb pocket inside the barrier, and only outgoing waves at infinity for all channels except for the entrance channel ($\beta$=1), which has an incoming wave with amplitude one as well. 
This boundary condition is referred to as the incoming-wave boundary condition (IWBC) [@Bal98; @Hag12; @Das83], and is valid for heavy-ion reactions, where there is strong absorption inside the Coulomb barrier. The numerical solution is matched to a linear combination of incoming and outgoing Coulomb wave functions at a finite distance $R_{max}$ beyond which both the nuclear proximity and the coupling potentials are negligible. The boundary condition of a wave incident from the right in channel $\beta$=1 and transmitted and reflected waves in both channels is given by, $$\begin{aligned} \chi_{\beta}(R) \xrightarrow{R\rightarrow\infty} & \delta_{\beta1} H^{(-)}_{\ell}(k_{\beta}R)&+~~r_{\beta}H^{(+)}_{\ell}(k_{\beta}R); \nonumber\\ \chi_{\beta}(R=R_{min}) =&t_{\beta} H^{(-)}_{\ell}(k_{\beta}R),& \label{eq:12}\end{aligned}$$ where $\ell$ is the angular momentum, $H^{(+)}_{\ell}$ and $H^{(-)}_{\ell}$ are the outgoing and incoming Coulomb wave functions, respectively, and $k=\sqrt{2\mu E/\hbar^2}$ is the wave number associated with the energy $E$. The total transmission probability is then given by, $$\begin{aligned} T=\sum_{\beta} T_{\beta} = |t_1|^2+\frac{v_2}{v_1} |t_2|^2, \label{eq:13}\end{aligned}$$ where $v_1$ and $v_2$ are the velocities corresponding to channels 1 and 2. The fusion cross section, in terms of partial waves, is given by $$\sigma=\sum_{\ell=0}^{\ell_{max}}\sigma_{\ell}=\frac{\pi\hbar^2}{2\mu_{1} E}\sum_{\ell=0}^{\ell_{max}}(2\ell+1)T_{\ell}(E). \label{eq:14}$$ The transmission probability for partial wave $\ell$ can also be calculated simply by a shift of energy, $$T_{\ell}\cong T_{0}\left[E-\frac{\ell(\ell+1)\hbar^{2}}{2\mu_{1} r_{0}^2} \right], \label{eq:15}$$ where $r_{0}$ is the position of the barrier for the s-wave [@Bal98]. ![Fusion cross sections for the $^{8}$B+$^{58}$Ni (left panel) and $^{11}$Be+$^{58}$Ni (right panel) reactions.
Solid lines represent the case without break-up, with a single channel and no coupling, whereas the dashed lines show the two-channel case with coupling to the proton (left) and neutron (right) break-up channels.[]{data-label="fullxsec"}](Fig2.pdf){width="1.0\columnwidth"} The resulting cross sections for the $^{8}$B+$^{58}$Ni and $^{11}$Be+$^{58}$Ni fusion reactions are shown in Fig. \[fullxsec\]. For each reaction, we compare the situation without break-up, where there is no coupling to the second channel (solid lines), with the possibility of coupling to the break-up channel (dashed lines). In both cases, a certain enhancement is found as a result of this coupling. In order to compare the two cases appropriately, we show in Fig. \[redxsec\] the fusion cross sections in reduced form, divided by the square of the collision radius of each reaction and plotted versus the energy divided by the estimated Coulomb barrier. As expected, the two no-coupling cross sections coincide almost perfectly, whereas the coupling cases show different results. Here, it is clearly seen that the proton break-up case has a larger cross section at low energies. On the other hand, the neutron break-up case has a larger enhancement at energies immediately around the Coulomb barrier. To cancel the effects of choosing two different nuclei for the neutron and proton cases, we add a third case in Fig. \[redxsec\] for the $^{8}$B+$^{58}$Ni reaction in which the same potential, and thus the same Coulomb barrier, is used for both channels, $V_{2}=V_{1}$ (dot-dashed line). This case is similar to considering that $^{8}$B loses a neutron instead of a proton. As expected, the cross section follows the same trend as that of $^{11}$Be+$^{58}$Ni, but with an apparently smaller enhancement.
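The partial-wave sum of Eq. (\ref{eq:14}) combined with the energy-shift approximation of Eq. (\ref{eq:15}) can be sketched numerically. The snippet below is a minimal illustration, not the calculation of this work: it replaces the IWBC result for the s-wave transmission $T_0$ by a Hill-Wheeler (parabolic-barrier) profile, and the barrier height $V_B$, curvature $\hbar\omega$, and radius $r_0$ are assumed round numbers.

```python
import numpy as np

# Sketch of sigma = (pi hbar^2 / 2 mu E) * sum (2l+1) T_l, with
# T_l = T_0[E - l(l+1) hbar^2 / (2 mu r0^2)]  (energy-shift approximation)
# and T_0 modelled by a Hill-Wheeler profile (an assumption of this sketch).
HBARC = 197.327  # MeV fm
AMU = 931.494    # MeV

def t0_hill_wheeler(E, VB, hw):
    """s-wave transmission through a parabolic barrier of height VB, curvature hw."""
    x = np.clip(2.0 * np.pi * (VB - E) / hw, -700.0, 700.0)  # avoid exp overflow
    return 1.0 / (1.0 + np.exp(x))

def fusion_xsec(E, VB, hw, r0, mu_c2, lmax=60):
    """Fusion cross section in mb from the energy-shift approximation."""
    h2_2mu = HBARC**2 / (2.0 * mu_c2)            # hbar^2/(2 mu) in MeV fm^2
    l = np.arange(lmax + 1)
    shifted = E - l * (l + 1) * h2_2mu / r0**2   # shifted energies for each l
    sigma_fm2 = (np.pi * h2_2mu / E) * np.sum((2 * l + 1)
                                              * t0_hill_wheeler(shifted, VB, hw))
    return 10.0 * sigma_fm2                      # 1 fm^2 = 10 mb

# reduced mass of 8B+58Ni; VB, hw, r0 below are illustrative values only
mu_c2 = (8.0 * 58.0 / 66.0) * AMU
print(fusion_xsec(E=25.0, VB=20.8, hw=4.0, r0=9.0, mu_c2=mu_c2), "mb")
```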
![(Color online) Cross section divided by the square of the interaction radius versus the energy divided by the estimate of the Coulomb barrier in the incoming channel ($V_{B}$) for the $^{8}$B+$^{58}$Ni and $^{11}$Be+$^{58}$Ni fusion reactions. We compare the no-coupling cases for both reactions (solid line) with the proton (dotted line) and neutron (dashed line) break-up cases.[]{data-label="redxsec"}](Fig3.pdf){width="0.8\columnwidth"} ![Barrier distributions for the $^{8}$B+$^{58}$Ni (left panels) and $^{11}$Be+$^{58}$Ni (right panels) fusion reactions, both with (dashed) and without (solid) coupling to the break-up channel. In the upper panels we show the derivative of the transmission factor for $\ell=0$, whereas in the lower panels we evaluate the second derivative of the fusion cross section times the energy.[]{data-label="bardist"}](Fig4.pdf){width="0.8\columnwidth"} In order to clarify which processes give rise to these two different behaviors, it is useful to examine the barrier distributions for both reactions. These can be obtained by evaluating the second energy derivative of the product of the cross section and the energy, or the first derivative of the transmission for $\ell=0$. Both observables are shown in Fig. \[bardist\]. A clear difference between the proton- and neutron-induced effects on fusion is found. Both cases present two barriers, as expected from Fig. \[pot\]. However, in the proton case, the secondary barrier is below the barrier in the incoming channel and thus allows a larger enhancement at low energies. Instead, in the neutron case, the secondary barrier is at a higher energy. Therefore, the neutron enhancement simply arises from the displacement towards a lower energy of the final effective Coulomb barrier. The results obtained here are similar to the effect of negative or positive $Q$-values on barrier penetration [@Das97; @Das83b].
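The extraction of a barrier distribution from an excitation function can be sketched numerically. In the snippet below, $E\sigma$ is generated from the Wong formula for a single parabolic barrier (an assumption made only so that $E\sigma$ has a closed form; the barrier parameters are illustrative), and $D(E)=d^{2}(E\sigma)/dE^{2}$ is evaluated by finite differences. For a single barrier the distribution should then peak at the barrier height.

```python
import numpy as np

# Sketch: barrier distribution D(E) = d^2(E*sigma)/dE^2 by finite differences.
# E*sigma is taken from the Wong formula for one parabolic barrier; the
# parameters VB, hw, RB below are illustrative, not fitted values.
VB, hw, RB = 20.8, 4.0, 9.0   # barrier height (MeV), curvature (MeV), radius (fm)

def e_sigma(E):
    """E*sigma from the Wong formula (units of MeV fm^2)."""
    x = 2.0 * np.pi * (E - VB) / hw
    return 0.5 * hw * RB**2 * np.log1p(np.exp(x))

E = np.linspace(12.0, 30.0, 721)                 # 25 keV energy grid
D = np.gradient(np.gradient(e_sigma(E), E), E)   # numerical second derivative
E_peak = E[np.argmax(D)]
print(f"distribution peaks at {E_peak:.2f} MeV (V_B = {VB} MeV)")
```

For a single parabolic barrier, $D(E)$ is a symmetric bump centred on $V_B$; the two-barrier structures of Fig. \[bardist\] would appear as two such bumps.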
As shown, for example, in figure 5.1 of [@Das97], the positive $Q$-value case shows the same cross section and barrier distribution as the proton break-up case, and the same parallelism is found between the negative $Q$-value and neutron break-up cases. Indeed, effective $Q$-values can be defined and compared from the difference between the energies and the barriers in each channel. This effective $Q$-value may be evaluated as $$Q_{eff}=(E_{bu}-V_{B}^{2})-(E-V_{B}^{1}),$$ where $V_{B}^{1}$ and $V_{B}^{2}$ are the energies of the Coulomb barrier for the incoming and break-up channels, respectively. Here we have also neglected the effect of the separation energy and the average excitation energy of the projectile. Taking the energies plotted in Fig. \[pot\], we obtain $Q_{eff}=1.97$ MeV for the proton case and $Q_{eff}=-1.12$ MeV for the neutron case. The exact value of $Q_{eff}$ will depend on the incoming energy. Nevertheless, it can be shown that it is always negative for the neutron case, whereas it is positive for the proton case at energies around or below the Coulomb barrier. Therefore, the differences between the channel energies and the Coulomb barriers caused by the loss of a neutron or a proton can explain the results obtained in both cases. In conclusion, the possibility of proton break-up produces an enhancement of subbarrier fusion. Similar results were also found by Nakatsukasa *et al.* [@Nak04] in a time-dependent approach. This fact can explain the enhancement recently found for the proton halo nucleus $^{8}$B [@Agu11]. This enhancement is larger than in the neutron case, and the energy dependence is also quite different. Indeed, for the neutron case, the enhancement is mainly due to a displacement in the energy of the Coulomb barrier. This can also explain why it remains unclear whether a neutron halo produces enhanced subbarrier fusion. This work has been supported by MIUR research fund PRIN 2009TWL3MX. The authors acknowledge L. F.
Canto for useful discussions.
--- abstract: 'We search for persistent and quasi-periodic release events of streamer blobs during 2007 with the Large Angle Spectrometric Coronagraph on the *Solar and Heliospheric Observatory* and assess the velocity of the slow solar wind along the plasma sheet above the corresponding streamer by measuring the dynamic parameters of blobs. We find 10 quasi-periodic release events of streamer blobs lasting for three to four days. In each day of these events, we observe three to five blobs. The results are in line with previous studies using data observed near the last solar minimum. Using the measured blob velocity as a proxy for that of the mean flow, we suggest that the velocity of the background slow solar wind near the Sun can vary significantly within a few hours. This provides an observational manifestation of the large velocity variability of the slow solar wind near the Sun.' author: - | H.Q. $^{1,2}$, Y. $^{1}$, K. $^{3}$,\ S.W. $^{1}$, L.D. $^{1}$ title: 'Quasi-Periodic Releases of Streamer Blobs and Velocity Variability of the Slow Solar Wind near the Sun' ---

Introduction
============

Sheeley *et al*. (1997) were the first to report the observation of plasma blobs released from the tips of streamers, as revealed in the data obtained by the Large Angle Spectrometric Coronagraph (LASCO) on the *Solar and Heliospheric Observatory* (SOHO) spacecraft (Brueckner *et al*., 1995) around the last solar minimum. According to the data analysis by Sheeley *et al*. (1997) and a series of following studies by Wang *et al*. (1998, 2000), blobs emerge at about 2-4 solar radii ($R_\odot$) from Sun center as radially elongated structures with initial sizes of about 1 $R_\odot$ in the radial direction and 0.1 $R_\odot$ in the transverse direction. They move outward radially, maintaining an almost constant angular span, and their lengths increase from $\approx$1 $R_\odot$ to $\approx$3 $R_\odot$ within the LASCO field of view (FOV).
Their velocities also increase gradually with increasing length. Besides understanding the plasma process accounting for the formation of blobs themselves, there are at least two other issues directly related to blob studies. The first is that blob studies provide a practical technique for assessing the velocity of the embedded solar wind, since blobs are believed to become closely coupled to, and flow outward together with, the background solar wind shortly after their emission. This component of the solar wind (*i.e.*, the wind originating from the plasma sheet above a streamer) is usually taken to be a part of the slow solar wind (*e.g.*, Woo and Martin, 1997; Sheeley *et al*., 1997; Habbal *et al*., 1997; Wang *et al*., 2000). That is to say, measurements of the blob motion can be used to represent the velocity of the embedded slow wind plasma above a certain distance. The second issue concerns the possible diagnostics of plasma properties enclosed in the closed magnetic-field regions of streamers, especially in the streamer cusp region, through *in situ* detection of blob structures in interplanetary space. This is deduced from the assumption that blobs originate from inside the closed arcades right below the streamer cusp, an assumption used or supported by several physical and numerical models of blob formation (*e.g.*, Wu *et al*., 2000; Wang *et al*., 2000; Chen *et al*., 2009). Note that there are also models suggesting that blobs are the aftermath of magnetic reconnection along the current sheet embedded in the solar wind with open magnetic geometry (*e.g.*, Einaudi *et al*., 1999; Lapenta and Knoll, 2005).
The possibility of collecting samples of plasma originating from inside the closed magnetic-field regions with *in situ* measurements is important for the evaluation of elemental compositions in the blob source region, for understanding the formation and stability of coronal streamers, and for probing the delicate coupling process between plasma and magnetic field near the cusps. Wang *et al*. (1998) reported a very interesting event with steady, quasi-periodic releases of blobs above a streamer during the eight days from 19 to 26 April 1997. The daily rate of blobs in this event was observed to be three to five, with a release period ranging from five to eight hours. To interpret the formation and quasi-periodic releases of blobs, Chen *et al*. (2009) designed a numerical model accounting for the magnetohydrodynamic coupling between the closed streamer magnetic arcades and the solar wind expansion. They found that the streamer-cusp geometry is subject to an intrinsic instability originating from the peculiar magnetic topological feature at the cusp region, despite the long-term stability of the overall morphology. According to Chen *et al*. (2009), the instability consists of two successive processes. One is the expansion of plasma and magnetic field through the localized cusp region where the field is too weak to maintain plasma confinement; the continuing expansion brings strong velocity shear into the slow wind regime, providing the free energy necessary for the onset of a streaming sausage-mode instability (Lee, Wang, and Wei, 1988; Wang, Li, and Wei, 1988). The other is the onset and nonlinear development of the streaming instability, which causes pinches of magnetic-field lines and drives reconnection at the pinching points to form separated magnetic blobs. After the birth of a blob, the streamer system returns to the configuration with a lower cusp point, subject to another cycle of the instability.
As mentioned, the whole process originates from the topological feature at the cusp region, which is intrinsically associated with a typical coronal streamer; therefore Chen *et al*. (2009) used the word “intrinsic” to describe the streamer instability. We point out in passing that other numerical models demonstrating various aspects of streamer instabilities exist in the literature (*e.g.*, Suess, Wang, and Wu, 1996; Wu *et al*., 2000; Endeve, Leer, and Holzer, 2003; Endeve, Holzer, and Leer, 2004). According to the numerical results of Chen *et al*. (2009), the period of blob formation is about four to five hours. Thus, hypothetically, one can observe four to six blobs per day on average, in agreement with what is observed by Wang *et al*. (1998). We find this agreement with the observations very encouraging considering the simplicity of the numerical model of Chen *et al*. (2009). However, in the series of blob studies by Wang *et al*. (1998, 2000), only a few events with continuous and quasi-periodic releases of blobs are reported (in April 1997 and December 1998). If the scenario proposed by Chen *et al*. (2009) is basically correct, namely that the blobs are the aftermath of an intrinsic instability of coronal streamers with release periods of several hours, there should exist more events with steady blob releases. It is the primary purpose of this paper to search for events similar to those reported by Wang *et al*. (1998) in the LASCO data. As a starting point, we only deal with the data accumulated during the whole year of 2007. As mentioned previously, the velocity measurement of blobs can be used as a proxy for that of the embedded slow solar wind along the plasma sheet. This argument is further supported by a recent numerical calculation presented in Chen *et al*. (2009).
They show that, as a result of the dynamical coupling to the mean flow, the blobs are basically accelerated to the same velocity as the flow after they propagate a further distance of 2-3 $R_\odot$ from the disconnection point. Therefore, in general, beyond a certain heliocentric distance of, say, 5 to 7 $R_\odot$, the background solar wind velocity can be well represented by that of the blobs. However, most blobs are too weak to be observable beyond 20 $R_\odot$ by the LASCO C3 coronagraph. Therefore, the major region where this method is usable is limited to 4-20 $R_\odot$. At present, there are only a few other indirect techniques, such as the Doppler dimming technique (*e.g.*, Li *et al*., 1998; Cranmer *et al*., 1999; Strachan *et al*., 2002) and the interplanetary scintillation (IPS) technique (*e.g.*, Grall *et al*., 1996; Breen *et al*., 1999), that can be used to determine the wind velocity within the first 20 $R_\odot$ of the corona. For instance, one can use the measured intensity ratio of the O VI doublet to evaluate the outflow velocities of O$^{5+}$ ions. The velocities obtained by both the Doppler dimming and IPS techniques are usually model dependent, with large errors. As mentioned, the presence of blobs provides another velocity diagnostic of the solar wind in the corona, which can be referred to as the blob technique. Among the various methods of velocity measurement in the corona, the blob technique may be the most accurate, at least in cases where blobs are clearly measurable. One serious limitation of this method is that only the projected velocity of the solar wind along the plasma sheet can be revealed. Also, it should be noted that the blob technique is based on the general assumption that blobs can be taken as effective velocity tracers of the mean flow.
Nevertheless, the second purpose of this paper is to examine the velocity of the solar wind along the plasma sheet, which is usually regarded as a part of the source region of the slow solar wind, as mentioned previously. The details of our observations and results are described in the following section. The summary and discussion are provided in the last section of this paper.

Observations and Results
========================

As already mentioned, one of the main purposes of this paper is to search for steady release events of blobs. To investigate the quasi-periodic character of blob emission, we need to observe enough blobs emitted above a streamer. Therefore, only those events with emission lasting for at least three days are reported in this study. By examining all of the white-light data taken by the LASCO coronagraph in 2007, we have identified 10 events with steady emission of blobs lasting for three to four days. Some information about these 10 events is listed in Table 1, where the number in the first column indicates the time sequence of the blob emission. In the remaining columns, we list the start and end dates of the events, the position angle (PA) of the central axis of the streamers from which blobs are released, the total number of blobs and the average daily rate released during the event, and the minimum and maximum values of the blob velocities at a specific height, say, 9 $R_\odot$. The PA increases counterclockwise and is taken to be zero in the northward direction. The varying ranges of the deduced blob accelerations are also presented in the last column of this table. The velocities and accelerations given in this table are quantities projected on the plane of the sky, obtained with a second-order polynomial fit to the measured blob tracks. The details of our data reduction method will be introduced as we proceed.

----- --------------- ----------------- ------------------ ------------------------------ --------------------
 No.  Observation      PA ($^{\circ}$)   Total number       Velocity range at              Acceleration range
      date                               /avg. daily rate   9 $R_\odot$ (km s$^{-1}$)      (m s$^{-2}$)
----- --------------- ----------------- ------------------ ------------------------------ --------------------
 1    Feb 14-16        103               11/3.7             183-356                        3.6-14.2
 2    Apr 04-07        248               12/3               191-335                        1.1-13.8
 3    Apr 25-27        288               9/3                197-299                        1.3-6.2
 4    Apr 30-May 2     246               9/3                169-298                        0.6-8.0
 5    May 07-09        71                10/3.3             240-400                        3.6-18.2
 6    Jun 05-08        67                13/3.3             173-303                        2.6-11.4
 7    Jun 13-15        119               12/4               228-379                        2.2-17.8
 8    Jun 30-Jul 2     69                10/3.3             162-287                        2.2-11.2
 9    Jul 18-20        106               11/3.7             200-334                        3.9-10.4
 10   Sep 27-29        244               9/3                192-286                        2.2-11.0
----- --------------- ----------------- ------------------ ------------------------------ --------------------

: Information on the 10 events with quasi-periodic releases of blobs lasting for three to four days observed in 2007.

The blob structures are only marginally brighter than the background coronal emission, as seen from the white-light brightness and polarization measurements by LASCO (Sheeley *et al*., 1997; Wang *et al*., 1998). Therefore, it is generally difficult to recognize a blob in the original coronagraph images. The usual way to emphasize the blob features is to make running-difference images by subtracting two successive images taken tens of minutes to one hour apart. After this procedure, the blob structures are more easily identified. They reveal themselves as radially elongated, white-leading-black bipolar islands. The white (black) color indicates a brightness increase (decrease) in the corresponding region during the elapsed interval. In the following discussion, we first introduce our data analysis method by presenting two examples observed during 13 to 15 June and during 30 June to 2 July, which are the seventh and eighth events listed in Table 1 and are referred to as Event A and Event B, respectively.
The two white-light images shown in Figures 1(a) and 1(b) were recorded at 05:18 UT on 15 June and at the same time on 1 July, where the white circle represents the surface of the Sun and the one-quarter solid disk is where the LASCO C3 occulting disk is located. The size of each image is 30 $R_\odot$ along the horizontal direction and 15 $R_\odot$ along the vertical direction. The standard routines provided with the SolarSoft software (http://www.lmsal.com/solarsoft/) are used to produce these images. A background representing the contribution of the F corona and instrumental stray light has been subtracted from each image. It can be seen that a well-defined streamer exists in the southeastern part (PA$=$119$^{\circ}$) and the northeastern part (PA$=$69$^{\circ}$) of Figures 1(a) and 1(b), respectively. The blob structures that we are interested in are emitted right atop these two streamers. To recognize the blobs clearly, in Figures 1(c) and 1(d) we present two running-difference images obtained by subtracting the images taken one hour earlier from those shown in Figures 1(a) and 1(b). The blob structures are indicated with white arrows. In Figure 1(d), two blobs are emitted successively from the streamer. To view more blob events simultaneously in one figure, we produce a temporal evolutionary map, which is the stacked time series of radial strips centered along the corresponding streamer stalk in the running-difference images. This method has been used in previous blob studies (*e.g.*, Wang *et al*., 1998; Wang *et al*., 2000). The width of the strip is taken to be about 6 pixels, and its height is given by the C3 FOV. Such height-time maps are presented in Figures 1(e) and 1(f) for the two blob events, where the abscissa represents the time of observation and the ordinate the height of the strips. It is obvious that the outward-moving blob structures are represented as white-black tracks in these height-time maps.
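The two processing steps described above (running-difference images and the stacked height-time map) can be sketched on a synthetic data cube as follows; the array shapes, cadence, and the moving-bright-feature model of a blob are illustrative assumptions only, not the LASCO pipeline.

```python
import numpy as np

# Sketch: (1) running-difference images, (2) a height-time map built by
# stacking a ~6-pixel-wide radial strip along the streamer axis.
n_t, n_y, n_x = 24, 64, 128           # frames (one per hour), image size
frames = np.zeros((n_t, n_y, n_x))
for t in range(n_t):                   # a "blob": bright patch moving in x
    frames[t, 30:36, 5 + 4 * t : 9 + 4 * t] += 1.0

# (1) running difference: subtract the image taken one step earlier
run_diff = frames[1:] - frames[:-1]    # shape (n_t-1, n_y, n_x)

# (2) height-time map: average a 6-pixel strip centred on the streamer
strip = run_diff[:, 30:36, :].mean(axis=1)     # shape (n_t-1, n_x)

# the bright (leading) edge advances linearly in time -> a sloped track
lead = np.array([np.nonzero(strip[t] > 0)[0].max() for t in range(n_t - 1)])
print("leading-edge positions:", lead[:5])
```

Each moving feature appears in `strip` as the white-leading-black track described in the text, and its slope gives the apparent speed.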
By counting the number and deducing the slope of these tracks, we can easily obtain the daily rate and the velocity profiles of the blob structures. Note that only the data obtained by LASCO C3 are analyzed in this paper. The reason for excluding the C2 data is twofold. First, the blobs are observed initially near the streamer tips, which are generally located at about 2 to 4 $R_\odot$, in the middle part of the C2 FOV. At this height, it is generally difficult to discern the blob structures even in the running-difference images, since the intensity of the background streamer emission is relatively strong. The visibility of blobs improves when they enter the C3 FOV, which starts at 3.7 $R_\odot$. Second, the C3 FOV already covers the outer part of the C2 FOV, and our main purposes in this study, to search for persistent and quasi-periodic release events of blobs and to determine the associated solar wind velocity, can be well fulfilled using the C3 data alone. As can be seen from Figures 1(e) and 1(f), there are a total of 12 blobs observed during the three days from 13 to 15 June and 10 blobs from 30 June to 2 July, with average daily rates of 4 and 3.3, respectively. By fitting the apparent blob tracks with a second-order polynomial of the form $r=r_{0}+v_{0}t+\frac{1}{2}at^{2}$, where $r_{0}$ and $v_{0}$ represent the heliocentric distance and speed at the starting point of the selected event, the constant acceleration $a$ can be determined from the quadratic fit. The temporal derivative of this equation gives the expression for the fitted blob speed, $v=v_{0}+at$. The fitted velocity profiles as a function of heliocentric distance are plotted in Figures 1(g) and 1(h) for the two events discussed. Different symbols represent the velocities of different blobs; the numbers before the symbols are ordered according to the temporal sequence of the blob occurrence.
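The quadratic track fit can be sketched on a synthetic blob track as follows; the initial speed, acceleration, and observing interval are assumed illustrative values, not measurements from the events discussed here.

```python
import numpy as np

# Sketch: fit r(t) = r0 + v0*t + (1/2)*a*t^2 with a second-order polynomial,
# then evaluate the fitted velocity profile v(t) = v0 + a*t.
RSUN_KM = 6.957e5                    # solar radius in km

# synthetic noise-free track: v0 = 180 km/s, a = 5 m/s^2 = 5e-3 km/s^2
t = np.linspace(0.0, 12.0, 30) * 3600.0             # 12 h of frames, in s
r = 4.0 * RSUN_KM + 180.0 * t + 0.5 * 5e-3 * t**2   # heliocentric distance, km

c2, c1, c0 = np.polyfit(t, r, 2)     # r ~ c2*t^2 + c1*t + c0
a_fit, v0_fit = 2.0 * c2, c1         # km/s^2 and km/s
v = v0_fit + a_fit * t               # fitted velocity profile

# velocity when the blob reaches 9 Rsun (interpolated along the fitted track)
v_at_9 = np.interp(9.0 * RSUN_KM, r, v)
print(f"v0 = {v0_fit:.1f} km/s, a = {a_fit*1e3:.2f} m/s^2, "
      f"v(9 Rsun) = {v_at_9:.0f} km/s")
```

With real tracks the (t, r) pairs come from the white-black ridges in the height-time maps, and the same fit yields the velocity and acceleration ranges listed in Table 1.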
As mentioned previously, the blob speed can be used as a proxy for that of the mean solar wind, projected onto the sky plane, beyond a heliocentric distance of about 5-7 $R_\odot$. We see that for most distances involved in Figures 1(g) and 1(h), the symbols can be regarded as velocities of both the blobs and the associated solar wind along the streamer stalks. The velocities increase gradually with increasing distance from 3.7 to 20 $R_\odot$. Also, it can be seen that the velocities at a fixed distance vary significantly from blob to blob. To indicate this, in Table 1 we present the varying ranges of the blob velocities at 9 $R_\odot$ for all events. We see that, for Event A, the minimum and maximum of the blob (or solar wind) velocities at 9 $R_\odot$ are 228 and 379 km s$^{-1}$, respectively. The relative velocity variation is 66% for this event and 77% for Event B. To reveal more details of the velocity variability, in Figure 2 we plot the fitted velocities at three heliocentric distances of 6 $R_\odot$ (squares), 9 $R_\odot$ (circles), and 12 $R_\odot$ (triangles) for Events A \[Figure 2(a)\] and B \[Figure 2(b)\]. The abscissa of this figure is the time starting from 0 UT of the first day of the event. It can be clearly seen that the speeds of different blobs at a fixed distance vary significantly with time. There are two possible physical causes of such large temporal velocity variations at a fixed distance. The first is a variation of the velocity of the local solar wind plasma, and the second is a change of the projection angle caused by solar rotation during an event. We see that there is no apparent regular pattern governing the velocity variations at the three distances. Moreover, large velocity variations can take place within a few hours. For example, for the first two blobs shown in Figure 2(a), the velocity decreases abruptly from 430 to 280 km s$^{-1}$ at 12 $R_\odot$ and from 355 to 242 km s$^{-1}$ at 9 $R_\odot$.
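The relative velocity variations quoted above follow directly from the velocity ranges at 9 $R_\odot$ listed in Table 1, computed as $(v_{max}-v_{min})/v_{min}$:

```python
# Sketch: relative velocity variation from the Table 1 ranges at 9 Rsun.
def rel_variation(v_min, v_max):
    """Relative spread (v_max - v_min) / v_min, in percent."""
    return 100.0 * (v_max - v_min) / v_min

print(f"Event A: {rel_variation(228, 379):.0f}%")   # ~66%
print(f"Event B: {rel_variation(162, 287):.0f}%")   # ~77%
```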
The two blobs are separated temporally by several hours. In such a short time, the effect of solar rotation on the projection angle is basically negligible. Besides, if the temporal change at a certain distance were caused by the projection effect, the velocity would tend either to vary monotonically or to first increase and then decrease. Therefore, we suggest that the velocity change presented in Figure 2 is mainly attributable to the velocity variability of the local solar wind plasma. It is well known that large velocity variability is one of the most apparent characteristics of the slow solar wind (*e.g.*, McComas *et al*., 2000). It has also been mentioned in the previous section that the wind along the plasma sheet above a streamer is usually regarded as one source of the slow solar wind; therefore, it is reasonable to deduce that this study provides an observational manifestation of the large velocity variability of the slow solar wind near the Sun. Using exactly the same method of data reduction as for these two events, we examine the other eight events listed in Table 1. The PA of the streamer axis and the total number and average daily rate of blobs in each event are shown in the third and fourth columns of Table 1. The height-time maps showing the blob tracks are presented in Figure 3. The time of observation is taken as the abscissa, and the height of the radial strips cropped from the series of running-difference images as the ordinate, the same as in Figures 1(e) and 1(f). We can see that the most apparent common feature of the eight panels in Figure 3 is the persistent and quasi-periodic distribution of the white-black blob tracks. During each day of these events, we observe about three to five blobs released along the stalk of the corresponding streamer. The time span between two adjacent blobs is about five to eight hours.
Note that a coronal mass ejection (CME) event was observed by LASCO C3 from 04:42 UT on 9 May, whose white-black track is obviously brighter than those of nearby blobs. For every blob release event, we have double-checked the white-light images of LASCO C3 to determine whether the white-black tracks in both Figure 3 and Figure 1 are caused by small-scale blob events or by large-scale eruptive events. It is found that all the tracks are caused by small-scale blob events except the one mentioned here, which is not included in our blob statistics. In Figure 4, we plot the velocity profiles of all the blobs shown in Figure 3. The velocities are obtained by the same method as for Events A and B. As in Figures 1(g) and 1(h), the velocities of different blobs are represented with different symbols, and the numbers in front of the symbols represent the temporal order of the blob occurrence. It can be seen that, in all these eight events, the blob velocities can vary significantly on a time scale of several hours to a few days. Again, there are no apparent patterns governing the velocity variations. For instance, from 14 to 16 February, with 11 blobs detected, the velocities at a fixed distance, say, 9 $R_\odot$, vary dramatically within a few hours from blob to blob. Specifically, the velocities of the first six blobs are 356, 229, 313, 326, 253, and 223 km s$^{-1}$. As suggested previously, such large velocity variability should be taken as a consequence of the temporal evolution of the velocity of the local slow solar wind. In other words, the data analysis results shown in both Figure 1 and Figure 3 may provide observational evidence for the presence of large velocity variability of the slow wind near the Sun. Note that the varying ranges of the fitted blob velocities at 9 $R_\odot$ are given in the fifth column of Table 1.
In addition, the sixth column of this table presents the minimum and maximum values of the fitted acceleration, which also varies significantly from blob to blob. From this analysis, we suggest that the slow solar wind near the Sun already flows outward from its source region with both a highly variable speed and a highly variable acceleration. It is apparent that both aspects may contribute to the large velocity variability of the slow wind observed *in situ* at much greater distances. We point out in passing that the PAs of the 10 streamers used in this study are distributed over a wide range, from 67 to 288 degrees. For all the blobs observed in the 10 events listed in Table 1, we plot the velocity versus height profiles in Figure 5. It can be seen that the blobs generally accelerate gradually within the LASCO C3 FOV. Their velocities increase slowly from 50-150 km s$^{-1}$ at 3.7 $R_\odot$ to 350-450 km s$^{-1}$ at 20 $R_\odot$. These statistical results are in full agreement with previous results by Wang *et al*. (1998) using data observed near the last solar minimum. We expect that more persistent and quasi-periodic blob release events will be revealed in the future.

Summary and Discussion
======================

In this paper we have examined the LASCO C3 data obtained in 2007 and found 10 persistent and quasi-periodic blob release events lasting for three to four days. The average daily rate of blobs is found to be three to five, in agreement with previous studies for the last solar minimum. It is found that the velocities of blobs vary significantly from blob to blob over a time scale of several hours to a few days. Taking the fitted blob speed beyond a certain distance as a proxy for that of the mean flow, we suggest that the large velocity variability, one of the most apparent signatures of the slow solar wind observed *in situ*, may develop near the Sun, say, within the first tens of solar radii.
Sheeley, Wang, and coauthors (Sheeley *et al*., 1997; Wang *et al*., 1998; Wang *et al*., 2000) reported a few persistent blob release events around the last solar minimum. To interpret such steady blob releases from the tip of a streamer, Chen *et al*. (2009) proposed that the closed magnetic field geometry associated with a streamer cusp can become unstable to the expansion of the hot coronal plasmas, which results in a so-called intrinsic instability of coronal streamers and the formation of blobs. For more details of this process, refer to the first section of this paper or to Chen *et al*. (2009). The modeled number density and velocity signatures, and even the daily rate of blobs, are in agreement with previous observations. However, it is also apparent that not all streamers are associated with blobs. There are several possible reasons for this: *i*) The excitation and nonlinear development of the above-mentioned instability require certain specific physical conditions that develop over time. Blobs are not released (in other words, the instability does not develop, or does not develop fully) if the required conditions are not fulfilled or if the development is disturbed by other coronal activities such as CMEs. *ii*) The brightness of the blob structures is only marginally higher than that of the background plasmas, so some blobs, even if released, are not observable owing to the limited resolution of current coronagraphs and interference from instrumental backgrounds (*e.g.*, stray light). *iii*) The blob signature may be obscured by other structures or eruptive phenomena in the foreground or background corona along the line of sight. The measurements of the dynamical parameters of the blob structures provide an important complement to the other state-of-the-art techniques aimed at velocity diagnostics of the solar wind near the Sun. 
It may be expected that a distribution map of the solar wind velocities in the outer corona can be coarsely delineated once enough data are accumulated. Although the flow velocity along the streamer stalk is provided only within heights ranging from a few solar radii to about 20 $R_\odot$, it is still useful for constraining the solar wind conditions in the outer corona. These constraints may help in establishing the background conditions that can be used in models of CME initiation and propagation, as well as in some space weather forecasting models. From Figure 2, we see that the velocities of successive blobs at a fixed distance can vary significantly within a few hours. Assuming streamer blobs to be velocity tracers of the slow solar wind along the plasma sheet, we therefore deduce that the large velocity variability observed *in situ* in the slow solar wind is already manifested near the Sun. A natural question is how the blob velocity variability compares with that of the slow solar wind. To address this question, we examined the solar wind velocity data obtained by, for example, the *Ulysses*/SWOOPS instrument and found that large velocity variations similar to those presented in Figure 2 are not unusual. However, we point out that such comparisons should be conducted very carefully to reach a physically meaningful conclusion. This is mainly due to the large distance the solar wind plasmas must travel from their source region to the point of *in situ* measurement. The original velocity profiles, as revealed by the blob observations, may undergo significant changes caused by the intrinsic dynamical evolution of the wind and its coupling with nearby solar wind plasmas. The plasmas and magnetic structures associated with eruptive transient events, such as magnetic clouds, may also contribute to reshaping the solar wind velocity profiles. 
Therefore, comparison between the blob variability and the slow wind variability is in general not a trivial task, and so will not be discussed further here. Another very interesting and meaningful study would be to search for the counterparts of the blob structures in interplanetary space with *in situ* data. As mentioned in the introduction, there are models suggesting that the blobs originate from closed-field regions below the streamer cusp or along the current sheet in the open magnetic geometry; therefore, the identification of the *in situ* blob counterpart would help discriminate between the different formation mechanisms of blobs and assess the plasma properties in the region near the streamer cusp. Many spacecraft, such as *Ulysses*, SOHO, *Wind*, and ACE, as well as the recently launched STEREO (Kaiser *et al*., 2008; Galvin *et al*., 2008), have already accumulated enough data appropriate for this study. The *in situ* counterpart of a blob could be recognized by examining the elemental composition and abundance, ionic temperature, and charge-state distribution, as well as the magnetic-field geometry of the structures carried by the solar wind. This study should be conducted in the future. The SOHO/LASCO data used here are produced by a consortium of the Naval Research Laboratory (USA), Max-Planck-Institut für Aeronomie (Germany), Laboratoire d’Astronomie (France), and the University of Birmingham (UK). SOHO is a project of international cooperation between ESA and NASA. This work was supported by grants NNSFC 40774094, 40825014, 40890162, and NSBRSF G2006CB806304 and by the Specialized Research Fund for the State Key Laboratory of Space Weather in China. H.Q. Song is grateful to C.L. Shen, X.H. Zhao, and H.D. Chen for their assistance in preparing this paper. Breen, A.R., Mikic, Z., Linker, J.A., Lazarus, A.J., Thompson, B.J., Biesecker, D.A., Moran, P.J., Varley, C.A., Williams, P.J.S., Lecinski, A.: 1999, *J. Geophys. Res.* **104**, 9847. 
Brueckner, G.E., Howard, R.A., Koomen, M.J., Korendyke, C.M., Michels, D.J., Moses, J.D., Socker, D.G., Dere, K.P., Lamy, P.L., Llebaria, A., *et al*.: 1995, *Solar Phys.* **162**, 357. Chen, Y., Li, X., Song, H.Q., Shi, Q.Q., Feng, S.W., Xia, L.D.: 2009, *Astrophys. J.* **691**, 1936. Cranmer, S.R., Kohl, J.L., Noci, G., Antonucci, E., Tondello, G., Huber, M.C.E., Strachan, L., Panasyuk, A.V., Gardner, L.D., Romoli, M., *et al*.: 1999, *Astrophys. J.* **511**, 481. Einaudi, G., Boncinelli, P., Dahlburg, R.B., Karpen, J.T.: 1999, *J. Geophys. Res.* **104**, 521. Endeve, E., Holzer, T.E., Leer, E.: 2004, *Astrophys. J.* **603**, 307. Endeve, E., Leer, E., Holzer, T.E.: 2003, *Astrophys. J.* **589**, 1040. Galvin, A.B., Kistler, L.M., Popecki, M.A., Farrugia, C.J., Simunac, K.D.C., Ellis, L., Möbius, E., Lee, M.A., Boehm, M., Carroll, J., *et al*.: 2008, *Space Sci. Rev.* **136**, 437. Grall, R.R., Coles, W.A., Klinglesmith, M.T., Breen, A.R., Williams, P.J.S., Markkanen, J., Esser, R.: 1996, *Nature* **379**, 429. Habbal, S.R., Woo, R., Fineschi, S., O’Neal, R., Kohl, J., Noci, G., Korendyke, C.: 1997, *Astrophys. J.* **489**, L103. Kaiser, M.L., Kucera, T.A., Davila, J.M., St. Cyr, O.C., Guhathakurta, M., Christian, E.: 2008, *Space Sci. Rev.* **136**, 5. Lapenta, G., Knoll, D.A.: 2005, *Astrophys. J.* **624**, 1049. Lee, L.C., Wang, S., Wei, C.Q.: 1988, *J. Geophys. Res.* **93**, 7354. Li, X., Habbal, S.R., Kohl, J.L., Noci, G.: 1998, *Astrophys. J.* **501**, L133. McComas, D.J., Barraclough, B.L., Funsten, H.O., Gosling, J.T., Santiago-Muñoz, E., Skoug, R.M., Goldstein, B.E., Neugebauer, M., Riley, P., Balogh, A.: 2000, *J. Geophys. Res.* **105**, 10419. Sheeley, N.R., Wang, Y.M., Hawley, S.H., Brueckner, G.E., Dere, K.P., Howard, R.A., Koomen, M.J., Korendyke, C.M., Michels, D.J., Paswaters, S.E., *et al*.: 1997, *Astrophys. J.* **484**, 472. Strachan, L., Suleiman, R., Panasyuk, A.V., Biesecker, D.A., Kohl, J.L.: 2002, *Astrophys. J.* **571**, 1008. 
Suess, S.T., Wang, A.H., Wu, S.T.: 1996, *J. Geophys. Res.* **101**, 19957. Wang, S., Lee, L.C., Wei, C.Q.: 1988, *Phys. Fluids* **31**, 1544. Wang, Y.M., Sheeley, N.R., Socker, D.G., Howard, R.A., Rich, N.B.: 2000, *J. Geophys. Res.* **105**, 25133. Wang, Y.M., Sheeley, N.R., Walters, J.H., Brueckner, G.E., Howard, R.A., Michels, D.J., Lamy, P.L., Schwenn, R., Simnett, G.M.: 1998, *Astrophys. J.* **498**, L165. Woo, R., Martin, J.M.: 1997, *Geophys. Res. Lett.* **24**, 2535. Wu, S.T., Wang, A.H., Plunkett, S.P., Michels, D.J.: 2000, *Astrophys. J.* **545**, 1101. ![Two examples of quasi-periodic releases of blobs observed by LASCO C3, which are referred to as Event A (13-15 June) and Event B (30 June-2 July) in the text. In the eight panels, we show: instantaneous, background-subtracted images recorded at 0518 UT on 15 June (a) and 1 July (b) by C3, cropped to 30 $R_\odot$ in the horizontal direction and 15 $R_\odot$ in the vertical direction; the difference of images taken at 0518 and 0418 UT on 15 June (c) and 1 July (d), with the same size as panels a and b; height-time tracks of blobs for Events A (e) and B (f), which are produced by stacking radial strips centered along the streamer axis extracted from successive running-difference images; and the fitted blob velocities as a function of heliocentric distance (panels g and h for Events A and B). See text for more details.](Fig1.eps){width="100.00000%"} ![The fitted velocities at three heliocentric distances, 6 $R_\odot$ (squares), 9 $R_\odot$ (circles), and 12 $R_\odot$ (triangles), for Events A (a) and B (b). The abscissa is the time starting from 0 UT of the first day of the event.](Fig2.eps){width="100.00000%"} ![Height-time tracks of blobs for the eight events listed in Table 1. 
The images are produced by stacking radial strips centered along the streamer axis extracted from successive running-difference images of C3.](Fig3.eps){width="100.00000%"} ![The fitted velocities of blobs as a function of heliocentric distance for the eight events listed in Table 1.](Fig4.eps){width="100.00000%"} ![Scatterplot of velocity versus heliocentric distance for the 106 blobs observed in the 10 events listed in Table 1.](Fig5.eps){width="100.00000%"}
--- abstract: 'Polar codes are recursive general concatenated codes. This property motivates a recursive formalization of the known decoding algorithms: Successive Cancellation, Successive Cancellation with Lists and Belief Propagation. This description allows an easy development of the first two algorithms for arbitrary polarizing kernels. Hardware architectures for these decoding algorithms are also described in a recursive way, both for Arikan’s standard polar codes and for arbitrary polarizing kernels.' author: - 'Noam Presman and Simon Litsyn[^1]' bibliography: - 'IEEEabrv.bib' - 'bibTexPolar.bib' title: Recursive Descriptions of Decoding Algorithms and Hardware Architectures for Polar Codes --- Introduction ============ Polar codes were introduced by Arikan [@Arikan] and provide a scheme for achieving the symmetric capacity of binary memoryless channels (B-MC) with polynomial encoding and decoding complexities. Arikan used the so-called $(u+v,v)$ construction, which is based on the following linear kernel $$G_2 = \left( \begin{array}{cc} 1 & 0 \\ 1 & 1 \\ \end{array} \right).$$ In this scheme, a $2^n\times2^n$ matrix, $G_2^{\bigotimes n}$, is generated by taking the $n$-fold Kronecker power of $G_2$. An input vector $\bf u$ of length $N=2^n$ is transformed into a length-$N$ vector $\bf x$ by multiplying a certain permutation of the vector $\bf u$ by $G_2^{\bigotimes n}$. The vector $\bf x$ is transmitted through $N$ independent copies of the memoryless channel, $W$. This results in $N$ new (dependent) channels between the individual components of $\bf u$ and the outputs of the channels. Arikan showed that these channels exhibit the phenomenon of polarization under Successive Cancellation (SC) decoding. This means that as $n$ grows, a proportion $I(W)$ (the symmetric channel capacity) of the channels become clean channels (i.e. having capacity approaching $1$) and the rest of the channels become completely noisy (i.e. 
with the capacity approaching $0$). Arikan showed that the SC decoding algorithm has a time and space complexity of $O(N\cdot \log(N))$ (the same complexity also holds for the encoding algorithm). Furthermore, it was shown [@Arikan2] that, asymptotically in the block length $N$, the block error probability of this scheme decays to zero like $O(2^{-\sqrt{N}})$. Generalizations of Arikan’s code structures were soon to follow. Korada *et al.* considered binary linear kernels [@Korada]. They showed that a binary linear kernel is polarizing if and only if its corresponding generating matrix cannot be column-permuted into an upper-triangular matrix, and analyzed the rate of polarization by introducing the notion of the kernel exponent. Mori and Tanaka considered the general case of a mapping $g(\cdot)$, not necessarily linear or binary, as a basis for channel polarization constructions [@Mori2010]. They gave sufficient conditions for polarization and generalized the exponent to these cases. They further showed examples of non-binary kernels based on Reed-Solomon and Algebraic Geometry codes with exponents that are far better than the exponents of the known binary kernels [@MoriandTanka3]. The authors of this correspondence gave examples of binary but non-linear kernels having the optimal exponent for their kernel dimensions [@PrShLi2]. All of these constructions used homogeneous kernels, meaning that the alphabets of their inputs and outputs were the same. The authors of this correspondence also considered the case in which some of the inputs of a kernel have a different alphabet than the rest of the inputs [@Presman2011]. This results in the so-called mixed kernel structure, which has demonstrated good performance for finite length codes in many cases. A further generalization of the polar code structure was suggested by Trifonov [@Trifonov2011], in which the outer polar codes are replaced by suitable codes along with their appropriate decoding algorithms. 
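The polarization phenomenon described above can be illustrated numerically. For the BEC, the two channels synthesized by the $G_2$ kernel are again erasure channels, with erasure probabilities $2z-z^2$ (the worse, "$u$" channel) and $z^2$ (the better, "$v$" channel); this closed-form recursion is standard, and the sketch below simply iterates it:

```python
def polarize_bec(z, n):
    """Track the erasure probabilities of the 2^n channels synthesized
    by applying the G2 recursion n times to a BEC with erasure prob z."""
    zs = [z]
    for _ in range(n):
        # each channel splits into a worse (2z - z^2) and a better (z^2) one
        zs = [f(x) for x in zs for f in (lambda x: 2 * x - x * x,
                                         lambda x: x * x)]
    return zs

zs = polarize_bec(0.5, 10)
# the mean erasure probability is conserved exactly, since
# ((2z - z^2) + z^2) / 2 = z, while the extremes drift toward 0 and 1
mean = sum(zs) / len(zs)
```

After ten polarization levels the best channel is essentially noiseless and the worst is essentially useless, while the average erasure probability remains that of the underlying channel.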
We note here that the representation of polar codes as instances of general concatenated codes (GCC) is fundamental to this correspondence, and we elaborate on it in the sequel. Generalizations of, and alternatives to, SC as the decoding algorithm were also studied. Tal and Vardy introduced the Successive Cancellation List (SCL) decoder [@Tal11; @Tal2012]. In this algorithm, the decoder considers up to $M$ concurrent decoding paths at each of its stages, where $M$ is the size of the list. At the final stage of the algorithm, the most likely result is selected from the list. The asymptotic time and space complexities of this decoder are the same as those of the standard SC algorithm, multiplied by $M$. Furthermore, the introduction of a cyclic redundancy check (CRC) code as an outer code results in a scheme with excellent error-correcting performance, which is sometimes comparable with state-of-the-art schemes (see e.g. [@Tal2012 Section V]). Bonik *et al.* suggested using a separate CRC and a different list size for each outer code in the GCC structure of the polar code. This approach seems to give better results compared to the standard list approach with the same average list size. Finally, Li *et al.* [@Li2012] suggested an iterative SCL-with-CRC algorithm in which the decoder doubles the list size and restarts the algorithm if, at the end of the SCL algorithm, no result satisfies the CRC. Here again, excellent performance is achieved with a limited average list size, and the scheme outperforms Tal and Vardy’s original approach. Note, however, that here the average time and space complexity (rather than the worst-case complexity) is the basis for comparison between the approaches. Belief Propagation (BP) is an alternative to the SC decoding algorithm. This is a message-passing iterative decoding algorithm that operates on the normal factor graph representation of the code. 
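The list-management step at the core of the SCL decoder described above can be sketched in isolation. The sketch below uses the common llr-based path-metric approximation (a path pays a penalty of $|\lambda|$ whenever its bit decision disagrees with the sign of the llr); the function names and this particular metric are illustrative and are not Tal and Vardy's exact formulation:

```python
import heapq

def extend_and_prune(paths, llr, M):
    """Fork every path on the next information bit, update the
    (approximate) path metrics, and keep the M best paths.
    Each path is a tuple (metric, bits); a larger metric is more likely.
    Convention: llr = log(P(bit=0)/P(bit=1))."""
    children = []
    for metric, bits in paths:
        for b in (0, 1):
            # penalty |llr| if the hypothesized bit disagrees with the llr sign
            penalty = abs(llr) if (llr > 0) == (b == 1) else 0.0
            children.append((metric - penalty, bits + [b]))
    return heapq.nlargest(M, children, key=lambda p: p[0])

paths = [(0.0, [])]
for llr in (2.0, -1.0, 0.5):
    paths = extend_and_prune(paths, llr, M=2)
best_metric, best_bits = max(paths, key=lambda p: p[0])
```

Following the llr signs incurs no penalty, so the surviving best path here is the plain SC decision sequence with metric zero; the list retains runner-up paths that a CRC check could later promote.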
It is known to outperform SC over the Binary Erasure Channel (BEC) [@Hussami2009], and seems to have good performance on other channels as well [@Hussami2009; @Arikan3]. Leroux *et al.* considered efficient hardware implementations of the SC decoder for the $(u+v,v)$ polar code [@Leroux10; @Leroux2012]. They gave an explicit design of a “line decoder” with $N/2$ processing elements and $O(N)$ memory elements. Their work contains an efficient approximate min-sum decoder and a discussion of a fixed-point implementation. Their design is verified by an ASIC synthesis. Pamuk considered a hardware design of a BP decoder tailored for an FPGA implementation [@Pamuk2011]. The goal of this paper is to emphasize the formalization of polar codes as recursive GCCs and the implications of this structure for the decoding algorithms. The main contributions of this correspondence are as follows: 1) Formalizing Tal and Vardy’s SCL as a recursive algorithm, and thereby generalizing it to arbitrary kernels. 2) Formalizing Leroux *et al.*’s SC line decoder and generalizing it to arbitrary kernels. 3) Defining a BP decoder with a GCC schedule, and suggesting a BP line architecture for it. The paper is organized as follows. In Section \[sec:Prelim\], we describe polar code kernels as the generating building blocks of polar codes. We then elaborate on the fact that polar codes are examples of recursive GCC structures. This fundamental notion is the motivation for formalizing the decoding algorithms in a recursive fashion in Section \[sec:RecDescOfDecAlgor\]. Specifically, we do this for the standard SC, the SCL (both for Arikan’s kernels and arbitrary ones) and BP (for Arikan’s kernel using the GCC schedule). These formalizations lay the ground for hardware architectures of the decoding algorithms in Section \[sec:HrdwreArikConstr\]. Specifically, we restate Leroux *et al.*’s SC pipeline and SC line decoders, and introduce a line decoder for the GCC schedule of the BP algorithm. 
Finally, in Section \[sec:HardArchiForOthKer\], we consider generalizations of these architectures to arbitrary kernels. Preliminaries {#sec:Prelim} ============= Throughout, we use the following notations. Vectors are denoted by bold letters, for example ${\bf u}$. For $i\geq j$, let ${\bf u}^i_j=(u_j,...,u_i)$ be the sub-vector of a vector ${\bf u}$, of length $i-j+1$ (if $i<j$, we say that ${\bf u}^i_j=()$, the empty vector, of length $0$). In this paper we consider kernels that are based on bijective transformations over a field $F$. A channel polarization kernel of dimension ${\ell}$, denoted by $g(\cdot)$, is a mapping $$g:F^{{\ell}}\rightarrow F^{{\ell}}.$$ This means that $g({\bf u})={\bf x}$, where ${\bf u}, {\bf x} \in F^{{\ell}}$. Denote the output components of the transformation by $$g_i({\bf u})=x_i, \,\,\,\, 0 \leq i \leq \ell-1.$$ We note that this type of kernel is referred to as a *homogeneous kernel*, because its $\ell$ input coordinates and $\ell$ output coordinates are over the same alphabet $F$. The homogeneous kernel $g(\cdot)$ may generate a polar code of length $\ell^m$ by inducing a larger mapping from it, in the following way [@Mori2010; @Presman2011]. \[def:constructG2\] Given a transformation $g(\cdot)$ of dimension ${\ell}$, we construct a mapping $g^{(m)}(\cdot)$ of dimension ${\ell}^m$ (i.e. $g^{(m)}(\cdot):F^{{\ell}^m}\rightarrow F^{{\ell}^m}$) in the following recursive fashion. 
$$g^{(1)}({\bf u}_0^{\ell-1})=g({\bf u}_0^{\ell-1})\,\,\,;$$ $$g^{(m)}({\bf u}_0^{\ell^m-1})=\Big[ g^{(1)}\left(\gamma_{0,0}, \gamma_{1,0}, \gamma_{2,0}, \ldots, \gamma_{\ell-1,0}\right),$$ $$\,\,\,\,\,\,\,g^{(1)}\left(\gamma_{0,1}, \gamma_{1,1}, \gamma_{2,1}, \ldots, \gamma_{\ell-1,1}\right),\ldots,$$ $$\,\,\,\,\,\,\,g^{(1)}\left(\gamma_{0, {\ell}^{m-1}-1}, \gamma_{1, {\ell}^{m-1}-1}, \gamma_{2, {\ell}^{m-1}-1}, \ldots, \gamma_{\ell-1,{\ell}^{m-1}-1}\right) \Big],$$ where $$\gamma_{i,j}=g_j^{(m-1)}\left({\bf u}_{ i \cdot {\ell^{m-1}}}^{(i+1)\cdot {\ell}^{m-1}-1}\right) \,\,\,\,\, 0\leq i\leq {\ell}-1, \,\,\,\,\,\, 0 \leq j \leq {\ell}^{m-1}-1.$$ General Concatenated Codes (GCC)[^2] are error correcting codes that are generated by a construction technique introduced by Blokh and Zyablov [@Blokh1974] and Zinoviev [@Zinoviev1976]. In this construction, we have $\ell$ outer codes $\left\{\mathcal{C}_r\right\}_{r=0}^{\ell-1}$, where $\mathcal{C}_r$ is a length-$N_{out}$ code of size $M_r$ over alphabet $F_r$. We also have an inner code of length $N_{in}$ and size $\prod_{r=0}^{\ell-1}|F_r|$ over alphabet $F$, with a nested encoding function $\phi : F_0\times F_1 \times ... \times F_{\ell-1} \rightarrow F^{N_{in}}$. The GCC generated by these components is a code of length $N_{out}\cdot N_{in}$ and of size $\prod_{r=0}^{\ell-1}M_r$. It is created by taking an $\ell\times N_{out}$ matrix, in which the $r^{th}$ row is a codeword of $\mathcal{C}_r$, and applying the inner mapping $\phi$ to each of the $N_{out}$ columns of the matrix. As Dumer describes in his survey [@DumerConcatCodes], GCCs can give good code parameters for short length codes due to a good combination of outer codes and a nested inner code. In fact, some of them give the best parameters known. Moreover, the decoding algorithms associated with them commonly utilize this structure by performing local decoding steps on the (short) outer codes and exchanging decisions via the inner code decoding. 
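The GCC encoding rule just described (stack the $\ell$ outer codewords as the rows of a matrix and apply the nested inner mapping column by column) translates directly into code. A minimal sketch, using Arikan's $(u+v,v)$ kernel as the inner mapping:

```python
def gcc_encode(outer_codewords, inner_map):
    """General concatenated encoding: the r-th row of the matrix is a
    codeword of outer code C_r; the inner mapping is applied to each
    of the N_out columns, producing a code of length N_out * N_in."""
    n_out = len(outer_codewords[0])
    x = []
    for j in range(n_out):
        column = tuple(row[j] for row in outer_codewords)
        x.extend(inner_map(column))
    return x

# inner code: Arikan's (u+v, v) kernel over GF(2)
g2 = lambda u: ((u[0] + u[1]) % 2, u[1])
codeword = gcc_encode([[1, 0, 1], [1, 1, 0]], g2)
```

Here two outer codewords of length $3$ and the dimension-$2$ inner kernel yield a codeword of length $6$, with consecutive output pairs coming from consecutive columns.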
As Arikan already noted, polar codes are examples of recursive GCCs [@Arikan Section I.D]. This observation is useful, as it allows the construction of a large length polar code to be formalized as a concatenation of several smaller length polar codes (outer codes) by using a kernel mapping (an inner code). Therefore, applying this notion to Definition \[def:constructG2\], we see that a polar code of length $\ell^m$ may be regarded as a collection of $\ell$ outer polar codes of length $\ell^{m-1}$. These codes are then joined together by applying an inner code (defined by the mapping $g^{(1)}(\cdot)$) on the outputs of these outer codes. This idea is illustrated in Figure \[fig: def2GCC\]. In this figure, we see the $\ell$ outer codewords of length $\ell^{m-1}$ organized in the $\ell$ rows of the matrix. The inner codeword mapping is depicted as the vertical rectangle that is located on top of them. This is appropriate, as this mapping operates on the columns of the matrix whose rows are the outer codewords. Note that, for brevity, we only drew one instance of the inner mapping, but there should be $\ell^{m-1}$ instances of it, one for each column of this matrix. In the homogeneous case, the outer codes themselves are constructed in the same manner. Although the outer codes have the same structure, they are different in the general case, because they may have different sets of frozen bits. \ Let ${\bf u}$ be a binary vector of length $N=2^m$. The vector $\bf u$ is transformed into a length-$N$ vector $\bf x$ by using a bijective mapping $g(\cdot):\{0,1\}^{N}\rightarrow \{0,1\}^{N}$. 
The transformation is defined recursively as $$\text{for } m=1\,\,\,\,\, g^{(1)}({\bf u})=\left[u_0+u_1,u_1\right]$$ $$\label{eq:Constr} \text{for } m>1\,\,\,\,\, g^{(m)}({\bf u})=\left[v_0,w_0,v_1,w_1,...,v_{N/2-1},w_{N/2-1} \right]={\bf x}\,\,\,\,,$$ where ${\bf v}_{0}^{N/2-1}=g^{(m-1)}\left({{\bf u}_{0}^{N/2-1}}\right)+g^{(m-1)}\left({{\bf u}_{N/2}^{N-1}}\right)$ and ${\bf w}_{0}^{N/2-1}=g^{(m-1)}\left({{\bf u}_{N/2}^{N-1}}\right)$. See also Figure \[fig:uvExample\]. \ In mixed kernel constructions, the outer codes are not necessarily from the same family of polar codes. For example, if we take the first kernel $g_1(u_0,u_{1,2},u_3)={\bf x}_0^3\in\{0,1\}^4$ and define the RS kernel as $g_2(u_{0,1},u_{2,3},u_{4,5},u_{6,7})={\bf x}_0^3\in\left(\{0,1\}^2\right)^4$ [@Presman2011], then the general concatenated construction is given in Figure \[fig: mixed kernel\]. \ Now, note that using the $g_2^{(m)}$ mapping over a binary channel is like using a concatenated scheme in which the inner code is the standard binary full space mapping. It can be observed that the mapping in Figure \[fig: mixed kernel\] has more potential in transforming between the alphabets used. This concept may be further generalized by replacing some of the outer polar codes with other types of codes (see e.g. Trifonov’s proposal [@Trifonov2011]). The recursive GCC structure of polar codes calls for a recursive formalization of the algorithms associated with them. These algorithms enjoy a simple and clear description, which may lead to an elegant analysis. Furthermore, in some cases it allows reuse of resources and indicates which operations may be done in parallel. The recursive encoding algorithm has already been described in Definition \[def:constructG2\]. The recursive decoding algorithms are described in the next section. 
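The recursive $(u+v,v)$ definition above translates directly into a short encoder; a minimal sketch for the binary case:

```python
def polar_encode(u):
    """Recursive (u+v, v) encoding of a binary vector of length 2^m:
    the output interleaves v_i (sum of the encoded halves) and
    w_i (the encoded second half), matching x = [v_0, w_0, v_1, w_1, ...]."""
    n = len(u)
    if n == 1:
        return list(u)
    a = polar_encode(u[:n // 2])   # g^(m-1) applied to the first half
    b = polar_encode(u[n // 2:])   # g^(m-1) applied to the second half
    x = []
    for ai, bi in zip(a, b):
        x.append((ai + bi) % 2)    # v_i = a_i + b_i over GF(2)
        x.append(bi)               # w_i = b_i
    return x
```

For instance, the last input bit has the all-ones row of the transform, while the first input bit maps to a weight-one codeword when the rest of the inputs are zero.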
Recursive Descriptions for Decoding Algorithms of Polar Codes {#sec:RecDescOfDecAlgor} ============================================================= In this section, we describe decoding algorithms for polar codes in a recursive framework that is induced from their recursive GCC structure. Roughly speaking, all the algorithms we consider here have a similar format. Consider the GCC structure of Definition \[def:constructG2\]. This means that we have a length-$N$ code that is composed of $\ell$ layers of outer codes, denoted by $\left\{\mathcal{C}_r\right\}_{r=0}^{\ell-1}$, each one of length $N/\ell$. The decoding algorithms we consider here are composed of $\ell$ pairs of steps. Pair number $r$ is dedicated to decoding $\mathcal{C}_{r-1}$, in the following way. STEP $2\cdot r -1$ : \ Using the results of the previous steps, prepare the inputs to the decoder of code $\mathcal{C}_{r-1}$. STEP $2\cdot r$ : \ Call the decoder of code $\mathcal{C}_{r-1}$ on the input you’ve prepared. Process the output of this decoder, together with the outputs of the previous steps. Typically, the codes $\left\{\mathcal{C}_r\right\}_{r=0}^{\ell-1}$ are polar codes of length $N/\ell$, thereby creating the recursive structure of the decoding algorithm. It should be noted that the above decoding format is quite common for decoding algorithms of GCCs. As an example, see the decoding algorithms in Dumer’s survey on GCCs [@DumerConcatCodes]. In addition, the recursive decoding algorithms for Reed-Muller (RM) codes, utilizing their Plotkin $(u+v,v)$ recursive GCC structure, were extensively studied by Dumer [@Dumer2006; @Dumer2006b] and are closely related to the algorithms we present here. Actually, Dumer’s simplified decoding algorithm for RM codes [@Dumer2006b Section IV] is the SC decoding for Arikan’s structure, which we describe in Subsection \[sec:recSCDec\]. 
The algorithms we describe in a recursive fashion are the SC (Subsection \[sec:recSCDec\]), Tal and Vardy’s SCL (Subsection \[sec:SCListDecoding\]) and BP (Subsection \[sec:BP\]). For all of these algorithms, we first consider Arikan’s $(u+v,v)$ code. For the first two algorithms we also provide generalizations to other kernels, both homogeneous and mixed. We note that, when possible, we prefer that the inputs to the algorithm and the internal computations be interpreted as log likelihood ratios (llrs). Thus, the SC algorithms and the BP are described in such a manner, but in SCL decoding we use likelihoods instead of llrs. Furthermore, in our discussion we do not consider how to efficiently compute these quantities. In some cases, especially with large kernels or with a large alphabet size, these computations pose a computational challenge. Approaches to address this challenge include efficient decoding algorithms (such as variants of the Viterbi algorithm) and approximations of the computations (for example, the min-sum approximation that Leroux *et al.* used [@Leroux2012] or the near Maximum Likelihood (ML) decoding algorithms that were used by Trifonov [@Trifonov2011]). A Recursive Description of the SC Algorithm {#sec:recSCDec} ------------------------------------------- We begin by considering the SC decoder for Arikan’s $(u+v,v)$ construction, and then generalize it to arbitrary kernels. First, let us describe the decoding algorithm for a length $N=2$ code, i.e. for the basic kernel $g^{(1)}(u,v)=(u+v,v)\equiv(a,b)$. We get as input $[\lambda_a,\lambda_b]$, the log likelihood ratios (llrs) of the outputs of the channel ($\lambda_{a}$ corresponds to the first output of the channel and $\lambda_{b}$ to the second). The algorithm has four steps. STEP I : \ Compute the llr of $u$, $L_u = 2\tanh^{-1}\left(\tanh(\lambda_a/2)\tanh(\lambda_b/2)\right)$. STEP II : \ Decide on $u$ (denote the decision by $\hat{u}$). 
STEP III : \ Compute the llr of $v$ (given the estimate $\hat{u}$): $L_v = (-1)^{\hat{u}}\cdot\lambda_a+\lambda_b$. STEP IV : \ Decide on $v$ (denote the decision by $\hat{v}$). It should be noted that steps II and IV may be done based on the llrs computed in steps I and III (i.e. by their sign), or by using additional side information (for example, if $u$ is frozen, then the decision is based on its known value). Now, to describe an SC decoder of length $N=2^{n}$, let us assume that we have already developed an SC decoder for a length-$N/2$ polar code. We assume that the length-$N$ decoder gets as input $N$ channel output llrs, $\{\lambda_{i}\}_{i=0}^{N-1}$, and the frozen bit indices. The decoder outputs the estimate of the information (unfrozen) bits and the estimate of the codeword that was sent over the channel. For convenience, we assume that the estimate of the information word is a length-$N$ vector (denoted by ${\bf u}$) which also includes the values of the frozen bits. A decoder for a length-$N$ polar code contains the following steps. STEP I : \ Partition the llr vector into pairs of consecutive llr values $\left\{(\lambda_{2i},\lambda_{2i+1}) \right\}_{i=0}^{N/2-1}$. Compute the llr input vector, ${\bf L}_0^{N/2-1}$, for the first outer code such that $$L_i= 2\tanh^{-1}\left(\tanh(\lambda_{2i}/2)\tanh(\lambda_{2i+1}/2)\right)\,\,\,\,, 0 \leq i\leq N/2 -1.$$ STEP II : \ Give the vector ${\bf L}_0^{N/2-1}$ as input to the polar code decoder of length $N/2$. Also provide to the decoding algorithm the indices of the frozen bits from the first half of the codeword (corresponding to the first outer code). Assume that the decoder outputs ${\bf u}^{(0)} $ as the estimate of the information word, and ${\bf x}^{(0)}$ as the estimate of the first outer polar codeword of length $N/2$. Both of them are vectors of length $N/2$. Then, we can output ${\bf u}^{(0)}$ as ${\bf{ u}}_{0}^{N/2-1}$ (the first half of the estimated information word). 
STEP III : \ Using, again, the input llr pairs and ${\bf x}^{(0)}$ as the estimate of the first outer polar codeword, prepare the llr input vector for the second outer code, ${\bf L}_0^{N/2-1}$, such that $$L_i = (-1)^{x^{(0)}_i}\cdot \lambda_{2i}+\lambda_{2i+1}\,\,\,\,, 0 \leq i\leq N/2 -1.$$ STEP IV : \ Give the vector ${\bf L}_0^{N/2-1}$ as input to the polar code decoder of length $N/2$, together with the indices of the frozen bits from the second half of the codeword (corresponding to the second outer code). Assume that the decoder outputs ${\bf u}^{(1)}$ as the estimate of the information word, and ${\bf x}^{(1)}$ as the estimate of the second outer polar codeword of length $N/2$. Then, we can output ${\bf u}^{(1)}$ as ${\bf u}_{N/2}^{N-1}$ (the second half of the estimated information word). Construct the estimate of the codeword as follows: ${\bf x}=\left [\left\{x^{(0)}_i+x^{(1)}_i,x^{(1)}_i\right\}_{i=0}^{N/2-1}\right]$. Let us now generalize this decoding algorithm to the GCC scheme with a general kernel. In this case, for a length-$N$ code, we have a length-$\ell$ mapping $g({\bf u})={\bf x}$ over an alphabet $F$, i.e. $g(\cdot):F^{\ell}\rightarrow F^{\ell}$. We also have at most $\ell$ outer codes $\left\{\mathcal{C}_r\right\}$, each one of length $N/\ell$. We may have fewer than $\ell$ outer codes, in case some of the inputs are glued together (which results in the mixed kernel case). In this case, the outer code corresponding to the glued inputs is considered to be over a larger input alphabet. We assume that each outer code has a decoding algorithm associated with it. This decoding algorithm is assumed to get as input the “channel” observations of the outer code symbols (usually manifested as probability matrices or llr vectors). If the outer code is a polar code, then this algorithm should also get the indices of the frozen bits of the outer code. 
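Steps I-IV map directly onto a recursive routine. The sketch below follows the four steps for the $(u+v,v)$ code, under the assumptions that frozen bits are fixed to $0$ and that channel llrs follow the convention $\log(\Pr(y|0)/\Pr(y|1))$:

```python
import math

def sc_decode(llr, frozen):
    """Recursive SC decoder for the (u+v, v) code (a sketch).
    llr: list of channel llrs; frozen: set of frozen bit indices
    (frozen bits assumed 0).  Returns (u_hat, x_hat)."""
    n = len(llr)
    if n == 1:
        u = 0 if (0 in frozen or llr[0] >= 0) else 1
        return [u], [u]
    # STEP I: llrs for the first outer code (the tanh rule)
    f = lambda a, b: 2.0 * math.atanh(math.tanh(a / 2.0) * math.tanh(b / 2.0))
    L0 = [f(llr[2 * i], llr[2 * i + 1]) for i in range(n // 2)]
    # STEP II: decode the first outer code
    u0, x0 = sc_decode(L0, {i for i in frozen if i < n // 2})
    # STEP III: llrs for the second outer code, given the estimate x0
    L1 = [(-1) ** x0[i] * llr[2 * i] + llr[2 * i + 1] for i in range(n // 2)]
    # STEP IV: decode the second outer code
    u1, x1 = sc_decode(L1, {i - n // 2 for i in frozen if i >= n // 2})
    # reassemble the codeword estimate: x = [v_0, w_0, v_1, w_1, ...]
    x = []
    for a, b in zip(x0, x1):
        x.append((a + b) % 2)
        x.append(b)
    return u0 + u1, x
```

As a quick check, with indices $0$, $1$, $2$ frozen and strong llrs favoring the all-ones received word, the decoder recovers the single information bit $u_3 = 1$ and the codeword $[1,1,1,1]$.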
We require that the algorithm outputs its estimation of the information vector and its corresponding outer code codeword. Assuming that we know the input symbols ${\bf u}_{0}^{k}$, computing the llr vector $L(\cdot)$ corresponding to input number $k+1$ of the transformation is done according to the following rule: $$\label{eq:genSCRule} L(t)=\ln\left(\frac{\sum_{{\bf u}_{k+2}^{\ell-1}\in F^{\ell-k-2}} R_g\left({\bf u}_{0}^{k},0,{\bf u}_{k+2}^{\ell-1}\right)}{\sum_{{\bf u}_{k+2}^{\ell-1}\in F^{\ell-k-2}} R_g\left({\bf u}_{0}^{k},t,{\bf u}_{k+2}^{\ell-1}\right)}\right),$$ where $$R_g({\bf u}_{0}^{\ell-1}) =\exp\left(-\sum_{i=0}^{\ell-1} \lambda_i\left(g_i({\bf u})\right) \right),$$ which is (up to a constant factor) the likelihood associated with input ${\bf u}$ to the kernel $g(\cdot)$, and $\lambda_i(\cdot)$ is the llr associated with the $i^{th}$ output of the kernel, ${x}_i$. Because $F$ may be non-binary, $\lambda(\cdot)$ and $L(\cdot)$ are assumed to be functions of llrs, that is $\lambda_i(t)=\log\left(\frac{\Pr({ y}_i|{ x}_i = 0)}{\Pr({ y}_i|{ x}_i = t)}\right)$, for $t\in F$, where ${ y}_i$ is the observation corresponding to the $i^{th}$ output of the kernel. We now describe the SC decoding algorithm. As we already mentioned, because of the structure of the code, the decoding algorithm is composed of pairs of steps, such that pair $r$ deals with outer code $r-1$, where $1\leq r \leq \ell$. As a preparation step, we partition the decoder’s $N$ length input llr vector ${\bf \lambda}(\cdot)$ into $N/\ell$ vectors, each of length $\ell$, denoted by ${\bf \lambda}^{(m)}(\cdot)$, such that $${ \lambda}^{(m)}_i(\cdot) = \lambda_{m\cdot\ell+i}(\cdot) \,\,\,\,\, 0 \leq m \leq N/{\ell}-1,\,\,\,\,\,0\leq i\leq \ell-1.$$ The $\ell$ length vector ${\bf \lambda}^{(m)}(\cdot)$ is associated with the output symbols corresponding to the $m^{th}$ symbol of the outer codes (transformed by kernel $g(\cdot)$).
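The marginalization rule (\[eq:genSCRule\]) can be made concrete by a brute-force sketch written in the likelihood domain: for each hypothesis on the next input symbol, sum the likelihoods of all kernel input vectors consistent with that hypothesis and the decided prefix. The names here (`llr_next_input`, `g`, `p`) are illustrative assumptions, not from the paper, and no attempt is made at efficiency.

```python
# Hypothetical likelihood-domain evaluation of the marginalization rule.
# `g` maps an input tuple of length ell to an output tuple of length ell;
# `p[i][x]` is Pr(y_i | output symbol i equals x).
from itertools import product
from math import log

def llr_next_input(g, p, alphabet, known, t, ell):
    """L(t) for the input symbol following the decided prefix `known`."""
    k = len(known)
    def total(sym):
        acc = 0.0
        # sum over all assignments of the not-yet-decided inputs
        for rest in product(alphabet, repeat=ell - k - 1):
            x = g(tuple(known) + (sym,) + rest)
            like = 1.0
            for i in range(ell):
                like *= p[i][x[i]]
            acc += like
        return acc
    return log(total(0) / total(t))
```

For Arikan's kernel ($g(u,v)=(u+v,v)$, $\ell=2$) with an empty prefix, this reproduces the $2\tanh^{-1}\left(\tanh(\lambda_0/2)\tanh(\lambda_1/2)\right)$ rule, and with the prefix $(0)$ it reduces to $\lambda_0+\lambda_1$, matching steps I and III of the $(u+v,v)$ decoder.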
We denote the information word produced by the decoder of the $m^{th}$ outer code by ${\bf u}^{(m)}$ and its corresponding codeword by ${\bf x}^{(m)}$; both are of length $N/{\ell}$. For each $1 \leq r \leq \ell$, the algorithm performs the following pair of steps. STEP $2\cdot r-1$ : \ Using the results on the outer codewords of the previous steps, i.e. ${\bf x}^{(m)}$, for $0\leq m\leq r-2$, prepare the $N/{\ell}$ length llr input vector ${\bf L}(\cdot)$ for outer code number $r-1$. To do that, for $0 \leq j \leq N/{\ell}-1$, compute $L_{j}(\cdot)$ using (\[eq:genSCRule\]) with $\left\{{ x}^{(m)}_j\right\}_{0\leq m\leq r-2}$ as the estimated inputs to the transformation. STEP $2\cdot r$ : \ Give the llr vector ${\bf L}(\cdot)$ as an input to the decoder of outer code number $r-1$. If this is a polar code decoder of length $N/\ell$, then also supply the indices of the frozen symbols in the range $\left[(r-1)\cdot N/\ell,r\cdot N/\ell-1\right]$. The decoder outputs ${{\bf u}}^{(r-1)}$, as the estimation of the information word, and ${\bf x}^{(r-1)}$, as the estimation of the outer codeword. Both of these vectors are of length $N/{\ell}$ symbols. After step $2\cdot \ell$, the decoder outputs its estimation of the information word by concatenating the information parts generated by all the outer code decoders, i.e. ${\bf u} = \left[ {\bf u}^{(0)},{\bf u}^{(1)},...,{\bf u}^{(\ell -1)}\right]$. The estimation of the codeword, ${\bf x}$, is obtained by applying the transformation $g(\cdot)$ on the columns of the matrix whose rows are $\left\{{\bf x}^{(m)}\right\}_{m=0}^{\ell-1}$, that is $${\bf x}_{\ell\cdot i}^{\ell\cdot(i+1)-1} = g\left({ x}^{(0)}_i, { x}^{(1)}_i,...,{ x}^{(\ell-1)}_i \right),\,\,\,\,\,0 \leq i\leq N/{\ell}-1.$$ The base case of the recursion, i.e. the decoder for the $N=\ell$ length polar code, is a simple generalization of Arikan's SC decoder for the length $N=2$ code.
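The final assembly step, applying $g(\cdot)$ column by column to the stack of outer codewords, can be sketched as follows. The names are ours; `g` is assumed to take a column tuple of $\ell$ symbols and return the corresponding $\ell$ codeword symbols.

```python
# Illustrative sketch of the columnwise codeword assembly step.
def assemble_codeword(g, outer_codewords):
    """outer_codewords[m][i] is symbol i of outer codeword m (m < ell)."""
    n_over_ell = len(outer_codewords[0])
    x = []
    for i in range(n_over_ell):
        column = tuple(cw[i] for cw in outer_codewords)
        x.extend(g(column))     # ell consecutive symbols of the codeword
    return x
```

With Arikan's kernel and outer codewords $(0,1)$ and $(1,1)$, this yields the interleaved codeword $(1,1,0,1)$.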
The idea is to successively estimate the input symbols of the transformation $g(\cdot)$, using (\[eq:genSCRule\]). We decide on the symbol ${ u}_i$ using the llr generated by (\[eq:genSCRule\]) (in which our previous decisions are taken as known values). If $u_i$ is frozen, we skip the calculation of (\[eq:genSCRule\]), and decide on its known value. In case we have a mixed kernel construction, the generalization is straightforward. Let us assume that we have glued the symbols ${ u}_1$ and ${ u}_2$ into a new symbol ${ u}_{1,2} \in F^{2}$. In this case, we treat these two symbols as one entity, and consider the outer code associated with them, denoted ${\mathcal{C}}_{1,2}$, as an $N/{\ell}$ length code over the alphabet $F^{2}$. The only change in the decoding algorithm is in the pair of steps corresponding to this “glued” outer code. For the first step in the pair, we need to compute the $N/{\ell}$ length llr vector ${\bf L}(\cdot,\cdot)$ that serves as an input to the decoder of ${\mathcal{C}}_{1,2}$. In this case, each llr function in the vector must be a function of both ${ u}_1$ and ${ u}_2$. Equation (\[eq:genSCRule\]) is therefore updated accordingly: $$\label{eq:genSCRule2} L(t_1,t_2)=\ln\left(\frac{\sum_{{\bf u}_{3}^{\ell-1}\in F^{\ell-3}} R_g\left({ u}_{0},0,0,{\bf u}_{3}^{\ell-1}\right)}{\sum_{{\bf u}_{3}^{\ell-1}\in F^{\ell-3}} R_g\left({ u}_{0},t_1,t_2,{\bf u}_{3}^{\ell-1}\right)}\right).$$ The second step of the pair remains unchanged. A Recursive Description of the SCL Algorithm {#sec:SCListDecoding} -------------------------------------------- Tal and Vardy introduced an efficient SCL decoder [@Tal2012]. We give here a recursive description of this algorithm. In the algorithm, there is a requirement to compare the likelihoods of different decoding possibilities. Therefore, we need to assume that the inputs to the algorithm, as well as its internal computations, are interpreted as likelihoods instead of llrs.
Note that if the decoding list is of size $1$, then the formulation we give below reduces to the SC decoder we described in the previous subsection (with the only difference that we use likelihoods instead of llrs to describe the computations). We also note that here we only describe the algorithm that generates the list. At the end of the algorithm, the most likely element of the list should be given as output. If there is an outer CRC, only outputs that agree with the CRC should be considered. The notion of likelihood normalization that was considered by Tal and Vardy [@Tal2012 Algorithm 14] to avoid floating-point underflow is also applicable here. These two issues and their generalization are not further discussed in this paper. The SCL decoding algorithm for a length $N$ polar code with a list of size $M$ gets as input the following structures. - Two likelihood matrices ${\bf \Pi^{(0)}}$ and ${\bf \Pi^{(1)}}$ of dimension $M\times N$, which represent $M$ arrays of conditional probability values (each array of length $N$) - each one corresponds to an input option that the decoder should consider. We refer to these input options as *models*. The plurality of models exists because, at any given point in the list decoding algorithm, we allow $M$ options for the past decisions on the symbols of the information word (these options form the list). Each of these options induces a different statistical model, in which we assume that the information sub-vector associated with it is the one that was transmitted. We have ${ \Pi}^{(b)}_{i,j} = \Pr({ Y}_j^{(i)}={ y}_j^{(i)}|{ V }_j=b)$, where ${ Y}_j^{(i)}$ is the measurement of the $j^{th}$ channel ${ V }_j \rightarrow { Y}_j $ of the $i^{th}$ option in the list and $b\in \{0,1\}$. - A marker $\rho_{in}$ indicating how many rows in ${\bf \Pi^{(0)}}$ and ${\bf \Pi^{(1)}}$ are occupied. The algorithm supports decoding of $\rho_{in} \in [1,M]$ input models. - The vector of the indices of the frozen bits.
The algorithm outputs the following structures. - A matrix $ {\bf U} $ of dimension $M\times N$, which represents $M$ arrays of information values (each array of length $N$) - this is the list of the possible information words that the decoder estimated. - A matrix $ {\bf X} $ of dimension $M\times N$, which represents $M$ arrays of codewords (each array of length $N$) - this is the list of codewords that correspond to the information words in ${\bf U}$. - An indicator vector ${\bf s}_{0}^{M-1}$, that indicates for each row in ${ \bf U}$ and ${\bf X}$ from which row in the inputs ${\bf \Pi^{(0)}}$ and ${\bf \Pi^{(1)}}$ it originated (i.e. it refers to the statistical model that was assumed when estimating this row). - A marker $\rho_{out}$ indicating how many rows in $ {\bf U}$ or $ {\bf X} $ are occupied. For the basic $N=2$ length case the algorithm operates as follows. STEP I : \ For each of the $\rho_{in}$ occupied rows of ${ \Pi}^{(0)}$ and ${ \Pi}^{(1)}$ compute ${ P}^{(0)}_{i}=\frac{1}{2}\left({ \Pi}^{(0)}_{i,0}\cdot{ \Pi}^{(0)}_{i,1}+{ \Pi}^{(1)}_{i,0}\cdot{ \Pi}^{(1)}_{i,1}\right)$ and ${ P}^{(1)}_{i}=\frac{1}{2}\left({ \Pi}^{(0)}_{i,0}\cdot{ \Pi}^{(1)}_{i,1}+{ \Pi}^{(1)}_{i,0}\cdot{ \Pi}^{(0)}_{i,1}\right)$, for $0\leq i \leq \rho_{in}-1$. STEP II : \ Concatenate the two vectors into one $2\cdot \rho_{in}$ length vector, ${\bf P} = [{\bf P}^{(0)} , {\bf P}^{(1)}]$. Let $\tilde {\bf P}$ be a vector that contains the $\rho = \min\{2\cdot \rho_{in}, M \}$ largest values of $\bf P$. Let ${\bf s}^{(0)}, {\bf u}^{(0)}$, be $\rho$ length column vectors corresponding to $\tilde{\bf P}$, such that the $i^{th}$ element of $\tilde{\bf P}$ is element number $ { s}^{(0)}_i$ in the vector ${\bf P}^{({ u}^{(0)}_{i})}$. This element originated from model number $ { s}^{(0)}_i$, which means that it was computed assuming that row number $ { s}^{(0)}_i$ of ${\bf \Pi}^{(0)}$ and ${\bf \Pi}^{(1)}$ was the statistical model.
If $u$ is frozen (without loss of generality assume that it is set to the 0 value), then steps I and II should be skipped and ${\bf s}^{(0)} = [0,1,...,\rho_{in}-1]$, ${\bf u}^{(0)} = {\bf 0}$, $\rho = \rho_{in}$. STEP III : \ Generate two $\rho$ length vectors, ${ \bf P}^{(0)}$ and ${ \bf P}^{(1)}$. For each of the $\rho$ occupied rows of ${\bf s}^{(0)}, {\bf u}^{(0)}$, compute ($ i\in [0,\rho-1]$), where ${\bf x}^{(0)}={\bf u}^{(0)}$ denotes the estimate of the first bit. $${P}_{i}^{(0)} = \frac{1}{2}\cdot\left\{ \begin{array}{ll} { \Pi}^{(0)}_{{s}^{(0)}_i,0}\cdot{ \Pi}^{(0)}_{{ s}^{(0)}_i,1}, & \hbox{ ${ x}^{(0)}_{i}=0$;} \\ { \Pi}^{(1)}_{{ s}^{(0)}_i,0}\cdot{ \Pi}^{(0)}_{{ s}^{(0)}_i,1}, & \hbox{ ${ x}^{(0)}_{i}=1$.} \end{array} \right.$$ $${ P}_{i}^{(1)} = \frac{1}{2}\cdot\left\{ \begin{array}{ll} { \Pi}_{{ s}^{(0)}_i,0}^{(1)}\cdot{ \Pi}_{{ s}^{(0)}_i,1}^{(1)}, & \hbox{ ${ x}^{(0)}_{i}=0$;} \\ { \Pi}_{{ s}^{(0)}_i,0}^{(0)}\cdot{ \Pi}^{(1)}_{{ s}^{(0)}_i,1}, & \hbox{ ${ x}^{(0)}_{i}=1$.} \end{array} \right.$$ STEP IV : \ Concatenate the two vectors into one $2\cdot \rho $ length vector, ${\bf P} = [{\bf P}^{(0)} , {\bf P}^{(1)}]$. Let $\tilde {\bf P}$ be a vector that contains the $\rho_{out} = \min\{2\cdot \rho, M \}$ largest values of $\bf P$. Let ${\bf s}^{(1)}, {\bf u}^{(1)}$, be $\rho_{out}$ length column vectors corresponding to $\tilde{\bf P}$, such that the $i^{th}$ element of $\tilde {\bf P}$ is element number ${ s}^{(1)}_i$ of the vector ${ \bf P}^{({ u}^{(1)}_{i})}$. If the second bit is frozen (without loss of generality assume that it is set to the 0 value), then steps III and IV should be skipped and ${\bf s}^{(1)} = [0,1,...,\rho-1],\,\,{\bf u}^{(1)}= {\bf 0}, \rho_{out} = \rho$.
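The selection performed in steps II and IV, concatenating the two candidate likelihood vectors and keeping the $M$ largest entries while recording each survivor's source model and decided bit, can be sketched as follows. This is a simplified sketch with illustrative names, not the paper's data structures.

```python
# Hypothetical sketch of the list-pruning step: keep the min(M, 2*rho)
# most likely candidates, remembering each survivor's source model index
# (s) and the bit hypothesis (u) under which it was computed.
def prune(p0, p1, M):
    candidates = [(p, i, 0) for i, p in enumerate(p0)]   # bit hypothesis 0
    candidates += [(p, i, 1) for i, p in enumerate(p1)]  # bit hypothesis 1
    candidates.sort(key=lambda c: c[0], reverse=True)
    kept = candidates[:min(M, len(candidates))]
    p_tilde = [c[0] for c in kept]
    s = [c[1] for c in kept]
    u = [c[2] for c in kept]
    return p_tilde, s, u
```

For instance, with candidate likelihoods $P^{(0)}=(0.4,0.1)$, $P^{(1)}=(0.3,0.2)$ and $M=2$, the survivors are the two candidates that both stem from model 0, with opposite bit decisions.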
Output: - $\rho_{out}$ - ${ s}_i = { s}^{(0)}_{\sigma(i)}$, where $\sigma(i) = { s}^{(1)}_i$, for $i\in [0,\rho_{out}-1]$ - ${{\bf U}} = [{\bf u}^{(0)} ; {\bf u}^{(1)}]$ - ${{\bf X}}= [ {\bf u}^{(0)}+{\bf u}^{(1)}; {\bf u}^{(1)}]$ Now, to describe an SCL decoder for a length $N=2^{n}$ polar code, let us assume that we have already developed an SCL decoder for the length $N/2$ polar code. A decoder for the length $N$ polar code then contains the following steps. STEP I : \ Prepare the probability transition matrices for the first outer polar code decoder. Specifically, generate two matrices ${\bf P}^{(b)}$ of dimension $M\times N/2$, $b\in \{0,1\}$, such that for $0 \leq i \leq \rho_{in}-1,\,\,\,0\leq j\leq N/2-1$ $${ P}^{(0)}_{i,j}=\frac{1}{2}\left({ \Pi}^{(0)}_{i,2\cdot j}\cdot{ \Pi}^{(0)}_{i,2\cdot j+1}+{ \Pi}^{(1)}_{i,2\cdot j}\cdot{ \Pi}^{(1)}_{i,2\cdot j+1}\right)$$ and $${ P}^{(1)}_{i,j}=\frac{1}{2}\left({ \Pi}^{(0)}_{i,2\cdot j}\cdot{ \Pi}^{(1)}_{i,2\cdot j+1}+{ \Pi}^{(1)}_{i,2\cdot j}\cdot{ \Pi}^{(0)}_{i,2\cdot j+1}\right).$$ STEP II : \ Give the $M\times \frac{N}{2}$ matrices ${\bf P}^{(0)}$ and ${ \bf P}^{(1)}$, the frozen bits from the first half of the codeword and $\rho_{in}$ (as the number of elements in the list) as inputs to the polar code decoder of length $N/2$. Assume that the decoder outputs ${\bf U}^{(0)}$ and ${\bf X}^{(0)}$ as the lists of estimations of the information words and the outer polar codewords of length $N/2$, respectively. Both of these structures are matrices of dimension $M\times N/2$. The decoder also outputs ${\bf s}^{(0)}$ as the source indicator vector (of length $M$), and $\rho$ as the size of the list. STEP III : \ Prepare the input matrices for the decoder of the second outer polar code of length $N/2$.
Specifically, generate two matrices ${\bf P}^{(b)}$ of dimension $M\times N/2$, $b\in \{0,1\}$, such that for $0 \leq i \leq \rho-1,\,\,\,0\leq j\leq N/2-1$ $${ P}^{(0)}_{i,j}=\frac{1}{2}\cdot\left\{ \begin{array}{ll} { \Pi}^{(0)}_{{ s}^{(0)}_i,2\cdot j}\cdot{ \Pi}^{(0)}_{{ s}^{(0)}_i,2\cdot j+1}, & \hbox{ ${ X}_{{ s}^{(0)}_i,j}^{(0)}=0$;} \\ { \Pi}^{(1)}_{{ s}^{(0)}_i,2\cdot j}\cdot{ \Pi}^{(0)}_{{ s}^{(0)}_i,2\cdot j+1}, & \hbox{${ X}_{{ s}^{(0)}_i,j}^{(0)}=1$,} \end{array} \right.$$ and $${P}^{(1)}_{i,j}=\frac{1}{2}\cdot\left\{ \begin{array}{ll} { \Pi}^{(1)}_{ {s}^{(0)}_i,2\cdot j}\cdot{ \Pi}^{(1)}_{{ s}^{(0)}_i,2\cdot j+1}, & \hbox{ ${ X}_{{ s}^{(0)}_i,j}^{(0)}=0$;} \\ { \Pi}^{(0)}_{{ s}^{(0)}_i,2\cdot j}\cdot{ \Pi}^{(1)}_{{ s}^{(0)}_i,2\cdot j+1}, & \hbox{${ X}_{{ s}^{(0)}_i,j}^{(0)}=1$.} \end{array} \right.$$ STEP IV : \ Give these matrices ${\bf P}^{(0)}$ and ${\bf P}^{(1)}$, the vector of indices of the frozen bits from the second half of the codeword and $\rho$ (as the number of elements in the list) as inputs to the decoder of the second outer polar code of length $N/2$. Assume that the decoder outputs ${{\bf U}}^{(1)}$ and ${{\bf X}}^{(1)}$ as the list of estimations of the information words and their corresponding outer polar codeword of length $N/2$. Both of these structures are matrices of dimension $M\times N/2$. The decoder also outputs $\bf s^{(1)}$ as the source indicator vector (of length $M$) and $\rho_{out}$ as the size of the output list. Now, generate the outputs of the decoder ($i\in [0,\rho_{out}-1]$): - ${ s}_{i} = { s}^{(0)}_{\sigma(i)}$, where $\sigma(i) = { s}_i^{(1)}$. - ${\bf U}_{\rightarrow i}=[ {\bf U}^{(0)}_{\rightarrow { s}_{i}}, {\bf U}^{(1)}_{\rightarrow \sigma(i)}]$. 
- ${\bf X}_{i,\text{even}}={\bf X}^{(0)}_{\rightarrow { s}_{i}}+{\bf X}^{(1)}_{\rightarrow {\sigma(i)}}$ - ${\bf X}_{i,\text{odd}}={\bf X}^{(1)}_{\rightarrow {\sigma(i)}}$, where ${\bf X}_{i,\text{even}}$ (${\bf X}_{i,\text{odd}}$) are the vectors of the even (odd) indexed columns of row number $i$ in the matrix ${\bf X}$, and for a matrix $\bf A$, the $i^{th}$ row is denoted by ${\bf A}_{\rightarrow i}$. Let $T(n)$ be the decoding time complexity for a length $N=2^n$ polar code. Then $T(n)= 2\cdot T(n-1) + O(M\cdot N)$, and $T(1)=O(M)$, which results in $T(n)=O(M\cdot N\cdot\log_2N)$. Similarly, the space complexity of the algorithm can be shown to be $O(M\cdot N)$. The generalization of the decoding algorithm to a homogeneous kernel of dimension $\ell$ with alphabet $F$ is quite simple. Here we emphasize the principal changes from the $(u+v,v)$ algorithm. First, the only change in the input is that we should have $|F|$ input channel matrices, ${\bf \Pi}^{(b)}$, one for each $b\in F$. In the decoding algorithm, we have $\ell$ pairs of steps, such that each one is dedicated to a different outer code. Before step $2\cdot r -1$, we have decoded the outer codes $\mathcal{C}_{i}$, where $0\leq i \leq r-2$. We assume that we have temporary lists ${\bf X}^{(i)}$ and ${\bf U}^{(i)}$ of the estimated codewords and their corresponding information words, which are represented by matrices of size $M\times N/{\ell}$. The $i^{th}$ matrix corresponds to the decoding of $\mathcal{C}_{i}$, $0\leq i \leq r-2$. We maintain a temporary indicator vector ${\bf s}^{(0)}$ of length $M$, such that ${\bf X}^{(i)}_{\rightarrow j}$ and ${\bf U}^{(i)}_{\rightarrow j}$ were estimated assuming model ${ s}^{(0)}_j$. We also have $\rho$ as the number of occupied elements in the list so far (at initialization, $\rho={\rho}_{in}$). STEP $2\cdot r-1$ : \ Using the decoding results of the outer codewords from the previous steps, i.e.
${\bf X}^{(m)}$, for $0 \leq m\leq r-2$, prepare the $N/{\ell}$ length likelihood lists, $\left\{{\bf P}^{(b)}\right\}_{b\in F}$. Each list is an $M\times N/{\ell}$ matrix, and all of them will serve as an input to the decoder of the $N/{\ell}$ length outer code number $r-1$. For the computation of row $i$ of ${\bf P}^{(b)}$, use the input statistical model $s^{(0)}_i$, that is, the likelihoods in rows $\left\{{\bf \Pi}^{(b)}_{\rightarrow s^{(0)}_i}\right\}_{b\in F}$. Also, as the estimated codewords of the previous outer codes, we need to use the rows $\left\{{\bf X}^{(m)}_{\rightarrow i}\right\}_{0 \leq m\leq r-2}$. To prepare $\left\{{\bf P}^{(b)}\right\}_{b\in F}$ we do computations on likelihoods (instead of llrs), which are equivalent to step $2\cdot r -1$ in the description of the general SC decoding (Subsection \[sec:recSCDec\]). STEP $2\cdot r$ : \ Give the matrices $\left\{{\bf P}^{(b)}\right\}_{b\in F}$, the vector of indices of the frozen symbols corresponding to outer code number $r-1$ and $\rho$ (as the number of elements in the list) as inputs to the decoder of outer polar code number $r-1$. Assume that the decoder outputs ${{\bf U}}^{(r-1)}$ and ${{\bf X}}^{(r-1)}$ as the lists of estimations of the information words and their corresponding estimations of the transmitted codewords of outer code number $r-1$, respectively. Both of these structures are matrices of dimension $M\times N/{\ell}$. The decoder also outputs $\bf s^{(1)}$ as the model indicator vector (of length $M$) and $\rho$ as the number of occupied elements in the list. Allocate ${ \bf s}$, a temporary vector of size $M$, and temporary matrices $\tilde{{\bf X}}^{(i)},\tilde{{\bf U}}^{(i)}$ of size $M\times N/{\ell}$, where $0\leq i \leq r-2$. - ${ s}_{i} = { s}^{(0)}_{\sigma(i)}$, where $\sigma(i) = { s}_i^{(1)}$ and $0\leq i \leq \rho-1$.
- $\tilde{{\bf X}}^{(i)}_{\rightarrow j }={{\bf X}}^{(i)}_{ \rightarrow \sigma(j) }\,\,\,\,0 \leq i\leq r-2,\,\,\,0\leq j\leq \rho-1$ - $\tilde{{\bf U}}^{(i)}_{\rightarrow j }={{\bf U}}^{(i)}_{\rightarrow \sigma(j) }\,\,\,\,0 \leq i\leq r-2,\,\,\,0\leq j\leq \rho-1$ Copy these matrices to the internal data structures. - ${\bf s}^{(0)} ={\bf s} $. - ${{\bf X}}^{(i)}=\tilde{{\bf X}}^{(i)}\,\,\,\,0 \leq i\leq r-2$ - ${{\bf U}}^{(i)}=\tilde{{\bf U}}^{(i)}\,\,\,\,0 \leq i\leq r-2$ If this is step $2\cdot \ell$ (the last step), then prepare the output. - $\rho_{out}=\rho$. - ${ \bf s}$. - ${\bf U} =[ {\bf U}^{(0)}; {\bf U}^{(1)};...{\bf U}^{(\ell-1)} ]$. - ${\bf X}_{i,\ell\cdot m:{\ell\cdot(m+1)-1}}=g\left({\bf X}^{(0)}_{i,m}, {\bf X}^{(1)}_{i,m},...,{\bf X}^{(\ell-1)}_{i,m} \right),\,\,\,\,\, 0\leq m\leq N/{\ell}-1.$ Here, for a matrix $\bf A$, the subvector that is composed of the columns $n_1$ to $n_2$ of the $i^{th}$ row is denoted by ${\bf A}_{i,n_1:n_2}$. The decoder for the basic $N={\ell}$ length code also contains $\ell$ pairs of steps. The decoding is similar to the above, with the exception that instead of delivering the likelihood matrices $\left\{{\bf P}^{(b)}\right\}_{b\in F}$ (here these matrices are actually column vectors) to a decoder, we concatenate them into a vector $\tilde{{\bf P}}$ and choose the $\rho = \min\left\{ M,|F|\cdot \rho\right\}$ maximum elements from it, and generate the indicator vector ${\bf s}^{(1)}$ and the information symbols list ${\bf u}^{(r-1)}$, similarly to the case of the $N=2$ length decoder of the $(u+v,v)$ construction. In case the kernel is mixed, the generalization is also straightforward. Let us consider the mixed example from the end of Subsection \[sec:recSCDec\]. The only changes we have in the decoding algorithm are in the pair of steps associated with the glued outer code $\mathcal{C}_{1,2}$.
In step $3$ (the preparation step for this outer code), we prepare $|F|^{2}$ input matrices ${\bf P}^{(b_1,b_2)}$, for $(b_1,b_2)\in F^2$. For this, we use the equivalent of equation (\[eq:genSCRule2\]) for likelihoods (instead of llrs). The decoder of $\mathcal{C}_{1,2}$ is supposed to return a list of estimations of the information words, their corresponding codewords and the model indicator vector. These outputs and the temporary structures are re-organized, as is done in step $2\cdot r$ of the decoding algorithm for the homogeneous kernel polar code. Note, however, that at the end of step $4$, there are three information word lists ${\bf U}^{(0)}$, ${\bf U}^{(1)}$ and ${\bf U}^{(2)}$ along with their corresponding three outer codeword lists. This is because we decoded the glued outer code $\mathcal{C}_{1,2}$ jointly, which contributed ${\bf U}^{(1)}$, ${\bf U}^{(2)}$, ${\bf X}^{(1)}$ and ${\bf X}^{(2)}$ in the same decoding step. A Recursive Description of the BP Algorithm {#sec:BP} ------------------------------------------- BP is an alternative to SC decoding [@Arikan]. It is an iterative message-passing algorithm, whose messages are defined using Forney’s normal factor graph [@Forney01]. There is no proof of which algorithm is better for general channels, except for the BEC, for which BP is shown to outperform SC [@Hussami2009]. However, simulations indicate that BP outperforms SC in many cases. The order of sending the messages on the graph is called the *schedule* of the algorithm. Hussami *et al.* suggested using a “$Z$ shape schedule” for transferring the messages [@Hussami2009 Section II.A]. Here we prefer to present a serial schedule which is induced from the GCC structure of the code. \ We begin by describing the type of messages that are computed during the algorithm. Figure \[fig:uvNormFactGraph\] depicts the normal factor graph representation of Arikan’s kernel. We have $4$ symbol half edges denoted by $u,v,x_0$ and $x_1$.
These symbols have the following functional dependencies among them: $x_0 = u+v$ and $x_1=v$. The messages and the inputs that may be sent on the graph are assumed to be llrs, and their values are taken from $\mathbb{R}\cup\{\pm\infty\}$. The values $\infty$ and $-\infty$ are special types of llrs that indicate known values of $0$ and $1$, respectively. They are used to handle the frozen bits of the polar code. For the symbol half edges, we assume that we have $4$ input llr messages. These messages may be generated by the output of the channel, by known values associated with frozen bits, or by computations that were done in this iteration or in previous ones. We denote these messages by $\mu^{(in)}_{u}$, $\mu^{(in)}_{v}$, $\mu^{(in)}_{x_0}$ and $\mu^{(in)}_{x_1}$. The algorithm computes (in due time) $4$ output llr messages, $\mu^{(out)}_{u}$, $\mu^{(out)}_{v}$, $\mu^{(out)}_{x_0}$ and $ \mu^{(out)}_{x_1}$, indicating the estimations of $u,v,x_0$ and $x_1$, respectively, by the decoding algorithm. The messages are computed using the extrinsic information principle, i.e. each message that is sent from a node on an adjacent edge is a function of all the messages that were previously sent to the node, except the message that was received over that particular edge. The nodes of the graph are denoted by $a_0$ (the adder functional) and $e_1$ (the equality functional). Using the ideas mentioned above, we have the following computation rules.
$$\label{eq:BPUV1} \mu_{e_1 \rightarrow a_0 }=f_{(=)}(\mu^{(in)}_{v},\mu^{(in)}_{x_1}),$$ $$\label{eq:BPUV2} \mu_{a_0 \rightarrow e_1}=f_{(+)}(\mu^{(in)}_{u},\mu^{(in)}_{x_0}),$$ $$\label{eq:BPUV3} \mu^{(out)}_{u}=f_{(+)}(\mu^{(in)}_{x_0},\mu_{e_1 \rightarrow a_0 }),$$ $$\label{eq:BPUV4} \mu^{(out)}_{v}=f_{(=)}(\mu_{a_0 \rightarrow e_1},\mu^{(in)}_{x_1}),$$ $$\label{eq:BPUV5} \mu^{(out)}_{x_0}=f_{(+)}(\mu_{e_1 \rightarrow a_0 },\mu^{(in)}_{u}),$$ $$\label{eq:BPUV6} \mu^{(out)}_{x_1}=f_{(=)}(\mu_{a_0 \rightarrow e_1 },\mu^{(in)}_{v}),$$ where $f_{(=)}(z_0,z_1) = z_0+z_1$ and $f_{(+)}(z_0,z_1)=2\tanh^{-1}\left(\tanh(z_0/2)\cdot\tanh(z_1/2)\right)$. Note that $\mu_{\alpha\rightarrow\beta}$, where $\alpha,\beta\in\{e_1,a_0\}$, is the message which is sent from node $\alpha$ to node $\beta$. $\mu^{(out)}_{u}$ and $\mu^{(out)}_{x_0}$ are sent from $a_0$ over the half edges corresponding to symbols $u$ and $x_0$, respectively. $\mu^{(out)}_{v}$ and $\mu^{(out)}_{x_1}$ are sent from $e_1$ over the half edges corresponding to symbols $v$ and $x_1$, respectively. We now turn to give a recursive description of an iteration of the algorithm. The factor graph of the length $N$ code has $\log_2N$ layers. In each layer, there exist $N/2$ copies of the normal factor graph that we depicted in Figure \[fig:uvNormFactGraph\]. Their organization can be inferred from the recursive description in Figure \[fig:uvExample\]. Therefore, for each layer, we have $N/2$ realizations of the input messages, output messages and inner messages (each corresponding to a different set of symbols and interconnect). To denote the $i^{th}$ realization of these messages, we use the notation $\mu_{\alpha\rightarrow\beta,i}$, $\mu_{\gamma,i}^{(in)}$ and $\mu_{\gamma,i}^{(out)}$, where $\alpha,\beta \in \{a_0,e_1\}$ and $\gamma\in \{x_0,x_1,u,v\}$. As before, we denote the channel llrs by the length $N$ vector $\{\lambda_{i}\}_{i=0}^{N-1}$.
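The two node operations and the six update rules above can be sketched directly. This is an illustrative sketch with our own names; one convenience is that Python's `math.tanh` maps $\pm\infty$ to $\pm 1$, so an input pinned to $\pm\infty$ (a frozen bit) propagates through $f_{(+)}$ as expected (the degenerate case of two infinite arguments of $f_{(+)}$ is not handled here).

```python
# Illustrative sketch of the (u+v,v) kernel message updates,
# eqs. (BPUV1)-(BPUV6); names are ours.
import math

def f_eq(z0, z1):
    # equality-node rule: llrs add
    return z0 + z1

def f_add(z0, z1):
    # adder-node rule: 2*atanh(tanh(z0/2)*tanh(z1/2))
    return 2.0 * math.atanh(math.tanh(z0 / 2.0) * math.tanh(z1 / 2.0))

def kernel_messages(mu_u, mu_v, mu_x0, mu_x1):
    """One update of the (u+v,v) normal factor graph; returns the four
    output messages (for u, v, x0, x1)."""
    mu_e1_a0 = f_eq(mu_v, mu_x1)       # (BPUV1)
    mu_a0_e1 = f_add(mu_u, mu_x0)      # (BPUV2)
    out_u = f_add(mu_x0, mu_e1_a0)     # (BPUV3)
    out_v = f_eq(mu_a0_e1, mu_x1)      # (BPUV4)
    out_x0 = f_add(mu_e1_a0, mu_u)     # (BPUV5)
    out_x1 = f_eq(mu_a0_e1, mu_v)      # (BPUV6)
    return out_u, out_v, out_x0, out_x1
```

For instance, pinning $\mu^{(in)}_{u}=+\infty$ makes $\mu^{(out)}_{x_0}$ collapse to $\mu_{e_1\rightarrow a_0}$ itself, consistent with the identity $f_{(+)}(\pm\infty,z)=\pm z$ noted below for the base case.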
STEP I : \ Partition the llr vector into pairs of consecutive llr values $ \left\{\left(\mu_{x_0,i}^{(in)},\mu_{x_1,i}^{(in)}\right)=\left(\lambda_{2i},\lambda_{2i+1}\right) \right\}_{i=0}^{N/2-1}$. Compute the messages $\left\{\mu_{e_1 \rightarrow a_0,i}\right\}_{i=0}^{N/2-1}$ using (\[eq:BPUV1\]). Compute the messages $\left\{\mu_{u,i}^{(out)} \right\}_{i=0}^{N/2-1}$ using (\[eq:BPUV3\]) (note that the two computations in this step can be combined into one computation). STEP II : \ Give the vector $\left\{\mu_{u,i}^{(out)} \right\}_{i=0}^{N/2-1}$ as an input to the polar code BP iterative decoder of length $N/2$. Also provide the indices of the frozen bits from the first half of the codeword. Assume that the decoder outputs $\left\{\mu_{u,i}^{(in)}\right\}_{i=0}^{N/2-1}$ and the estimation of the information word. STEP III : \ Compute the messages $\left\{\mu_{a_0 \rightarrow e_1,i}\right\}_{i=0}^{N/2-1}$ using (\[eq:BPUV2\]). Compute the messages $\left\{\mu_{v,i}^{(out)} \right\}_{i=0}^{N/2-1}$ using (\[eq:BPUV4\]) (note that the two computations in this step can be combined into one computation). STEP IV : \ Give the vector $\left\{\mu_{v,i}^{(out)} \right\}_{i=0}^{N/2-1}$ as an input to the polar code decoder of length $N/2$. Also provide to this decoder the indices of the frozen bits from the second half of the codeword. Assume that the decoder outputs $\left\{\mu_{v,i}^{(in)} \right\}_{i=0}^{N/2-1}$ and the estimation of the information word of the second outer polar code of length $N/2$. The information part may be concatenated with the information part of step II, to generate the decision on the information word after this iteration. Compute the messages $\left\{\mu_{e_1 \rightarrow a_0,i}\right\}_{i=0}^{N/2-1}$ using (\[eq:BPUV1\]). Compute the messages $\left\{\mu_{x_0,i}^{(out)}\right\}_{i=0}^{N/2-1}$ and $\left\{\mu_{x_1,i}^{(out)}\right\}_{i=0}^{N/2-1}$ using (\[eq:BPUV5\]) and (\[eq:BPUV6\]), respectively.
Any input message or inner message, unless given (by the channel output or by prior knowledge of the frozen bits), is set to $0$ before the first iteration. It is assumed that the inner messages are preserved between the iterations (see further discussion in the sequel). To complete the recursive description of the algorithm, we need to consider the case of the length $N=2$ code. Assume that we get $\mu_{x_0}^{(in)},\mu_{x_1}^{(in)}$ as the input values. Also, before the first iteration, initialize for $w\in \{u,v\}$ $$\mu_{w}^{(in)}=\left\{ \begin{array}{ll} 0, & \hbox{$w$ is not frozen;} \\ (-1)^b\cdot \infty, & \hbox{$w$ is frozen and equals $b$.} \end{array} \right.$$ STEP I : \ Compute $\mu_{e_1 \rightarrow a_0 }$ according to (\[eq:BPUV1\]). STEP II : \ If $u$ is not frozen, compute $\mu_{u}^{(out)}$ according to (\[eq:BPUV3\]), and make a hard decision on this bit, based on its sign. STEP III : \ Compute $\mu_{a_0 \rightarrow e_1 }$ according to (\[eq:BPUV2\]). STEP IV : \ If $v$ is not frozen, compute $\mu_{v}^{(out)}$ according to (\[eq:BPUV4\]), and make a hard decision on it, based on its sign. Compute $\mu_{x_0}^{(out)},\mu_{x_1}^{(out)}$ according to (\[eq:BPUV5\]), (\[eq:BPUV6\]). We should note that $$f_{(=)}(\pm \infty,z_1)=f_{(=)}(z_0,\pm \infty)=\pm \infty$$ $$f_{(+)}(\pm \infty,z_1)=\pm z_1,\,\,\,\,f_{(+)}(z_0,\pm \infty)=\pm z_0.$$ We further note that for the $N=2$ length code, steps I and II can be combined into one operation, and similarly steps III and IV can be combined into one operation. Both of these combined steps are independent, so they may be performed in any order, or in parallel. In this implementation, we assumed that there is memory for storing messages of type $\mu_{u}^{(in)}$, $\mu_{v}^{(in)}$, $\mu_{x_0}^{(in)}$, $\mu_{x_1}^{(in)}$ and $\mu_{a_0\rightarrow e_1}$, that were previously computed.
This memory is dedicated to each realization of such messages, specifically, to each layer of the graph and to each $(u+v,v)$ normal subgraph, as in Figure \[fig:uvNormFactGraph\]. Actually, for this particular schedule, excluding $\mu_{v}^{(in)}$, we do not need to save any message beyond the iteration boundary (this observation reduces the required memory consumption, as we will see in the hardware implementation). The memory consumption of the algorithm is $\Theta\left(N\cdot \log (N)\right)$. The running time is also $\Theta\left(N\cdot\log(N)\right)$, assuming no parallelism is allowed. In each iteration, we send one instance of each of the possible messages for each $(u+v,v)$ block realization in the code, except for $\mu_{e_1\rightarrow a_0}$, for which we send two messages (for all the layers besides the last one). The full implementation may contain several iterations. The number of iterations may be fixed or set adaptively, which means that the algorithm continues until some consistency criteria are fulfilled. An example of such a criterion is that the signs of the llr estimations for all the frozen bits agree with their known values (i.e. if all the frozen bits are set to zero, then $\text{sign}\left(\mu^{(out)}_{\gamma}\right)>0$ for all the frozen bits $\gamma$). In this case, one can stop an iteration in the middle by holding a counter, in a similar way to the method that is usually used in BP decoding of LDPC codes with check-node based serial schedules (see e.g. [@Sharon07]). We note, however, that in the LDPC case, the consistency is manifested in the fact that all the parity check equations are satisfied. In the next section we describe hardware architectures for the decoding algorithms we covered so far.
Recursive Descriptions of Hardware Architectures of Decoders for Arikan’s Construction {#sec:HrdwreArikConstr} ====================================================================================== We now turn to study hardware architectures that are inspired by the recursive decoding algorithms which we presented in Section \[sec:RecDescOfDecAlgor\]. This section covers hardware architectures for Arikan’s $(u+v,v)$ construction. A generalization of this discussion to other kernels is presented in Section \[sec:HardArchiForOthKer\]. We begin with the simple SC pipeline decoder (Subsection \[sec:SCPipeUV\]), and then progress to the more efficient SC line decoder (Subsection \[sec:UVLineDecoder\]). Both of these designs were presented by Leroux *et al.* [@Leroux10; @Leroux2012] in a non-recursive fashion. We finish by considering a BP line decoder (Subsection \[sec:UVLineDecoderBP\]). It is important to note that throughout the hardware discussion, our presentation is relatively abstract, emphasizing the important concepts and features of the recursive designs without dwelling on all the details. As such, the figures representing the block diagrams should not be considered as fully detailed specifications of the implementation, but rather as illustrations that aim to aid the reader in the task of designing the decoder. We usually prefer to use the same notation for signal arrays and register arrays. Let $u(0:N-1)$ be an $N$ length signal array; its $i^{th}$ value is denoted by $u(i)$. If $v$ is a two dimensional array of $M$ rows and $N$ columns, we denote it by $v(0:M-1,0:N-1)$. Naturally, the $i^{th}$ row of this array is denoted by $v(i,0:N-1)$, and it is a one dimensional array of $N$ elements, of which the $j^{th}$ element is denoted by $v(i,j)$. The SC Pipeline Decoder {#sec:SCPipeUV} ----------------------- \ A block diagram of the SC pipeline decoder for Arikan’s construction is depicted in Figure \[fig: pipArikan\].
The main ingredients of the diagram are listed below. 1. Processing Element (PE) - This is the basic computation unit of the decoder. It gets as input two channel llrs, an estimate of the $u$ input for the $(u+v,v)$ mapping and a control signal, $c_u$, indicating whether to compute the llr of $u$ ($c_u=0$) or of $v$ ($c_u=1$). Note that the estimate of $u$ is only needed in the latter case. 2. ${\bf \lambda}(0:N-1)$ - An array of $N$ registers holding the llrs from the channels. 3. SC decoding unit of polar code of length $N/2$ - This unit has the following inputs: a signals array of $N/2$ input llrs and a binary signals array containing the indices of the frozen bits of the code. Its outputs are $\tilde{ u}(0:N/2-1)$, which is the estimation of the transmitted information word (including the frozen bits), and $\tilde{ x}(0:N/2-1)$, which is the estimation of the transmitted codeword. 4. A register for the estimated information word ${ u}(0:N-1)$. 5. An encoding unit for generating the estimated codeword; it includes a register for the codeword ${x} (0:{N-1})$ and $N/2$ bitwise xor circuits for generating the codeword based on the output of the length $N/2$ decoder. We note that a basic decoder of length $N=2$ has only one PE, and operates according to the algorithm described in Section \[sec:Prelim\]. The algorithm for $N>2$ is based on recursion, as we describe below. STEP I : \ Using the processing elements $PE_0,PE_1,...,PE_{N/2-1}$ with $c_u=0$, prepare the llr input for the decoder of the first outer code of length $N/2$ and output it on the signals array $L(0:N/2-1)$, such that $$L(k)= 2\tanh^{-1}\left(\tanh(\lambda({2k})/2)\tanh(\lambda({2k+1})/2)\right),\,\,\,\,\,\, 0\leq k\leq N/2-1.$$ STEP II : \ Give the signals array $L(0:N/2-1)$ and the list of indices corresponding to the first half of the codeword (i.e. the first outer code) as inputs to the polar code decoder of length $N/2$.
Call the decoder of the length $N/2$ polar code on these inputs (decoding the first outer polar code). Store ${u}(0:{N/2-1})=\tilde{ u}(0:{N/2-1})$, ${x}_{even}(0:N/2-1)=\tilde{ x}({0}:{N/2-1})$. STEP III : \ Using the signals array $\tilde{ x}(0:{N/2-1})$ as the vector of estimations of $u$ from the $(u+v,v)$ pairs, operate the processing elements $PE_0,PE_1,...,PE_{N/2-1}$ with $c_u=1$. This prepares the llr input for the second outer code and outputs it on the signals array ${ L}(0:{N/2-1})$, such that $$L(k) = (-1)^{\tilde{x}(k)}\lambda({2k})+\lambda({2k+1}),\,\,\,\,\,\,\, 0\leq k\leq N/2-1.$$ STEP IV : \ Give the signals array $L(0:N/2-1)$ as an input to the polar code decoder of length $N/2$. Also provide the indices of the frozen bits corresponding to the second half of the codeword (i.e. the second outer code). Call the decoder of the length $N/2$ polar code on these inputs (i.e. decode the second outer polar code). Store ${ u}({N/2}:{N-1})=\tilde{ u}(0:{N/2-1})$, ${x}_{even}(0:{N/2-1})={ x}_{even}(0:{N/2-1})+\tilde{ x}({0}:{N/2-1})$, ${x}_{odd}(0:{N/2-1})=\tilde{ x}(0:{N/2-1})$. Here, for an array ${ x}$, we denote by ${ x}_{even}$ and ${ x}_{odd}$ the $2$-decimated arrays containing ${ x}$’s even-indexed samples and odd-indexed samples, respectively. Note that, to avoid delays due to sampling by a register, it is important that the codeword estimation (which is one of the outputs of the decoder) be the output of the encoding layer and not of the register following it. This issue and further timing concerns are considered in the next subsection. Let us consider the complexity of this circuit. We assume that a call to a PE finishes in one clock cycle. Denote by $T(n)$ the time (in terms of the number of clock cycles) that is required to complete the decoding of a polar code of length $N=2^n$. Then, $T(n)=2+2\cdot T(n-1)$ for $n> 1$ and $T(1) = 2$. This recursion yields $T(n) = 2N-2$.
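The four steps above translate almost directly into a recursive software model of the SC decoder (a sketch only, with our own function names; frozen bits are assumed fixed to zero, and the two PE rules are exactly the llr formulas of Steps I and III):

```python
import math

def f_llr(a, b):
    # Step I PE rule (c_u = 0): L = 2 atanh(tanh(a/2) tanh(b/2))
    return 2 * math.atanh(math.tanh(a / 2) * math.tanh(b / 2))

def g_llr(a, b, u):
    # Step III PE rule (c_u = 1): L = (-1)^u * a + b
    return (-1) ** u * a + b

def sc_decode(lam, frozen):
    """Recursive SC decoder for the (u+v, v) construction.
    lam: list of channel llrs; frozen: list of bools (True = frozen to 0).
    Returns (u_hat, x_hat): estimated information word and codeword."""
    N = len(lam)
    if N == 1:
        u = 0 if (frozen[0] or lam[0] >= 0) else 1
        return [u], [u]
    half = N // 2
    # STEPS I-II: prepare llrs and decode the first outer code
    L = [f_llr(lam[2 * k], lam[2 * k + 1]) for k in range(half)]
    u1, x1 = sc_decode(L, frozen[:half])
    # STEPS III-IV: use x1 as the u-estimates, decode the second outer code
    L = [g_llr(lam[2 * k], lam[2 * k + 1], x1[k]) for k in range(half)]
    u2, x2 = sc_decode(L, frozen[half:])
    # Re-encode: interleave (u+v, v) pairs into the codeword estimate
    x = [0] * N
    for k in range(half):
        x[2 * k] = x1[k] ^ x2[k]
        x[2 * k + 1] = x2[k]
    return u1 + u2, x
```

For instance, with $N=2$, no frozen bits and llrs $(-5, 5)$, the decoder returns the information word $(1,0)$ and codeword $(u+v,v)=(1,0)$.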
Denote by $P(n)$ the number of PEs for a decoder of a length $N=2^n$ polar code. We have $P(n) = 2^{n-1} + P(n-1)$ for $n > 1$ and $P(1) = 1$, so $P(n) = 2^n - 1 = N-1$. The cost of the encoding unit is $2\cdot \sum_{i=1}^n 2^i = 4\cdot(N-1)$ bit registers and $\sum_{i=0}^{n-1}2^i=N-1$ xor circuits. We need $R(n)$ registers for holding llr values, where $R(n) = 2^n +R(n-1)$ for $n>1$ and $R(1) = 2$, so $R(n) = 2\cdot P(n) = 2N-2$. Note that in this design we assume that the re-encoding unit is a combinatorial circuit. The SC Line Decoder {#sec:UVLineDecoder} -------------------- In the pipeline design of the decoder of length $N$, the $N/2$ processing elements $\left\{ PE_k \right\}_{k=0}^{N/2-1}$ are only used during steps I and III of the algorithm. During the other steps (which ideally consume $2\cdot T(n-1) = 2N-4$ clock cycles of the total $2N-2$), these processors are idle, and this results in an inefficient design. To improve this, we observe that the maximum number of operations that can be done in parallel by the PEs in the SC decoding algorithm is $N/2$. So, in order to allow this maximum level of parallelism, a design must have at least $N/2$ processors. The line decoder[^3], which we describe in this subsection, achieves this lower bound. In order to support this, we need to redefine the decoder of the length $N$ polar code. First, the line decoder has two operation modes. Standard Mode (S-Mode) : \ In this mode, the decoder gets as inputs llrs and the indices of the frozen bits, and outputs the hard decision on the information word and its corresponding codeword (this is the operation mode we assumed so far). PE-Array Mode (P-Mode) : \ In this mode, the decoder gets as input a signals array of llrs ${ \lambda}({0}:{N -1})$, a control signal $c_u$, and a binary array of length $N/2$, ${ z}({0}:{N/2-1})$.
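The closed forms for the pipeline decoder are easy to confirm by unrolling the recursions (a quick numerical check; the function name is ours):

```python
def pipeline_costs(n):
    """Unroll the SC pipeline decoder recursions for length N = 2^n:
    T(n) clock cycles, P(n) processing elements, R(n) llr registers."""
    T, P, R = 2, 1, 2          # base case n = 1
    for m in range(2, n + 1):
        T = 2 + 2 * T          # T(m) = 2 + 2 T(m-1)
        P = 2 ** (m - 1) + P   # P(m) = 2^{m-1} + P(m-1)
        R = 2 ** m + R         # R(m) = 2^m + R(m-1)
    return T, P, R

# Closed forms: T(n) = 2N - 2, P(n) = N - 1, R(n) = 2N - 2
for n in range(1, 11):
    N = 2 ** n
    assert pipeline_costs(n) == (2 * N - 2, N - 1, 2 * N - 2)
```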
The output is a signals array ${ L}(0:N/2-1)$ of llrs, where for $0\leq k\leq N/2-1$ $${L}(k)=\left\{ \begin{array}{ll} 2\cdot \tanh^{-1}\left(\tanh\left(\lambda({2k})/2 \right)\cdot\tanh\left(\lambda({2k+1})/2 \right)\right), & \hbox{$c_u=0$;} \\ (-1)^{z(k)}\cdot \lambda({2k})+\lambda({2k+1}), & \hbox{$c_u=1$.} \end{array} \right.$$ In Figure \[fig: linArikan\], we give a block diagram of this decoder. Note that, in order to maintain the maximum level of parallelism, the length $N$ polar code decoder has $N/2$ processors. Thus, in order to build the length $N$ polar code decoder using an embedded $N/2$ length polar code decoder (already having $N/4$ processors), we use an additional array of $N/4$ PEs, which is referred to as the *auxiliary array*. The input signal *modeIn* indicates whether the decoder is used in *S-Mode* or in *P-Mode*. The *mode* signal is an internal signal that controls whether the $N/2$ length embedded decoder is in *P-Mode*. \ The algorithm for the S-Mode is described below. STEP I : \ Simultaneously, - At the multiplexers array (MUX array), at the input of the embedded decoder of length $N/2$ polar code, set the control signal $c_m=0$, which means that the array ${\lambda}({0}:{N/2-1})$ is selected as an input to this unit. Set $c_u=0$ and use the decoder of length $N/2$ polar code in P-Mode, which causes this unit to output the signals array ($0\leq k \leq N/4-1$) $${L}(k) = 2\cdot \tanh^{-1}\left(\tanh\left(\lambda({2k})/2 \right)\cdot\tanh\left(\lambda({2k+1})/2 \right)\right).$$ Store this array in the registers array ${R}(0:{N/4-1})$. - Use the auxiliary array of processors and compute for $N/4 \leq k \leq N/2-1$ $${L}(k) = 2\cdot \tanh^{-1}\left(\tanh\left(\lambda({2k})/2 \right)\cdot\tanh\left(\lambda({2k+1})/2 \right)\right)$$ and store them in the registers array ${ R}({N/4}:{N/2-1})$.
STEP II : - At the MUX array, at the input of the decoder of the length $N/2$ polar code, set the control signal $c_m=1$, which means that the content of the registers array ${ R}({0}:{N/2-1})$ is selected as an input to this unit. - Provide the vector of indices corresponding to the frozen bits from the first half of the codeword to the $N/2$ length decoder. Call the decoder of the length $N/2$ polar code in S-Mode on these inputs (decoding of the first outer polar code). Store ${u}(0:{N/2-1})=\tilde{ u}(0:{N/2-1})$, ${ x}_{even}(0:{N/2-1})=\tilde{ x}({0}:{N/2-1})$. STEP III : \ Simultaneously, - At the MUX array, at the input of the embedded decoder of length $N/2$ polar code, set the control signal $c_m=0$, which means that the array ${\lambda}({0}:{N/2-1})$ is selected as an input to this unit. Set $c_u=1$ and use the decoder of length $N/2$ polar code in P-Mode, which causes this unit to output the signals array ($0\leq k \leq N/4-1$) $${L}(k) = (-1)^{\tilde{x}(k)}\cdot\lambda({2k})+\lambda({2k+1}).$$ Store this signals array in the registers array ${R}(0:{N/4-1})$. Note that we use $\tilde { x}(0:{N/4-1})$, the estimation of the first half of the codeword that the embedded decoder gave as output in step II, as an input to this unit. - Use the auxiliary array and compute for $N/4 \leq k \leq N/2-1$ $${L}(k) = (-1)^{\tilde{x}(k)}\cdot\lambda({2k})+\lambda({2k+1})$$ and store them in the registers array ${ R}({N/4}:{N/2-1})$. STEP IV : \ At the MUX array, at the input of the decoder of the length $N/2$ polar code, set the control signal $c_m=1$, which means that the array of registers ${ R}({0}:{N/2-1})$ is selected as an input to this unit. Provide to the $N/2$ length decoder the vector of indices corresponding to the frozen bits of the second half of the codeword. Call the decoder of the length $N/2$ polar code in S-Mode on these inputs (decoding of the second outer polar code).
Store ${u}({N/2}:{N-1})=\tilde{ u}(0:{N/2-1})$, ${x}_{even}(0:{N/2-1})={ x}_{even}(0:{N/2-1})+\tilde{ x}(0:{N/2-1})$, ${x}_{odd}(0:{N/2-1})=\tilde{ x}(0:{N/2-1})$. The P-Mode operation of the decoder is quite simple. Use $c_m = 0$, which means that the channel llrs are given as an input to the line decoder of the $N/2$ length polar code. Also provide as input the vector ${x}_{in}(0:N/2-1)$, which will serve here as estimations of the $u$ bits from the $(u+v,v)$ pairs. Set $c_u=c_{u,in}$, and operate simultaneously the auxiliary array of processors and the line decoder of length $N/2$ in P-Mode. Return the llr output of the line decoder of length $N/2$ and the auxiliary array, i.e. the signals array $L(0:N/2-1)$. We now analyze the complexity of the decoder. Let $P(n)$ be the number of processors of the $N=2^n$ decoder. Then, $P(n)=2^{n-2}+P(n-1)$ with $P(1)=1$, so $P(n)=2^{n-1}=N/2$. The number of registers we use in the design for the llrs (not including registers for the input and the encoding registers) is $R(n) = 2^{n-1} +R(n-1)$ with $R(1) = 1$, so we have $R(n) = 2^n-1 = N-1$. The number of multiplexers for the llrs, denoted $M(n)$, satisfies $M(n) = 2^{n-1}+M(n-1)$ with $M(1)=0$, so $M(n) = N-2$. We want to make a remark about the efficiency of the design we propose here. The recursive design has a potential advantage of being a clearer reflection of the underlying algorithm. It also has the potential advantage of emphasizing the parts of the system that may be reused. However, it may have a disadvantage when considering the routing of signals in the circuit. Because we want to use the decoder of the $N/2$ length polar code as a closed box, we route all the signals to and from it using its interface. This may result in some signals traversing a long path before reaching their target processor. These paths may be too long for the circuit to have a good clock frequency, thereby resulting in degradation of the achievable throughput.
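The closed forms for the line decoder can likewise be confirmed by unrolling the recursions (a quick numerical check; the function name is ours):

```python
def line_costs(n):
    """Unroll the SC line decoder recursions for length N = 2^n:
    P(n) PEs, R(n) llr registers, M(n) llr multiplexers."""
    P, R, M = 1, 1, 0              # base case n = 1
    for m in range(2, n + 1):
        P = 2 ** (m - 2) + P       # P(m) = 2^{m-2} + P(m-1)
        R = 2 ** (m - 1) + R       # R(m) = 2^{m-1} + R(m-1)
        M = 2 ** (m - 1) + M       # M(m) = 2^{m-1} + M(m-1)
    return P, R, M

# Closed forms: P(n) = N/2, R(n) = N - 1, M(n) = N - 2
for n in range(1, 11):
    N = 2 ** n
    assert line_costs(n) == (N // 2, N - 1, N - 2)
```

Compared with the pipeline decoder, the PE count drops from $N-1$ to $N/2$ while the decoding time $2N-2$ is unchanged.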
It is therefore advised to optimize the circuit by “opening” the recursive units and making the paths shorter, after completing the design of the circuit in a recursive manner. It is also a good idea that, when building a decoder for a code of length $2N$, the designer use this “optimized” design of the $N$ length decoder in the $2N$ length design, enjoying the benefits of the recursion. We give here two examples of long-path hazards that we believe are likely to pose a problem, along with their possible solutions. 1. The multiplexers layer at the input of the embedded line decoder of the length $N/2$ code is required because of the introduction of the P-Mode. A closer look at our design reveals that some of the signals have long paths before reaching their target PE. For example, the inputs $\lambda_0$ and $\lambda_1$ need to traverse $\log_2(N)-1$ multiplexer layers before reaching their processor. Since the P-Mode needs to be accomplished in one time unit, this long path may be prohibitive. By “opening” the $N/2$ length decoder box, the designer is able to control the lengths of the paths by a proper routing. 2. The “Encoding Layer” also suffers from long routing. We assumed, in our analysis, that the encoding procedure is combinatorial, and therefore can be done within the clock cycle. This may be a problem when several encoding circuits are operated one after the other. This is, for example, the case for step IV of the decoder of the length $N/2^{i}$ code, which occurs within step IV of the decoder of the length $N/2^{i-1}$ code for $1 \leq i \leq \log_2N-2$. In this case, $O(\log N)$ operations need to occur in a sequential manner in one clock cycle. For a large $N$ and a high clock frequency circuit, this may not be feasible. The idea of Leroux *et al.* [@Leroux2012] was to use flip-flops for saving the partial encoding for each code bit in the different layers of the decoding circuit.
Each such flip-flop is connected, using a xor circuit, to the signal line of the estimated information bit. As such, whenever the SC decoder decides on an information bit, the flip-flops corresponding to the code bits that are dependent on this information bit are updated accordingly. These flip-flops need to be reset whenever we start decoding their corresponding outer code. For example, when we start using the embedded $N/2$ length decoder (in steps II and IV), its partial-encoding flip-flops need to be erased (as they correspond to a new outer code). It should be noted that this idea may also be described recursively, by changing the specification of the length $N$ polar code decoder in S-Mode and requiring it to output the estimated information bits as soon as they are ready. The decoder should also have an $N$ length binary indicator vector that indicates which code bits are dependent on the currently estimated information bit. It is easy to see that using the indicator vector of the length $N/2$ decoder, it is possible to calculate the $N$ length indicator vector by using the $(u+v,v)$ mapping. This, however, again generates a computation path of length $\Theta(\log N)$. This problem can be addressed by having a fixed indicator circuit for each partially encoded-bit flip-flop. This circuit indicates which information bit should be accumulated, depending on the ordinal number of this bit. For example, for the decoder of the code of length $N$, we should have an array of $N/2$ flip-flops, each one corresponding to a bit of the codeword of the first outer code of length $N/2$. Each of these flip-flops should have an indicator circuit that gets as input the value of a counter signaling the ordinal number of the information bit that has been estimated, and returns $1$ iff its corresponding codeword bit is influenced by this information bit.
For example, the indicator circuit corresponding to the first code bit is a constant $1$, because ${ x}_0 =\sum_{i=0}^{N/2-1}{ u}_i$, i.e. it is dependent on all the information bits. On the other hand, the last bit’s indicator (i.e. of $x_{N/2-1}$) returns $1$ iff its input equals $N/2-1$, because $x_{N/2-1}=u_{N/2-1}$. Using the global counter (which is advanced whenever an information bit is estimated) and the indicator circuits, each code bit that is influenced by this information bit sums it up into its flip-flop. Using the Kronecker power form of the generating matrix of the $(u+v,v)$ polar code, it can be seen that each such indicator circuit can be designed using no more than $O(\log n) = O(\log\log N)$ AND and NOT circuits; therefore the total cost of these circuits is $O(N\log \log N)$ in terms of space complexity. In summary, the recursive architecture may be developed and modified to achieve the timing requirements of the circuit. This may be done by “opening the box” of the embedded decoders, and even altering them to support more efficient designs. A careful examination of the line decoder reveals that the *auxiliary array* is only used in steps I and III, and is idle during the other steps. This might motivate us to consider two variations on this design. The first one adds hardware and uses these arrays to increase the throughput, while the second one decreases the throughput and thereby reduces the required hardware. ### Parallel Decoding of Multiple Codewords {#sec:ParDecLine} There are cases in which it is required to increase the throughput of the decoder by allowing parallel decoding of multiple codewords. A simple solution is to introduce $\mathrm{p}$ decoders when there is a need for decoding $\mathrm{p}$ codewords simultaneously. Because the *auxiliary array* of processors is idle most of the time, it seems like a good idea to “share” this array among several decoders.
By appropriately scheduling the commands to the processors, it is possible to have an implementation of a decoder for $\mathrm{p}$ parallel codewords which is less expensive than just duplicating the decoders (the naive solution). Since the array is idle during steps II and IV, in which the decoder of the length $N/2$ code is active, it is possible to have $\mathrm{p}\leq T(n-1)+1=N-1$ decoders sharing the same *auxiliary array*. The decoding of each one of them is issued with a delay of one clock cycle from the previous one. Assuming that $\mathrm{p}=N-1$, we have a decoding time $T(n)+N-2=3N-4$ for $N-1$ codewords while having $\mathrm{p}\cdot P(n-1)+N/4 = (N-1)(N-2)+N/4$ processors, which is about half of the number of processors in the naive solution. This notion can be developed further. For the decoder of the length $N/2$ code that is embedded in the $N$ length decoder, there is an auxiliary array of $N/8$ processors. This auxiliary array is used in steps I and III of the decoders of length $N$ and length $N/2$. Therefore, it is idle most of the time, and we can share it among the $\mathrm{p}$ decoders of length $N/2$. Assuming that $\mathrm{p} = N-1$, we may allocate $3$ auxiliary arrays that will be shared among the decoders, each one dedicated to a different step: one array for step I (and III) of the $N$ length decoder, one array for step I of the $N/2$ length decoder and one array for step III of the $N/2$ length decoder. For each of the decoded codewords, the number of clock cycles between these steps is at least $\mathrm{p}$; therefore there will be no contention on these resources, and the throughput will not suffer because of this hardware reduction.
In general, for $\mathrm{p} = N-1$, the *auxiliary array* within the embedded decoder of the length $\frac{N}{2^i}$ polar code ($i \in [1,\log_2(N)-2]$) can be shared among the $\mathrm{p}$ decoders, provided that we allocate an instance of the array for each of the decoding steps it is used in during the first half of the decoding algorithm for the length $N$ code (i.e. for the time of steps I and II). Thus, for this specific array, we have $1$ call in step I of the $N$ length decoder, $1$ call for step I and $1$ call for step III of the $\frac{N}{2}$ length decoder, $2$ calls for step I and $2$ calls for step III of the $\frac{N}{2^{2}}$ length algorithm, ..., $2^{i-1}$ calls for step I and $2^{i-1}$ calls for step III of the length $\frac{N}{2^i}$ decoder. In summary, we need $\sum_{t=0}^{i}2^t = 2^{i+1}-1$ *auxiliary arrays* of processors, each one containing $\frac{N}{2^{i+2}}$ PEs. In particular, we need $N-1$ PEs for the length $2$ decoders (each PE is allocated to a specific decoder), and $\frac{N}{2}\cdot \sum_{i=0}^{\log_2(N)-2}\frac{2^{i+1}-1}{2^{i+1}}\approx \frac{N}{2}\left(\log_2(N)-1\right)$ PEs for the other decoder lengths. This adds up to approximately $\frac{N}{2}\left(1+\log_2(N)\right)$ PEs. We conclude that this solution allows an increase of the throughput by a multiplicative factor of $N$, while the PE hardware is only increased by approximately a factor of $\log_2(N)$. Note that the number of registers should increase by a multiplicative factor of $\mathrm{p}$. A closer look at the above hardware design reveals that we actually allocated, for each sub-step of steps I and II of the $N$ length decoder, a different array of processors. The decoding operation of the $\mathrm{p}$ codewords goes through these units in a sequential order. However, each decoder should have its own set of registers saving the state of the decoding algorithm. Another observation is that when we finish decoding the first codeword (i.e.
the one we started decoding at time $0$), we can start decoding codeword number $N$ in the next time slot (and then codeword number $N+1$, etc.), in a pipelined fashion. It should be noted that Leroux *et al.* considered a similar idea and referred to it as the *vector-overlapping* structure [@Leroux10]. ### Limited Parallelism Decoding {#sec:LimitedParDecLine} Another approach for addressing the problem of low utilization of the *auxiliary arrays* is to limit the number of processing elements that may operate simultaneously. This is a very practical consideration, as typically a system designer has a parallelism limitation due to power consumption and silicon area. The limited parallelism inevitably results in an increase of the decoding time, and thereby a decrease of the throughput. The line decoder of the code of length $N$ has a PE parallelism of $N/2$, because it may simultaneously compute at most $N/2$ llrs using the $N/2$ PEs. We consider a line decoder of a length $N$ code with limited parallelism of $N/{2^i}$, where $i\in [1,\log_2N]$. This means that the decoder has exactly $\frac{N}{2^i}$ PEs. If $i=1$, then the decoder is simply the standard line decoder. If $i>1$, then the decoder’s block diagram is similar to the one shown in Figure \[fig: linArikan\], with the following changes. - There will be no *auxiliary PEs array*. - The embedded line decoder of the $N/2$ length code will be replaced by a limited parallelism line decoder with a parallelism factor of $N/{2^i}$. - The signals array $L(0:N/4-1)$ at the output of the embedded line decoder will also be connected to the registers array $R({N/4}:{N/2-1})$. - The multiplexers array at the input of the $N/2$ length line decoder will change to also include the input array ${ \lambda}({N/2}:{N-1})$.
This means that we should have an array of $3\rightarrow 1$ multiplexers (instead of $2\rightarrow 1$), in which the $k^{th}$ multiplexer selects between the inputs $\lambda(k), \lambda(k+N/2)$ and $R(k)$. - There will be an additional array of multiplexers at the input of the line decoder for selecting between ${x}({0}:{N/4-1})$ and ${x}({N/4}:{N/2-1})$, to support the use of both parts of the decided codeword. Similarly, for the P-Mode, we should have an array of multiplexers to select between the two parts of the $x_{in}(0:N/2-1)$ array. The S-Mode decoding algorithm has $4$ steps as before; however, steps I and III are modified as follows. STEP I : \ Sequentially, - **STEP I-a**: At the MUX array, at the input of the (limited parallelism) decoder of the length $N/2$ polar code, set the control signal $c_m=0$, which means that ${ \lambda}({0}:{N/2-1})$ is selected as an input to this unit. Set $c_u=0$ and use the $N/2$ length polar code decoder in P-Mode. Store the output array of signals $L(0:N/4-1)$, corresponding to $${L}(k) = 2\cdot \tanh^{-1}\left(\tanh\left(\lambda({2k})/2 \right)\cdot \tanh\left(\lambda({2k+1})/2 \right)\right)\,\,\,\,\,,0\leq k\leq N/4-1$$ in the registers array ${ R}(0:{N/4-1})$. - **STEP I-b**: Set the control signal of the MUX array to $c_m=1$, which means that ${\lambda}({N/2}:{N-1})$ is selected as an input to this unit. Set $c_u=0$ and use the decoder of the polar code of length $N/2$ in P-Mode. Store the output signals array $${L}(k-N/4) = 2\cdot \tanh^{-1}\left(\tanh\left(\lambda({2k})/2 \right)\tanh\left(\lambda({2k+1})/2 \right)\right), \,\,\,\,\,\,{N/4}\leq k \leq {N/2-1}$$ in the registers array ${ R}({N/4}:{N/2-1})$. STEP III : \ Sequentially, - **STEP III-a**: At the MUX array, at the input of the (limited parallelism) decoder of length $N/2$ polar code, set the control signal $c_m=0$, which means that ${ \lambda}({0}:{N/2-1})$ is selected as an input to this unit.
Set $c_u=1$ and use the $N/2$ length polar code decoder in P-Mode. Store the output array of signals $L(0:N/4-1)$, corresponding to $${L}(k) = (-1)^{\tilde{x}(k)}\cdot \lambda({2k})+\lambda({2k+1})\,\,\,\,\,,0\leq k\leq N/4-1$$ in the registers array ${ R}(0:{N/4-1})$. Note that we use $\tilde { x}(0:{N/4-1})$, the first half of the output from step II, as an input to the $N/2$ length decoder. - **STEP III-b**: Set the control signal of the MUX array to $c_m=1$, which means that ${\lambda}({N/2}:{N-1})$ is selected as an input to this unit. Set $c_u=1$ and use the decoder of the polar code of length $N/2$ in P-Mode. Store the output signals array $${L}(k-N/4) = (-1)^{\tilde{x}(k)}\cdot \lambda({2k})+\lambda({2k+1}), \,\,\,\,\,\,{N/4}\leq k \leq {N/2-1}$$ in the registers array ${ R}({N/4}:{N/2-1})$. Note that we now use $\tilde { x}({N/4}:{N/2-1})$, the second half of the output from step II, as an input to the $N/2$ length decoder. The P-Mode operation of the decoder is also changed, and now contains two steps. STEP I : \ At the MUX array, at the input of the (limited parallelism) polar code decoder of length $N/2$, set the control signal $c_m=0$, which means that ${\lambda}({0}:{N/2-1})$ is selected as an input to this unit. Set $c_u = c_{u,in}$ and use the $N/2$ length polar code decoder in P-Mode. Store the output signals array $L(0:N/4-1)$ in the registers array ${ R}({0}:{N/4-1})$. If $c_{u,in}=1$, use the first half of the input signals array $x_{in}$ (i.e. $x_{in}(0:N/4-1)$) as an input to the $N/2$ length decoder (otherwise this input is ignored). STEP II : \ Set the control signal of the MUX array to $c_m=1$, which means that ${ \lambda}({N/2}:{N-1})$ is selected as an input to this unit. Set $c_u = c_{u,in}$ and use the $N/2$ length polar code decoder in P-Mode. Store the output signals array $L(0:N/4-1)$ in the registers array ${ R}({N/4}:{N/2-1})$. If $c_{u,in}=1$, use the second half of the input signals array $x_{in}$ (i.e.
$x_{in}(N/4:N/2-1)$) as an input to the $N/2$ length decoder (otherwise this input is ignored). The output of the decoder is the array of signals corresponding to the array of registers ${R}(0:N/2-1)$. Let us analyze the time complexity of this algorithm. We denote the S-Mode running time (in terms of clock cycles) for a length $N=2^n$ polar code with limited parallelism of $N/{2^i}=2^{n-i}$ by $T(n,n-i)$. We note that $T(n,n-1) = T(n)$, where $T(n)=2N-2$ is the running time of the standard line decoder. The recursion formula is $$T(n,n-i)=2\cdot T(n-1,n-i)+ 4\cdot T_p(n-1,n-i),$$ where $T_p(n,m)$ is the running time of the $N=2^{n}$ length decoder with $2^{m}$ limited parallelism in P-Mode. $$T_p(n, m) = \left\{ \begin{array}{ll} 1, & \hbox{$n-m\leq 1$;} \\ 2\cdot T_p(n-1,m), & \hbox{otherwise.} \end{array} \right.$$ Therefore, $$T_p(n,m)=\left\{ \begin{array}{ll} 1, & \hbox{$n-m\leq 1$;} \\ 2^{n-m-1}, & \hbox{otherwise.} \end{array} \right.$$ It can be shown that $$\label{eq:timeTradeoffLimPara} T(n,n-i)= 2\cdot N +(i-2)\cdot 2^i\,\,\,\,\,\,\,\,, i\geq 1.$$ Equation (\[eq:timeTradeoffLimPara\]) reveals the tradeoff between the number of PEs and the running time of the algorithm. For example, decreasing the number of processors by a multiplicative factor of $8$ compared to the standard case (i.e. $i=4$) results in an increase of only $34$ clock cycles in the decoding time. We note, however, that to build such a decoder, additional control hardware (e.g. multiplexer layers) must be designed. It seems that for a limited list size, the Successive Cancellation List decoder may also be implemented as a line decoder. This requires duplicating the hardware by the size of the list, $M$, and introducing the appropriate logic (i.e. comparators and multiplexer layers).
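The timing recursion and its closed form can be checked against each other numerically (a sketch with our own function names; the base case $n-m\leq 1$ is taken to be the standard line decoder time $2N-2$, consistent with $T(n,n-1)=T(n)$ above):

```python
def Tp(n, m):
    # P-Mode time of a length-2^n decoder with 2^m parallelism
    return 1 if n - m <= 1 else 2 ** (n - m - 1)

def T(n, m):
    # S-Mode time via the recursion T(n,m) = 2 T(n-1,m) + 4 T_p(n-1,m)
    if n - m <= 1:
        return 2 * 2 ** n - 2      # full parallelism: standard line decoder
    return 2 * T(n - 1, m) + 4 * Tp(n - 1, m)

# Closed form: T(n, n-i) = 2N + (i-2) 2^i for i >= 1
for n in range(2, 14):
    for i in range(1, n):          # parallelism N / 2^i
        N = 2 ** n
        assert T(n, n - i) == 2 * N + (i - 2) * 2 ** i
# e.g. i = 4 (8x fewer PEs): 2N + 32 cycles, only 34 more than 2N - 2
```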
It is possible to provide an implementation with $O(f(M)\cdot N)$ time complexity, where $f(\cdot)$ is a polynomially bounded function that depends on the efficiency of algorithms for selecting the $M$ most likely paths (in the $N=2$ decoder). Furthermore, the normalization of the likelihoods should be considered carefully, and it also affects the precise (i.e. non-asymptotic) time complexity. The BP Line Decoder {#sec:UVLineDecoderBP} -------------------- \ As we saw in Subsection \[sec:BP\], BP is an iterative algorithm, in which messages are sent over the normal factor graph representing the code. In this subsection, we consider an implementation of the BP decoder with the GCC serial schedule. The proposed decoder structure is a variation of the recursive structure of the SC line decoder. Figure \[fig: BPLineDecodeUV\] depicts a block diagram of this design. The main changes, with respect to the SC decoder, are in the memory resources and the processor structure. The memory plays a fundamental role in the design, as it helps keep computed messages both within the iteration boundary and beyond it. The basic requirement is that each “butterfly” realization of the $(u+v,v)$ factor graph should have memory resources to store its messages. To allow messages to be kept within the iteration boundary, it is only required to have one registers array for each length of outer code and for each message type. However, the need for keeping a message beyond the iteration boundary requires a dedicated memory array for each instance of the outer code. In the case of the $(u+v,v)$ code and the GCC schedule, only messages of type $\mu_{v}^{(in)}$ need to be kept beyond the iteration boundary. We suggest addressing this requirement in the following way. For the decoder of length $N$, we associate a registers matrix $\mu_{v}^{(in)}(0:\#_r(N)-1,0:N/2-1)$.
Here, $\#_r(N)$ is the *number of realizations* of factor graphs corresponding to outer codes of size $N$ that exist in our code. For the code of length $N$, there is only one factor graph of this size (i.e. the entire graph), and therefore for this decoder $\#_r(N)=1$. Consider now the $N/2$ length decoder that is embedded within the $N$ length decoder. We see in Figure \[fig: BPLineDecodeUV\] that its number of realizations is $2\cdot \#_r(N)$, i.e. for the $N$ length decoder we have $\#_r(N/2)=2$. This is because we have two outer codes of length $N/2$ in the $N$ length code. Therefore, the memory matrix associated with it has two rows and $N/4$ columns. The first row is dedicated to the first realization of the outer code and the second row is dedicated to the second realization. Within this $N/2$ length decoder, there is an embedded $N/4$ length decoder with $2\cdot\#_r(N/2)$ realizations, so in this case $\#_r(N/4)= 4$. As a result, it has a registers matrix with $4$ rows and $N/8$ columns (each row is dedicated to one of the $4$ outer codes of length $N/4$ in this GCC scheme). This development continues until we reach the embedded decoder of length $2$, which, by induction, has $\#_r(2)=N/2$ realizations for the $N$ length decoder, so it requires a registers matrix with $N/2$ rows and one column. For a correct operation of the decoder, it is required to inform the embedded decoders which realization of the outer code’s factor graph they are currently working on. This is the role of the signals $realizationID_{N/2}$, $realizationID_{N}$ and $OuterCodeID$, which indicate the specific realization as follows. The signal $realizationID_{N}$ notifies the decoder of length $N$ of the realization of the factor graph of the code of length $N$ it is working on. Note that because we describe here a decoder for a code of length $N$, we have only one realization of this graph; therefore this signal is fixed to $0$.
However, if this were an embedded decoding unit within a decoder for a longer code, this signal would indicate the ordinal number of the outer code being decoded, ranging from $0$ to $\#_r(N)-1$. The signal $realizationID_{N/2}$ gives the identity of the realization of the outer polar code of length $N/2$. It is computed as $2\cdot realizationID_N+OuterCodeID$, where $OuterCodeID$ equals $0$ on step II and equals $1$ on step IV of the iteration for the $N$ length decoder. We also need registers arrays for the messages of types $\mu_{e_1\rightarrow a_0},\mu_{a_0 \rightarrow e_1},\mu_{u}^{(in)},\mu_{u}^{(out)}$ and $\mu_{v}^{(out)}$, each of length $N/2$. We denote them by $\mu_{e_1\rightarrow a_0}(0:N/2-1), \mu_{a_0\rightarrow e_1}(0:N/2-1), \mu_{u}^{(in)}(0:N/2-1), \mu_{u}^{(out)}(0:N/2-1)$ and $\mu_{v}^{(out)}(0:N/2-1)$. Note that, as opposed to the memory structure for the $\mu_{v}^{(in)}$ messages, these arrays do not need to be available beyond the iteration boundary; it therefore suffices to have them as arrays and not matrices. Furthermore, the arrays for messages $\mu_{e_1\rightarrow a_0}$, $\mu_{u}^{(out)}$ and $\mu_{v}^{(out)}$ can be replaced by one temporary array. However, in the description of the hardware structure, we chose not to do this, in order to keep the discussion more comprehensible. \ Figure \[fig: BPPEPRocessor\] depicts the processing element $BP\_PE$ that is considered here. This unit has two inputs for message llrs, and depending on the control signal $c_{BPPE}$ it performs either the $f_{(+)}(\cdot,\cdot)$ function or the $f_{(=)}(\cdot,\cdot)$ function. Because it has to implement the functionalities of equations (\[eq:BPUV1\])-(\[eq:BPUV6\]), we introduce routing layers for the inputs (OP-MUX) and the outputs (OP-De-MUX) that ensure that the proper inputs are given to the processor and that its output is stored in the appropriate array, depending on the computation schedule of the iteration.
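The recursive computation of the realization identifiers can be illustrated as follows (an illustrative sketch; `path`, the sequence of $OuterCodeID$ values from the outermost decoder downwards, is our own notation and not a signal of the design):

```python
# Sketch: the row of the mu_v^(in) matrix addressed by an embedded decoder,
# obtained by iterating realizationID_{m/2} = 2 * realizationID_m + OuterCodeID,
# starting from realizationID_N = 0 at the outermost decoder.
def realization_id(path):
    rid = 0
    for outer_code_id in path:   # 0 on step II, 1 on step IV at each level
        rid = 2 * rid + outer_code_id
    return rid
```

Descending three levels and always entering the second outer code (step IV) addresses row $2\cdot(2\cdot(2\cdot 0+1)+1)+1=7$.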
Besides the messages that serve as inputs or outputs to the processor, we allocate two additional message inputs, denoted by $ext_{a}$ and $ext_{b}$, and one additional output message, denoted by $ext_{out}$. These inputs and output are used during the P-Mode of the decoder. We note that in Figure \[fig: BPLineDecodeUV\], for brevity, we preferred not to specify these routing units for each processor, but rather to group them into routing arrays. The inputs and outputs to these routing arrays are arrays of inputs and outputs corresponding to the types of inputs and outputs that appear in Figure \[fig: BPPEPRocessor\]. The convention is that in these routing arrays, the $i^{th}$ output corresponds to the $i^{th}$ input from each signals array (the signals array is selected by the control signal of the routing array). Moreover, the $i^{th}$ output of the OP-MUX array corresponds to the $i^{th}$ consecutive processor from the array of processors it serves. Similarly, the $i^{th}$ input of the OP-De-MUX array corresponds to the $i^{th}$ consecutive processor from the array of processors it serves. As in the SC case, the BP line decoder has two operation modes. - S-Mode - The decoder gets as input $\mu^{(in)}$ type messages referring to its inputs. It outputs $\mu^{(out)}$ type messages corresponding to its inputs (i.e. messages sent from the subgraph realization on which the decoder operates to its neighbors) and an estimation of the information word vector (denoted by *infoEst*). - P-Mode - The decoder serves as an array of $N/2$ processors and performs simultaneously the computation of the type of message indicated by $C_{BPPE,external}$, using signals $ext_a$ and $ext_b$ as the inputs and $ext_{out}$ as the output. The S-Mode decoding algorithm operates as follows.
STEP I : \ Simultaneously, - At the MUX array, at the input of the decoder of the code of length $N/2$, set the control signal $c_m=0$, which means that the OP-MUX array is selected as the input to the decoder. Set $c_{opMUX}$ such that $\mu^{(in)}_{v}(0:N/4-1)$ and $\mu_{x_1}^{(in)}(0:N/4-1)$ will be selected as the first input and the second input, respectively, of this unit. Set $c_{BPPE,internal}$ to correspond to the computation of (\[eq:BPUV1\]) and use the $N/2$ length polar code decoder in P-Mode. Set the OP-De-MUX array to direct the output to $\mu_{e_1 \rightarrow a_0}(0:N/4-1)$. - Having the same values for $c_{opMUX}$ and $c_{BPPE,internal}$, use the auxiliary array of processors to operate on the inputs $\mu^{(in)}_{v}(N/4:N/2-1)$ and $\mu_{x_1}^{(in)}(N/4:N/2-1)$ and have the output directed to $\mu_{e_1\rightarrow a_0}(N/4:N/2-1)$. Simultaneously, - Keep $c_m=0$. Set $c_{opMUX}$ such that $\mu_{x_0}^{(in)}(0:N/4-1)$ and $\mu_{e_1\rightarrow a_0}(0:N/4-1)$ will be selected as the first input and the second input, respectively, to the $N/2$ length decoder. Set $c_{BPPE,internal}$ to correspond to the computation of (\[eq:BPUV3\]) and use the decoder of length $N/2$ in P-Mode. Set the OP-De-MUX array to direct the output to $\mu_{u}^{(out)}(0:N/4-1)$. - Having the same values for $c_{opMUX}$ and $c_{BPPE,internal}$, use the auxiliary array of processors to operate on the inputs $\mu_{x_0}^{(in)}(N/4:N/2-1)$ and $\mu_{e_1\rightarrow a_0} (N/4:N/2-1)$ and have the output directed to $\mu_{u}^{(out)}(N/4:N/2-1)$. STEP II : - At the MUX array, at the input of the BP decoder of the $N/2$ length polar code, set the control signal $c_m=1$, which means that the input from the second multiplexer is selected as input to this unit. Specifically, since $OuterCodeID=0$, this means that $\mu_{u}^{(out)}(0:N/2-1)$ is the input to this decoder.
- Provide the indices of the frozen bits from the first half of the codeword to the $N/2$ length decoder, and operate it in S-Mode. Store the estimation of the information word (output signals array *infoEst*) in the bits array $u(0:N/2-1)$. Direct the output messages to be saved in $\mu_{u}^{(in)}(0:N/2-1)$, using the de-mux that is connected to the *outMessages* signals array, at the output of the $N/2$ length decoder. STEP III : \ Simultaneously, - At the MUX array, at the input of the BP decoder of the $N/2$ length polar code, set the control signal $c_m=0$, which means that the OP-MUX array is selected as the input to the decoder. Set $c_{opMUX}$ such that $\mu^{(in)}_{x_0}(0:N/4-1)$ and $\mu_{u}^{(in)}(0:N/4-1)$ will be selected as the first input and the second input, respectively, to this unit. Set $c_{BPPE,internal}$ to correspond to the computation of (\[eq:BPUV2\]), and use the $N/2$ length decoder in P-Mode. Set the OP-De-MUX array to direct the output to the array $\mu_{a_0\rightarrow e_1}(0:N/4-1)$. - Having the same values for $c_{opMUX}$ and $c_{BPPE,internal}$, use the auxiliary array of processors to operate on the inputs $\mu^{(in)}_{x_0}(N/4:N/2-1)$ and $\mu_{u}^{(in)}(N/4:N/2-1)$ and have the output directed to $\mu_{a_0\rightarrow e_1}(N/4:N/2-1)$. Simultaneously, - Keep $c_m=0$ and change $c_{opMUX}$ such that $\mu^{(in)}_{x_1}(0:N/4-1)$ and $\mu_{a_0\rightarrow e_1}(0:N/4-1)$ will be the first input and the second input, respectively, to the $N/2$ length decoder. Set $c_{BPPE,internal}$ to correspond to the computation of (\[eq:BPUV4\]) and use the $N/2$ length decoder in P-Mode. Set the OP-De-MUX array to direct its output to the array $\mu_{v}^{(out)}(0:N/4-1)$. - Having the same values for $c_{opMUX}$ and $c_{BPPE,internal}$, use the auxiliary array of processors to operate on the inputs $\mu^{(in)}_{x_1}(N/4:N/2-1)$ and $\mu_{a_0\rightarrow e_1}(N/4:N/2-1)$ and have the output directed to $\mu_{v}^{(out)}(N/4:N/2-1)$.
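The two PE operations invoked throughout these steps can be modeled in software; the sketch below uses the common min-sum approximation for $f_{(+)}$ (an assumption on our part — the exact llr arithmetic is left abstract in this description):

```python
import math

def f_plus(a, b):
    # Check-node update; min-sum approximation of 2*atanh(tanh(a/2)*tanh(b/2)).
    return math.copysign(1.0, a) * math.copysign(1.0, b) * min(abs(a), abs(b))

def f_equal(a, b):
    # Equality-node update: llrs of independent observations add.
    return a + b
```

A PE of this kind, with a one-bit select between the two functions, is exactly the unit that the routing layers feed on every step of the schedule.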
STEP IV : \ - At the MUX array, at the input of the decoder of the code of length $N/2$, set the control signal $c_m=1$, which means that the input from the second multiplexer is selected as an input to this unit. Also set $OuterCodeID=1$, which means that $\mu_{v}^{(out)}(0:N/2-1)$ is the input to this decoder. - Provide the indices of the frozen bits from the second half of the codeword to the $N/2$ length decoder, and operate it in S-Mode. Perform the decoding of the second outer polar code of length $N/2$. Save the estimation of the information word (output signals array *infoEst*) in the bits array $u(N/2:N-1)$. Direct the output messages to be stored in $\mu_{v}^{(in)}(0:N/2-1)$, using the de-mux that is connected to the *outMessages* signals array, at the output of the $N/2$ length decoder. Simultaneously, - At the MUX array, at the input of the decoder of the code of length $N/2$, set the control signal $c_m=0$. Set $c_{opMUX}$ such that $\mu^{(in)}_{v}(0:N/4-1)$ and $\mu_{x_1}^{(in)}(0:N/4-1)$ will be selected as the first input and the second input, respectively, of this unit. Set $c_{BPPE,internal}$ to correspond to the computation of (\[eq:BPUV1\]) and use the $N/2$ length polar code decoder in P-Mode. Set the OP-De-MUX array to direct the output to $\mu_{e_1 \rightarrow a_0}(0:N/4-1)$. - Having the same values for $c_{opMUX}$ and $c_{BPPE,internal}$, use the auxiliary array of processors to operate on the inputs $\mu^{(in)}_{v}(N/4:N/2-1)$ and $\mu_{x_1}^{(in)}(N/4:N/2-1)$ and have the output directed to $\mu_{e_1\rightarrow a_0}(N/4:N/2-1)$. Simultaneously, - Keep $c_m=0$ and change $c_{opMUX}$ such that $\mu^{(in)}_{u}(0:N/4-1)$ and $\mu_{e_1\rightarrow a_0}(0:N/4-1)$ will be the first input and the second input, respectively, to the $N/2$ length decoder. Use the polar code decoder of length $N/2$ in P-Mode, and set $c_{BPPE,internal}$ to correspond to the computation of (\[eq:BPUV5\]).
Set the OP-De-MUX array to direct the output to $\mu_{x_0}^{(out)}(0:N/4-1)$. - Having the same values for $c_{opMUX}$ and $c_{BPPE,internal}$, use the auxiliary array of processors to operate on the inputs $\mu^{(in)}_{u}(N/4:N/2-1)$ and $\mu_{e_1\rightarrow a_0}(N/4:N/2-1)$ and have the output directed to $\mu_{x_0}^{(out)}(N/4:N/2-1)$. Simultaneously, - Keep $c_m=0$ and change $c_{opMUX}$ such that $\mu^{(in)}_{v}(0:N/4-1)$ and $\mu_{a_0 \rightarrow e_1 }(0:N/4-1)$ will be the first input and the second input, respectively, to the $N/2$ length decoder. Set $c_{BPPE,internal}$ to correspond to the computation of (\[eq:BPUV6\]), and use the $N/2$ length polar code decoder in P-Mode. Set the OP-De-MUX array to direct the output to $\mu_{x_1}^{(out)}(0:N/4-1)$. - Having the same values for $c_{opMUX}$ and $c_{BPPE,internal}$, use the auxiliary array of processors to operate on the inputs $\mu^{(in)}_{v}(N/4:N/2-1)$ and $\mu_{a_0 \rightarrow e_1}(N/4:N/2-1)$ and have the output directed to $\mu_{x_1}^{(out)}(N/4:N/2-1)$. The output of the decoder, in S-Mode, consists of the array $u(0:N-1)$ and the two arrays $\mu_{x_0}(0:N/2-1)$ and $\mu_{x_1}(0:N/2-1)$ interleaved. This means that the output message vector is an array in which the entries with even indices are from $\mu_{x_0}(0:N/2-1)$ and the entries with odd indices are from $\mu_{x_1}(0:N/2-1)$. In P-Mode, the decoder serves as an array of $N/2$ processors that operate in parallel. The control signal $C_{BPPE,external}$ indicates which operation is performed by all the processors. The inputs to the processors are denoted by the signals arrays $ext_{a}(0:N/2-1)$ (the first input) and $ext_{b}(0:N/2-1)$ (the second input). The output is directed to the signals array $ext_{out}(0:N/2-1)$. The P-Mode decoding algorithm operates as follows.
Simultaneously, - At the MUX array, at the input of the BP decoder of the polar code of length $N/2$, set the control signal $c_m=0$, which means that the OP-MUX array is the input of the decoder. Set $c_{opMUX}$ such that $ext_a(0:N/4-1)$ and $ext_b(0:N/4-1)$ will be the first input and the second input, respectively. Use the polar code decoder of length $N/2$ in P-Mode, and set $c_{BPPE,internal}$ to be equal to $C_{BPPE,external}$. Set the OP-De-MUX array to direct the output to $ext_{out}(0:N/4-1)$. - Having the same values for $c_{opMUX}$ and $c_{BPPE,internal}$, use the auxiliary array of processors to operate on the inputs $ext_a(N/4:N/2-1)$ and $ext_b(N/4:N/2-1)$ and have the output directed to $ext_{out}(N/4:N/2-1)$. Let us now consider the time complexity (in terms of the number of clock cycles consumed by an iteration) of this design. As before, let $T(n)$ be the time complexity of the decoder of the polar code of length $N=2^n$. We assume that each call to a PE requires one clock cycle. In our design, we therefore have $$\label{eq:recBPUV} T(n) = 2\cdot T(n-1)+7,$$ and $T(1)=4$, so $T(n)=5.5\cdot N-7=\Theta(N)$. The memory consumption, however, is $\Theta(N\cdot \log N)$, because of the memory matrices for the $\mu_{v}^{(in)}$ type of messages. The number of processing elements in this design is $N/2$. It should be noted that the suggested processor can be further improved to allow some operations to occur in parallel. For example, if the PE could run one operation of $f_{+}(\cdot)$ and one operation of $f_{=}(\cdot)$ in parallel, the last two operations in step IV could be done in one clock cycle, reducing the free addend in (\[eq:recBPUV\]) to $6$. A further reduction would be achieved if one could perform $f_{+}(\cdot)$ and direct its output to $f_{=}(\cdot)$ in one clock cycle. This would join the two operations in step III into one operation.
Allowing the computation of $f_{=}(\cdot)$ and directing its output to $f_{+}(\cdot)$ in the same clock cycle results in consolidating the two operations of step I into one operation (in fact, the latter change may also allow consolidating the second and third computations in step IV, making the first change redundant). These changes reduce the free addend in (\[eq:recBPUV\]) to $4$, with base case $T(1)=2$, so $T(n)=3\cdot N-4$. Naturally, these changes require the appropriate amendments in the routing units that we described before. We note here that the remarks we made on the SC line decoder at the end of Subsection \[sec:UVLineDecoder\] also apply here. Specifically, the long-path hazards, requiring more efficient designs obtained by opening the recursive boxes, are also relevant for the BP decoder, specifically for the routing layers in P-Mode. Furthermore, the issue of idle clock cycles for the PEs is also a problem of this design, and the solutions of Subsections \[sec:ParDecLine\] and \[sec:LimitedParDecLine\] may be adapted to this decoder too. However, while in the SC decoder the existence of inactive PEs is due to the properties of the SC algorithm, which dictates the scheduling of the message computation, in the BP case it is due to the scheduling we chose and not a mandatory property of the algorithm. Other types of scheduling do exist, and currently there is no evidence as to which scheduling is better (for example, in terms of the achieved error rate or in terms of the average number of iterations required for convergence). Hussami *et al.* [@Hussami2009] proposed to use the Z-shape schedule, whose description suggests a constant level of parallelism of $N$ PEs (of the type we considered here) operating all the time. This seems to give the Z-shape schedule an advantage over the GCC schedule if the number of processors is not limited (unless the technique of Subsection \[sec:ParDecLine\] is applied).
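The closed forms above follow by unrolling the recursion; a quick numerical sanity check (our own sketch, taking base cases $T(1)=4$ for the schedule as described and $T(1)=2$ for the consolidated variant):

```python
def T(n, addend, base):
    """Iterate T(n) = 2*T(n-1) + addend with T(1) = base."""
    t = base
    for _ in range(n - 1):
        t = 2 * t + addend
    return t

for n in range(1, 13):
    N = 2 ** n
    assert T(n, 7, 4) == 5.5 * N - 7   # schedule as described
    assert T(n, 4, 2) == 3 * N - 4     # with the consolidated operations
```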
It is an interesting question to determine which schedule is better when the number of processors is limited. This is a matter for further research. Hardware Architectures for General Kernels {#sec:HardArchiForOthKer} ========================================== So far, we described algorithms for decoding polar codes in a recursive way. This notion has enabled us to restate the hardware implementations of SC decoding for Arikan’s construction that were proposed by Leroux *et al.* [@Leroux2012]. In addition, we suggested an implementation of BP decoding with the GCC schedule. In this section, we generalize these constructions to other types of kernels. Because we already covered the implementation for Arikan’s codes in some detail, we will be briefer in this section, mainly emphasizing the principal differences from the designs in Section \[sec:HrdwreArikConstr\]. Recursive Description for the SC Line Decoder for General Kernels {#sec:SCLineForGeneralKernel} ----------------------------------------------------------------- \ Figure \[fig: LineDecoderGeneralHmGen\] depicts a block diagram of an SC line decoder for a general linear kernel of dimension $\ell$, over an alphabet $F$. This kernel has an $\ell\times\ell$ generating matrix $G$ associated with it. We assume that this decoder has the same requirements for the inputs and outputs that were given for the $(u+v,v)$ line decoder in Subsection \[sec:UVLineDecoder\]. The basic processing element of this design (denoted by PE) gets $\ell$ llr functions (each consisting of $|F|-1$ values) and the coset vector that reflects the decisions of the previous stages. The control signal $c_u$ indicates which type of llr function the processor should output. There are $\ell$ types of computations that the processor should support, according to the different stages of decoding, as (\[eq:genSCRule\]) implies.
Since we consider here a linear kernel, when decoding outer code number $k$, the assumption on ${\bf u}_0^{k-1}$ (the information sub-vector input to the kernel) is manifested by the coset vector which this sub-vector induces. This coset vector is generated by ${\bf u}_0^{k-1}\cdot G_{\rightarrow(0:k-1)}$, where $G_{\rightarrow(0:k-1)}$ is a matrix containing only the first $k$ rows of $G$. This coset vector is gradually computed and maintained in the registers array $x(0:N-1)$, as we explain in the sequel. We note that if the kernel is not linear, then each processor should get the previously decided bits associated with it, i.e. the estimated sub-vector ${\bf u}_0^{k-1}$, in order to perform (\[eq:genSCRule\]). The way the llr computations of (\[eq:genSCRule\]) are done is an important question that we do not elaborate on here. For example, it may be beneficial to consider a trellis implementation of the decoding stages, or even approximations of it, such as the *min-sum* rule [@Leroux2012], or near-ML decoding variants, such as *order statistics* or *box and match* [@Trifonov2012]. Since the outer codes in this design are of length $N/{\ell}$, the processors in the preparatory steps of the SC algorithm (i.e. steps $2\cdot r -1$, as defined in Section \[sec:RecDescOfDecAlgor\]) should generate $N/\ell$ llr functions, serving as inputs to the decoder of the outer code. Therefore, to achieve the maximum level of parallelism we use $N/{\ell}$ PEs in the decoder. The embedded $N/{\ell}$ length recursive decoder is able to contribute only $N/\ell^2$ processors, so the auxiliary array of processors needs to supply the rest, i.e. it should have $N/{\ell}-N/{\ell^2}$ additional processors. The *encoding unit* gets the decisions on the codewords of the outer codes from the $N/{\ell}$ length decoder. Using these decisions, it computes the estimated coset vectors of the inner codes.
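Concretely, the running coset update performed by the encoding unit can be modeled as below (a hypothetical Python sketch over a prime field $F=GF(q)$, with $G$ given as a list of rows; the function name and calling convention are ours):

```python
def update_cosets(x, x_tilde, G, r, ell, q):
    """After outer code r-1 is decoded, add row r-1 of G, scaled by each
    estimated outer-code symbol x_tilde[i], to the i-th coset vector
    x[ell*i : ell*(i+1)]; arithmetic is modulo the prime q."""
    row = G[r - 1]
    for i, s in enumerate(x_tilde):
        for t in range(ell):
            x[ell * i + t] = (x[ell * i + t] + s * row[t]) % q
    return x
```

For the binary $(u+v,v)$ kernel, $G=[[1,1],[0,1]]$ and $q=2$: applying the update for outer codewords $(1,0)$ and then $(1,1)$ accumulates the cosets $(1,0)$ and $(0,1)$ for the two inner codes.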
To support this, we use the signal *outerCodeID*, which identifies the outer code currently being decoded. At the end of step $2\cdot r$, we have $outerCodeID = r-1$, because the $N/{\ell}$ length decoder has just finished decoding outer code number $r-1$. This decoder outputs the estimation of the codeword using the signals vector $\tilde{x}(0:N/{\ell}-1)$. Now, the encoding layer performs the following operation, for $0 \leq i \leq N/\ell-1$, $$\label{eq:GeneralEnc} x\left(\ell \cdot i:\ell\cdot(i+1)-1\right)=x\left(\ell \cdot i:\ell\cdot(i+1)-1\right)+\tilde{x}(i)\cdot G_{\rightarrow r-1}.$$ This means that we add row number $r-1$ of $G$, multiplied by the symbols of the recently estimated outer codeword, to the previously estimated coset vectors (note that we have $N/{\ell}$ coset vectors, such that $x\left(\ell \cdot i:\ell\cdot(i+1)-1\right)$ corresponds to the $i^{th}$ inner code, $0 \leq i \leq N/\ell-1$). At the end of step $2\cdot\ell$, the output of the encoding layer is the estimation of the codeword. As in the $(u+v,v)$ line decoder, we have two operation modes. The first one is S-Mode, in which the decoder gets llr functions and the indices of the frozen symbols, and outputs the hard decisions on the information word and its corresponding codeword. The second one is P-Mode, in which the decoder operates as an array of processors and performs the same type of operation according to the signal $c_{u}$. In S-Mode, we have $\ell$ pairs of computation steps, as described below ($1\leq r \leq \ell$). STEP $2\cdot r-1$ : \ Simultaneously, - At the MUX array, at the input of the decoder of the polar code of length $N/\ell$, set the control signal $c_m=0$, which means that the array ${\bf \lambda}(0:N/{\ell}-1)$ is selected as an input to this unit. Set $c_u=r-1$ and supply the coset vectors $x(0:N/{\ell}-1)$ to the unit (the latter is achieved because $modeIn=0$). Use the decoder of the polar code of length $N/\ell$ in P-Mode.
This means that the processors will perform the computation of the llrs of type $r-1$ according to (\[eq:genSCRule\]), where $k=r$. The values of the computations are stored in the registers array $R(0:N/{\ell^2}-1)$. - Use the auxiliary array of processors, and perform the same computations given the rest of the llrs array, ${\bf \lambda}(N/{\ell}:N-1)$, and the rest of the cosets vector, $x(N/{\ell}:N-1)$. The outputs of the computations are stored in the registers array $R(N/\ell^2:N/{\ell}-1)$. STEP $2\cdot r$ : - At the MUX array, at the input of the decoder of the polar code of length $N/\ell$, set the control signal $c_m=1$, which means that the values of the registers array $R(0:N/{\ell}-1)$ are inputs to this unit. - Provide to the $N/{\ell}$ length polar code decoder the indices of the frozen symbols from the range $[(r-1)\cdot N/\ell,r\cdot N/{\ell}-1]$. Operate the $N/{\ell}$ length polar code decoder in S-Mode, which results in decoding of outer code number $r-1$. Store the estimated information word in the following way: $u((r-1)\cdot N/\ell:r\cdot N/{\ell}-1)=\tilde{u}(0:N/{\ell}-1)$. Perform the computation of the coset vector as defined in (\[eq:GeneralEnc\]). If this is the last step (i.e. $r=\ell$), then we give as output the content of $u(0:N-1)$ and the content of $x(0:N-1)$ (to avoid the sampling delay due to the registers, we prefer to give as output $\left[u(0:N-\ell+1)\,\,\,\,\, \tilde{u}(0:N/\ell-1)\right]$ instead of $u(0:N-1)$, and the output of the *encoding layer* block instead of $x(0:N-1)$). The P-Mode operation is quite straightforward. We have the signal $modeIn=1$, which indicates that the $N$ length decoder operates in P-Mode. This causes the input cosets vectors (denoted by the signals array $x_{in}(0:N-1)$) to be routed to the processors (instead of the internal cosets vector $x(0:N-1)$). The embedded $N/{\ell}$ length decoder operates in P-Mode (i.e. $mode = 1$ as well).
As a result, both the auxiliary array of processors and the embedded $N/{\ell}$ length decoder compute the operation that is indicated by the signal $c_{u,in}$, and output the results of the computations using the signals array $L(0:N/{\ell}-1)$. The complexity analysis is also quite simple. As an example, if we assume that the processor requires $c$ clock cycles to complete the computation of each of its $\ell$ stages, then for a code of length $N=\ell^n$ we have $T(n)=\ell\cdot T(n-1)+\ell\cdot c$ and $T(1)=\ell\cdot c$, so $T(n)=\frac{\ell\cdot c}{\ell-1}\cdot\left(N-1\right)=\Theta(N)$ clock cycles. The number of $R$ registers for holding the llr functions (each function contains $|F| -1$ values) can be shown to be $N\sum_{i=1}^{\log_{\ell}N-1}\ell^{-i}=N\cdot \frac{1-N^{-1}\ell}{\ell-1}=\frac{N-\ell}{\ell-1}$. The long routing path hazard that we raised in the context of the $(u+v,v)$ decoder may also be of concern here. Therefore, our suggestion to open the recursion boxes and to optimize them accordingly may be relevant here as well. The ideas of sharing the auxiliary array of processors for increasing the throughput, or decreasing the parallelism, studied in Subsections \[sec:ParDecLine\] and \[sec:LimitedParDecLine\] respectively, are also applicable here with the obvious adaptations. About Decoders for Mixed Kernels and General Concatenated Codes {#sec:MixedKernelsHW} --------------------------------------------------------------- So far, we considered decoders for homogeneous kernels that may be non-binary. These codes have the nice property that the outer codes in their GCC structure are themselves polar codes from the same family (but shorter ones). Therefore, we were able to use a single embedded decoder of a code of length $N/{\ell}$ within the decoder of the code of length $N$. This embedded decoder is used $\ell$ times, each time with different inputs (i.e. indices of the frozen symbols and the input messages).
This property no longer applies when mixed kernels are employed. Consider, for example, the mixed kernel of dimension $\ell=4$ that we presented in one of our previous papers [@Presman2011]. In the decoder of the mixed code of length $N=4^n$, we should have an embedded decoder of the mixed code of length $N/4$, and an additional embedded decoder for the $RS(4)$ polar code of length $N/4$. It should be noted, however, that even here a reuse of hardware is still possible, as the decoder for the $RS(4)$ code of length $N/4$ requires an embedded decoder for the $RS(4)$ code of length $N/16$ within it. The latter decoder (and its embedded decoders) can be shared with the decoder for the mixed code of length $N/4$ (which requires an embedded $RS(4)$ decoder of the same length). A further step in the generalization of this structure is the general concatenated structure, in which the outer codes are not required to be polar codes. This means that other types of codes may be used with their corresponding decoding algorithms. Examples of such structures, using BCH codes and near-ML decoding algorithms, were recently described by Trifonov [@Trifonov2011]. In these types of constructions, we need to have a separate decoding unit for each outer code. As in the case of mixed kernels, if the outer codes share structure and decoding algorithm, these resources may be reused, thereby enabling a more efficient design. Summary and Conclusions {#summary-and-conclusions .unnumbered} ======================= We considered the recursive GCC structures of polar codes, which led to a recursive description of their decoding algorithms. Specifically, known algorithms (SC and SCL) were formalized in a recursive fashion, and then generalized for arbitrary kernels. The BP decoding algorithm with the GCC schedule was also described. Then, recursive hardware architectures for these algorithms were considered. We restated known architectures, and generalized them for arbitrary kernels.
In our discussion, we preferred, for brevity, to give somewhat abstract descriptions of the subjects, emphasizing the main properties while neglecting some of the technical details. However, a complete hardware design requires a full treatment of all of these details (as was done by Leroux *et al.* for the $(u+v,v)$ case [@Leroux2012]). We intend to verify this design for arbitrary kernels in future work. Another issue that needs more careful attention is the BP decoder, and specifically the proposed GCC schedule. A comparison between it and other proposed schedules (e.g. the $Z$ shaped schedule) is an interesting question, which is also a subject for further research. The usage of the BP decoder for arbitrary kernels is another interesting problem that is also worth further study. For these kernels, the way to compute the messages is well understood. However, the question of an appropriate schedule that enables the convergence of the algorithm is not clear. We note, however, that for a specific kernel, if such a schedule exists, it may be beneficial to try to define it in a recursive manner, thereby enabling the utilization of the approach in this paper to construct decoding hardware for it. [^1]: Noam Presman and Simon Litsyn are with the School of Electrical Engineering, Tel Aviv University, Ramat Aviv 69978 Israel. (e-mails: {presmann, litsyn}@eng.tau.ac.il.). [^2]: The construction of the GCCs is a generalization of Forney’s code concatenation method [@Forney1966]. [^3]: We note that the original line decoder, which was presented by Leroux *et al.* [@Leroux2012 Section 3.3], is not precisely the same design as the one we discuss here. The differences appear to be minor (especially in the area of the routing between the llr registers and the PEs), so we preferred not to distinguish it from Leroux’s design by giving it another name.
--- abstract: 'Let ${\mathcal{O}}$ be a maximal order in a definite quaternion algebra over $\mathbb{Q}$ of prime discriminant $p$, and $\ell$ a small prime. We describe a probabilistic algorithm, which for a given left ${\mathcal{O}}$-ideal, computes a representative in its left ideal class of $\ell$-power norm. In practice the algorithm is efficient, and subject to heuristics on expected distributions of primes, runs in expected polynomial time. This breaks the underlying problem for a quaternion analog of the Charles-Goren-Lauter hash function, and has security implications for the original CGL construction in terms of supersingular elliptic curves.' author: - 'David Kohel, Kristin Lauter, Christophe Petit[^1], Jean-Pierre Tignol' title: 'On the quaternion $\ell$-isogeny path problem' --- *To appear in the LMS Journal of Computation and Mathematics, as a special issue for ANTS (Algorithmic Number Theory Symposium) conference.* Introduction {#sec:introduction} ============ In this paper, we provide a probabilistic algorithm to solve a quaternion ideal analog of the path problem in supersingular $\ell$-isogeny graphs. The main result is an algorithm for the following. Let ${B_{p,\infty}}$ be a quaternion algebra over ${\mathbb{Q}}$ ramified at $p$ and $\infty$. Let $\ell$ be a “small” prime, typically 2 or 3, or any small constant prime. Given a maximal quaternion order ${\mathcal{O}}$ in ${B_{p,\infty}}$ and a left ${\mathcal{O}}$-ideal $I$, compute an equivalent left ${\mathcal{O}}$-ideal $J = I\beta$ with norm $\ell^k$ for some $k$. This algorithm runs in practice in probabilistic polynomial time, and this effective runtime follows from heuristic assumptions on expected distributions of primes. With minimal adaptation, the algorithm also applies to output an ideal with smooth (or power-smooth) norm. The algorithm is described in terms of a special maximal order, but extends to any maximal order by passing through such a special order. 
The motivation for this problem is an explicit equivalence of categories between left ${\mathcal{O}}$-ideals and supersingular elliptic curves (over $\bar{{\mathbb{F}}}_p$). The Deuring correspondence gives a bijection between such curves, up to Galois conjugacy, and isomorphism classes of maximal orders in ${B_{p,\infty}}$. This bijection can be turned into an equivalence of categories by the following construction. Let $E_0/K$ be a fixed elliptic curve with endomorphism ring ${\mathcal{O}}= {\mathrm{End}}(E_0)$ a quaternion order in ${B_{p,\infty}}= {\mathcal{O}}\otimes {\mathbb{Q}}$ (we may take the base field $K = {\mathbb{F}}_{p^2}$ and $E_0$ such that $|E_0(K)| = (p+1)^2$). Associated to any pair $(E_1,\varphi)$ where $\varphi: E_0 \rightarrow E_1$ is an isogeny, we obtain a left ${\mathcal{O}}$-ideal $I = \mathrm{Hom}(E_1,E_0) \varphi$ of norm $n = \deg(\varphi)$, and conversely every left ${\mathcal{O}}$-ideal arises in this way (see Kohel [@Kohel1996 Section 5.3]). In particular, given any isogeny $\psi: E_0 \rightarrow E_1$ of degree $m$, the left ${\mathcal{O}}$-ideal $J = I \hat{\varphi}\psi/n$ is an equivalent ideal of norm $m$, where $\hat\varphi$ is the dual of $\varphi$. The problem we address in this work is to solve the quaternion version of the supersingular $\ell$-isogeny path problem: given $E_0$, $E_1$ and a small prime $\ell$, find an $\ell$-power isogeny from $E_0$ to $E_1$. Under this equivalence of categories, the analogous problem is the determination of an $\ell$-power norm left ${\mathcal{O}}$-ideal in the class of a given left ${\mathcal{O}}$-ideal $I$. After introducing the necessary background on quaternion orders and ideals in Section \[sec:quaternions\] and addressing some preliminary algorithmic problems in Section \[sec:prel\], we solve the $\ell$-power norm problem in Section \[sec:ideals:ellpow\].
Subject to reasonable heuristics on the probability of finding suitable primes, we obtain a probabilistic algorithm which solves this problem in expected polynomial time. The experimental runtime agrees with the most optimistic predictions for the distribution of primes. The algorithm thus draws a sharp contrast: the $\ell$-isogeny problem is efficiently solvable in the equivalent category of quaternion ideals, whereas the analogous problem in the category of supersingular elliptic curves, on which the security of the Charles, Goren and Lauter hash function [@Charles2009] is based, has to date resisted attack. This dichotomy poses several questions on the extent to which the information from the algebraic category can be transported to the geometric one. In particular, one expects an algorithm for computing the endomorphism ring of a given elliptic curve to provide an effective reduction to the algebraic setting, making the hardness of this problem critical to the underlying security. The quaternion $\ell$-isogeny path problem {#sec:quaternions} ========================================== In this section, we first motivate and define the quaternion $\ell$-isogeny path problem. We then recall basic facts on quaternion algebras. We introduce *$p$-extremal* maximal orders, which will play an important role in our solution of the quaternion $\ell$-isogeny problem. We finally discuss properties of reduced norms and ideal morphisms. “Hard” isogeny problems {#sec:intro:hard} ----------------------- The motivation for studying the quaternion $\ell$-isogeny problem is based on the analogous (indeed categorically equivalent) problem for supersingular elliptic curves. The difficulty of this problem for elliptic curves underlies the security of the Charles, Goren and Lauter hash function [@Charles2009].
As an example, finding a preimage (inverting the function) amounts to solving the following path problem in the supersingular $\ell$-isogeny graph: \[prob:preim\] Let $p$ and $\ell$ be prime numbers, $p\neq\ell$. Let $E_0$ and $E_1$ be two supersingular elliptic curves over ${\mathbb{F}}_{p^2}$ with $|E_0({\mathbb{F}}_{p^2})|=|E_1({\mathbb{F}}_{p^2})|=(p+1)^2$. Find $k\in\mathbb{N}$ and an isogeny of degree $\ell^k$ from $E_0$ to $E_1$. Similarly, finding collisions requires a solution to the following multiple path problem in the supersingular $\ell$-isogeny graph: \[prob:coll\] Let $p$ and $\ell$ be prime numbers, $p\neq\ell$. Let $E_0$ be a supersingular elliptic curve over ${\mathbb{F}}_{p^2}$. Find $k_1,k_2\in\mathbb{N}$, a supersingular elliptic curve $E_1$ and two distinct isogenies (i.e. with distinct kernels) of degrees respectively $\ell^{k_1}$ and $\ell^{k_2}$ from $E_0$ to $E_1$. Setting ${\mathcal{O}}= {\mathrm{End}}(E_0)$, we have a category of left ${\mathcal{O}}$-ideals, with morphisms $I \rightarrow I\alpha \subseteq J$, for $\alpha$ in $B = {\mathcal{O}}\otimes {\mathbb{Q}}$, which is equivalent to the category of supersingular elliptic curves and isogenies. The analog of the path problem in supersingular $\ell$-isogeny graphs is that of finding a representative ideal $J$ for given $I$ of norm $\ell^k$. We call this problem the *quaternion $\ell$-isogeny path problem*, and focus on its effective solution in this article. Quaternion algebras\[sec:intro:QA\] ----------------------------------- In this work we consider the structure of left ideals of a maximal order in the quaternion algebra $B_{p,\infty}$ ramified only at $p$ and $\infty$. Such an algebra is isomorphic to ${\mathrm{End}}(E) \otimes {\mathbb{Q}}$ for any supersingular elliptic curve $E/{\mathbb{F}}_{p^2}$. 
Here we denote ${\mathrm{End}}(E) = {\mathrm{End}}_{\bar{{\mathbb{F}}}_p}(E)$ and if we assume $\#E({\mathbb{F}}_{p^2}) = (p+1)^2$, then the full endomorphism ring ${\mathrm{End}}(E)$ is defined over ${\mathbb{F}}_{p^2}$. Any definite quaternion algebra over ${\mathbb{Q}}$ has a presentation of the form ${\mathbb{Q}}\langle{i,j}\rangle$, where $i^2 = a$, $j^2 = b$, $k = ij = -ji$ for negative integers $a,b$. The canonical involution on $B_{p,\infty}$ is given by $$\alpha = x_0 + x_1 i + x_2 j + x_3 k \longmapsto \bar{\alpha} = x_0 - x_1 i - x_2 j - x_3 k,$$ from which the reduced trace and norm take the form $${\mathrm{Trd}}(\alpha) = \alpha + \bar{\alpha} = 2x_0 \mbox{ and } {\mathrm{Nrd}}(\alpha) = \alpha \bar{\alpha} = x_0^2 - a x_1^2 - b x_2^2 + ab x_3^2.$$ The integral basis $\{1,i,j,k\}$ has the nice property of being an orthogonal basis with respect to the bilinear form $\langle{x,y}\rangle = {\mathrm{Nrd}}(x + y) - {\mathrm{Nrd}}(x) - {\mathrm{Nrd}}(y)$ associated to the reduced norm. Nevertheless, the order ${\mathcal{O}}= {\mathbb{Z}}\langle{i,j}\rangle$ is never maximal. Extremal orders\[sec:prob:extremal\] ------------------------------------ In this work we first place the focus on the [*$p$-extremal*]{} maximal orders ${\mathcal{O}}$ containing $\pi$ such that $\pi^2 = -p$. For a general maximal order there exists a unique maximal $2$-sided ideal $\mathfrak{P}$ over $p$, and this ideal is principal if and only if there exists such an element $\pi$. The maximal ideal $\mathfrak{P}$ is a generator of the $2$-sided class group, and $p$-extremal orders are precisely those of trivial $2$-sided class number. In the context of supersingular elliptic curves, these are the maximal orders which are endomorphism rings of elliptic curves defined over ${\mathbb{F}}_p$ with Frobenius endomorphism $\pi$. Secondly, we focus on orders with distinguished quadratic subring $R$.
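For concreteness, the arithmetic just described can be modeled directly. The following Python sketch (our illustration; the class name `Quat` and the sample parameters $a = -1$, $b = -11$ are ours) implements multiplication from the relations $i^2 = a$, $j^2 = b$, $k = ij = -ji$, and checks that the reduced trace and norm formulas above agree with $\alpha + \bar\alpha$ and $\alpha\bar\alpha$:

```python
from fractions import Fraction

class Quat:
    """x0 + x1*i + x2*j + x3*k in Q<i,j> with i^2 = a, j^2 = b, k = ij = -ji."""
    def __init__(self, x0, x1, x2, x3, a, b):
        self.x = tuple(Fraction(v) for v in (x0, x1, x2, x3))
        self.a, self.b = a, b

    def __mul__(self, other):
        (x0, x1, x2, x3), (y0, y1, y2, y3) = self.x, other.x
        a, b = self.a, self.b
        # multiplication table: ij = k, ji = -k, ik = aj, ki = -aj,
        # jk = -bi, kj = bi, k^2 = -ab
        return Quat(x0*y0 + a*x1*y1 + b*x2*y2 - a*b*x3*y3,
                    x0*y1 + x1*y0 - b*x2*y3 + b*x3*y2,
                    x0*y2 + x2*y0 + a*x1*y3 - a*x3*y1,
                    x0*y3 + x3*y0 + x1*y2 - x2*y1, a, b)

    def conj(self):
        x0, x1, x2, x3 = self.x
        return Quat(x0, -x1, -x2, -x3, self.a, self.b)

    def trd(self):  # reduced trace 2*x0
        return 2 * self.x[0]

    def nrd(self):  # reduced norm x0^2 - a*x1^2 - b*x2^2 + a*b*x3^2
        x0, x1, x2, x3 = self.x
        return x0*x0 - self.a*x1*x1 - self.b*x2*x2 + self.a*self.b*x3*x3

# sample check with a = -1, b = -11 (the presentation of Lemma
# [lem:disc-4] for p = 11)
u = Quat(1, 2, 3, 4, -1, -11)
v = Quat(5, -1, 0, 2, -1, -11)
assert (u * v).nrd() == u.nrd() * v.nrd()        # multiplicativity
assert (u * u.conj()).x == (u.nrd(), 0, 0, 0)    # Nrd(alpha) = alpha*conj(alpha)
assert u.trd() == u.x[0] + u.conj().x[0]         # Trd(alpha) = alpha + conj(alpha)
```

In particular the multiplicativity ${\mathrm{Nrd}}(\alpha\beta) = {\mathrm{Nrd}}(\alpha){\mathrm{Nrd}}(\beta)$, used repeatedly below, can be spot-checked this way.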
For a maximal order ${\mathcal{O}}$ we define $ d({\mathcal{O}}) = \min\{ {\mathrm{disc}}(R) : {\mathbb{Z}}\ne R \subsetneq {\mathcal{O}}\}. $ Among all $p$-extremal maximal quaternion orders, we define a [*special*]{} $p$-extremal maximal order ${\mathcal{O}}$ to be a $p$-extremal maximal order such that $d({\mathcal{O}})$ is minimal. The following lemma establishes the main properties we need for such an order, after which Lemmas \[lem:disc-4\], \[lem:disc-8\], and \[lem:disc-q\] provide for their existence by explicit construction. \[lem:special-p-extremal-properties\] Let ${\mathcal{O}}$ be a maximal order in $B_{p,\infty}$ containing a subring ${\mathbb{Z}}\langle{i,j}\rangle$ with $i^2=-q$, $j^2=-p$, and $ij = -ji$, for $q$ coprime to $p$. Set $R = {\mathcal{O}}\cap {\mathbb{Q}}[i]$ and let $D$ be its discriminant. If $R$ is the ring of integers of ${\mathbb{Q}}[i]$, then $R^\perp = Rj$ and $R + Rj$ is a suborder of index $|D|$ in ${\mathcal{O}}$. If $\omega$ is a generator of $R$, then $${\mathrm{Nrd}}(x_1 + y_1\omega + (x_2 + y_2\omega)j) = f(x_1,y_1) + p f(x_2,y_2),$$ where $f(x,y)$ is a principal quadratic form of discriminant $D$. The triviality of the trace of $j$ and the anti-commuting relation $ij = -ji$ imply that ${\mathbb{Q}}(i)$ has orthogonal complement ${\mathbb{Q}}(i)j$ in $B_{p,\infty}$. Consequently $R^\perp \subset {\mathcal{O}}$ is a lattice in ${\mathbb{Q}}(i)j$ containing $Rj$, hence of the form $\mathfrak{a}j$ for a fractional ideal $\mathfrak{a}$ of $R$ which contains $R$. The prime $p$ is inert in $R$, since $p$ is ramified in $B_{p,\infty}$ but not in $R$. Since the norm is integral on $\mathfrak{a}j$, and ${\mathrm{Nrd}}(j) = p$, it follows that $\mathfrak{a}$ is integral, hence equals $R$. The orthogonality of $R$ and $Rj$ implies that $j\beta = \bar{\beta}j$ for all $\beta$ in $R$, so $jR = Rj$ and $R + Rj$ is closed under multiplication. 
The form of the norm follows from orthogonality and multiplicativity of the norm: ${\mathrm{Nrd}}(\beta_1 + \beta_2j) = {\mathrm{Nrd}}(\beta_1) + p {\mathrm{Nrd}}(\beta_2)$. Consequently the discriminant of the norm form is $D^2 p^2$, from which we conclude that $R + Rj$ has index $|D|$ in any maximal order. By convention, for our special $p$-extremal order ${\mathcal{O}}$, we fix ${\mathbb{Z}}[i] \subseteq R$ with $i^2 = -q$ and $D = {\mathrm{disc}}(R) = -d({\mathcal{O}})$, and $j^2 = -p$ (i.e. $j = \pi$ above). Being of smallest discriminant, $R$ is necessarily a maximal order whose discriminant is the first of the sequence $$-3, -4, -7, -8, -q \text{ for prime } q \equiv 3 \bmod 4,$$ such that $p$ is ramified or inert in $R$. The next three lemmas establish existence for $q = 1$, $q = 2$, and $q \equiv 3 \bmod 4$ prime. These lemmas incorporate and expand on Propositions 5.1 and 5.2 of Pizer [@Pizer1980]. We recall that an order in a quaternion algebra is [*Eichler*]{} if it is the intersection of two maximal orders. \[lem:disc-4\] Let $p \equiv 3 \bmod 4$ be a prime, and let $B = {\mathbb{Q}}\langle{i,j}\rangle$ be the quaternion algebra given by the presentation $i^2 = -1$, $j^2 = -p$, and $k = ij = -ji$, and set $R = {\mathbb{Z}}[i]$. Then $B$ is ramified only at $p$ and $\infty$, and ${\mathbb{Z}}\langle{i,j}\rangle$ is contained in exactly two maximal orders with index $4$, described by the inclusion chains: $${\mathbb{Z}}\langle{i,j}\rangle \subsetneq {\mathbb{Z}}\langle{i,\frac{1+i+j+k}{2}}\rangle \subsetneq \left\{ \begin{array}{l} {\displaystyle}{\mathbb{Z}}\langle{i,\frac{1+j}{2}}\rangle{\raisebox{0.4ex}{,}}\\[2.5mm] {\displaystyle}{\mathbb{Z}}\langle{i,\frac{1+k}{2}}\rangle\cdot \end{array} \right.$$ In particular ${\mathbb{Z}}\langle{i,(1+i+j+k)/2}\rangle$ is an Eichler order, but ${\mathbb{Z}}\langle{i,j}\rangle$ is not. 
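The half-integral generators appearing in these chains can be sanity-checked numerically: an element of an order must have integral reduced trace and norm. A small Python check (ours), using the norm form ${\mathrm{Nrd}} = x_0^2 + x_1^2 + p(x_2^2 + x_3^2)$ for $i^2 = -1$, $j^2 = -p$, confirms this for $(1+i+j+k)/2$, $(1+j)/2$, and $(1+k)/2$ when $p \equiv 3 \bmod 4$:

```python
from fractions import Fraction

def nrd(x0, x1, x2, x3, p):
    # reduced norm on Q<i,j> with i^2 = -1, j^2 = -p
    return x0*x0 + x1*x1 + p*(x2*x2 + x3*x3)

half = Fraction(1, 2)
elements = {"(1+i+j+k)/2": (half, half, half, half),
            "(1+j)/2":     (half, 0, half, 0),
            "(1+k)/2":     (half, 0, 0, half)}

for p in (3, 7, 11, 19, 23):                 # primes p = 3 mod 4
    for name, (x0, x1, x2, x3) in elements.items():
        assert 2 * x0 == 1                   # reduced trace is 1
        assert nrd(x0, x1, x2, x3, p).denominator == 1  # norm is integral
```

For example ${\mathrm{Nrd}}((1+j)/2) = (1+p)/4$, integral precisely because $p \equiv 3 \bmod 4$; integrality is of course only a necessary condition for the inclusions in the lemma.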
\[lem:disc-8\] Let $p \equiv 5 \bmod 8$ be a prime, and let $B = {\mathbb{Q}}\langle{i,j}\rangle$ be the quaternion algebra given by the presentation $i^2 = -2$, $j^2 = -p$, and $k = ij = -ji$, and set $R = {\mathbb{Z}}[i]$. Then $B$ is ramified only at $p$ and $\infty$, and ${\mathbb{Z}}\langle{i,j}\rangle$ is contained in exactly two maximal orders with index $8$, described by the inclusion chains: $${\mathbb{Z}}\langle{i,j}\rangle \subsetneq {\mathbb{Z}}\langle{i,j,\frac{i+k}{2}}\rangle \subsetneq {\mathbb{Z}}\langle{i,\frac{i+k}{2},\frac{1+j+k}{2}}\rangle \subsetneq \left\{ \begin{array}{l} {\displaystyle}{\mathbb{Z}}\langle{i,\frac{1+j+k}{2},\frac{i+2j+k}{4}}\rangle{\raisebox{0.4ex}{,}}\\[2.5mm] {\displaystyle}{\mathbb{Z}}\langle{i,\frac{1+j+k}{2},\frac{i+2j-k}{4}}\rangle\cdot \end{array} \right.$$ In particular ${\mathbb{Z}}\langle{i,j}\rangle$ is not an Eichler order. \[lem:disc-q\] Let $p$ and $q$ be primes, with $p \equiv 1 \bmod 4$, $q \equiv 3 \bmod 4$, and $${\left(\!\frac{-p}{q}\!\right)} = 1.$$ Let $B = {\mathbb{Q}}\langle{i,j}\rangle$ be the quaternion algebra given by the relations $i^2 = -q$, $j^2 = -p$, and $k = ij = -ji$, and set $R = {\mathbb{Z}}[(1+i)/2]$. Then $B$ is ramified only at $p$ and $\infty$, and ${\mathbb{Z}}\langle{(1+i)/2,j}\rangle = R + Rj$ is contained in exactly two maximal orders with index $q$, described by the inclusion chains: $${\mathbb{Z}}\langle{(1+i)/2,j}\rangle \subsetneq \left\{ \begin{array}{l} {\displaystyle}{\mathbb{Z}}\langle{\frac{1+i}{2}{\raisebox{0.4ex}{,}}\, j\,{\raisebox{0.4ex}{,}}\frac{ci+k}{q}}\rangle{\raisebox{0.4ex}{,}}\\[2.5mm] {\displaystyle}{\mathbb{Z}}\langle{\frac{1+i}{2}{\raisebox{0.4ex}{,}}\, j\,{\raisebox{0.4ex}{,}}\frac{ci-k}{q}}\rangle{\raisebox{0.4ex}{,}}\end{array} \right.$$ where $c$ is any root of $x^2 + p \bmod q$. In particular $R + Rj$ is an Eichler order. 
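As an illustrative sketch (ours, not one of the paper's algorithms), the case analysis of the three lemmas can be automated: given $p$, select the first discriminant in the sequence $-3, -4, -7, -8, -q$ in which $p$ is inert, and, in the $-q$ case, compute the root $c$ of $x^2 + p \bmod q$; since $q \equiv 3 \bmod 4$, a square root of a residue is a $((q+1)/4)$-th power:

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p: +1, -1, or 0."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def special_disc(p):
    """First D in -3, -4, -7, -8, -q (q prime, q = 3 mod 4) with p inert
    in the quadratic order of discriminant D; for p = 1 mod 4 and D = -q,
    inertness is equivalent to the lemma's condition (-p/q) = 1."""
    def discs():
        yield from (-3, -4, -7, -8)
        q = 11
        while True:
            if is_prime(q):
                yield -q
            q += 4                      # stay in the class q = 3 mod 4
    return next(D for D in discs() if legendre(D, p) == -1)

def root_of_minus_p(p, q):
    """A root c of x^2 + p modulo q, for q = 3 mod 4 with (-p/q) = 1."""
    c = pow(-p % q, (q + 1) // 4, q)
    assert (c * c + p) % q == 0
    return c

assert special_disc(101) == -3
assert special_disc(73) == -7
assert special_disc(193) == -11         # here q = 11, and c exists:
assert (root_of_minus_p(193, 11) ** 2 + 193) % 11 == 0
```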
Under the generalized Riemann hypothesis, for $p \equiv 1 \bmod 4$, the smallest $q$ satisfying the conditions of the last lemma is $O(\log(p)^2)$ by a result of Ankeny [@Ankeny1952] (or explicitly $q < 2\log(p)^2$ by Bach [@Bach1990]). In the remainder of this paper, we will assume that ${B_{p,\infty}}$, ${\mathcal{O}}$, and $R$ are suitably constructed from these lemmas with ${\mathrm{disc}}(R)$ the minimal discriminant in which $p$ is inert in the sequence $-3$, $-4$, $-7$, $-8$, or $-q$ for $q \equiv 3 \bmod 4$ prime. Reduced norms and ideal morphisms {#sec:ideals:properties} --------------------------------- Now suppose that ${\mathcal{O}}$ is any maximal order. We recall that the reduced norm on ${B_{p,\infty}}$ induces a reduced norm on left ideals defined by any of the equivalent conditions $${\mathrm{Nrd}}(I) := \sqrt{|{\mathcal{O}}/I|} = \gcd\left(\{\, {\mathrm{Nrd}}(\alpha) \;:\; \alpha \in I \,\}\right),$$ or by $I\bar{I} = {\mathrm{Nrd}}(I){\mathcal{O}}$. It follows that the reduced norm on ideals is multiplicative and compatible with the reduced norm on elements ${\mathrm{Nrd}}(\alpha) = {\mathrm{Nrd}}(\alpha{\mathcal{O}}) = {\mathrm{Nrd}}({\mathcal{O}}\alpha)$. If $I$ and $J$ are left ${\mathcal{O}}$-ideals, a homomorphism of $I$ to $J$ is a map given by $\alpha \mapsto \alpha\gamma$ for $\gamma$ in ${B_{p,\infty}}^*$, which is an isomorphism if $J = I\gamma$. By the multiplicativity of the reduced norm, isomorphisms are similitudes of quadratic modules (with respect to the reduced norm). In particular, an isomorphism sends a reduced basis to a reduced basis. In fact the normalized norm map $$q_I = \frac{{\mathrm{Nrd}}}{{\mathrm{Nrd}}(I)} : I \longrightarrow {\mathbb{Z}}$$ remains invariant under this isomorphism, in the sense that $q_I(\alpha) = q_J(\beta)$ for $\alpha$ in $I$ and $\beta = \alpha\gamma$ in $J$. 
The normalized norm $q_I$ is a positive-definite integral quadratic map, whose bilinear module given by $ \langle{x,y}\rangle = q_I(x+y) - q_I(x) - q_I(y) $ has determinant $p^2$. This follows from the same property for any maximal order (see Pizer [@Pizer1980 Proposition 1.1]), since $|{\mathcal{O}}/I| = {\mathrm{Nrd}}(I)^2$, and the fact that any submodule of index $m$ in a quadratic module $L$ has determinant $m^2\det(L)$. The following lemma serves to replace an ideal $I$ with an isomorphic one of different reduced norm. \[lem:ideal\_norm\_rep\] Let $I$ be a left ${\mathcal{O}}$-ideal of reduced norm $N$ and $\alpha$ an element of $I$. Then $I\gamma$, where $\gamma = \bar{\alpha}/N$, is a left ${\mathcal{O}}$-ideal of norm $q_I(\alpha)$. By the multiplicativity of the reduced norm, and ${\mathrm{Nrd}}(\alpha) = {\mathrm{Nrd}}(\bar{\alpha})$, we have $${\mathrm{Nrd}}(I\gamma) = {\mathrm{Nrd}}(I){\mathrm{Nrd}}(\gamma) = N\frac{{\mathrm{Nrd}}(\alpha)}{N^2} = \frac{{\mathrm{Nrd}}(\alpha)}{N} = q_I(\alpha).$$ Clearly $I\gamma$ is a fractional left ${\mathcal{O}}$-ideal, so it remains to show that $I\gamma \subseteq {\mathcal{O}}$. Since ${\mathcal{O}}\alpha \subseteq I$, we have $\bar{\alpha}{\mathcal{O}}\subseteq \bar{I}$, and hence $I\bar{\alpha} \subseteq I\bar{I} = N{\mathcal{O}}$, from which $I\gamma \subseteq {\mathcal{O}}$ follows. Preliminary algorithmic results\[sec:prel\] =========================================== In this section, we provide two algorithmic tools that will be used to solve the quaternion $\ell$-isogeny path problem in Section \[sec:ideals:ellpow\]. The first algorithm computes prime norm representatives in ideal classes. The second one computes representations of integers by the norm form of a $p$-extremal order.
Computing prime norm representatives in ideal classes {#sec:ideals:prime} ----------------------------------------------------- Given a maximal order ${\mathcal{O}}$ and a left ${\mathcal{O}}$-ideal $I$, we give a probabilistic algorithm that computes another left ${\mathcal{O}}$-ideal $J = I\gamma$ in the same class, but with prime norm. Using Lemma \[lem:ideal\_norm\_rep\], this problem reduces to the problem of finding a prime represented by $q_I$. [**Prime norm algorithm.**]{} Given a left ${\mathcal{O}}$-ideal $I$ of norm $N$ with a Minkowski-reduced basis $\{\alpha_1,\alpha_2,\alpha_3,\alpha_4\}$, generate random elements $\alpha = \sum_i x_i \alpha_i$ with $(x_1,x_2,x_3,x_4)$ in a box $[-m,m]^4$ until finding an element $\alpha$ of $I$ with $q_I(\alpha)$ prime, and return $I(\bar{\alpha}/N)$. Assuming that numbers represented by $q_I$ behave like random numbers, it remains to ensure that $q_I([-m,m]^4)$ contains sufficiently many primes to have a high probability of finding one. If $\{\alpha_1,\alpha_2,\alpha_3,\alpha_4\}$ is a Minkowski-reduced basis, the $q_I(\alpha_i)$ attain the successive minima, and we have the bounds $$p^2 \le 16 q_I(\alpha_1) q_I(\alpha_2) q_I(\alpha_3) q_I(\alpha_4) \le 4 p^2,$$ where $q_I(\alpha_i) \le q_I(\alpha_{i+1})$. For a generic ideal $I$ we expect $q_I(\alpha_4)$ to be in $\tilde{{O}}(\sqrt{p})$. In the worst case, $q_I(\alpha_4)$ is in $\tilde{{O}}(p)$ when $I$ equals an order ${\mathcal{O}}$ containing a subring $R$ with $|{\mathrm{disc}}(R)|$ in $O(\log(p)^n)$. Assuming $I$ is generic, we expect to find $\alpha$ with $q_I(\alpha)$ in $\tilde{{O}}(m^2 \sqrt{p})$. In practice, we find sufficiently many primes $q_I(\alpha)$ for $m$ which grows polynomially in $\log(p)$. However to provably terminate, even under the GRH, it may be necessary to allow $m$ to exceed a function in $O(\sqrt[4]{p})$, in which case the output may exceed $O(p)$. We implemented a prime norm algorithm in Magma [@Magma].
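To illustrate the search loop (independently of the Magma implementation), here is a toy Python sketch of ours. Since constructing a reduced basis of a random ideal is outside its scope, it uses as a stand-in for $q_I$ the reduced norm form of the maximal order ${\mathbb{Z}}\langle i, (1+j)/2\rangle$ of Lemma \[lem:disc-4\] (the trivial case $I = {\mathcal{O}}$, $N = 1$), on the basis $\{1, i, (1+j)/2, (i+k)/2\}$; the sample value of $p$ is ours:

```python
import random

def is_prime(n, bases=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    """Deterministic Miller-Rabin, valid for n < 3*10^23."""
    if n < 2:
        return False
    for b in bases:
        if n % b == 0:
            return n == b
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for b in bases:
        x = pow(b, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

p = 1000003                      # a sample prime p = 3 mod 4

def q_O(x):
    # reduced norm of x0 + x1*i + x2*(1+j)/2 + x3*(i+k)/2
    x0, x1, x2, x3 = x
    return x0*x0 + x0*x2 + x1*x1 + x1*x3 + ((1 + p) // 4) * (x2*x2 + x3*x3)

def prime_value(q, m, tries=100000):
    """Sample x in [-m,m]^4 until q(x) is an odd prime."""
    for _ in range(tries):
        x = tuple(random.randint(-m, m) for _ in range(4))
        if q(x) > 2 and is_prime(q(x)):
            return x, q(x)
    raise RuntimeError("no prime found; enlarge the box [-m,m]^4")

x, val = prime_value(q_O, m=50)
assert val == q_O(x) and is_prime(val)
```

With $m = 50$ a prime value is typically found within a few dozen samples, consistent with the heuristic that values of $q_I$ behave like random integers of size $\tilde{O}(m^2\sqrt{p})$.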
We tested the Magma implementation on ideals of $\ell$-power norm generated via a random walk from a given maximal order. All our computations with primes of up to 200 bits and random ideals took seconds on an Intel Xeon CPU X5500 processor with 24 GB RAM running at 2.67GHz. The norms of the output ideals $J$ were experimentally only slightly larger than $\sqrt{p}$. The experimental results are given in Appendix \[sec:primeres\]. Representing integers by special orders {#sec:repns_in_orders} --------------------------------------- We also consider the problem of representing a sufficiently large positive integer $M$ by the norm form of ${\mathcal{O}}$. Suppose that ${\mathcal{O}}$ is a $p$-extremal order, with suborder $R + Rj$, and let $D = {\mathrm{disc}}(R)$. We let $\Phi(x)$ be a monotone function such that a suitable interval $[x,x+\Phi(x)]$ contains sufficiently many primes, and we assume that $M \ge p\,\Phi(M)$. If $\omega$ is a reduced generator of $R$ (of trace $0$ or $\pm 1$), then the norm form on $R + Rj$ is of the form $${\mathrm{Nrd}}(\alpha + \beta j) = f(x_1,y_1) + p f(x_2,y_2),$$ where $\alpha = x_1 + y_1 \omega$ and $\beta = x_2 + y_2 \omega$, and $f(x,y)$ is a principal form. For $(x,y)$ in $[-m,m]^2$ with $m = \lfloor\sqrt{\Phi(M)/|D|}\rfloor$, we have $f(x,y) < \Phi(M)$ and ${\mathrm{Nrd}}(\beta j) < p\,\Phi(M) < M$. This gives the following algorithm on which we build our strong approximation algorithm. [**Integer representation.**]{} Given an integer $M \ge p\,\Phi(M)$, set $m = \lfloor\sqrt{\Phi(M)/|D|}\rfloor$, and choose $(x_2,y_2)$ at random in $[-m,m]^2$ until finding a prime $r = M - p f(x_2,y_2)$ which is split in $R$ and for which a prime $\mathfrak{r}$ over $r$ is principal. Let $\alpha = x_1 + y_1 \omega$ be a generator for $\mathfrak{r}$, set $\beta = x_2 + y_2 \omega$, and return $\alpha + \beta j$. Clearly the output has norm $M$. We assume that primes have density $1/\log(M)$ in the arithmetic progression $M - p\,[0,\Phi(M)]$.
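As a concrete sketch (ours), consider the simplest case $D = -4$, so that $R = {\mathbb{Z}}[i]$ has class number one and $f(x,y) = x^2 + y^2$; the split-prime and principality conditions then reduce to $r \equiv 1 \bmod 4$, and Cornacchia's algorithm is the classical Euclidean method for writing such a prime as a sum of two squares:

```python
import math, random

def is_prime(n, bases=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    """Deterministic Miller-Rabin, valid for n < 3*10^23."""
    if n < 2:
        return False
    for b in bases:
        if n % b == 0:
            return n == b
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for b in bases:
        x = pow(b, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def two_squares(r):
    """x^2 + y^2 = r for a prime r = 1 mod 4 (Cornacchia with d = 1)."""
    while True:                              # find c with c^2 = -1 mod r
        u = random.randrange(2, r)
        if pow(u, (r - 1) // 2, r) == r - 1:
            c = pow(u, (r - 1) // 4, r)
            break
    a, b = r, c
    while b > math.isqrt(r):                 # Euclidean remainder sequence
        a, b = b, a % b
    return b, math.isqrt(r - b * b)

def represent(M, p):
    """(x1,y1,x2,y2) with x1^2 + y1^2 + p*(x2^2 + y2^2) = M, i.e. the
    element x1 + y1*i + (x2 + y2*i)*j of R + Rj of reduced norm M."""
    m = math.isqrt(M // (2 * p))             # keeps p*f(x2,y2) <= M/2
    while True:
        x2, y2 = random.randint(-m, m), random.randint(-m, m)
        r = M - p * (x2 * x2 + y2 * y2)
        if r > 2 and r % 4 == 1 and is_prime(r):
            x1, y1 = two_squares(r)
            return x1, y1, x2, y2

p, M = 999983, 2**50                         # sample p = 3 mod 4, M = l^e
x1, y1, x2, y2 = represent(M, p)
assert x1*x1 + y1*y1 + p*(x2*x2 + y2*y2) == M
```

The rejection loop embodies exactly the heuristic assumptions discussed here: the values $r = M - p f(x_2,y_2)$ are treated as random integers near $M$, of which a proportion roughly $1/(2\log M)$ are split primes of $R$.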
Moreover we assume that such primes are equidistributed among primes which are non-split and split in $R$ and, in the latter case, among each of the $h(R)$ ideal classes of $R$. Finally, we must assume that elements $\beta = x_2 + y_2 \omega$ give rise to integers $r = M - p\,{\mathrm{Nrd}}(\beta)$ with the same primality probabilities as random integers in the range $M - p\,[0,\Phi(M)]$. Under such heuristic assumptions, the expected number of random $\beta$ to be tested is $2 h(R) \log(M)$. Detecting a prime $r$, solving for a representative prime $\mathfrak{r}$ over $r$, and determining a principal generator can be done in expected polynomial time by Cornacchia’s algorithm [@Cornacchia1903]. Under the heuristic assumptions made above, we can appeal to average distributions among all arithmetic progressions $a - p\,[0,\Phi(M)]$, for representatives $a$ of $({\mathbb{Z}}/p{\mathbb{Z}})^*$. In the application that follows, $M$ will be of the form $\ell^e$ or $N\ell^e$, and we can adapt to failure to find primes in a particular arithmetic progression sparsely populated with primes by changing $e$. Main algorithm {#sec:ideals:ellpow} ============== In this section, we provide an algorithm to solve the quaternion $\ell$-isogeny path problem. We also sketch a generalization of our approach to build ideal class representatives with powersmooth norms. Overview of the algorithm {#sec:algorithm_overview} ------------------------- We reduce the quaternion $\ell$-isogeny problem to a restricted version of the same problem, where we assume that ${\mathcal{O}}$ is a special $p$-extremal maximal order with suborder $R + Rj$ as defined in Section \[sec:prob:extremal\]. We also assume that $I$ is a left ${\mathcal{O}}$-ideal with reduced norm $N$, where $N$ is a (large) prime coprime to $\ell$, $|{\mathrm{disc}}(R)|$ and $p$.
A reduction from generic left ${\mathcal{O}}$-ideals to left ${\mathcal{O}}$-ideals with the required norms can be effectively performed with the algorithm of Section \[sec:ideals:prime\]. A reduction from general maximal orders to special $p$-extremal orders will be provided in Section \[sec:ideals:gen\]. Using Lemma \[lem:ideal\_norm\_rep\], the quaternion $\ell$-isogeny path problem is also reduced to an effective strong approximation theorem in Section \[sec:ideals:strongapproximation\]. In particular if the ideal is given by a pair of generators $I = {\mathcal{O}}(N,\alpha)$, the quaternion $\ell$-isogeny path problem is reduced to finding $\lambda \in {\mathbb{Z}}$ coprime to $N$ and $$\beta \equiv \lambda\alpha \bmod N{\mathcal{O}}$$ with ${\mathrm{Nrd}}(\beta) = N\ell^e$ for some positive integer $e$. Sections \[sec:ideals:ellpow:isom\], \[sec:ideals:ellpow:lift\], and \[sec:ideals:ellpow:results\] describe the core of our approach to solve this problem. Since the index of $R + Rj$ in ${\mathcal{O}}$ is coprime to $N$, we have an isomorphism $$\frac{R + Rj}{N(R + Rj)} \cong \frac{{\mathcal{O}}}{N{\mathcal{O}}}\cdot$$ We can therefore choose representative elements in $R + Rj$ as convenient to simplify the algorithm. Since the index $[{\mathcal{O}}:R + Rj] = |{\mathrm{disc}}(R)|$ is assumed to be small (in $O(\log(p)^2)$ under the GRH), the size of the output might be slightly larger, but the distinction is asymptotically insignificant. A direct approach to the strong approximation problem to solve for $\beta$ seems daunting, so instead we reduce to the following steps: 1. Solve for a random $\gamma \in {\mathcal{O}}$ of reduced norm $N\ell^{e_0}$. 2. Solve for $[\mu]$ in $({\mathcal{O}}/N{\mathcal{O}})^*$ such that $({\mathcal{O}}\gamma/N{\mathcal{O}})[\mu] = I/N{\mathcal{O}}$. 3. Solve for the strong approximation of $[\mu]$ (modulo $N$) by $\mu$ in ${\mathcal{O}}$ of reduced norm $\ell^{e_1}$. 
Here we denote the element $\mu + N{\mathcal{O}}$ of ${\mathcal{O}}/N{\mathcal{O}}$ by $[\mu]$ to distinguish it from the conjugate $\bar{\mu}$ of $\mu $. The output $\beta = \gamma\mu$ is then an element of $I$ with reduced norm $N\ell^e$ where $e = e_0+e_1$. The element $\gamma$ can be constructed with the algorithm of Section \[sec:repns\_in\_orders\]. We solve for $[\mu]$ by linear algebra in Section \[sec:ideals:ellpow:isom\], showing that we can take $[\mu]$ in $(R/NR)^*[j] \subseteq ({\mathcal{O}}/N{\mathcal{O}})^*$. The core of the algorithm is the final specialized strong approximation algorithm of Section \[sec:ideals:ellpow:lift\], taking $[\mu]$ in $(R/NR)^*[j]$ and constructing the lifting $\mu$ of norm $\ell^e$. The whole algorithm for $p$-extremal orders is analyzed in Section \[sec:ideals:ellpow:results\]. As mentioned above, we finally remove the $p$-extremal condition in Section \[sec:ideals:gen\] by providing a reduction from the general case to the case of $p$-extremal orders, and we generalize our approach to compute ideal representatives of smooth or powersmooth norms in Section \[sec:ideals:psmooth\]. Effective strong approximation\[sec:ideals:strongapproximation\] ---------------------------------------------------------------- Let $B := B_{p,\infty}$ be the quaternion algebra ramified at $p$ and $\infty$. Let ${\mathbb{A}}_{\mathbb{Q}}$ be the rational adèle ring, defined as the restricted product of ${\mathbb{Q}}_v$ with respect to ${\mathbb{Z}}_v$, let $\ell \ne p$ be a “small” prime, and let ${\mathbb{A}}_{{\mathbb{Q}},\ell}$ be the restricted product over all $v \ne \ell$. Let ${\mathbb{A}}_B = B \otimes_{\mathbb{Q}}{\mathbb{A}}_{\mathbb{Q}}$ be the adèle ring of $B$, and ${\mathbb{A}}_{B,\ell} = B \otimes {\mathbb{A}}_{{\mathbb{Q}},\ell}$. Then $B$ embeds diagonally in ${\mathbb{A}}_B$ and is discrete in ${\mathbb{A}}_B$ (see [@Cassels1967 Section 14]). 
The strong approximation theorem (see [@Cassels1967 Section 15]) asserts that $B$ is dense in ${\mathbb{A}}_{B,\ell}$ (see also Théorème Fondamental 1.4, p. 61 of Vignéras [@Vigneras1980]). The strong approximation theorem can be viewed as a strong version of the Chinese remainder theorem. We apply this to find an element of a left ${\mathcal{O}}$-ideal $I$ which generates $I$ almost everywhere. Each such ideal is known to be generated by two elements $N$ and $\alpha$, where we may take $N = {\mathrm{Nrd}}(I)$ for the first generator. This follows since locally ${\mathcal{O}}_v = {\mathcal{O}}\otimes {\mathbb{Z}}_v$ is a left principal ideal ring, hence so is the quotient ${\mathcal{O}}/N{\mathcal{O}}$. If $I = {\mathcal{O}}(N,\alpha):={\mathcal{O}}N+{\mathcal{O}}\alpha$, the approximation theorem implies that we can find $\beta$ in $I$ such that $$\beta \equiv \alpha \bmod N{\mathcal{O}}$$ and ${\mathrm{Nrd}}(\beta) = N\ell^e$ for some positive integer $e$, from which $I = {\mathcal{O}}(N,\alpha) = {\mathcal{O}}(N,\beta)$. By Lemma \[lem:ideal\_norm\_rep\], an effective version of this strong approximation theorem is sufficient to solve the quaternion $\ell$-isogeny path problem. In particular, since $\beta$ is in $I$, the ideal $I\bar\beta/N$ is an isomorphic ideal of norm $\ell^e$. Similarly, solving for $$\beta \equiv \lambda\alpha \bmod N{\mathcal{O}}$$ with $\lambda \in {\mathbb{Z}}$ coprime to $N$ such that we still have $I = {\mathcal{O}}(N,\beta)$, is also sufficient to solve the quaternion $\ell$-isogeny path problem. We will focus on this relaxed effective strong approximation theorem in the next subsections. Isomorphism of ${\mathcal{O}}/N{\mathcal{O}}$-ideals {#sec:ideals:ellpow:isom} ---------------------------------------------------- In this section, let $I$ be a left ${\mathcal{O}}$-ideal of prime norm $N \ne p$, and let $\gamma$ be an arbitrary element of ${\mathcal{O}}$ of norm $NM$, where $\gcd(N,M) = 1$. 
Since $N$ is large, we can assume that it does not divide the index $[{\mathcal{O}}:R+Rj]$, hence we have equalities of rings $${\mathcal{O}}/N{\mathcal{O}}= (R+Rj)/N(R + Rj) \cong {\mathbb{M}}_2({\mathbb{Z}}/N{\mathbb{Z}}).$$ We denote by $[\alpha]$ the class of an element $\alpha$ in ${\mathcal{O}}/N{\mathcal{O}}$ (as distinct from its conjugate $\bar{\alpha}$). We note that ${\mathcal{O}}\gamma/N{\mathcal{O}}$ and $I/N{\mathcal{O}}$ are proper nonzero left ${\mathcal{O}}/N{\mathcal{O}}$-ideals. The following explicit classification of such ideals, in ${\mathbb{M}}_2({\mathbb{Z}}/N{\mathbb{Z}})$, will let us construct an explicit isomorphism between these ideals. Let $N$ be a prime and $A = {\mathbb{M}}_2({\mathbb{Z}}/N{\mathbb{Z}})$. There exists a bijection $$S : {\mathbb{P}}^1({\mathbb{Z}}/N{\mathbb{Z}}) \times {\mathbb{P}}^1({\mathbb{Z}}/N{\mathbb{Z}}) \longrightarrow \frac{\{\, \gamma \in A\backslash\{0\} \;:\; \det(\gamma) = 0 \,\}}{({\mathbb{Z}}/N{\mathbb{Z}})^*}{\raisebox{0.4ex}{,}}$$ given by $$S\big((u:v),(x:y)\big) = \left(\begin{array}{@{}cc@{}} ux & uy \\ vx & vy \end{array}\right)\cdot$$ Under this correspondence, the set of proper nontrivial left $A$-ideals is in bijection with the set $$\{\, {\mathbb{P}}^1({\mathbb{Z}}/N{\mathbb{Z}}) \times (x:y) : (x:y) \in {\mathbb{P}}^1({\mathbb{Z}}/N{\mathbb{Z}}) \,\},$$ and the right action of $A^*/({\mathbb{Z}}/N{\mathbb{Z}})^* = {\mathrm{PGL}}_2({\mathbb{Z}}/N{\mathbb{Z}})$ on left $A$-ideals is transitive and induced by the natural (transpose) action on ${\mathbb{P}}^1({\mathbb{Z}}/N{\mathbb{Z}})$. The nonzero matrices of determinant zero, modulo $({\mathbb{Z}}/N{\mathbb{Z}})^*$, determine a hypersurface $ad = bc$, which is the image of ${\mathbb{P}}^1 \times {\mathbb{P}}^1$ by the Segre embedding in ${\mathbb{P}}^3$ (= $(A\backslash\{0\})/({\mathbb{Z}}/N{\mathbb{Z}})^*$). 
It is easily verified that left and right multiplication induce the standard and transpose multiplication on the first and second factors of ${\mathbb{P}}^1 \times {\mathbb{P}}^1$, respectively, under this isomorphism, from which the result follows. Using an explicit isomorphism ${\mathcal{O}}/N{\mathcal{O}}\cong {\mathbb{M}}_2({\mathbb{Z}}/N{\mathbb{Z}})$, by this lemma we can find $[\mu]$ in $({\mathcal{O}}/N{\mathcal{O}})^*$ such that $ ({\mathcal{O}}\gamma/N{\mathcal{O}}) [\mu] = I/N{\mathcal{O}}, $ using linear algebra over ${\mathbb{Z}}/N{\mathbb{Z}}$. In Section \[sec:ideals:ellpow:lift\] we require an input $[\mu]$ which is a unit in $Rj/N{\mathcal{O}}$. Observing that $[j]$ is a unit, we see that such units form a coset of $(R/NR)^*$: $$({\mathcal{O}}/N{\mathcal{O}})^* \cap Rj/N{\mathcal{O}}= (R/NR)^*[j].$$ We note that $(R/NR)^*$ acts on the $N+1$ proper nontrivial left ${\mathcal{O}}/N{\mathcal{O}}$-ideals, with kernel $({\mathbb{Z}}/N{\mathbb{Z}})^*$. By hypothesis, $R$ is a subring of small discriminant in which $N$ is not ramified. If $N$ is inert in $R$, then the $N+1$ ideals form one orbit. Otherwise, if $N$ is split, there is one orbit of size $N-1$ and two fixed points ${\mathcal{O}}{\mathfrak{p}}_1/N{\mathcal{O}}$ and ${\mathcal{O}}{\mathfrak{p}}_2/N{\mathcal{O}}$, where ${\mathfrak{p}}_1$ and ${\mathfrak{p}}_2$ are the prime ideals of $R$ over $N$. With overwhelming probability, $I/N{\mathcal{O}}$ and ${\mathcal{O}}\gamma/N{\mathcal{O}}$ will not be such fixed points, and so we can solve for $[\mu]$ in $(R/NR)^*[j]$. In the event of failure, we can select a new $\gamma$ or $N$. Approximating elements of $(R/NR)^*[j]$ by $\ell$-power norm representatives {#sec:ideals:ellpow:lift} ---------------------------------------------------------------------------- In this section, we assume that $\ell$ is a quadratic non-residue modulo $N$.
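As a numerical aside (our illustration, for a toy modulus), the classification of left ideals of $A = {\mathbb{M}}_2({\mathbb{Z}}/N{\mathbb{Z}})$ from Section \[sec:ideals:ellpow:isom\] can be verified by brute force: the left ideal generated by $S((u:v),(x:y))$ depends only on $(x:y)$, and a proper nonzero left ideal has exactly $N^2$ elements, matching $|{\mathcal{O}}/I| = {\mathrm{Nrd}}(I)^2$:

```python
import itertools

N = 7  # a small prime modulus for illustration

def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % N
                       for j in range(2)) for i in range(2))

def S(uv, xy):
    """The rank-one matrix (u,v)^T (x,y) over Z/NZ (the Segre map)."""
    (u, v), (x, y) = uv, xy
    return ((u * x % N, u * y % N), (v * x % N, v * y % N))

def left_ideal(g):
    """The left ideal A*g of A = M_2(Z/NZ), by brute force."""
    return {mat_mul(((a, b), (c, d)), g)
            for a, b, c, d in itertools.product(range(N), repeat=4)}

g1 = S((1, 0), (2, 3))
g2 = S((4, 5), (2, 3))   # same second factor (x:y), different (u:v)
g3 = S((1, 0), (1, 1))   # different second factor

assert g1[0][0] * g1[1][1] % N == g1[0][1] * g1[1][0] % N  # det = 0
assert left_ideal(g1) == left_ideal(g2)   # ideal depends only on (x:y)
assert left_ideal(g1) != left_ideal(g3)
assert len(left_ideal(g1)) == N * N       # index N^2 in A
```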
Let also $\omega$ be a generator of $R$ of minimal norm, either $1$, $2$, or $(1+q)/4$, for $q$ a prime congruent to $3$ modulo $4$. We now motivate the restriction to elements of $(R/NR)^*[j]$ in the previous section. We suppose that we are given as input a lift $\mu_0 = x_0 + y_0 \omega + (z_0 + w_0 \omega) j$ of an arbitrary element of ${\mathcal{O}}/N{\mathcal{O}}$ to $R + Rj$. The relaxed approximation problem is to search for $\lambda$ in ${\mathbb{Z}}$ and $\mu_1 = x_1 + y_1 \omega + (z_1 + w_1 \omega) j$ such that $\mu = \lambda\mu_0 + N\mu_1$ satisfies the norm equation $${\mathrm{Nrd}}(\mu) = f(\lambda x_0 + N x_1, \lambda y_0 + N y_1) + p\, f(\lambda z_0 + N z_1, \lambda w_0 + N w_1) = \ell^e,$$ for some $e\in\mathbb{N}$, where $f(x,y) = {\mathrm{Nrd}}(x + y\omega)$ is a principal binary quadratic form of discriminant $D$ as in Lemma \[lem:special-p-extremal-properties\]. The key idea to solve this norm equation, as used in [@Petit2008c] to cryptanalyze the other hash function of Charles-Goren-Lauter, is that it simplifies considerably when $x_0 = y_0 = 0$: $$\label{eq:simplified-normequation} {\mathrm{Nrd}}(\mu) = N^2 f(x_1, y_1) + p\, f(\lambda z_0 + N z_1, \lambda w_0 + N w_1) = \ell^e.$$ The simple algorithm we now describe to solve this equation justifies the choice of $[\mu] \in (R/NR)^*[j]$ in Section \[sec:ideals:ellpow:isom\]. To construct $\mu$, given $[\mu] \in (R/NR)^*[j]$, we consider a first lift $\mu_0 = (z_0 + w_0 \omega) j$ to $Rj$ as above, and find $\lambda$ in ${\mathbb{Z}}$ and $\mu_1 = (x_1 + y_1 \omega) + (z_1 + w_1 \omega) j$ in $R + Rj$ satisfying the simplified equation \[eq:simplified-normequation\]. This equation, taken modulo $N$, gives $ \lambda^2 p\,f(z_0,w_0) \equiv \ell^e \bmod N, $ and since $\ell$ is a quadratic nonresidue modulo $N$, we choose the parity of $e$ depending on whether $p\,f(z_0,w_0)$ is a quadratic residue modulo $N$ or not, and solve for a square root modulo $N$ to find $\lambda$ with $0 < \lambda < N$.
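The computation of $\lambda$ just described can be sketched as follows (our code; for brevity the modular square root is written for $N \equiv 3 \bmod 4$, the general case needing Tonelli–Shanks):

```python
def solve_lambda(l, N, rho, e_min):
    """Find e >= e_min and 0 < lam < N with lam^2 * rho = l^e mod N,
    for N prime with N = 3 mod 4, rho invertible mod N, and l a
    quadratic non-residue mod N (so flipping the parity of e flips
    the residuosity of l^e / rho)."""
    def is_qr(a):
        return pow(a, (N - 1) // 2, N) == 1
    rho_inv = pow(rho, N - 2, N)
    e = e_min
    if not is_qr(pow(l, e, N) * rho_inv % N):
        e += 1                       # now l^e / rho is a residue
    a = pow(l, e, N) * rho_inv % N
    lam = pow(a, (N + 1) // 4, N)    # square root, valid for N = 3 mod 4
    assert lam * lam % N == a
    return e, lam

# toy parameters (ours): N = 1019 = 3 mod 4, and l = 2 is a
# non-residue since N = 3 mod 8; rho stands for p*f(z0,w0) mod N
e, lam = solve_lambda(2, 1019, 96, e_min=10)
assert lam * lam * 96 % 1019 == pow(2, e, 1019)
assert e in (10, 11) and 0 < lam < 1019
```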
Now for fixed $z_0$, $w_0$, and $\lambda$, Equation (\[eq:simplified-normequation\]) implies a linear equation in $z_1$ and $w_1$: $$\label{eq:linear-normequation} 2\lambda p L((z_0,w_0),(z_1,w_1)) = \frac{\ell^e-\lambda^2 p f(z_0,w_0)}{N}\bmod N,$$ where $L$ is the bilinear polynomial $$L((z_0,w_0),(z_1,w_1)) = \langle{z_0 + w_0\omega,z_1 + w_1\omega}\rangle = 2 z_0z_1 + {\mathrm{Trd}}(\omega)(z_0w_1 + w_0z_1) + 2{\mathrm{Nrd}}(\omega)w_0w_1.$$ Since $N$ is a large prime and $\gcd(z_0w_0|D|p,N)=1$, there are exactly $N$ solutions $(z_1,w_1)$ to the linear equation (\[eq:linear-normequation\]). We choose a random solution satisfying $$|\lambda z_0 + N z_1| < N^2 \text{ and } |\lambda w_0 + N w_1| < N^2,$$ and equation (\[eq:simplified-normequation\]) now leads to a problem of representation of an integer by a binary quadratic form: $$\label{eq:cornaccia-normequation} f(x_1,y_1) = r := \frac{\ell^e- p f(\lambda z_0 + N z_1, \lambda w_0 + N w_1)}{N^2}\cdot$$ We assume that $e$ was chosen sufficiently large so that $r$ is positive. If $r$ (or $rq$), up to a smooth square integer factor, is a prime that splits in $R$ and is a norm in $R$, Cornacchia’s algorithm [@Cornacchia1903] can efficiently solve this equation, or determine that no solution exists. In the latter case, we repeat with a new value of $(z_1,w_1)$. Assuming the values of $r$ behave as random values around $N^4|D|p$, we expect to choose $\log(N^4|D|p)h(D)$ values before finding a solution. In practice, we begin with $e$ the minimal possible value having the correct parity, then we progressively increase it if no solution has been found. For $N$ in the range $\tilde{O}(\sqrt{p})$, we expect the size of $e$ to satisfy $e \sim \log_\ell(N^4|D|p) \sim 3\log_\ell(p)$. Algorithm analysis and experimental results {#sec:ideals:ellpow:results} ------------------------------------------- We summarize our algorithm to compute an $\ell$-power norm representative of a left ${\mathcal{O}}$-ideal, where ${\mathcal{O}}$ is a special $p$-extremal maximal order.
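For the final representation step, the following is a compact sketch of Cornacchia's algorithm (our own illustration, not the paper's code), specialised to the simplest principal form $f(x,y)=x^2+dy^2$; a real implementation would use a fast modular square root rather than brute force, and here $r$ is assumed prime.

```python
import math

def cornacchia(d, r):
    """Find (x, y) with x^2 + d*y^2 = r for a prime r, or None.

    Sketch only: the square root of -d modulo r is found by brute force.
    """
    target = (-d) % r
    t = next((t for t in range(1, r) if t * t % r == target), None)
    if t is None:
        return None          # -d is not a square mod r: no solution
    if t <= r // 2:
        t = r - t            # take the root in (r/2, r)
    a, b = r, t
    limit = math.isqrt(r)
    while b > limit:         # Euclidean descent until b <= sqrt(r)
        a, b = b, a % b
    x = b
    q, rem = divmod(r - x * x, d)
    if rem != 0:
        return None
    y = math.isqrt(q)
    return (x, y) if y * y == q else None
```

Here $d$ plays the role of the (assumed diagonal) part of the form $f$, and $r$ the right-hand side of the representation problem (\[eq:cornaccia-normequation\]).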
Let ${\mathcal{O}}$ be a maximal order in a quaternion algebra $B_{p,\infty}$ and let $\ell$ be a small prime. There exists a probabilistic algorithm, which takes as input a left ${\mathcal{O}}$-ideal and outputs an isomorphic left ${\mathcal{O}}$-ideal of $\ell$-power reduced norm. Under the most optimistic heuristic assumptions on randomness of representations of integers by quadratic forms and uniform distributions of primes, this algorithm is expected to run in polynomial time and to produce ideals of norm $\ell^{e}$, where $$e \sim \log_\ell(N p\,\Phi(p) |D|) + \log_\ell(N^4 |D| p) - \log_\ell N^2,$$ where the three terms respectively account for the norms of $\gamma$, $\mu$ and $N^{-1}$. Assuming that $\log_\ell(N) \sim \frac{1}{2}\log_\ell(p)$ and that in practice $\Phi(p)\sim\log(p)^n$ suffices, this leads to $$e \sim \frac{7}{2}\log_\ell(p).$$ We implemented the algorithms of this article in Magma [@Magma]. We first tested the algorithm of Section \[sec:repns\_in\_orders\] to compute $N$ times $\ell$-power norm elements in ${\mathcal{O}}$ with $\ell\in\{2,3\}$, for random primes $p$ of sizes up to 200 bits and for $N$ values obtained after applying the algorithm of Section \[sec:ideals:prime\] on an ideal generated via a random walk from ${\mathcal{O}}$. The norms of the outputs were close to the expected values. We then tested the algorithm of Section \[sec:ideals:ellpow:lift\] for $\ell\in\{2,3\}$, for random $p$ values of sizes up to 200 bits, for $N$ values obtained after applying the algorithm of Section \[sec:ideals:prime\] on an ideal generated via a random walk from ${\mathcal{O}}$, and for $\mu_0 = (z_0+w_0\omega)j$ with randomly chosen $z_0,w_0\in{\mathbb{Z}}/N{\mathbb{Z}}$ not both equal to zero. The exponents of the norms of the quaternions computed were close to the expected value $3\log_\ell p$.
We finally tested the overall algorithm of Section \[sec:ideals:ellpow\] for $\ell\in\{2,3\}$, for random $p$ values of sizes up to 200 bits, and for ideals $I$ generated via a random walk from ${\mathcal{O}}$. The $\ell$-valuations of the norms of the computed ideals were close to the expected value $\frac{7}{2}\log_\ell p$. All computations were carried out on an Intel Xeon CPU X5500 processor with 24 GB RAM running at 2.67 GHz. The algorithm of Section \[sec:ideals:ellpow:lift\] succeeded in less than 100 seconds for all 200-bit primes, and the overall algorithm of Section \[sec:ideals:ellpow\] terminated in less than 250 seconds for primes in this range. Additional experimental results are provided in Appendix \[sec:expres\]. Generalization to arbitrary orders {#sec:ideals:gen} ---------------------------------- We now describe how to remove the condition that ${\mathcal{O}}$ is one of the special orders defined in Section \[sec:intro:QA\]. First we encode the relation between two maximal orders embedded in ${B_{p,\infty}}$ in terms of an associated ideal. \[lem:eichler-order-ideals\] Suppose that ${\mathcal{O}}_1$ and ${\mathcal{O}}_2$ are given maximal orders in ${B_{p,\infty}}$. Then the Eichler order ${\mathcal{O}}_1 \cap {\mathcal{O}}_2$ has the same index in each of ${\mathcal{O}}_1$ and ${\mathcal{O}}_2$, which we denote $M$, and the set: $$I({\mathcal{O}}_1,{\mathcal{O}}_2) = \{ \alpha \in {B_{p,\infty}}\;|\; \alpha {\mathcal{O}}_2 \bar{\alpha} \subseteq M{\mathcal{O}}_1 \}$$ is a left ${\mathcal{O}}_1$-ideal and right ${\mathcal{O}}_2$-ideal of reduced norm $M$. Conversely, if $I$ is a left ${\mathcal{O}}_1$-ideal with right order ${\mathcal{O}}_2$, such that $I \not\subseteq n{\mathcal{O}}_1$ for any $n > 1$, then $I = I({\mathcal{O}}_1,{\mathcal{O}}_2)$.
The determinant of the norm form of any maximal order ${\mathcal{O}}$ is $p^2$, and for any sub-lattice $L \subset {\mathcal{O}}$ of index $M$, the reduced norm form on $L$ has determinant $M^2\det({\mathcal{O}})$. This establishes the well-known result that the index of an Eichler order in any maximal order is an invariant, called its level. It is clear by construction that $I({\mathcal{O}}_1,{\mathcal{O}}_2)$ is a left ${\mathcal{O}}_1$-module and a right ${\mathcal{O}}_2$-module. Locally at any prime $q$, we may assume ${\mathcal{O}}_1$ and ${\mathcal{O}}_2$ are ${\mathbb{Z}}_q$-orders such that ${\mathcal{O}}_2 = \alpha^{-1} {\mathcal{O}}_1 \alpha$, for some $\alpha$ in ${\mathcal{O}}_1$, hence also in ${\mathcal{O}}_2$. It follows that we have an inclusion $\alpha {\mathcal{O}}_2 = {\mathcal{O}}_1 \alpha \subseteq I({\mathcal{O}}_1,{\mathcal{O}}_2)$. However, removing any integer factors (in the center), the reduced norm of a minimal $\alpha$ must equal the level $M{\mathbb{Z}}_q$, which implies equality. The global result follows from the local-global principle. Conversely, since any left ${\mathcal{O}}_1$-ideal $I$ is locally principal at each prime $q$, one can find locally $\alpha$ such that $I = {\mathcal{O}}_1\alpha$; the right order of $I$ is then ${\mathcal{O}}_2 = \alpha^{-1} {\mathcal{O}}_1 \alpha$. By hypothesis, $\alpha$ is not divisible by any integer, and we conclude that the Eichler order has level ${\mathrm{Nrd}}(\alpha) = {\mathrm{Nrd}}(I) = M{\mathbb{Z}}_q$. From the above construction in terms of a local generator, we conclude $I = I({\mathcal{O}}_1,{\mathcal{O}}_2)$. Let ${\mathcal{O}}_1$ and ${\mathcal{O}}_2$ be maximal orders in a quaternion algebra $B_{p,\infty}$ and let $\ell$ be a small prime.
Given an algorithm which takes as input a left ${\mathcal{O}}_1$-ideal and outputs an equivalent left ${\mathcal{O}}_1$-ideal of $\ell$-power reduced norm, there exists an algorithm with the same complexity, up to a constant of size polynomial in the input size of ${\mathcal{O}}_1$ and ${\mathcal{O}}_2$, which takes as input a left ${\mathcal{O}}_2$-ideal and outputs an equivalent left ${\mathcal{O}}_2$-ideal of $\ell$-power reduced norm. Assume we are given two orders ${\mathcal{O}}_1$, ${\mathcal{O}}_2$ and a left ${\mathcal{O}}_2$-ideal $J$, and set $I = I({\mathcal{O}}_1,{\mathcal{O}}_2)$ as in Lemma \[lem:eichler-order-ideals\]. The ideal $I$ may have large norm, but its norm is bounded by a quantity polynomial in the size of the specification of ${\mathcal{O}}_1$ and ${\mathcal{O}}_2$ in terms of a basis for ${B_{p,\infty}}$. Supposing that we have an algorithm for ${\mathcal{O}}_1$, we find representative left ${\mathcal{O}}_1$-ideals for $I$ and $IJ$ such that $I_1 = I\bar{\gamma}_1/{\mathrm{Nrd}}(I)$ with $\gamma_1$ in $I$, and $I_2 = IJ\bar{\gamma}_2/{\mathrm{Nrd}}(IJ)$ with $\gamma_2$ in $IJ$, where $${\mathrm{Nrd}}(\gamma_1) = {\mathrm{Nrd}}(I) \ell^{e_1} \mbox{ and } {\mathrm{Nrd}}(\gamma_2) = {\mathrm{Nrd}}(IJ) \ell^{e_2}.$$ It follows that $\gamma = \bar{\gamma}_1 \gamma_2/{\mathrm{Nrd}}(I)$ is an element of $J$ with reduced norm ${\mathrm{Nrd}}(\gamma) = {\mathrm{Nrd}}(J) \ell^{e_1+e_2}$, and hence $J\bar{\gamma}/{\mathrm{Nrd}}(J)$ is of reduced norm $\ell^{e_1+e_2}$. This provides a reduction of the general case to the case of special $p$-extremal orders, at the cost of two applications of the algorithm of Section \[sec:ideals:ellpow\], and a larger power of $\ell$. Generalization to powersmooth norms {#sec:ideals:psmooth} ----------------------------------- We recall that a number $s=\prod\ell_i^{e_i}$ is $S$-powersmooth if $\ell_i^{e_i}<S$. Our algorithms can be easily modified to construct ideal representatives of powersmooth norms.
Using the approximations as before, the norm should be of size close to $p^{7/2}$. Since the product of all maximal powers of a prime lower than $S$ can be approximated by $S^{S/\log S}$, an adaptation of our algorithms will allow us to compute $S$-powersmooth representatives of left ideal classes of ${\mathcal{O}}$, with $S\approx\frac{7}{2}\log p$. Conclusion and future work {#sec:concl} ========================== In this paper, we provided a probabilistic algorithm to solve a quaternion ideal analog of the path problem in supersingular $\ell$-isogeny graphs. The algorithm runs in expected polynomial time subject to heuristics on expected distributions of primes, and it is efficient in practice. Following Deuring [@Deuring1941], there is a one-to-one correspondence between supersingular elliptic curves modulo $p$, up to Galois conjugacy, and isomorphism classes of maximal orders in the quaternion algebra ${B_{p,\infty}}$. By identifying isogeny kernels with powersmooth ideals in the quaternion algebra, we expect our techniques to lead to both partial attacks on Charles-Goren-Lauter’s isogeny-based hash function (when the initial curve has extremal endomorphism ring), and to security reductions to the problem of computing the endomorphism ring of a supersingular elliptic curve. Similarly, we expect our results to lead to a constructive version of Deuring’s correspondence from maximal orders in ${B_{p,\infty}}$ to their corresponding elements in the category of supersingular elliptic curves. #### Acknowledgements The research leading to these results has received funding from the Fonds National de la Recherche - FNRS and from the European Research Council through the European ISEC action HOME/2010/ISEC/AG/INT-011 B-CCENTRE project. [10]{} N. C. Ankeny. The least quadratic non-residue, [*Annals of Mathematics*]{}, 55(1):65–72, 1952. E. Bach. Explicit bounds for primality testing and related problems, , 55(191):355–380, 1990. J. W. S. Cassels. Global fields.
In J. W. S. Cassels and A. Fröhlich, editors, [*Algebraic Number Theory*]{}, chapter Global Fields, pages 42–84. Academic Press, 1967. D. X. Charles, K. E. Lauter, and E. Z. Goren. Cryptographic hash functions from expander graphs. , 22(1):93–113, 2009. G. Cornacchia. Su di un metodo per la risoluzione in numeri interi dell’equazione $\sum_{h=0}^nc_hx^{n-h}y^h=p$, [*Giornale di Matematiche di Battaglini*]{}, 46:33–90, 1903. M. Deuring. Die [T]{}ypen der [M]{}ultiplikatorenringe elliptischer [F]{}unktionenkörper. [*Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg*]{}, 14:197–272, 1941. W. Bosma, J. J. Cannon, C. Fieker, A. Steel (eds.), Handbook of Magma functions, Edition 2.20 (2013), <http://magma.maths.usyd.edu.au/magma/>. D. R. Heath-Brown. The number of primes in a short interval. [*J. Reine Angew. Math.*]{}, 397:162–193, 1989. D. Kohel. [*Endomorphism rings of elliptic curves over finite fields*]{}, PhD thesis, University of California, Berkeley, 1996. H. Maier. Primes in short intervals, [*Michigan Math. J.*]{}, 32:221–225, 1985. C. Petit, K. Lauter, and J.-J. Quisquater. Full cryptanalysis of [LPS]{} and [Morgenstern]{} hash functions. In R. Ostrovsky, R. De Prisco, and I. Visconti, eds., [*SCN*]{}, volume 5229 of [*Lecture Notes in Computer Science*]{}, pages 263–277. Springer, 2008. A. Pizer. An algorithm for computing modular forms on [$\Gamma_0(N)^*$]{}. [*Journal of Algebra*]{}, 64:340–390, 1980. A. Selberg. On the normal density of primes in small intervals and the difference between consecutive primes, [*Arch. Math. Naturvid.*]{}, 47:87–105, 1943. M.-F. Vignéras. [*Arithmétique des algèbres de quaternions*]{}. Springer-Verlag, 1980. Experimental results {#sec:expres} ==================== In our experiments, the value of $m$ and the function $\Phi$ appearing in the specification of our algorithms were fixed to a priori minimal values based on probabilistic arguments on the distribution of primes, then increased when needed.
Prime norm ideals {#sec:primeres} ----------------- We show experimental results on the prime norm algorithm of Section \[sec:ideals:prime\] in Figure \[fig:primeres\]. The norms of the ideals constructed seem to be slightly larger than $p^{1/2}$ and the computation time cubic in $\log(p)$. Quaternion elements with particular norms {#sec:resNorm} ----------------------------------------- Experimental results on the algorithm of Section \[sec:repns\_in\_orders\] are shown in Figures \[fig:Nellpowel\] and \[fig:ellpowel\], respectively for computing elements of norms $\ell^e$ or $N\ell^e$, for some $e$. The results show the difference between the minimal exponent $e$ needed and a prediction based on probabilistic arguments. All computations took less than one second. Ideals with $\ell$-power norms ------------------------------ Experimental results on the algorithms of Section \[sec:ideals:ellpow\] are shown in Figures \[fig:liftSize\], \[fig:liftTime\], \[fig:ellpowSize\], and \[fig:ellpowTime\]. [^1]: The third author is supported by an F.R.S.-FNRS postdoctoral research fellowship at Université catholique de Louvain, Louvain-la-Neuve.
--- abstract: | The study of transport and mixing processes in dynamical systems is particularly important for the analysis of mathematical models of physical systems. We propose a novel, direct geometric method to identify subsets of phase space that remain strongly coherent over a finite time duration. This new method is based on a dynamic extension of classical (static) isoperimetric problems; the latter are concerned with identifying submanifolds with the smallest boundary size relative to their volume. The present work introduces *dynamic* isoperimetric problems; the study of sets with small boundary size relative to volume *as they are evolved by a general dynamical system*. We formulate and prove dynamic versions of the fundamental (static) isoperimetric (in)equalities; a dynamic Federer-Fleming theorem and a dynamic Cheeger inequality. We introduce a new dynamic Laplace operator and describe a computational method to identify coherent sets based on eigenfunctions of the dynamic Laplacian. Our results include formal mathematical statements concerning geometric properties of finite-time coherent sets, whose boundaries can be regarded as Lagrangian coherent structures. The computational advantages of our new approach are a well-separated spectrum for the dynamic Laplacian, and flexibility in appropriate numerical approximation methods. Finally, we demonstrate that the dynamic Laplace operator can be realised as a zero-diffusion limit of a newly advanced probabilistic transfer operator method [@F13] for finding coherent sets, which is based on small diffusion. Thus, the present approach sits naturally alongside the probabilistic approach [@F13], and adds a formal geometric interpretation. 
author: - | Gary Froyland\ \ School of Mathematics and Statistics\ University of New South Wales\ Sydney NSW 2052, Australia title: | Dynamic isoperimetry and the\ geometry of Lagrangian coherent structures --- Introduction ============ The study of Lagrangian coherent structures in nonlinear dynamics is broadly concerned with the identification of spatial structures in phase space that behave in a relatively “stable” way under the dynamics by resisting high levels of distortion and/or diffusion. In the case of purely advective dynamics governed by a nonlinear map or time-dependent ordinary differential equations, if the structure is a full-dimensional set[^1], this set resists filamentation under the nonlinear dynamics and the ratio of boundary size to the volume of the set remains relatively unchanged. When the dynamics is a combination of advection and diffusion, for example, a time-dependent Fokker-Planck equation, a coherent set resists mixing with the surrounding phase space, again through the mechanism of retaining a relatively low boundary size. There is a long history of development of related ideas spread across the dynamical systems, fluid dynamics, and geophysics literature. We mention just two early related works: [@pierrehumbert_yang], which contains several ideas concerning mixing and transport mitigation in fluids, and the book [@ottino], which discusses purely advective (chaotic) mixing. These ideas, and the theory and algorithms developed subsequently, have grown into their own field, and have been employed across a wide spectrum of physical, biological, environmental, and engineering applications. The study of sets with minimal boundary is known in differential geometry as an isoperimetric problem.
The classic isoperimetric problem in $\mathbb{R}^2$ is to determine the set $S$ with least boundary length (perimeter), given a fixed area; or equivalently to find a set $S$ of fixed (iso-) perimeter with greatest area. The unique solution of this problem is a disk; all other sets $S$ satisfy the inequality $\mbox{length}(\partial S)/\sqrt{\mbox{area}(S)}>2\sqrt{\pi}$, an example of an *isoperimetric inequality*. The obvious generalisation of this problem to $\mathbb{R}^d$ holds: $d$-balls minimise surface area, and all other sets satisfy $\ell_{d-1}(\partial S)/\ell_d(S)^{1-1/d}>d{\omega_d}^{1/d}$, where ${\omega_d}$ is the volume of a unit ball in $\mathbb{R}^d$, and $\ell_d, \ell_{d-1}$ are $d$ and $d-1$-dimensional volume (see e.g. [@chavelisoperimetric]). Because we have in mind applications to fluid flow, we focus on compact domains $M$ rather than $\mathbb{R}^d$. For compact, connected $M$, a hypersurface $\Gamma\subset M$ can disconnect $M$ into two pieces $M_1,M_2$, just as the boundary of a $d$-ball disconnects $\mathbb{R}^d$. One tries to find a disconnecting hypersurface $\Gamma$ that minimises the ratio $$\mathbf{h}(\Gamma):=\ell_{d-1}(\Gamma)/\min\{\ell_d(M_1),\ell_d(M_2)\}.$$ One of our main contributions is to develop a theory of *dynamic* isoperimetry, where one studies the *evolution* of hypersurfaces $\Gamma\subset M$ that disconnect phase space $M$, under a nonlinear transformation $T:M\to T(M)$. Both the manifold $M$ and the separating surface $\Gamma$ are subjected to general nonlinear dynamics, representing the action of some (possibly chaotic) flow over some finite-time duration. The solution to this dynamic isoperimetric problem may have nothing to do with the solution to the static problem because even if $\Gamma$ has low co-dimension 1 volume, the size of $T(\Gamma)$ may be greater, and if $T$ is chaotic, $T^k(\Gamma)$ for a modest number of iterations $k$ may have significantly greater size (see Figure \[coherencefig\]b and \[coherencefig\]c).
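The planar inequality is easy to verify numerically. The following sketch (our own illustration, not from the paper) approximates ellipses by inscribed polygons and checks that $\mbox{length}(\partial S)/\sqrt{\mbox{area}(S)}$ stays above $2\sqrt{\pi}$ and increases with eccentricity:

```python
import math

def polygon_ratio(pts):
    """Perimeter / sqrt(area) for a closed polygon (shoelace formula)."""
    n = len(pts)
    per = sum(math.dist(pts[k], pts[(k + 1) % n]) for k in range(n))
    area = abs(sum(x1 * y2 - x2 * y1
                   for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))) / 2
    return per / math.sqrt(area)

def ellipse(b, n=4000):
    """Polygonal approximation of an ellipse with semi-axes 1 and b."""
    return [(math.cos(2 * math.pi * k / n), b * math.sin(2 * math.pi * k / n))
            for k in range(n)]
```

For $b=1$ (a disk) the ratio approaches $2\sqrt{\pi}\approx 3.545$; more eccentric ellipses give strictly larger values.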
![The two-dimensional set on the left with boundary $\Gamma$ has a low boundary size to volume ratio. The three sets $T(\Gamma)$ on the right are three possible images of the shape on the left under three different nonlinear volume-preserving dynamical systems $T$ over a fixed finite time duration. Under dynamics ‘a’, the set on the left retains a low boundary size to volume ratio, but under dynamics ‘b’ and ‘c’, the boundary size is significantly increased.[]{data-label="coherencefig"}](coher-noncoher-images_withletters.png "fig:"){width="12cm"}\ Thus, the dynamics plays a key role in the selection of a surface $\Gamma$ that remains small relative to the domain volume when evolved under a general volume-preserving nonlinear dynamical system. Clearly such surfaces $\Gamma$ bound sets that are very natural candidates for [coherent sets]{}. To take a real-world example, coherent sets such as oceanic eddies retain water mass and remain coherent by their boundary remaining as short as possible over an extended period of time. This reduces diffusion through the eddy boundary $T^k\Gamma$ at times $k=0,1,2,\ldots$ via small-scale diffusion processes (e.g. [@balasuriyajones99]). In discrete time, under a single application of $T$, this motivates the minimisation of the quantity $$\label{cheegereqn_intro} \mathbf{h}^D(\Gamma):=\frac{\ell_{d-1}(\Gamma)+\ell_{d-1}(T(\Gamma))}{2\min\{\ell(M_1),\ell(M_2)\}},$$ where $\Gamma$ varies over smooth hypersurfaces disconnecting $M$ into two connected pieces $M_1, M_2$. 
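The effect sketched in Figure \[coherencefig\] is easy to reproduce with a toy computation (our own illustration, not from the paper): evolve a polygonal disconnecting circle under two volume-preserving linear maps, a rotation (an isometry) and a shear, and compare the boundary-length terms appearing in the numerator of (\[cheegereqn\_intro\]).

```python
import math

def curve_length(pts):
    """Length of a closed polyline."""
    n = len(pts)
    return sum(math.dist(pts[k], pts[(k + 1) % n]) for k in range(n))

circle = [(math.cos(2 * math.pi * k / 1000), math.sin(2 * math.pi * k / 1000))
          for k in range(1000)]

def shear(p, lam=2.0):
    """Volume-preserving shear (x, y) -> (x + lam*y, y); det = 1."""
    x, y = p
    return (x + lam * y, y)

def rotate(p, th=0.7):
    """Volume-preserving rotation by angle th; an isometry."""
    x, y = p
    return (math.cos(th) * x - math.sin(th) * y,
            math.sin(th) * x + math.cos(th) * y)

L0 = curve_length(circle)
L_rot = curve_length([rotate(p) for p in circle])
L_shear = curve_length([shear(p) for p in circle])

# boundary-length terms in the numerator of the one-step dynamic ratio:
num_rot = (L0 + L_rot) / 2
num_shear = (L0 + L_shear) / 2
```

The rotation leaves the numerator at the static value $2\pi$, while the shear increases it, and iterating the shear increases it further; this growth is exactly what $\mathbf{h}^D$ penalises.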
In continuous time, we consider smooth flow maps $T^{(t)}:M\to T^{(t)}(M)$, $t\in[0,\tau]$ and the quantity $$\label{hdynt_intro} \mathbf{h}_{[0,\tau]}^{D}(\Gamma):=\frac{\int_0^\tau \ell_{d-1}(T^{(t)}\Gamma)\ dt}{\tau\min\{\ell(M_1),\ell(M_2)\}}.$$ In addition to the geometric interpretation of Figure \[coherencefig\], the expression (\[hdynt\_intro\]) is also directly proportional to the mass lost through the boundary over the finite time interval $[0,\tau]$ via continually-present small-scale diffusion. This latter interpretation motivates the additive combination of terms in (\[cheegereqn\_intro\]) and (\[hdynt\_intro\]) (as opposed to e.g. a multiplicative combination). We focus on the difficult setting of general time-dependent dynamics. In the autonomous dynamics setting, classical “coherent” (in fact, invariant) objects such as invariant tori or invariant cylinders are completely invariant under the dynamics and thus their boundaries remain fixed and unchanging. Furthermore, trajectories that begin on the inside of these structures can never leave through their co-dimension 1 boundary. Thus, these objects can be regarded as “ideal” coherent structures and are relatively well-understood. In the general time-dependent dynamics setting, the existence of such perfectly invariant structures is highly unlikely. \[def:ftcs\] For the finite-time dynamics considered we define a *maximally coherent structure* on $M$ to be a minimizing $\Gamma$ for (\[cheegereqn\_intro\]) or (\[hdynt\_intro\]) (see also Section \[sect:multistep\]) when the infimum is achieved; otherwise we select a $\Gamma$ for which $\mathbf{h}^D(\Gamma)$ is arbitrarily close to the infimum. The corresponding *maximally coherent set* is defined to be the $M_k$, $k=1,2$ with minimal volume arising from the disconnection $\Gamma$. Existing approaches to identifying coherent structures fall broadly into two categories: probabilistic methods and geometric methods. 
Probabilistic approaches to finding coherent structures are based around the [transfer operator]{} (or Perron-Frobenius operator) and can be applied to systems with a combination of advection and diffusion, or purely advective dynamics. These methods look for *finite-time coherent sets* [@FSM10; @F13; @FPG14]: sets that resist mixing with the rest of phase space and represent global transport barriers to complete mixing. These constructions have found application in atmospheric dynamics to map and track the Antarctic polar vortex [@FSM10], and in ocean dynamics to track an oceanic eddy in the Agulhas current [@FHRSS12; @FHRvS15], in both two and three dimensions. In the purely advective setting, the constructions underpinning the transfer operator methods rely on small amounts of diffusion [@F13]. This small diffusion makes complete phase space mixing possible and is required for other technical reasons; these points are discussed in [@F13] and in the present work in Section \[sect:zerolimit\]. Further, the boundary sizes of finite-time coherent sets are implicitly measured in [@F13] because the localised diffusion can only eject mass near the boundaries of coherent sets; thus the mixing experienced over a finite time is tied to the boundary sizes of the coherent sets. We therefore expect our proposed dynamic isoperimetry methodology to be compatible with [@F13] and in fact, we show that the former arises as a zero-diffusion limit of the latter, which makes explicit the geometry contained in the probabilistic methods for small diffusion. In recent years there have been several geometric methods proposed to characterise either trajectories or co-dimension 1 surfaces that represent coherent structures in purely advective dynamics [@mancho_M; @mezicmeso; @Haller_11; @Haller_12; @allshouse_thiffeault; @ma_bollt_shape; @romkedar_M]. 
In two dimensions, [@Haller_11] defines a hyperbolic LCS as a material curve that has repelling dynamics normal to the curve in forward time and greater repulsion than nearby curves, while [@Haller_12] defines transport barriers as curves that are local minimisers of length functionals integrated over a finite time interval, with various hyperbolic, shear, and elliptic boundary conditions for the associated Euler-Lagrange equations. The approach [@ma_bollt_shape] suggests that curves formed from points that experience local rigid-body motion over a finite-time duration are associated with coherent dynamics. Most approaches compute various scalar fields from Lagrangian trajectories and infer corresponding dynamic properties from the fields. The main contributions of this paper include formulations of dynamic versions of classical objects in isoperimetric theory and formulations and proofs of dynamic versions of fundamental isoperimetric theorems. We formulate a dynamic version of the *Cheeger constant* $\mathbf{h}$ (the minimal ratio of the $d-1$-dimensional volume of a disconnecting hypersurface $\Gamma$ to the disconnected volumes of $M_1, M_2$) and the Sobolev constant (a functional representation of the Cheeger constant, where the separating hypersurface $\Gamma$ is the level set of a smooth function). The celebrated *Federer-Fleming theorem* equates these two constants, formally linking geometric and functional representations of these notions of isoperimetry. We formulate and prove a dynamic version of the Federer-Fleming theorem, linking our new dynamic Cheeger and Sobolev constants (Section \[sect:ff\]). We further formulate and prove a dynamic version of the *Cheeger inequality*, which relates the second largest eigenvalue of the Laplace operator $\triangle$ on $M$ to the Cheeger constant (Section \[sect:cheegerineq\]). This requires a replacement of the Laplace operator with a new operator that incorporates the general nonlinear dynamics.
In the discrete-time volume-preserving setting, the operator on $M$ corresponding to the expression (\[cheegereqn\_intro\]) is $$\label{dynlap_intro} (\triangle+\mathcal{P}^*\triangle\mathcal{P})/2,$$ where $\mathcal{P}f=f\circ T^{-1}$ is the transfer operator for $T$. We develop a spectral theory for this new operator in Section \[sect:dynlapspec\], and propose an algorithm that uses eigenvectors of this operator to identify coherent sets in practice. In Section \[sect:zerolimit\] we demonstrate that one can recover the new dynamic Laplace operator from the probabilistic methodology of [@F13] as a zero-diffusion limit of the latter. The probabilistic approach in [@F13] computes singular vectors of an $\epsilon$-perturbed operator $\mathcal{L}_\epsilon$; that is, eigenvectors of $\mathcal{L}_\epsilon^*\mathcal{L}_\epsilon$. For $C^3$ functions $f:M\to\mathbb{R}$ we show that $$\label{diffform1thm_intro} \lim_{\epsilon\to 0} \frac{(\mathcal{L}_\epsilon^*\mathcal{L}_\epsilon-I)f(x)}{\epsilon^2}=c\cdot(\triangle+\mathcal{P}^*\triangle\mathcal{P})f(x),$$ for each $x\in \mathring{M}$, where $c$ is an explicit constant. Thus, we provide a missing formal link between the probabilistic coherent set methodologies and direct notions of geometry via boundary size and volume. Finally, in Section \[sect:numerics\], we illustrate how eigenfunctions of the dynamic Laplace operator (\[dynlap\_intro\]) can be used to find coherent sets using three numerical case studies. The appendix contains most of the proofs. Background {#staticsection} ========== Let $M$ be a compact, connected $d$-dimensional $C^\infty$ Riemannian manifold of vanishing curvature, which is either boundaryless or has $C^\infty$ boundary. This setting is relatively simple from a differential geometric point of view, but the introduction of nonlinear dynamics nonetheless creates nontrivial questions. Let $\ell_d$ denote Lebesgue (volume) measure on $M$.
To measure co-dimension 1 objects, we use $d-1$-dimensional Hausdorff measure $\mathcal{H}^{d-1}$ (using the trivial Riemannian metric to calculate diameter) to define $\ell_{d-1}=(\omega_{d-1}/2^{d-1})\mathcal{H}^{d-1}$; see e.g. Corollary IV.1.1 [@chavelisoperimetric]. We define the *Cheeger constant* $$\label{cheegerconst}\mathbf{h}:=\inf_\Gamma\frac{\ell_{d-1}(\Gamma)}{\min\{\ell_d(M_1),\ell_d(M_2)\}},$$ where $\Gamma$ varies over compact $(d-1)$-dimensional $C^\infty$ submanifolds that separate $M$ into two connected components $M_1, M_2$. One can link these geometric ideas with functions on $M$ by considering level sets of a function $f:M\to\mathbb{R}$ defining the $(d-1)$-dimensional separating surface $\Gamma$. In fact, defining the *Sobolev constant* $$\label{sobolevconst} \mathbf{s}:=\inf_{f\in C^\infty}\frac{\|\nabla f\|_1}{\inf_{\alpha\in\mathbb{R}}\|f-\alpha\|_1},$$ one has the celebrated Federer-Fleming result \[ffthm\] $$\label{staticffeqn} \mathbf{s}=\mathbf{h}.$$ A link to spectral theory of operators is provided by the Cheeger inequality. Consider the eigenproblem $$\label{closedeigenproblem} \triangle\phi=\lambda\phi,\mbox{ on $\mathring{M}$}.$$ If $\partial M\neq \emptyset$, then zero Neumann boundary conditions are imposed: $$\label{staticneumann} \nabla \phi(y)\cdot \mathbf{n}(y)=0\mbox{ for $y\in\partial M$},$$ where $\mathbf{n}(y)$ is the outward unit normal to $\partial M$ at $y$. It is well-known (see e.g. Theorem 1.1 [@chaveleigenvalues] and Section 4.4 [@mcowen]) that the set of eigenvalues consists of a sequence $0= \lambda_1>\lambda_2>\cdots\downarrow -\infty$, and each associated eigenspace is finite-dimensional. Eigenspaces belonging to distinct eigenvalues are orthogonal in $L^2(M)$ and $L^2(M)$ is the direct sum of all of the eigenspaces. Furthermore, each eigenfunction is $C^\infty$ on $M$. \[cheegerthm\] 1. If $M$ is boundaryless, let $\lambda_2$ be the smallest magnitude nonzero eigenvalue of (\[closedeigenproblem\]). 2.
If $\partial M\neq\emptyset$, let $\lambda_2$ be the smallest magnitude nonzero eigenvalue for (\[closedeigenproblem\])–(\[staticneumann\]). Then $$\label{staticcheegerineq} \mathbf{h}\le 2\sqrt{-\lambda_2}.$$ Level sets of the eigenfunction corresponding to $\lambda_2$ give vital information about the $\Gamma$ that achieves the Cheeger constant; a fact that we will exploit in our new dynamic setup. We remark that there is a vast literature on the use of the Laplace operator for extracting various types of geometric information on static manifolds, and we refer the reader to the recent survey [@grebenkov_nguyen], with over 500 references. We now proceed through a few simple domains to illustrate the relationship between the solution to the isoperimetric problems and the eigenvalues and eigenfunctions of the Laplacian. The flat 2-torus: {#sect:torus} ----------------- Consider the flat 2-torus $\mathbb{T}^2=2\pi(\mathbb{R}/\mathbb{Z})\times 2\pi(\mathbb{R}/\mathbb{Z})$, which we write as $[0,2\pi)/\sim\times [0,2\pi)/\sim$, where $\sim$ is the identification at the interval endpoints. ### Solution to the isoperimetric problem {#solution-to-the-isoperimetric-problem .unnumbered} There is an infinite family of optimal $\Gamma$ solving (\[cheegerconst\]): either $(\{x\}\times [0,2\pi))\cup(\{x+\pi\}\times [0,2\pi))$ (two vertical loops) or $([0,2\pi)\times \{y\})\cup([0,2\pi)\times\{y+\pi\})$ (two horizontal loops). The value of $\mathbf{h}$ is $2\cdot(2\pi)/((1/2)\cdot (2\pi)^2)=2/\pi$. One particular solution is shown as black lines in Figure \[torusfig\]: $\Gamma=(\{\pi/2\}\times [0,2\pi))\cup(\{3\pi/2\}\times [0,2\pi))$, $M_1=[\pi/2,3\pi/2]\times [0,2\pi)$, and $M_2=\mathbb{T}^2\setminus M_1$.
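As a quick numerical check of the value $\mathbf{h}=2/\pi$ (our own illustration, not from the paper), one can compare the two-loop solution against the other natural family of disconnecting curves, boundaries of round disks embedded in the torus; no disk does better:

```python
import math

A = (2 * math.pi) ** 2          # total area of the flat 2-torus

def ratio_loops():
    """Two vertical loops: boundary length 2*(2*pi), equal halves A/2."""
    return 2 * (2 * math.pi) / (A / 2)

def ratio_disk(rho):
    """Ratio for the boundary circle of an embedded disk of radius rho < pi."""
    area = math.pi * rho ** 2
    return 2 * math.pi * rho / min(area, A - area)

best_disk = min(ratio_disk(math.pi * k / 1000) for k in range(1, 1000))
```

The best disk achieves a ratio of about $\sqrt{2/\pi}\approx 0.80$, strictly worse than $2/\pi\approx 0.64$ for the pair of loops.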
[0.3]{} ![Flat 2-torus: second Laplacian eigenfunction $\cos(x)$, with the zero level set shown in black.[]{data-label="torusfig"}](torus_cosine_withlines.png "fig:"){width="\textwidth"}\ [0.3]{} ![Rectangle $[0,a]\times [0,b]$, $a=3/2$, $b=1$: second Laplacian eigenfunction $\cos(3\pi x/2)$, with the zero level set shown in black.[]{data-label="rectfig"}](rectangle_cosine_withline.png "fig:"){width="\textwidth"}\ [0.3]{} ![Cylinder $[0,a)\times [0,b]$, $a=3/2$, $b=1$: second Laplacian eigenfunction $\cos(\pi y)$, with the zero level set shown in black.[]{data-label="cylfig"}](cylinder_cosine_withline.png "fig:"){width="\textwidth"}\ ### Laplace operator and eigenfunctions {#laplace-operator-and-eigenfunctions .unnumbered} The Laplace operator has eigenvalues $-(k^2+l^2)$, $k,l\in 0,1,2,\ldots,$ with eigenfunctions $\cos(k x)\cos(l y),\sin(k x)\cos(l y),\cos(kx)\sin(ly),\sin(kx)\sin(ly)$, $k,l\in 0,1,2,\ldots$. Thus the multiplicity of the first nontrivial eigenvalue $-1$ is 4, and the corresponding eigenspace is spanned by $\{\cos(x),\sin(x),\cos(y),\sin(y)\}$. The upper bound for $\mathbf{h}$ provided by Cheeger’s inequality is 2. Note that the zero level sets of the functions $\{\cos(x),\sin(x),\cos(y),\sin(y)\}$ are exactly the optimal disconnecting curves $\Gamma$ discussed above. The zero level set of one of these functions, $\cos(x)$, is shown in black in Figure \[torusfig\]. The rectangle {#sect:rect} ------------- Consider the rectangle $[0,a]\times [0,b]$, with $a>b$. ### Solution to the isoperimetric problem {#solution-to-the-isoperimetric-problem-1 .unnumbered} The problem (\[cheegerconst\]) has a unique solution $\Gamma$: $\{(a/2,y):0\le y\le b\}$, shown as a black line in Figure \[rectfig\]. The corresponding value of $\mathbf{h}$ is $2/a$.
### Laplacian operator and eigenfunctions {#laplacian-operator-and-eigenfunctions .unnumbered} The Laplace operator with zero Neumann boundary conditions has eigenvalues $-\pi^2(k^2/a^2+l^2/b^2)$, $k,l\in 0,1,2,\ldots,$ with corresponding eigenfunctions $\cos(k\pi x/a)\cos(l\pi y/b)$, $k,l\in 0,1,2,\ldots$. Note that the Neumann boundary conditions are satisfied on the boundary of the rectangle by this set of eigenfunctions. The first nontrivial eigenvalue is $\lambda_2=-\pi^2/a^2$. In the example shown in Figure \[rectfig\], $a=3/2$, $b=1$ and the first three eigenvalues are $0,-4\pi^2/9,-\pi^2$, each of unit multiplicity, corresponding to $(k,l)=(0,0),(1,0),(0,1)$. The corresponding eigenspaces are spanned by $1,\cos(3\pi x/2),\cos(\pi y)$. Note that the zero level set of the second eigenfunction $\cos(3\pi x/2)$ is exactly the optimal disconnecting curve $\Gamma$, shown in black in Figure \[rectfig\]. The upper bound for $\mathbf{h}$ provided by Cheeger’s inequality is $2\sqrt{4\pi^2/9}=4\pi/3$. The cylinder {#sect:cyl} ------------ Consider the flat cylinder $a(\mathbb{R}/\mathbb{Z})\times [0,b]$, which we write as $[0,a)/\sim\times [0,b]$, with $a>b$ and the vertical “edges” identified. ### Solution to the isoperimetric problem {#solution-to-the-isoperimetric-problem-2 .unnumbered} The solution to the isoperimetric problem depends on the relative sizes of $a$ and $b$. If $a<2b$ then $\mathbf{h}=a/(ab/2)=2/b$, with a unique minimising disconnecting curve $\Gamma$: $\{(x,b/2):0\le x\le a\}$. If $a>2b$ then $\mathbf{h}=2b/(ab/2)=4/a$, with $\Gamma$ selected from an infinite family of pairs of vertical lines parameterised by $x\in[0,a)$: $\{(x,y):0\le y\le b\}\cup\{(x+a/2,y):0\le y\le b\}$.
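The crossover at $a=2b$ is easy to confirm directly (our own sketch, assuming Python/numpy; not code from this paper): for each aspect ratio, compare the ratio of the horizontal loop, $a/(ab/2)$, with that of the vertical pair, $2b/(ab/2)$, and check the winner against the Cheeger bound computed from the second cylinder eigenvalue $\max\{-\pi^2/b^2,-4\pi^2/a^2\}$.

```python
import numpy as np

b = 1.0
for a in [1.5, 2.0, 3.0]:
    horizontal = a / (a * b / 2)        # one loop of length a; halves of area ab/2
    vertical = 2 * b / (a * b / 2)      # two vertical segments, total length 2b
    h = min(horizontal, vertical)       # 2/b for a < 2b, 4/a for a > 2b
    lam2 = max(-np.pi**2 / b**2, -4 * np.pi**2 / a**2)   # second eigenvalue
    assert h <= 2 * np.sqrt(-lam2)      # Cheeger's inequality holds
    print(a, h)
```

At $a=2b$ the two candidate ratios are equal, matching the switch in the minimising curve.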
### Laplacian operator and eigenfunctions {#laplacian-operator-and-eigenfunctions-1 .unnumbered} The Laplace operator has eigenvalues $-\pi^2(4k^2/a^2+l^2/b^2)$, $k,l\in 0,1,2,\ldots,$ with corresponding eigenfunctions $\sin(2k\pi x/a)\cos(l\pi y/b),\cos(2k\pi x/a)\cos(l\pi y/b)$, $k,l\in 0,1,2,\ldots$. Note that we only have to enforce Neumann boundary conditions on the top and bottom horizontal boundaries of the cylinder. The leading eigenvalue is 0, and the second eigenvalue depends on the relative sizes of $a$ and $b$; a switch occurs at $a=2b$, matching the corresponding switch in the domain geometry. In the example shown in Figure \[cylfig\], $a=3/2$, $b=1$ and the first three eigenvalues are $0,-\pi^2,-16\pi^2/9$, with multiplicities 1, 1, and 2, corresponding to $(k,l)=(0,0),(0,1),(1,0)$. The corresponding eigenspaces are spanned by $1,\cos(\pi y),\{\sin(4\pi x/3),\cos(4\pi x/3)\}$. The upper bound for $\mathbf{h}$ provided by Cheeger’s inequality is $2\pi$. Note that the zero level set of the second eigenfunction $\cos(\pi y)$ is exactly the optimal disconnecting curve $\Gamma$, shown in Figure \[cylfig\]. Dynamic Isoperimetry ==================== In this section we extend the concepts of the previous section to a dynamic setting, where $T$ is a $C^\infty$ diffeomorphism from $M$ onto $T(M)$, and $M$, $T(M)$ are compact, connected Riemannian manifolds of vanishing curvature. Much of the discussion in this paper is for a single iterate of a map $T:M\to T(M)$; however, the extension to multiple iterates of the same map, to iterates of different maps as would occur in time-dependent dynamical systems, and even to a continuum of flow maps generated by a time-dependent ODE is straightforward (see Section \[sect:multistep\]). For a single iterate of $T$ we seek sets that have small boundary size relative to volume both *before* and *after* the application of the nonlinear dynamics of $T$.
Thus, if $\Gamma$ is the boundary of a coherent set, one needs to minimise both $\ell_{d-1}(\Gamma)$ and $\ell_{d-1}(T\Gamma)$. To identify finite-time coherent sets, we propose the following natural dynamic minimisation problem. \[cheegerdefn\] Define the *dynamic Cheeger constant* $\mathbf{h}^D$ by $$\label{cheegereqn2} \mathbf{h}^D:=\inf_\Gamma \frac{\ell_{d-1}(\Gamma)+\ell_{d-1}(T(\Gamma))}{2\min\{\ell(M_1),\ell(M_2)\}},$$ where $\Gamma$ varies over compact $(d-1)$-dimensional $C^\infty$ submanifolds of $M$ that divide $M$ into two disjoint open submanifolds $M_1, M_2$ of $M$. In the present paper, to avoid obscuring the key constructions, we focus on volume-preserving $T$. Note that (\[cheegereqn2\]) cannot be decomposed into two static minimisation problems because $\Gamma$ is the same in both terms in the numerator of (\[cheegereqn2\]). A Dynamic Federer-Fleming Theorem {#sect:ff} --------------------------------- It is of theoretical interest (and for the present paper, of interest in dynamical systems applications) to connect the set-based optimisation problem (\[cheegereqn2\]) with functional optimisation problems. Two basic tools in differential geometry for doing this are the co-area formula, which connects spatial integrals of the gradient of a function with an integral over the areas of level sets of the function, and Cavalieri’s principle, which represents a function as an integral over its level sets. The Federer-Fleming theorem (Theorem \[ffthm\]) connects a set-based isoperimetric problem (the (static) Cheeger constant) in an exact way with a functional minimisation problem. We wish to formulate a dynamic equivalent of this theorem. Let $M$ be a compact, connected Riemannian manifold of dimension $d\ge 1$ with vanishing curvature, and $T:M\to T(M)$ a volume-preserving diffeomorphism. We denote by $\mathcal{P}$ the Perron-Frobenius operator of $T$, which takes the form $\mathcal{P}f=f\circ T^{-1}$ since $T$ is volume-preserving.
\[sobolevdefn\] Define the *dynamic Sobolev constant of $M$*, $\mathbf{s}^D(M)$ by $$\label{soboleveqn} \mathbf{s}^D=\inf_{f\in C^\infty} \frac{\|\nabla f\|_1+\|\nabla(\P f)\|_1}{2\inf_{\alpha\in \mathbb{R}} \|f-\alpha\|_{1}}.$$ Related to the above is the alternate Sobolev constant $$\label{altsoboleveqn} \hat{\mathbf{s}}^D=\inf_{f\in C^\infty} \frac{\|\nabla f\|_1+\|\nabla(\P f)\|_1}{2\|f-\bar{f}\|_{1}},$$ setting $\alpha$ to be the mean value of $f$, $\bar{f}=(1/\ell(M))(\int_M f\ d\ell)$ (see e.g. [@chavelisoperimetric] p163 for the static version). Clearly, $\hat{\mathbf{s}}^D\le \mathbf{s}^D$. We wish to demonstrate a dynamic analogue of the Federer-Fleming theorem. Our first main result is: \[fflemma\] Let $M$ be a compact, connected $C^\infty$ manifold with vanishing curvature. Let $T:M\to T(M)$ be a $C^\infty$ volume-preserving diffeomorphism. Then $$\label{ffeqn} \mathbf{s}^D=\mathbf{h}^D,$$ and further, $$\label{ffeqn22} \mathbf{h}^D/2\le \hat{\mathbf{s}}^D\le \mathbf{s}^D=\mathbf{h}^D.$$ See appendix. A Dynamic Cheeger Inequality {#sect:cheegerineq} ---------------------------- The (static) Cheeger inequality (Theorem \[cheegerthm\]) is an $L^2$-based result while the (static) Federer-Fleming equality (Theorem \[ffthm\]) is $L^1$-based. The advantage of $L^2$ is that one obtains a nice spectral theory for $\triangle$ from the Hilbert space structure, and crucial variational characterisations of the eigenvalues. One pays for this convenience by obtaining an inequality, rather than equality. Nevertheless, as we have seen in Sections \[sect:torus\]–\[sect:cyl\], the level sets of the Laplacian eigenfunctions carry significant information and provide good solutions to the original set-based isoperimetric problem (\[cheegerconst\]). We wish to replicate these properties for the dynamic Cheeger constant $\mathbf{h}^D$ and a dynamic version of the Laplace operator. 
We define the latter by $$\label{hattriangle} \hat{\triangle}:=(\triangle+\mathcal{P}^*\triangle\mathcal{P})/2.$$ The spectral properties of this operator are developed in Section \[sect:dynlapspec\], but we say a few words here about the intuition behind this definition. Consider a function $f:M\to \mathbb{R}$ on $M$ at the initial time from which we extract level sets, as in Figures (\[torusfig\])–(\[cylfig\]). The first term in (\[hattriangle\]), $\triangle$, is the Laplacian on the domain $M$ and of obvious importance for providing information on decompositions of $M$. The second term $\mathcal{P}^*\triangle\mathcal{P}$ first pushes the function $f$ on $M$ forward to a function $\P f$ on $T(M)$, possibly undergoing nonlinear distortion. One then applies the Laplacian to $\P f$ on $T(M)$ to obtain geometric information on $T(M)$, and finally pulls the result back to $M$ with $\P^*$, ready to be combined with the result from the first term $\triangle$. We note that in fact $\mathcal{P}^*\triangle\mathcal{P}$ is the Laplace-Beltrami operator for the pullback of the Euclidean metric on $T(M)$. Consider $\triangle_\delta:C^\infty(T(M),\mathbb{R})\circlearrowleft$ as the Laplace-Beltrami operator on the Riemannian manifold $(T(M),\delta)$, where $\delta$ denotes the Riemannian metric (in the present context, $\delta$ is the trivial Euclidean metric). Pulling $\delta$ back under $T$ we obtain the Riemannian metric $T^*\delta$, and the map $T:(M,T^*\delta)\to (T(M),\delta)$ is an isometry. One can now write $\triangle_{T^*\delta}f=(\triangle_\delta(f\circ T^{-1}))\circ T=\mathcal{P}^*\triangle_\delta\mathcal{P}f$; see e.g. p. 27 [@chaveleigenvalues]. Our second new result is a dynamic Cheeger inequality, which highlights the importance of eigenfunctions of the operator $\hat{\triangle}$. \[cheegerdynthm\] Let $M$ be a compact, connected $C^\infty$ manifold with vanishing curvature, and $T:M\to T(M)$ be a $C^\infty$ and volume-preserving diffeomorphism. 1.
If $M$ is boundaryless, then let $\lambda_2$ be the smallest magnitude nonzero eigenvalue of $\hat{\triangle}$. 2. If $\partial M\neq\emptyset$, denote by $\mathbf{n}(x)$ the outward unit normal at $x\in \partial M$. Let $\lambda_2$ be the smallest magnitude nonzero eigenvalue for the $L^2$-eigenproblem $$\label{strongeqn0} \hat{\triangle}u(x)=\lambda u(x),\quad x\in \mathring{M},$$ with boundary condition $$\label{strongbc0} \nabla u(x)\cdot\left[\left(I+DT(x)^{-1}\left(DT(x)^{-1}\right)^\top\right) \mathbf{n}(x)\right]=0,\quad x\in\partial M.$$ Then $$\label{dyncheegereqn} \mathbf{h}^{D}\le 2\sqrt{-\lambda_2}.$$ See appendix. An intuitive explanation of the term $\left((\nabla u(x))^\top DT(x)^{-1}\right)\cdot\left(\left(DT(x)^{-1}\right)^\top\mathbf{n}(x)\right)$ in (\[strongbc0\]) is that (i) $(\nabla u(x))^\top DT(x)^{-1}=\nabla(u\circ T^{-1})(T(x))$ (gradient of the pushforward of $u$ by $T$ at $T(x)$) and (ii) $\left(DT(x)^{-1}\right)^\top\mathbf{n}(x)$ is normal to $\partial T(M)$ at $T(x)$. Thus, $\left((\nabla u(x))^\top DT(x)^{-1}\right)\cdot\left(\left(DT(x)^{-1}\right)^\top\mathbf{n}(x)\right)=0$ can be viewed as a natural pullback of a zero Neumann boundary condition on $\partial T(M)$ at $T(x)$. In terms of metrics, one has $\nabla_\delta(u\circ T^{-1})=(\nabla_{T^*\delta}(u))\circ T^{-1}$. \[proofremark\] Using the above pullback interpretation of the boundary condition and the pullback interpretation of $\hat{\triangle}$, one can produce shorter, coordinate-free proofs of Theorems \[cheegerdynthm\] and \[dynlapthm\], instead of the coordinate-based proofs in the Appendix. Similarly, the proof of Theorem \[fflemma\] can also be easily approached from this point of view. Multiple time-steps {#sect:multistep} ------------------- Let us now consider a composition of several maps $T_1,\ldots,T_{n-1}$, denoting $T^{(i)}:=T_i\circ\cdots\circ T_2\circ T_1$, $i=1,\ldots,n-1$. These maps might arise, for example, as time-$\tau$ maps of a time-dependent flow. 
If we wish to track the evolution of a coherent set under these maps, penalising the boundary of the evolved set $T^{(i)}(\Gamma)$ after the application of each $T_i$, then we can define $$\label{hdynn} \mathbf{h}_n^{D}:=\inf_\Gamma \frac{\frac{1}{n}\sum_{i=0}^{n-1}\ell_{d-1}(T^{(i)}\Gamma)}{\min\{\ell(M_1),\ell(M_2)\}},$$ as the natural generalisation of $\mathbf{h}^D$. In continuous time, we consider a (possibly time-dependent) ODE $\dot{x}=F(x,t)$, where $F$ is $C^\infty$ on $M\times[0,\tau]$. The flow maps $T^{(t)}:M\to T^{(t)}(M)$ are then smooth[^2] for each $t\in[0,\tau]$. One can define $$\label{hdynt} \mathbf{h}_{[0,\tau]}^{D}:=\inf_\Gamma \frac{\frac{1}{\tau}\int_0^\tau \ell_{d-1}(T^{(t)}\Gamma)\ dt}{\min\{\ell(M_1),\ell(M_2)\}},$$ as a time-continuous generalisation of $\mathbf{h}^D$. Analogously, setting $\mathcal{P}^{(i)}f=f\circ (T^{(i)})^{-1}$ and $\mathcal{P}^{(t)}f=f\circ (T^{(t)})^{-1}$, one can define dynamic Sobolev constants for multiple discrete time steps or over a continuous time interval: $$\label{sdynnt} \mathbf{s}^D_n=\inf_{f\in C^\infty} \frac{\frac{1}{n}\sum_{i=0}^{n-1}\|\nabla ( \mathcal{P}^{(i)}f)\|_1}{\inf_{\alpha\in \mathbb{R}} \|f-\alpha\|_{1}},\qquad\qquad \mathbf{s}^D_{[0,\tau]}=\inf_{f\in C^\infty} \frac{\frac{1}{\tau}\int_0^\tau\|\nabla ( \mathcal{P}^{(t)}f)\|_1\ dt}{\inf_{\alpha\in \mathbb{R}} \|f-\alpha\|_{1}}.$$ \[ffcor\] Let $M$ be a compact, connected $C^\infty$ manifold with vanishing curvature, and $T^{(i)}$, $i=1,\ldots,n-1$ (resp. $T^{(t)}, t\in[0,\tau]$) be generated by a sequence of $C^\infty$ volume-preserving diffeomorphisms (resp. be smooth flow maps generated by a volume-preserving ODE $\dot{x}=F(x,t)$). Then $$\label{ffeqnn} \mathbf{h}^D_n/2\le \hat{\mathbf{s}}^D_n\le \mathbf{s}^D_n=\mathbf{h}^D_n,$$ resp.$$\label{ffeqnt} \mathbf{h}^D_{[0,\tau]}/2\le \hat{\mathbf{s}}^D_{[0,\tau]}\le \mathbf{s}^D_{[0,\tau]}=\mathbf{h}^D_{[0,\tau]}.$$ See appendix. 
Theorem \[cheegerdynthm\] also naturally extends to multiple time steps. \[cheegerncor\] Let $M$ be a compact, connected $C^\infty$ manifold with vanishing curvature, and $T^{(i)}$, $i=1,\ldots,n-1$ be generated by a sequence of $C^\infty$ volume-preserving diffeomorphisms. Define $$\label{bartrianglen} \hat{\triangle}^{(n)}:=\frac{1}{n}\sum_{i=0}^{n-1}(\mathcal{P}^{(i)})^*\triangle\mathcal{P}^{(i)},$$ where $\mathcal{P}^{(i)}f=f\circ (T^{(i)})^{-1}$. 1. If $\partial M=\emptyset$, let $\lambda_2$ be the smallest magnitude nonzero eigenvalue of $\hat{\triangle}^{(n)}$. 2. If $\partial M\neq\emptyset$, denote by $\mathbf{n}(x)$ the outward unit normal at $x\in \partial M$. Let $\lambda_2$ be the smallest magnitude nonzero eigenvalue for the $L^2$ eigenproblem $$\label{strongeqnmultn} \hat{\triangle}^{(n)}u(x)=\lambda u(x),\quad x\in \mathring{M},$$ with boundary condition $$\label{strongbcmultn} \nabla u(x)\cdot\left[\sum_{i=0}^{n-1}DT^{(i)}(x)^{-1}\left(DT^{(i)}(x)^{-1}\right)^\top \mathbf{n}(x)\right]=0,\quad x\in\partial M.$$ Then $$\label{cheegermultistepn} \mathbf{h}_n^{D}\le 2\sqrt{-\lambda_2}.$$ See Appendix. \[cheegertcor\] Let $M$ be a compact, connected $C^\infty$ manifold with vanishing curvature, and $T^{(t)}, t\in[0,\tau]$ be smooth flow maps. Define $$\label{bartrianglet} \hat{\triangle}^{(\tau)}:=\frac{1}{\tau}\int_0^\tau(\mathcal{P}^{(t)})^*\triangle\mathcal{P}^{(t)}\ dt,$$ where $\mathcal{P}^{(t)}f=f\circ (T^{(t)})^{-1}$. 1. If $\partial M=\emptyset$, let $\lambda_2$ be the smallest magnitude nonzero eigenvalue of $\hat{\triangle}^{(\tau)}$. 2. If $\partial M\neq\emptyset$, denote by $\mathbf{n}(x)$ the outward unit normal at $x\in \partial M$. 
Let $\lambda_2$ be the smallest magnitude nonzero eigenvalue for the $L^2$ eigenproblem $$\label{strongeqnmultt} \hat{\triangle}^{(\tau)}u(x)=\lambda u(x),\quad x\in \mathring{M},$$ with boundary condition $$\label{strongbcmultt} \nabla u(x)\cdot\left[\int_0^\tau DT^{(t)}(x)^{-1}\left(DT^{(t)}(x)^{-1}\right)^\top \mathbf{n}(x)\ dt\right]=0,\quad x\in\partial M.$$ Then $$\label{cheegermultistept} \mathbf{h}_{[0,\tau]}^{D}\le 2\sqrt{-\lambda_2}.$$ See Appendix. If one does not wish to track the length of the evolved $\Gamma$ except at the initial and final times, one would instead use (\[hattriangle\]) with $\mathcal{P}=\mathcal{P}^{(n-1)}$ or $\mathcal{P}=\mathcal{P}^{(\tau)}$. Spectral properties of the dynamic Laplacian {#sect:dynlapspec} ============================================ The following result summarises important properties of the operator $\hat{\triangle}=(\triangle+\P^*\triangle \mathcal{P})/2$. \[dynlapthm\] Let $M$ be a compact, connected $C^\infty$ manifold with vanishing curvature, and $T:M\to T(M)$ be a $C^\infty$, volume-preserving diffeomorphism. - If $M$ is boundaryless, let $\lambda, u$ denote solutions to the $L^2$ eigenproblem $(1/2)(\triangle+\mathcal{P}^*\triangle\mathcal{P})u=\lambda u$ on $M$. - If $\partial M\neq\emptyset$, denote by $\mathbf{n}(x)$ the outward unit normal at $x\in \partial M$. Let $\lambda, u$ denote solutions to the $L^2$-eigenproblem $$\label{strongeqn} (1/2)(\triangle+\P^*\triangle\P)u(x)=\lambda u(x),\quad x\in \mathring{M},$$ with boundary condition $$\label{strongbc} \nabla u(x)\cdot\left[\mathbf{n}(x)+DT(x)^{-1}\left(DT(x)^{-1}\right)^\top \mathbf{n}(x)\right]=0,\quad x\in\partial M.$$ The solutions $\lambda, u$ satisfy the following properties. 1. The eigenvalues form a decreasing sequence $0=\lambda_1> \lambda_2>\cdots$ with $\lambda_n\to-\infty$. 2. The corresponding eigenfunctions $u_1,u_2,\ldots$ are $C^\infty$ on $M$ and eigenfunctions corresponding to distinct eigenvalues are pairwise orthogonal in $L^2$. 3.
One has the variational characterisation of eigenvalues: if $u_1,u_2,\ldots$ are arranged to be orthonormal, denoting $X_k=\span\{u_1,u_2,\ldots,u_k\}$, $$\label{variational} \lambda_k=-\inf_{u\perp X_{k-1}}\frac{\int_M |\nabla u|^2\ d\ell+\int_{T(M)}|\nabla(\P u)|^2\ d\ell}{2\int_M u^2\ d\ell},$$ with the infimum achieved only when $u=u_k$. Our main focus is the eigenvalue $\lambda_2$ and the corresponding eigenfunction $u_2$. We will see that $u_1\equiv 1$ and therefore that $\int_M u_2\ d\ell=0$ by $L^2$-orthogonality to $u_1$. Appendix \[sec:weakexist\] contains the proofs of items 1 and 3 of Theorem \[dynlapthm\] and Appendix \[sec:ellipticity\] contains the proofs of item 2 and the boundary conditions. \[spectrummultistep\] Using identical arguments, one can also show a multiple time step version of Theorem \[dynlapthm\], where $(1/2)(\triangle+\P^*\triangle\P)$ is replaced with either (\[bartrianglen\]) or (\[bartrianglet\]), and the boundary condition (\[strongbc\]) is replaced with either (\[strongbcmultn\]) or (\[strongbcmultt\]). Objectivity ----------- We demonstrate that the operator $\hat{\triangle}$ behaves in a very predictable way when the phase space is observed in a time-dependent rotating and translating frame. In particular, we show that the method of extracting coherent sets from eigenvectors of $\hat{\triangle}$ (described in Section \[sect:numerics\]) is *objective* or *frame-invariant*, meaning that the method produces the same features when subjected to time-dependent “proper orthogonal + translational” transformations; see [@truesdellnoll]. In continuous time, to test for objectivity, one makes a time-dependent coordinate change $x\mapsto Q(t)x+b(t)$, where $Q(t)$ is a proper orthogonal linear transformation and $b(t)$ is a translation vector, for $t\in[t_0,t_1]$.
The discrete time analogue is to imagine we begin in the frame given by $\Phi_{t_0}(M)$, where $\Phi_{t_0}(x)=Q(t_0)x+b(t_0)$, and end in the frame $\Phi_{t_1}(M)$, where $\Phi_{t_1}(x)=Q(t_1)x+b(t_1)$. We are concerned with the deterministic transformation $\dot{T}:\Phi_{t_0}(M)\to \Phi_{t_1}(T(M))$, which is given by $\dot{T}=\Phi_{t_1}\circ T\circ \Phi_{t_0}^{-1}$. This change of frames is summarised in the commutative diagram below. $$\begin{CD} M @>T>> T(M)\\ @VV\Phi_{t_0}V @VV\Phi_{t_1}V\\ \Phi_{t_0}(M) @>\dot{T}>> \Phi_{t_1}\circ T(M) \end{CD}$$ Corresponding to $\dot{T}$ is the operator $\dot{\hat{\triangle}}=(\triangle+\P^*_{\dot{T}}\triangle \P_{\dot{T}})/2$, where $\P_{\dot{T}}=\P_{\Phi_{t_1}}\circ \P\circ \P_{\Phi_{t_0}}^{-1}$. If we were observing the dynamics in the frames given by $\Phi_{t_0}$ and $\Phi_{t_1}$ we would compute eigenfunctions of $\dot{\hat{\triangle}}$. \[objthm\] The operator $\hat{\triangle}$ in the original frame and the operator $\dot{\hat{\triangle}}$ in the transformed frame satisfy the commutative diagram: $$\begin{CD} L^2(M) @>\hat{\triangle}>> L^2(M)\\ @VV\P_{\Phi_{t_0}}V @VV\P_{\Phi_{t_0}}V\\ L^2(\Phi_{t_0}(M))@>\dot{\hat{\triangle}}>>L^2(\Phi_{t_0}(M)) \end{CD}$$ Consequently, if $f$ solves $$\label{origevalprob} \hat{\triangle}f=\lambda f\quad\mbox{ on $\mathring{M}$},$$ then $$\label{transfevalprob} \dot{\hat{\triangle}}(\P_{\Phi_{t_0}}f)=\lambda (\P_{\Phi_{t_0}}f)\quad\mbox{ on $\Phi_{t_0}(\mathring{M})$}.$$ Furthermore, if $\partial M\neq \emptyset$, then if $$\label{strongorigbc} \nabla f(x)\cdot\left[\mathbf{n}(x)+DT(x)^{-1}\left(DT(x)^{-1}\right)^\top\mathbf{n}(x)\right]=0,\quad x\in\partial M,$$ one has $$\label{strongtransfbc} \nabla (\P_{\Phi_{t_0}}f)(x)\cdot\left[\dot{\mathbf{n}}(x)+D\dot{T}(x)^{-1}\left(D\dot{T}(x)^{-1}\right)^\top\dot{\mathbf{n}}(x)\right]=0,\quad x\in\partial(\Phi_{t_0}(M)),$$ where $\dot{\mathbf{n}}(x)=Q(t_0)\mathbf{n}(\Phi_{t_0}^{-1}x)$.
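This frame-invariance can be illustrated in a fully discrete setting (our own sketch in Python/numpy, not part of the paper’s Matlab implementation): on a discrete torus grid, take $T$ to be a shear permutation and the frame change a 90-degree grid rotation. Because the five-point Laplacian commutes with the rotation, the discretised dynamic Laplacians assembled in the two frames are similar via the rotation permutation and therefore share the same spectrum.

```python
import numpy as np

n = 12
idx = lambda i, j: (i % n) * n + (j % n)

def perm_matrix(S):
    """Push-forward matrix R of a grid bijection S: (R f)(S(p)) = f(p)."""
    R = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            R[idx(*S(i, j)), idx(i, j)] = 1.0
    return R

# periodic five-point Laplacian on the discrete torus (unit grid spacing)
L = -4.0 * np.eye(n * n)
for i in range(n):
    for j in range(n):
        for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            L[idx(i, j), idx(i + di, j + dj)] += 1.0

RT = perm_matrix(lambda i, j: ((i + j) % n, j))        # shear T(i,j) = (i+j, j)
Rrot = perm_matrix(lambda i, j: (j, (n - 1 - i) % n))  # 90-degree frame rotation

H = 0.5 * (L + RT.T @ L @ RT)              # dynamic Laplacian, original frame
RT_rot = Rrot @ RT @ Rrot.T                # the same T observed in the rotated frame
H_rot = 0.5 * (L + RT_rot.T @ L @ RT_rot)  # dynamic Laplacian, rotated frame

# objectivity: identical spectra in both frames
assert np.allclose(np.linalg.eigvalsh(H), np.linalg.eigvalsh(H_rot))
```

Here $H_{\rm rot} = R_{\rm rot}\, H\, R_{\rm rot}^\top$, the discrete analogue of the commutative diagram above.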
It follows from Theorem \[objthm\] that the coherent sets extracted on $M$ from e.g. level sets of the eigenfunctions of $\dot{\hat{\triangle}}$ will be transformed versions (under $\Phi_{t_0}$) of those extracted from $\hat{\triangle}$, as required for objectivity. Zero-diffusion limit of an analytic diffusion-based framework {#sect:zerolimit} ============================================================= The paper [@F13] introduced an analytic methodology for finding finite-time coherent sets, formalising prior numerical work [@FSM10]. This methodology was based around smoothings of $\P$. In [@F13], one defined smoothing operators $\mathcal{D}_{M,\epsilon}:L^2(M)\to \mathcal{H}_{1/2}(M_\epsilon)$, $\mathcal{D}_{T(M_\epsilon),\epsilon}:L^2(T(M_\epsilon))\to \mathcal{H}_{1/2}(T(M)_\epsilon)$, where $\mathcal{H}_{1/2}(M_\epsilon), \mathcal{H}_{1/2}(T(M)_\epsilon)$ denote Hölder functions with exponent 1/2 on $\epsilon$-neighbourhoods of $M$ and $T(M)$, respectively. The operators considered in [@F13] were $\mathcal{D}_{M,\epsilon} f(y)=\int_M \alpha_\epsilon(x-y)f(x)\ d\ell(x)$, $y\in M_\epsilon$ and $\mathcal{D}_{T(M_\epsilon),\epsilon} f(y)=\int_{T(M_\epsilon)} \alpha_\epsilon(x-y)f(x)\ d\ell(x)$, $y\in T(M)_\epsilon$, where $\alpha_\epsilon(x)=\mathbf{1}_{B_\epsilon(0)}/\ell(B_\epsilon(0))$ corresponds to smoothing on a local $\epsilon$-ball. In [@F13], the operator $\mathcal{L}_\epsilon:L^2(M)\to L^2(T(M)_\epsilon)$, $\mathcal{L}_\epsilon=\mathcal{D}_{T(M_\epsilon),\epsilon}\P\mathcal{D}_{M,\epsilon}$ was introduced[^3] and used to identify coherent sets for $T$ in phase space $M$. The reasons for the diffusion operators $\mathcal{D}_\epsilon$ were twofold. 1. Firstly, as $T$ is often invertible (e.g. the time-$t$ map of some smooth flow), subsets of $M$ are simply deformed by $T$; they do not “disperse”, and one could argue that every set is “coherent” in the sense that it is non-dispersive.
Let us consider the action of $\mathcal{L}_\epsilon$ on $\mathbf{1}_{A}$, where the latter represents a subset $A\subset M$ by its characteristic function; we think of $\mathbf{1}_{A}$ as a uniform mass distribution on $A$. Applying $\mathcal{L}_\epsilon$, we first have $\mathcal{D}_{M,\epsilon}$ acting on $\mathbf{1}_{A}$, which removes from $A$ some mass within distance $\epsilon$ of the boundary of $A$. The resulting function is then transformed dynamically by $\P$, and will be supported on an $\epsilon$-neighbourhood of $T(A)$. Finally, we apply $\mathcal{D}_{T(M_\epsilon)}$ again, so that some mass within a distance $\epsilon$ of the boundary of the support of $\P\mathcal{D}_{M,\epsilon}\mathbf{1}_A$ is ejected from this support. These ideas are quantified in the proof of Lemma 6 [@F13]. In this way, the boundary size of both $A$ and $T(A)$ are penalised because the amount of mass ejected by the operators $\mathcal{D}_{M,\epsilon},\mathcal{D}_{T(M_\epsilon),\epsilon}$ is proportional to the boundary sizes. 2. Secondly, in order to find a set $A$ with minimal combined boundary sizes for $A$ and $T(A)$, [@F13] used minimisation properties of the singular vectors of $\mathcal{L}_\epsilon$; in particular, the sets $A$ and $T(A)$ were estimated from the left/right singular vectors corresponding to the second largest singular value of $\mathcal{L}_\epsilon$ (the leading singular value is always 1 by construction). To use this variational machinery $\mathcal{L}_\epsilon$ needs to be compact, and it was shown in [@F13] that $\mathcal{D}_{M,\epsilon}, \mathcal{D}_{T(M_\epsilon)}$ also played the technical role of ensuring compactness of $\mathcal{L}_\epsilon$ acting on $L^2$ functions. The singular vector of $\mathcal{L}_\epsilon$ that corresponds to the initial time (prior to application of $T$) is an eigenvector of $\mathcal{A}_\epsilon:=\mathcal{L}_\epsilon^*\mathcal{L}_\epsilon$; one pushes forward and then pulls back. 
Without any diffusion operators, this would read $\mathcal{A}_0=\P^*\P$; deterministically pushing forward and deterministically pulling back. Because $T$ is volume-preserving and invertible, $\P f=f\circ T^{-1}$ and $\P^*f=f\circ T$. Thus $\mathcal{A}_0$ is the identity operator, and one lacks compactness and a “second” eigenvalue. Without diffusion, there is no distinguished coherent set: all sets are equally coherent as they are merely distorted, not dispersed, by the deterministic dynamics over the finite time duration encoded in $T$. However, one can ask about higher order terms when $\epsilon$ is close to zero. We show that with the right scaling in $\epsilon$, one can make sense of an expression like $$\label{difflimit} \mathcal{B}f(x):=\lim_{\epsilon\to 0}\frac{(\mathcal{L}_\epsilon^*\mathcal{L}_\epsilon-I)f(x)}{\epsilon^\beta},$$ with $\mathcal{B}$ capturing the essential effects of tiny $\epsilon$-diffusion *without explicitly including* that diffusion. We slightly modify and generalise the diffusion operators from [@F13]. Let $q:\mathbb{R}^d\to\mathbb{R}^+$ be a nonnegative density with compact support, with mean the origin, and with covariance matrix $c\cdot I$, where $I$ is the $d\times d$ identity matrix. We scale $q$ to form $q_\epsilon(x)=q(x/\epsilon)/\epsilon^d$; $q_\epsilon$ will play the role of the previous $\alpha_\epsilon$, and obviously $q(x)=\mathbf{1}_{B_1(0)}/\ell(B_1(0))$ is one example of a density satisfying the above conditions. We redefine $\mathcal{D}_{M,\epsilon}f(x)=\int_M q_\epsilon(x-y)f(y)\ d\ell(y)$, $x\in \mathring{M}$, and $\mathcal{D}_{T(M),\epsilon}f(x)=\int_{T(M)} q_\epsilon(x-y)f(y)\ d\ell(y)$, $x\in \mathring{T(M)}$, where $\epsilon=\epsilon(x)$ is sufficiently small that both operators preserve integrals (i.e. $\int_M f\ d\ell=\int_M \mathcal{D}_{M,\epsilon}f\ d\ell$ and $\int_{T(M)} f\ d\ell=\int_{T(M)} \mathcal{D}_{T(M),\epsilon}f\ d\ell$).
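A one-dimensional sanity check of the scaling (our own, not from the paper): take $M$ the circle, $T=\mathrm{id}$, $f=\cos$, and $q$ uniform on $[-1,1]$, so $c=1/3$ (the variance of $q$). Each application of $\mathcal{D}_\epsilon$ multiplies the $\cos$ mode by $\sin(\epsilon)/\epsilon$, and $\mathcal{L}_\epsilon^*\mathcal{L}_\epsilon$ applies the smoothing four times in total, so $((\mathcal{L}_\epsilon^*\mathcal{L}_\epsilon-I)f)/\epsilon^2 = \big((\sin\epsilon/\epsilon)^4-1\big)/\epsilon^2\, f \to -(2/3)f$, which equals $c(\triangle+\P^*\triangle\P)f$ with $c=1/3$.

```python
import numpy as np

# T = identity on the circle, f = cos, q uniform on [-1, 1] (variance c = 1/3).
# Smoothing at scale eps multiplies the cos mode by sin(eps)/eps, and
# L_eps* L_eps applies the smoothing four times in total.
for eps in [0.2, 0.1, 0.05]:
    s = np.sin(eps) / eps
    ratio = (s**4 - 1) / eps**2    # ((L*L - I) f) / (eps^2 f)
    print(eps, ratio)              # tends to -2/3 as eps -> 0
```

The $\epsilon^2$ scaling is forced: any slower scaling sends the quotient to zero, any faster scaling diverges.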
In the sequel we use the definition $\mathcal{L}_\epsilon=\mathcal{D}_{T(M),\epsilon}\P\mathcal{D}_{M,\epsilon}$. With the additional assumptions $\int_M\int_M q_\epsilon(x-y)^2\ d\ell(y)d\ell(x)<\infty$ and $\int_{T(M)}\int_{T(M)} q_\epsilon(x-y)^2\ d\ell(y)d\ell(x)<\infty$, the operator $\mathcal{L}_\epsilon:L^2(M)\to L^2(T(M))$ is compact, as required in [@F13]. The following theorem shows that one can in fact take the scaling limit (\[difflimit\]) with $\beta=2$ and that $\mathcal{B}$ is a scalar multiple (twice the variance of the diffusion) of $\hat{\triangle}$. \[analcvgce\] Let $M$ be a connected, compact Riemannian manifold of vanishing curvature, $f:M \to\mathbb{R}$ be $C^3$, and $T:M\to T(M)$ be $C^3$ and volume-preserving. Let $q:\mathbb{R}^d\to\mathbb{R}^+$ be a nonnegative density with compact support, with mean the origin, and covariance matrix $c\cdot I$, where $I$ is the $d\times d$ identity matrix, and let $\mathcal{L}_\epsilon=\mathcal{D}_{T(M),\epsilon}\P\mathcal{D}_{M,\epsilon}$ be defined as above. One has $$\label{diffform1thm}\lim_{\epsilon\to 0} \frac{(\mathcal{L}_\epsilon^*\mathcal{L}_\epsilon-I)f(x)}{\epsilon^2}=c\cdot(\triangle+\P^*\triangle\P)f(x),$$ for each $x\in \mathring{M}$. The proof of Theorem \[analcvgce\] is in Appendix \[sec:analcvgce\]. The appearance of the Laplace operator is due to the fact that $\mathcal{D}_{M,\epsilon}f(x)\approx f(x)+ (c\epsilon^2/2)\triangle f(x)$ for small $\epsilon$. The symmetry conditions on $q$ in Theorem \[analcvgce\] (which are physically desirable as they model isotropic diffusion) cause the first order term in $\epsilon$ to vanish. \[unifexample\] If $q(x)=\mathbf{1}_{B_1(0)}/\ell(B_1(0))$ (uniform diffusion on a unit ball), then $c=1/3, 1/4, 1/5$ in dimensions $d=1, 2, 3$, respectively. Theorem \[analcvgce\] provides a theoretical link between the diffusion-based method [@F13] and the diffusion-free constructions based on the Laplace operator in the present paper.
The latter have very strong connections with geometry, evidenced by Theorems \[fflemma\] and \[cheegerdynthm\], and further reinforce the geometric intuition of [@F13]. In numerical computations, if the dynamical system is deterministic and the dynamics and the domain are smooth, the present construction may be advantageous because the spectrum of $\hat{\triangle}$ is well-separated, while the second-largest eigenvalue of $\mathcal{L}_\epsilon^*\mathcal{L}_\epsilon$ is likely to be separated from 1 by order $\epsilon^2$. If the dynamical system or the domain lacks smoothness, or if the dynamics has nontrivial diffusion from a model, both of which are common in real-world applications, then the approach of [@F13] may be more appropriate. The nontrivial diffusion from the model will in this case produce a larger spectral gap. Numerical experiments {#sect:numerics} ===================== In this section, we propose a method for finding coherent sets with low Cheeger ratios $\mathbf{h}^D(\Gamma):=(\ell_{d-1}(\Gamma)+\ell_{d-1}(T\Gamma))/(2\min\{\ell_d(M_1),\ell_d(M_2)\})$, where $\Gamma$ disconnects $M$ into $M_1,M_2$. We use the level sets of the first nontrivial eigenfunction of the dynamic Laplacian ${\hat{\triangle}}$, in analogy to the level sets of the first nontrivial eigenfunction of the Laplacian $\triangle$ in the static case described in Section \[staticsection\]. Our goal here is to demonstrate the efficacy of this approach, rather than to find the most accurate or efficient numerical implementation, which will be treated in a forthcoming study. To numerically estimate the Perron-Frobenius operator $\mathcal{P}$ we use Ulam’s method [@ulam]. For simplicity, we describe here the case of $T(M)=M$; the construction for $T(M)\neq M$ is completely analogous and can be found in [@FSM10; @FPG14].
We partition $M$ into a grid of $n$ small boxes $\{B_1,\ldots,B_n\}$ and compute a matrix $P$ of conditional transition probabilities between boxes under the action of $T$. Using a uniform intra-box grid of $Q$ points $z_{i,1},\ldots,z_{i,Q}\in B_i$, one computes $P_{ij}=\#\{z_{i,q}\in B_i: T(z_{i,q})\in B_j\}/\#\{z_{i,q}\in B_i\}$. The matrix $P$ is row-stochastic, and its $(i,j)^{\rm th}$ entry estimates the conditional probability of a randomly chosen point in $B_i$ entering $B_j$ under the application of $T$. The connection with $\mathcal{P}$ is as follows. Denote by $\pi_n:L^1(M)\to\sp\{\mathbf{1}_{B_1},\ldots,\mathbf{1}_{B_n}\}$ the projection onto characteristic functions on grid sets. One has $[\pi_n\mathcal{P}f]=\tilde{P}[\pi_nf]$, where $\tilde{P}$ is the transpose of $P$ and $[f]$ denotes the vector formed from the $n$ values taken by an $f\in \sp\{\mathbf{1}_{B_1},\ldots,\mathbf{1}_{B_n}\}$. The Laplace operator $\triangle$ in our two-dimensional examples is approximated using finite differences on a five-point stencil, calculated at the centre points of the grid boxes $\{B_1,\ldots,B_n\}$. We treat the cases where $M$ has boundary rather crudely, simply applying zero Neumann boundary conditions via a symmetric reflection in the finite-difference scheme, without directly enforcing (\[strongbc\]). For example, given an $N\times N'$ grid covering a rectangle, denote $f_{i,j}$ to be the value of $f$ at grid position $(x_i,y_j)$. At the right-hand boundary $f_{N,j}$, we replace the fictional extension $f_{N+1,j}$ in the usual five-point stencil $f_{N+1,j}+f_{N-1,j}+f_{N,j+1}+f_{N,j-1}-4f_{N,j}$ with a symmetric extension $f_{N+1,j}=f_{N-1,j}$ to obtain $2f_{N-1,j}+f_{N,j+1}+f_{N,j-1}-4f_{N,j}$. The resulting matrix is denoted $\triangle_n$. We note that the matrices $\tilde{P}$ and $\triangle_n$ are sparse and consequently $\hat{\triangle}_n=\frac{1}{2}(\triangle_n+\tilde{P}^\top\triangle_n\tilde{P})$ is also sparse.
The boxes $\{B_1,\ldots,B_n\}$ and matrix $P$ were constructed in Matlab using the GAIO software [@DFJ01]. The level sets of the eigenfunctions of $\hat{\triangle}_n$ are extracted automatically using Matlab’s `contour` function, with the default settings. The algorithm we use in the following two-dimensional case studies is described below. \[mainalg\] 1. Form the matrix $\tilde{P}$ and the discrete Laplacian $\triangle_n$ as described above, and combine to create $\hat{\triangle}_n$. 2. Calculate eigenvalues $\lambda_1>\lambda_2>\cdots$ and eigenvectors $u_1, u_2,\ldots$ of $\hat{\triangle}_n$. 3. Iteratively scan over values of $u_2$ from $\min_i u_{2,i}$ to $\max_i u_{2,i}$. For each value, extract a level curve $\Gamma$ in $M$ using Matlab’s `contour` function (this function returns a collection of points representing corners of a polygonal curve). To compute $T\Gamma$, 1. Either: map the points representing $\Gamma$ directly with $T$, 2. Or: compute $\tilde{P}u_2$ and extract $T\Gamma$ using Matlab’s `contour` function with the same level set value as for $\Gamma$. 4. Optimise $\mathbf{h}^D(\Gamma)$ by running over all curves $\Gamma$ formed from level sets of $u_2$ in Step 3. The lengths of $\Gamma$ and $T\Gamma$ are computed as the lengths of the polygonal curves comprising them. Report the $\Gamma$ and $T\Gamma$ that yield the lowest value of $\mathbf{h}^D(\Gamma)$. Linear shear on a cylinder -------------------------- Our first example is a linear shear on a cylinder $M=([0,4)/\!\sim)\times [0,1]$, where the $x$-coordinate is periodic. The map $T:M\circlearrowleft$ is the horizontal shear $T(x,y)=(x+y,y)$. We begin by exploring some naive guesses for an optimal $\Gamma$. Choosing $\Gamma$ to be $\{(x,1/2):0\le x<4\}$ separates the cylinder into upper and lower halves, and such a $\Gamma$ is preserved by $T$; the lengths of $\Gamma$ and $T(\Gamma)$ are both relatively long at 4 units each, and $\mathbf{h}^D(\Gamma)=(4+4)/(2\times 2)=2$.
On the other hand, choosing $\Gamma=\{(x,y):0\le y\le 1\}\cup \{(x+2,y):0\le y\le 1\}$ (for some fixed $x$) separates the cylinder into two rectangles. In this case, the length of $\Gamma$ is 2, while the length of $T(\Gamma)$ is $2\sqrt{2}$; $\mathbf{h}^D(\Gamma)=(2+2\sqrt{2})/(2\times 2)=(1+\sqrt{2})/2$, an improvement over our previous guess. The numerical computations are carried out using a $256\times 64$ grid of $2^{14}$ square boxes and within each box, $Q=1600$ test points are used to estimate the entries of $P$. The eigenvalues of $\hat{\triangle}_n$ are $-0.0271, -3.0865, -3.1368, -10.2103, -12.3406, -12.3769,\ldots$. The first eigenvalue is not exactly zero because the constant vector is not mapped exactly to a constant vector by $\tilde{P}$, due to finite point sampling in its construction. We use the eigenvector corresponding to $\lambda_2=-3.0865$ to estimate coherent sets. The results are shown in Figures \[shearvecs\] and \[shearcs\]. ![Shear map: The second eigenvector and its image under $\mathcal{P}$ are shown, in addition to the optimised level set at $-1.0457\times 10^{-4}$.[]{data-label="shearvecs"}](shearvecs.png "fig:"){width="20cm"}\ ![Shear map: The extracted coherent sets at the optimised level set at $-1.0457\times 10^{-4}$.[]{data-label="shearcs"}](shearcs.png "fig:"){width="20cm"}\ In this simple example, one can calculate exactly that $u_2(x,y)=\sin((x+y/2)\pi/2)$ is an eigenfunction of $\hat{\triangle}$, with eigenvalue $-5\pi^2/16$ (multiplicity 2). One may construct a one-parameter family of optimal coherent sets by sliding the sets in Figure \[shearcs\] (left) sideways, with corresponding movement of the sets in Figure \[shearcs\] (right). The boundaries of the members of this family are of the form $\Gamma=\{(x-y/2,y):0\le y\le 1\}\cup \{(x-y/2+2,y):0\le y\le 1\}$ (parameterised by $x\in [0,4)$).
As the lengths of $\Gamma$ and $T(\Gamma)$ are both $\sqrt{5}$, we can compute the Cheeger constant exactly using $\ell_d(M_1)=\ell_d(M_2)=2$ to obtain $\mathbf{h}^D(\Gamma)=(\sqrt{5}+\sqrt{5})/(2\times 2)=\sqrt{5}/2$. Thus, the second eigenfunction of $\hat{\triangle}$ is “balancing” the boundary lengths between the initial and final times in order to optimise the sum of these lengths. Note that $\sqrt{5}/2$ further improves over the Cheeger value $(1+\sqrt{2})/2$ of our second naive guess above. We note that the boundary condition (\[strongbc\]) is automatically satisfied by $u_2(x,y)=\sin((x+y/2)\pi/2)$. For example, the outward normal vector on the lower boundary of $M$ and $T(M)$ is $\mathbf{n}(x)\equiv [0, -1]^\top$, and $DT^{-1}(x,y)\equiv \begin{pmatrix}1 & -1 \\ 0 & 1 \end{pmatrix}$, so the condition (\[strongbc\]) is that $\nabla u_2(x,0)\cdot ([0, -1]+[1, -1])^\top=0$, which is clearly satisfied. The numerically computed eigenfunction in Figure \[shearvecs\] also appears to satisfy this condition, even with the relatively crude numerical scheme we have employed. In comparison with the numerics, Algorithm \[mainalg\] produces $\ell_d(M_1)=\ell_d(M_2)=2$ (to 4 significant figures), while the value for $\ell_{d-1}(\Gamma)+\ell_{d-1}(T\Gamma)$ is around $1\%$ too low because Matlab’s `contour` function does not extend all the way to the cylinder boundary, owing to the box discretisation. The bound for the Cheeger constant from Theorem \[cheegerdynthm\] is 3.5137, a consistent upper bound for the exact value of $\mathbf{h}^D$. The remaining eigenfunctions of $\hat{\triangle}$ provide good independent solutions to the dynamic boundary minimising problem. By Theorem \[dynlapthm\], the eigenfunctions of $\hat{\triangle}$ corresponding to distinct eigenvalues are mutually orthogonal.
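Both the exact eigenfunction claim (with the numerically observed eigenvalue $\lambda_2\approx-3.0865\approx-5\pi^2/16$) and the boundary condition just discussed can be spot-checked by finite differences. The sketch below (Python, for illustration) uses $\P u=u\circ T^{-1}$ and, since $T$ is volume-preserving, $\P^*v=v\circ T$:

```python
import numpy as np

PI = np.pi
u  = lambda x, y: np.sin((x + y / 2) * PI / 2)   # candidate eigenfunction u_2
pu = lambda x, y: np.sin((x - y / 2) * PI / 2)   # P u = u o T^{-1}, T^{-1}(x,y) = (x-y, y)

def laplacian(f, x, y, h=1e-4):
    """Five-point central-difference Laplacian."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h) - 4 * f(x, y)) / h ** 2

x, y = 1.3, 0.4
# dynamic Laplacian: (1/2)(Laplace u + P* Laplace P u), with (P* g)(x, y) = g(T(x, y))
lhs = 0.5 * (laplacian(u, x, y) + laplacian(pu, x + y, y))
rhs = -(5 * PI ** 2 / 16) * u(x, y)
print(lhs, rhs)                # agree up to finite-difference error

# boundary condition at y = 0: grad u_2 . ([0, -1] + [1, -1]) = 0
g = 1e-6
gx = (u(x + g, 0.0) - u(x - g, 0.0)) / (2 * g)
gy = (u(x, g) - u(x, -g)) / (2 * g)
print(gx * 1 + gy * (-2))      # ~ 0
```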
Thus if we extract coherent sets from different eigenfunctions using the level set approach, we obtain solutions that are “independent”, in the sense that one is not a small perturbation of another. In this example, one can exactly compute that $u(x,y)=\cos(\pi y)$ is an eigenfunction with eigenvalue $-\pi^2$ (unit multiplicity), and $u(x,y)=\sin((x+y/2)\pi)$ is an eigenfunction with eigenvalue $-5\pi^2/4$ (multiplicity 2). The eigenvalues $-5\pi^2/16, -\pi^2,$ and $-5\pi^2/4$ correspond (approximately) to the second through sixth numerically computed eigenvalues above. The numerically computed eigenfunctions are shown in Figure \[shear3efuncs\], and it is clear that zero level sets of these eigenfunctions provide a ranking of good independent solutions of decreasing quality (increasing total boundary length). ![Shear map: The second, fourth, and fifth eigenvectors of $\hat{\triangle}$ (top to bottom).[]{data-label="shear3efuncs"}](shear3efuncs.png "fig:"){width="12cm"}\ The standard map on the torus ----------------------------- Our second example is nonlinear dynamics on a flat boundaryless manifold: the so-called “standard map” $T:\mathbb{T}^2\circlearrowleft$ on the 2-torus is given by $T(x,y)=(x+y,y+8\sin(x+y))\pmod{2\pi}$. We begin by testing a naive guess for the optimal $\Gamma$, namely one of the continuum of solutions to the static isoperimetric problem illustrated in Figure \[torusfig\]: $\Gamma=(\{\pi/2\}\times [0,2\pi))\cup(\{3\pi/2\}\times [0,2\pi))$. Figure \[fig:standardbenchmark\] illustrates the action of $T$ on the partition defined by $\Gamma$; while the length of $\Gamma$ is short, the nonlinear action of $T$ rapidly lengthens the boundary, and the length of $T(\Gamma)$ is much greater. ![Standard map: The black set (left) arises as one of a continuum of solutions to the static isoperimetric problem (see Figure \[torusfig\]).
Its image (right) has a much longer boundary and consequently a high $\mathbf{h}^D$ value.[]{data-label="fig:standardbenchmark"}](standard_benchmark_small2.png "fig:"){width="15cm"}\ To find the optimal $\Gamma$, numerical computations are carried out using a $128\times 128$ grid of $2^{14}$ boxes and within each box, $Q=1600$ test points are used to estimate the entries of $P$. The eigenvalues of $\hat{\triangle}_n$ are $-0.1487,-1.6466,-1.6498,-6.0875,-6.0939,\ldots$. The first eigenvalue is not exactly zero because the constant vector is not mapped exactly to a constant vector by $\tilde{P}$ due to finite point sampling in its construction. We use the eigenvector corresponding to $\lambda_2=-1.6466$ to estimate coherent sets. The results are shown in Figures \[standardvecs\] and \[standardcs\]. ![Standard map: The second eigenvector and its image under $\mathcal{P}$ are shown, in addition to the optimised level set at $-2.4741\times 10^{-4}$.[]{data-label="standardvecs"}](standard_vecs.png "fig:"){width="15cm"}\ ![Standard map: The extracted coherent sets at the optimised level set at $-2.4741\times 10^{-4}$.[]{data-label="standardcs"}](standard_coherentsets.png "fig:"){width="15cm"}\ The operator $\hat{\triangle}$ clearly exploits the fact that the standard map acts affinely in certain directions in order to find boundaries that are initially small and remain small under one iterate of $T$ (in fact, the boundary length is reduced under $T$). One may construct a one-parameter family of optimal coherent sets by sliding the sets in Figure \[standardcs\](b) sideways, with corresponding movement of the sets in Figure \[standardcs\](a). The second eigenvalue of $\hat{\triangle}$ is therefore probably of multiplicity 2, and this is borne out by the closeness of the computed values for $\lambda_2$ and $\lambda_3$.
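The optimal boundaries can be described explicitly: each component of $\Gamma$ is (approximately) a diagonal circle $x+y=c$, which the standard map sends to the vertical circle $x=c$, since $T(x,y)=(c,y+8\sin c)$ on such a circle. A short Python sketch verifies the lengths entering the exact Cheeger computation:

```python
import numpy as np

TWO_PI = 2 * np.pi

def standard_map(x, y):
    """T(x, y) = (x + y, y + 8 sin(x + y)) mod 2*pi."""
    return (x + y) % TWO_PI, (y + 8 * np.sin(x + y)) % TWO_PI

def curve_length(pts):
    """Polygonal length on the torus, taking the shortest representative
    of each increment."""
    d = np.diff(pts, axis=0)
    d = (d + np.pi) % TWO_PI - np.pi
    return np.sqrt((d ** 2).sum(axis=1)).sum()

s = np.linspace(0.0, TWO_PI, 20001)
c = np.pi / 2
gamma = np.column_stack([(c - s) % TWO_PI, s])      # diagonal circle x + y = c
tx, ty = standard_map(gamma[:, 0], gamma[:, 1])
image = np.column_stack([tx, ty])                   # vertical circle x = c

len_gamma, len_image = curve_length(gamma), curve_length(image)
print(len_gamma / np.pi, len_image / np.pi)         # ~ 2*sqrt(2) and 2
# two such circles disconnect the torus into halves of area (2*pi)^2/2
h_D = (2 * len_gamma + 2 * len_image) / (2 * (TWO_PI ** 2 / 2))
print(h_D, (1 + np.sqrt(2)) / np.pi)                # ~ 0.7685
```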
In this case we can compute the Cheeger constant exactly because $\ell_d(M_1)=\ell_d(M_2)=(2\pi)^2/2$, $\ell_{d-1}(\Gamma)=4\sqrt{2}\pi$, and $\ell_{d-1}(T\Gamma)=4\pi$. Thus $\mathbf{h}^D=(1+\sqrt{2})/\pi\approx 0.7685$. In comparison with the numerics, one obtains $\ell_d(M_1)=\ell_d(M_2)=(2\pi)^2/2$ (to 4 significant figures), while the value for $\ell_{d-1}(\Gamma)+\ell_{d-1}(T\Gamma)$ is around $1\%$ too low because Matlab’s `contour` function does not extend all the way to the edge of the computational domain, owing to the box discretisation. Bounds for the Cheeger constant from (\[soboleveqn\]) and Theorem \[cheegerdynthm\] are 1.2278 and 2.5664, respectively, both consistent upper bounds for the exact value of $\mathbf{h}^D$. Transitory flow on the square ----------------------------- Our third example is a nonlinear time-dependent flow on the unit square introduced in [@meissmosovsky], defined by $\dot{x}=-\partial\Psi/\partial y, \dot{y}=\partial\Psi/\partial x,$ where $\Psi$ is the time-dependent stream function $\Psi(x,y,t)=(1-s(t))\sin(2\pi x)\sin(\pi y)+s(t)\sin(\pi x)\sin(2\pi y)$ and $s(t)=t^2(3-2t), 0\le t\le 1$. The flow is computed from $t=0$ to $t=1$. At time $t=0$, the instantaneous vector field comprises two separate rotating “gyres” on the left and right halves of the square. As $t$ increases from 0 to 1, the instantaneous vector field rotates 90 degrees to finally arrive at two rotating gyres in the upper and lower halves of the square. The numerical computations are carried out using a $128\times 128$ grid of $2^{14}$ boxes and within each box, $Q=1600$ test points are used to estimate the entries of $P$. The eigenvalues of $\hat{\triangle}_n$ are $-39.9269$, $-87.1430$, $-155.7652$, $-352.8106$, $-430.3017$, $-465.4415,\ldots$ The first eigenvalue is again not exactly zero, because the constant vector is not mapped exactly to a constant vector by $\tilde{P}$ due to finite point sampling in its construction.
We use the eigenvector corresponding to $\lambda_2=-87.1430$ to estimate coherent sets. The results are shown in Figures \[meissvecs\] and \[meisscs\]. ![Transitory flow: The second eigenvector and its image under $\mathcal{P}$ are shown, in addition to the optimised level set at $-6.4417\times 10^{-4}$.[]{data-label="meissvecs"}](meiss_vecs.png "fig:"){width="15cm"}\ ![Transitory flow: The extracted coherent sets at the optimised level set at $-6.4417\times 10^{-4}$.[]{data-label="meisscs"}](meiss_coherentsets.png "fig:"){width="15cm"}\ From the numerics, one obtains $\ell_d(M_1)=0.3091$, $\ell_{d-1}(\Gamma)=2.1606$, $\ell_{d-1}(T\Gamma)=2.9557$ and the value for $\mathbf{h}^D(\Gamma)\approx(2.1606+2.9557)/(2\times 0.3091)=8.2749$. Bounds for the Cheeger constant from (\[soboleveqn\]) and Theorem \[cheegerdynthm\] are 10.0533 and 18.6701, respectively, both consistent upper bounds. We compare these results with a “naive” solution, where one selects $\Gamma'$ to be the vertical separatrix that separates the two rotating elements in the instantaneous vector field at $t=0$; see Figure \[meissvi\] (left). This choice of $\Gamma'$ is one of two solutions to the static isoperimetric problem on the unit square, and corresponds to a static Cheeger value of $\mathbf{h}(\Gamma')=1/(1/2)=2.$ The image of $\Gamma'$ under $T$ is shown in Figure \[meissvi\] (right). ![Transitory flow: A vertical separatrix and its image from $t=0$ to $t=1$.[]{data-label="meissvi"}](meiss_verticalimage.png "fig:"){width="15cm"}\ While the length of $\Gamma'$ is only 1, the length of $T\Gamma'$ is much greater (approximately 8.3057), leading to a Cheeger value of $\mathbf{h}^D(\Gamma')\approx(1+8.3057)/(2\times 1/2)=9.3057$, larger than the value of $\mathbf{h}^D(\Gamma)=8.2749$ corresponding to the solution shown in Figures \[meissvecs\] and \[meisscs\]. 
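The stretching of the naive separatrix can be reproduced by direct integration. The sketch below (Python, RK4, assuming the divergence-free sign convention $\dot{x}=-\partial\Psi/\partial y$, $\dot{y}=\partial\Psi/\partial x$; the quoted length $\approx 8.3057$ was obtained with the paper's own discretisation, so the value here is only indicative) advects the line $\{1/2\}\times[0,1]$ from $t=0$ to $t=1$ and measures the polygonal length of its image:

```python
import numpy as np

def velocity(x, y, t):
    """Transitory double-gyre field: xdot = -dPsi/dy, ydot = dPsi/dx."""
    s = t ** 2 * (3 - 2 * t)
    psi_y = (1 - s) * np.pi * np.sin(2 * np.pi * x) * np.cos(np.pi * y) \
            + 2 * np.pi * s * np.sin(np.pi * x) * np.cos(2 * np.pi * y)
    psi_x = 2 * np.pi * (1 - s) * np.cos(2 * np.pi * x) * np.sin(np.pi * y) \
            + np.pi * s * np.cos(np.pi * x) * np.sin(2 * np.pi * y)
    return -psi_y, psi_x

def flow(x, y, t0=0.0, t1=1.0, steps=1000):
    """RK4 integration of the time-dependent flow map from t0 to t1."""
    h = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        k1 = velocity(x, y, t)
        k2 = velocity(x + h / 2 * k1[0], y + h / 2 * k1[1], t + h / 2)
        k3 = velocity(x + h / 2 * k2[0], y + h / 2 * k2[1], t + h / 2)
        k4 = velocity(x + h * k3[0], y + h * k3[1], t + h)
        x = x + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y = y + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += h
    return x, y

# image of the vertical separatrix x = 1/2 under the time-1 flow map
ys = np.linspace(0.0, 1.0, 2001)
fx, fy = flow(np.full_like(ys, 0.5), ys)
length = np.sqrt(np.diff(fx) ** 2 + np.diff(fy) ** 2).sum()
print(length)   # much greater than the initial length 1
```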
We see that the curve $\Gamma$ in Figure \[meissvecs\] trades off length at $t=0$ in order to have a relatively short length also at time $t=1$, in contrast to $\Gamma'$. Finally, Figure \[meisszooms\] shows fine detail of the curves $T(\Gamma)$; the pixellation visible is the underlying grid, which controls the resolution of the boundary curves. There is some shearing at the two locations shown, and this shearing is responsible for most of the increase of $\ell_{d-1}(T(\Gamma))$ over $\ell_{d-1}(\Gamma)$. ![Transitory flow: Zooms of the boundary at time $t=1$.[]{data-label="meisszooms"}](meiss_zooms.png "fig:"){width="15cm"}\ While the shearing is not tiny, particularly in the left-hand figure, the resolution is limited and most of the boundary is shear-free, so our selected coherent sets perform well in terms of reducing boundary length for both the initial set and its image. Moreover, if one considers applying diffusion at the scale of the box diameters, the “effective boundary” at this scale (responsible for possible diffusive ejection as discussed in §5) is increased only a little by the tight shearing. Conclusion ========== We have extended classical results from isoperimetric theory, concerned with identifying subsets of manifolds with least boundary size to volume ratios, to the situation where the manifolds are subjected to general nonlinear dynamics. We proved a dynamic version of (i) the Federer-Fleming Theorem, which tightly links geometric and functional characterisations of the fundamental isoperimetric problem, and (ii) the Cheeger inequality, which bounds the least boundary size to volume ratio by the first nontrivial eigenvalue of the Laplace operator on the manifold. We developed a new dynamic Laplace operator and used this operator to numerically identify subsets of manifolds that have small boundary size to volume ratios before, during, and after the application of nonlinear dynamics.
In nonlinear fluid flow, such sets characterise finite-time coherent sets, as their boundaries do not elongate and filament, and there is little exchange between the interior and exterior of these sets in the presence of small diffusion. We proved that the dynamic Laplace operator can also be obtained as a zero-diffusion limit of the existing probabilistic approach to identifying finite-time coherent sets [@F13], thus creating a strong formal link between probabilistic descriptions and geometric descriptions of Lagrangian coherent structures. Numerical experiments were carried out using a simple combination of Ulam’s method and a finite-difference scheme. Obvious extensions of the methodology include handling non-volume-preserving dynamics, nonuniform initial mass distributions, and manifolds of nonvanishing curvature, and work is in progress in these directions. Accurate and efficient numerical methods are also being pursued. An advantage of the present formulation over [@F13] in the pure advection setting is that there is more freedom in selecting an approximating function basis as the basis no longer needs to generate numerical diffusion, and various out-of-the-box numerical methods can be employed. Recent work [@FJ15] uses radial basis functions to estimate both $\mathcal{P}$ and $\triangle$ and has resulted in a more accurate approximation of the eigenspectrum and a significant reduction of the number of required Lagrangian trajectories, compared to the numerical techniques in the present paper. Radial basis functions are flexible enough to be able to handle irregularly-shaped domains as sometimes arise in applications. Acknowledgements ================ The author acknowledges feedback from Eric Kwok and Daniel Karrasch, which improved the manuscript, assistance from Oliver Junge regarding GAIO, and a discussion with Renato Feres. This research is supported by an Australian Research Council Future Fellowship and Discovery Project DP150100017. 
Proof of Theorem \[fflemma\] ============================ \[sl\] Let $A\in GL(d)$, and $v_1,\ldots, v_d$ be an orthonormal basis for $\mathbb{R}^d$. Let $U_1=\sp\{v_1,\ldots, v_k\}, U_2=\sp\{v_{k+1},\ldots,v_{d}\}$. Then $$\|A(v_1\wedge \cdots \wedge v_k)\|=|\det(A)|\cdot\|(A^{-1})^\top(v_{k+1}\wedge\cdots \wedge v_d)\|,$$ where $\|\cdot\|$ is the volume induced by the Gram determinant. The parallelepiped $A(v_1\wedge\cdots\wedge v_d)$ has volume $|\det(A)|$ by orthonormality of $v_1,\ldots,v_d$. We note that the space spanned by $(A^{-1})^\top v_{k+1},\ldots, (A^{-1})^\top v_d$ is orthogonal to the space spanned by $Av_1,\ldots A v_k$; indeed any element of one collection is orthogonal to any element of the other. The volume of the parallelepiped can therefore be written as $|\det(A)|=\|Av_1\wedge\cdots\wedge A v_k\|\cdot\|\Pr_{(A^{-1})^\top(U_2)}(Av_{k+1})\wedge\cdots\wedge \Pr_{(A^{-1})^\top(U_2)}(A v_d)\|$, where $\Pr_{(A^{-1})^\top(U_2)}$ denotes orthogonal projection along $A(U_1)$ onto $(A^{-1})^\top(U_2)$. Let $V$ be the $d\times (d-k)$ matrix with columns $v_{k+1},\ldots,v_d$, and let $W=(A^{-1})^\top V$. The projection matrix associated with $\Pr_{(A^{-1})^\top(U_2)}$ is $C=W(W^\top W)^{-1}W^\top$. We compute $\|\Pr_{(A^{-1})^\top(U_2)}(Av_{k+1})\wedge\cdots\wedge \Pr_{(A^{-1})^\top(U_2)}(A v_d)\|$ as $\det((CAV)^\top CAV)^{1/2}$.
$$\begin{aligned} \lefteqn{\det((CAV)^\top CAV)}\\ &=&\det\left(\left[V^\top A^\top (A^{-1})^\top V(V^\top A^{-1}(A^{-1})^\top V)^{-1}V^\top A^{-1}\right] \left[(A^{-1})^\top V(V^\top A^{-1}(A^{-1})^\top V)^{-1}V^\top A^{-1}AV\right]\right)\\ &=&\det\left(\left(V^\top A^{-1}(A^{-1})^\top V\right)^{-1}\left(V^\top A^{-1}(A^{-1})^\top V\right)\left(V^\top A^{-1}(A^{-1})^\top V\right)^{-1}\right)\quad\mbox{by orthogonality of $V$}\\ &=&\det\left((V^\top A^{-1}(A^{-1})^\top V)^{-1}\right)\\ &=&1/\|(A^{-1})^\top (v_{k+1}\wedge\cdots\wedge v_d)\|^2\end{aligned}$$ Thus, $\|\Pr_{(A^{-1})^\top(U_2)}(Av_{k+1})\wedge\cdots\wedge \Pr_{(A^{-1})^\top(U_2)}(A v_d)\|=1/\|(A^{-1})^\top(v_{k+1}\wedge\cdots \wedge v_d)\|$, and the result follows. The main thing to prove is the equality. We modify the arguments of Remark VI.2.3 and the proof of Theorem II.2.1 of [@chavelisoperimetric]. (a) We start by showing $\mathbf{s}^D\le \mathbf{h}^D$. We do this by creating a specific sequence of functions $f_\epsilon$ which, when substituted into (\[soboleveqn\]), achieve $\mathbf{h}^D$ in the limit; therefore $\mathbf{s}^D$ can potentially be lower still. Suppose we have a specific disconnection $\Gamma$, and define $\Gamma_\epsilon=\{x\in M: d(x,\Gamma)<\epsilon\}$, where $d(x,\Gamma)=\inf_{y\in \Gamma}\|x-y\|$, and because of the vanishing curvature we write the Riemannian distance between two points $x,y\in M$ as $\|x-y\|$. Define $$f_\epsilon=\left\{ \begin{array}{ll} 1, & \hbox{$x\in M_1\setminus\Gamma_\epsilon$;} \\ -1, & \hbox{$x\in M_2\setminus\Gamma_\epsilon;$}\\ (1/\epsilon)d(x,\Gamma),&\hbox{$x\in M_1\cap\Gamma_\epsilon$;}\\ -(1/\epsilon)d(x,\Gamma),&\hbox{$x\in M_2\cap\Gamma_\epsilon.$} \end{array} \right.$$ The function $f_\epsilon$ is Lipschitz, and by mollification on $\mathring{M}$ we can produce a sequence of $C^\infty$ functions $\phi_{j,\epsilon}$ such that $\| f_\epsilon- \phi_{j,\epsilon}\|_1\to 0$ and $\|\nabla f_\epsilon-\nabla \phi_{j,\epsilon}\|_1\to 0$ as $j\to \infty$ (see e.g.
Theorem I.3.3 [@chavelisoperimetric]). Now, $$\begin{aligned} \mathbf{s}^D&=&\inf_{f\in C^\infty} \frac{\|\nabla f\|_1+\|\nabla\P f\|_1}{2\inf_\alpha \|f-\alpha\|_{1}}\\ &\le&\frac{\|\nabla \phi_{j,\epsilon}\|_1+\|\nabla\P \phi_{j,\epsilon}\|_1}{2\inf_\alpha \|\phi_{j,\epsilon}-\alpha\|_{1}}\qquad\mbox{for each $j$}\\ &\le&\frac{\|\nabla f_\epsilon\|_1+\|\nabla \phi_{j,\epsilon}-\nabla f_\epsilon\|_1+\|\nabla\P f_\epsilon\|_1+\|\nabla\P \phi_{j,\epsilon}-\nabla\P f_\epsilon\|_1}{2\inf_\alpha \|f_\epsilon-\alpha\|_{1}-2\|f_\epsilon-\phi_{j,\epsilon}\|_{1}}\qquad\mbox{for each $j$}\\\end{aligned}$$ Thus, letting $j\to\infty$ we have for each $\epsilon>0$, $$\label{sDcompare1} \mathbf{s}^D\le \frac{\|\nabla f_\epsilon\|_1+\|\nabla\P f_\epsilon\|_1}{2\inf_\alpha \|f_\epsilon-\alpha\|_{1}}.$$ We now interpret these terms as $d$- and $(d-1)$-dimensional volumes. Note that $|\nabla f_\epsilon|$ is $1/\epsilon$ on $\Gamma_\epsilon$ and zero elsewhere. Thus $\lim_{\epsilon\to 0}\int_M |\nabla f_\epsilon|\ d\ell=\lim_{\epsilon\to 0}\ell(\Gamma_\epsilon)/\epsilon=2\ell_{d-1}(\Gamma)$. Now we concentrate on the term $\|\nabla \P f_\epsilon\|_1$. Let $x\in\Gamma_\epsilon\cap M_2$, and $z\in\Gamma$ be the closest point to $x$ (if there are several, choose one). Note $\nabla f_\epsilon(x)=\hat{n}(x)/\epsilon$ where $\hat{n}(x)=(z-x)/|z-x|$, which is normal to $\Gamma$ at $z$. Since $T$ is volume-preserving we note that $\P f_\epsilon$ is 1 on $T( M_1\setminus\Gamma_\epsilon)$ and $-1$ on $T( M_2\setminus\Gamma_\epsilon)$. Thus, $|\nabla (\P f_\epsilon)|=0$ on these regions. The value of $\P f_\epsilon$ on $T\Gamma_\epsilon$ must be computed. Let us first consider $T( M_2\cap\Gamma_\epsilon)$.
$$\begin{aligned} \nonumber\int_{T( M_2\cap\Gamma_\epsilon)}|\nabla (f_\epsilon\circ T^{-1})(x)|\ d\ell&=&\int_{T( M_2\cap\Gamma_\epsilon)}|\nabla f_\epsilon(T^{-1}x)^\top\cdot DT^{-1}(x)|\ d\ell\\ \nonumber&=&(1/\epsilon)\int_{T( M_2\cap\Gamma_\epsilon)}|\hat{n}(T^{-1}x)^\top\cdot DT^{-1}(x)|\ d\ell\\ \nonumber&=&(1/\epsilon)\int_{ M_2\cap\Gamma_\epsilon}|\hat{n}(x)^\top\cdot DT^{-1}(Tx)|\ d\ell\\ \label{final}&=&(1/\epsilon)\int_{ M_2\cap\Gamma_\epsilon}|(DT(x)^{-1})^\top\hat{n}(x)|\ d\ell\end{aligned}$$ Let $t_1(x),\ldots,t_{d-1}(x)$ be an orthonormal set of vectors spanning the orthogonal complement of $\hat{n}(x)$ in $\mathbb{R}^d$ (these vectors span the $d-1$-dimensional tangent space of $\Gamma$ at $z$). By Lemma \[sl\], one has $|(DT(x)^{-1})^\top\hat{n}(x)|=|DT(x)(t_1(x)\wedge\cdots \wedge t_{d-1}(x))|$, where $|\cdot|$ denotes the volume (one-dimensional and $d-1$-dimensional, respectively) induced by the Gram determinant. Thus, $$(\ref{final})=(1/\epsilon)\int_{ M_2\cap\Gamma_\epsilon}|DT(x)(t_1(x)\wedge\cdots \wedge t_{d-1}(x))|\ d\ell.$$ The integrand measures the local increase in the $d-1$-dimensional volume of linear spaces close to the tangent spaces of $\Gamma$, under the action of $T$ in an $\epsilon$-neighbourhood of $\Gamma$, and the above integral converges to $\ell_{d-1}(T\Gamma)$ as $\epsilon\to 0$. Similarly, $$\lim_{\epsilon\to 0}\int_{T( M_1\cap\Gamma_\epsilon)}|\nabla (f_\epsilon\circ T^{-1})(x)|\ d\ell=\ell_{d-1}(T\Gamma).$$ Thus, $$\label{sDcompare2} \lim_{\epsilon\to 0} (\|\nabla f_\epsilon\|_1+\|\nabla \P f_\epsilon\|_1)/2=\ell_{d-1}(\Gamma)+\ell_{d-1}(T\Gamma).$$ Now we turn to the denominator $\int_M |f_\epsilon-\alpha|\ d\ell$. Without loss, suppose that $\ell( M_1)\le \ell( M_2)$. 
$$\begin{aligned} \int_M |f_\epsilon-\alpha|\ d\ell&\ge& |1-\alpha|(\ell( M_1)-\ell(\Gamma_\epsilon))+|1+\alpha|(\ell( M_2)-\ell(\Gamma_\epsilon))\\ &\ge&(|1-\alpha|+|1+\alpha|)(\ell( M_1)-\ell(\Gamma_\epsilon))\\ &\ge&2(\ell( M_1)-\ell(\Gamma_\epsilon)),\end{aligned}$$ implying $\inf_\alpha \int_M |f_\epsilon-\alpha|\ d\ell\ge 2(\ell( M_1)-\ell(\Gamma_\epsilon))$ for each $\epsilon>0$. Taking the limit as $\epsilon\to 0$, we combine this with (\[sDcompare1\]) and (\[sDcompare2\]) to conclude $\mathbf{s}^D\le \mathbf{h}^D$. \(b) Now let $f\in C^\infty(M)$ and choose a constant $\beta$ so that $ M_1=\{f>\beta\}$ and $ M_2=\{f<\beta\}$ have equal volume. Such a choice of $\beta$ satisfies $\|f-\beta\|_1=\inf_\alpha \|f-\alpha\|_1$ (see Remark VI.2.2 p163 [@chavelisoperimetric]). For $t>0$ define $D_t=\{x\in M_1: f(x)>\beta+t\}$, and $\tilde{D}_t=\{x\in T(M_1): \P f(x)>\beta+t\}= \{x\in T(M_1): f\circ T^{-1}(x)>\beta+t\}$, thus $\tilde{D}_t=TD_t$. In what follows, we concentrate on $\tilde{D}_t$ and $\P f$, modifying the argument for $D_t$ and $f$ in [@chavelisoperimetric] p46. Firstly, using the co-area formula, Corollary I.3.1 [@chavelisoperimetric] with $f\equiv 1$, $\Phi=f-\beta$ (and then $\Phi=\mathcal{P}f-\beta$), one has $$\label{coarea1} \int_{ M_1}|\nabla (f -\beta)|\ d\ell+\int_{T(M_1)}|\nabla (\P f -\beta)|\ d\ell=\int_0^\infty (\ell_{d-1}(\partial{D}_t)+\ell_{d-1}(\partial\tilde{D}_t))\ dt.$$ Continuing, $$\begin{aligned} \label{altsobvol} (\ref{coarea1})&\ge& 2\mathbf{h}^D \int_0^\infty \ell(D_t)\ dt\quad\mbox{since $\ell(D_t)=\ell(\tilde{D}_t)\le \ell(M)/2$}\\ \label{hfinal}&=& 2\mathbf{h}^D \int_{ M_1} |f-\beta|\ d\ell,\end{aligned}$$ by a standard argument, see e.g. p.164 [@chavelisoperimetric]. Similarly, $\int_{ M_2}|\nabla (f -\beta)|\ d\ell+\int_{T(M_2)}|\nabla (\P f -\beta)|\ d\ell\ge 2\mathbf{h}^D \int_{ M_2} |f-\beta|\ d\ell$.
Thus, $$\begin{aligned} \label{altsob1}\int_{ M}|\nabla f|\ d\ell+\int_{T(M)}|\nabla \P f |\ d\ell&=&\int_{ M}|\nabla (f -\beta)|\ d\ell+\int_{T(M)}|\nabla (\P f -\beta)|\ d\ell\\ \nonumber&\ge& 2\mathbf{h}^D \int_{ M} |f-\beta|\ d\ell\\ \nonumber&\ge& 2\mathbf{h}^D \inf_\alpha \int_{ M} |f-\alpha|\ d\ell\end{aligned}$$ and $\mathbf{s}^D\ge \mathbf{h}^D$. \(c) In order to get the inequality $\mathbf{h}^D/2\le \hat{\mathbf{s}}^D$, we set $\beta=\bar{f}$ in the argument of part (b) above. Note that now possibly only one of $M_1$ or $M_2$ has volume less than or equal to $\ell(M)/2$, and WLOG suppose it is $M_1$. Then (\[hfinal\]) holds. We note that $\int_{T(M_2)}|\mathcal{P}f-\beta|\ d\ell=\int_{M_2} |f-\beta|\ d\ell\ge \int_{M_1} |f-\beta|\ d\ell=\int_{T(M_1)}|\mathcal{P}f-\beta|\ d\ell$. Thus, $$\begin{aligned} \frac{\int_M |\nabla(f-\beta)|\ d\ell +\int_{T(M)}|\nabla(\mathcal{P}f-\beta)|\ d\ell}{2\int_M|f-\beta|\ d\ell}&\ge&\frac{\int_{M_1} |\nabla(f-\beta)|\ d\ell +\int_{T(M_1)}|\nabla(\mathcal{P}f-\beta)|\ d\ell}{4\int_{M_1}|f-\beta|\ d\ell}\\ &\ge&\mathbf{h}^D/2,\end{aligned}$$ by (\[hfinal\]). Thus, $\mathbf{h}^D\le 2\hat{\mathbf{s}}^D$. All of the calculations concerning the map $T$ and the operator $\mathcal{P}$ in the proof of Theorem \[fflemma\] hold for each of the maps $T^{(i)}, i=1,\ldots,n$ or $T^{(t)}, t\in[0,\tau]$. These calculations are always put together linearly, and the proof proceeds exactly as in the proof of Theorem \[fflemma\]. Proof of Theorem \[cheegerdynthm\] ================================== The proof is a modification of the presentation of [@ledoux]; see also [@chavelisoperimetric] Theorem 3, Section IV.3. Let $g:M\to \mathbb{R}$ be positive and smooth; then $\P g$ is also positive and smooth. First, by the co-area formula applied separately to $g$ and $\P g$ (see e.g. Cor.
I.3.1 [@chaveleigenvalues]) and then the definition of $\mathbf{h}^D$ we have that $$\begin{aligned} \label{coareaeqn}\int_M |\nabla g|\ d\ell+\int_{T(M)}|\nabla(\P g)|\ d\ell&=&\int^\infty_0 \ell_{d-1}(\{g=t\})+\ell_{d-1}(\{\P g=t\})\ dt \\ &=&\int^\infty_0 \ell_{d-1}(\{g=t\})+\ell_{d-1}(T\{g=t\})\ dt \\ \label{heqn}&\ge& 2\mathbf{h}^D\int_0^\infty \min\{\ell(\{g\ge t\}),\ell(\{g<t\})\}\ dt\end{aligned}$$ Let $f:M\to \mathbb{R}$ be smooth and denote by $m$ the median of $f$; i.e. $\ell(f\ge m)\ge \ell(M)/2$ and $\ell(f\le m)\ge \ell(M)/2$. Set $f^+=\max\{f-m,0\}, f^-=\max\{-f+m,0\}$, so that $f-m=f^+-f^-$. Note that by volume-preservation, $m$ is also the median for $\P f$, and we similarly decompose $\P f-m=(\P f)^+-(\P f)^-$. Further, note that since $\P$ is positive and a composition operator, we have $((\P f)^+)^2=\P ((f^+)^2)$ and similarly for $f^-$. We apply (\[heqn\]) to $g=(f^+)^2$ and $g=(f^-)^2$. Note that for each $t>0$, $\ell(\{(f^+)^2\ge t\})\le \ell(M)/2$ and $\ell(\{(f^-)^2\ge t\})\le \ell(M)/2$. Now, $$\begin{aligned} \nonumber\lefteqn{\frac{1}{2}\left(\int_M |\nabla((f-m)^2)|\ d\ell+\int_{T(M)}|\nabla((\P f-m)^2)|\ d\ell\right)}\\ \nonumber&=&\frac{1}{2}\left(\int_M |\nabla((f^+)^2)|+|\nabla((f^-)^2)|\ d\ell+\int_{T(M)}|\nabla((\P f^+)^2)|+|\nabla((\P f^-)^2)|\ d\ell\right)\\ \nonumber&\ge&\mathbf{h}^{D}\int^\infty_0 \ell(\{(f^+)^2\ge t\})\ dt+\mathbf{h}^{D}\int^\infty_0 \ell(\{(f^-)^2\ge t\})\ dt\\ \nonumber&=&\mathbf{h}^{D}\int_M (f^+)^2\ d\ell+\mathbf{h}^{D}\int_M (f^-)^2\ d\ell\\ \label{eqn1}&=&\mathbf{h}^{D}\int_M (f-m)^2\ d\ell\end{aligned}$$ Further, $$\begin{aligned} \nonumber\frac{1}{2}\int_M|\nabla((f-m)^2)|\ d\ell&=&\int_M|(f-m)\cdot\nabla f|\ d\ell\\ \label{cauchyschwartz}&\le& \|f-m\|_2\cdot\|\nabla f\|_2,\end{aligned}$$ where $\|\cdot\|_2$ denotes the $L^2(\ell)$ norm.
Also analogously to (\[cauchyschwartz\]) we have $$\label{cauchyschwartza} \frac{1}{2}\int_{T(M)}|\nabla((\P f-m)^2)|\ d\ell\le \|\P f-m\|_2\cdot\|\nabla (\P f)\|_2=\|f-m\|_2\cdot\|\nabla (\P f)\|_2.$$ Thus, using (\[eqn1\])–(\[cauchyschwartza\]) and Cauchy-Schwarz, $$\begin{aligned} \nonumber (\mathbf{h}^{D})^2\|(f-m)\|_2^4&\le&\|f-m\|_2^2\left(\|\nabla f\|_2+\|\nabla(\P f)\|_2\right)^2\\ \label{penultimate}&\le&2\|f-m\|_2^2\left(\|\nabla f\|_2^2+\|\nabla(\P f)\|_2^2\right).\end{aligned}$$ Thus (\[penultimate\]) becomes $$\begin{aligned} \nonumber(\mathbf{h}^{D})^2&\le &2\frac{\int_M|\nabla f|^2\ d\ell+\int_{T(M)}|\nabla(\P f)|^2\ d\ell}{\int_M (f-m)^2\ d\ell}\\ \label{cheegerexpr}&\le & 2\frac{\int_M|\nabla f|^2\ d\ell+\int_{T(M)}|\nabla(\P f)|^2\ d\ell}{\int_M (f-(\int_M f\ d\ell))^2\ d\ell},\end{aligned}$$ since $\inf_\alpha\|f-\alpha\|_2$ is realised when $\alpha$ is the mean of $f$. As $f$ is arbitrary, we may minimise the RHS of (\[cheegerexpr\]) by inserting $f=u_2$, the eigenfunction of $\hat{\triangle}$ corresponding to the first nontrivial eigenvalue (with $u_2$ satisfying the boundary condition of Theorem \[cheegerdynthm\] if $M$ has nonempty boundary). Upon this insertion, the RHS of (\[cheegerexpr\]) takes the value $4|\lambda_2|$ by Part 3 of Theorem \[dynlapthm\]. All of the calculations concerning the map $T$ and the operator $\mathcal{P}$ in the proof of Theorem \[cheegerdynthm\] hold for each of the maps $T^{(i)}, i=1,\ldots,n$ or $T^{(t)}, t\in[0,\tau]$. These calculations are almost always put together linearly; the only exception is equation (\[penultimate\]), which we describe here in the continuous time case. $$\begin{aligned} \nonumber (\mathbf{h}^{D})^2\|(f-m)\|_2^4&\le&\frac{1}{\tau^2}\left(\int_0^\tau 2\|f-m\|_2\cdot\|\nabla \P^{(t)}f\|_2\ dt\right)^2\\ \label{penultimatet}&\le&\frac{4\|f-m\|_2^2}{\tau^2}\int_0^\tau \|\nabla \P^{(t)}f\|_2^2\ dt\cdot\int_0^\tau 1\ dt.\end{aligned}
$$ Thus $$(\mathbf{h}^{D})^2\le 4\frac{\frac{1}{\tau}\int_0^\tau \|\nabla \P^{(t)}f\|_2^2\ dt}{\|f-m\|^2_2}.$$ The proof proceeds exactly as in the proof of Theorem \[cheegerdynthm\], using Remark \[spectrummultistep\]. Proof of Theorem \[dynlapthm\] ============================== Existence of weak solutions and variational characterisation of eigenvalues {#sec:weakexist} --------------------------------------------------------------------------- Let $X=W^{1,2}(M)$, the Sobolev space of functions $u:M\to\mathbb{R}$ with square-integrable weak derivative. The space $X$ is a Hilbert space with the inner product $\langle u,v\rangle_X=\int_M \nabla u\cdot \nabla v+uv\ d\ell$. We will establish the existence of a set of weak solutions $u\in X$ to $$\label{weakeqn} (1/2)\left(\int_M \nabla v\cdot \nabla u\ d\ell+\int_{T(M)} \nabla (\P v)\cdot \nabla (\P u)\ d\ell\right)=-\lambda \int_M v\cdot u\ d\ell\qquad\mbox{for all $v\in W^{1,2}$}.$$ We require certain extremisation properties and therefore for $u\in X$, we define the functionals $F(u)=F_1(u)+F_2(u)$, where $F_1(u)=(1/2)\int_M |\nabla u|^2\ d\ell$, $F_2(u)=(1/2)\int_{T(M)}|\nabla(\P u)|^2\ d\ell$ and $G(u)=\int_M u^2\ d\ell-1$. We look for $u$ which minimizes $F(u)$ subject to $G(u)=0$ ($\|u\|_2=1$). In the following, we consider only $F_2(u)$ as the corresponding results for $F_1(u)$ follow immediately by setting $T$ to the identity map and $\P$ the identity operator. \[functionallemma\] 1. The functional $F_2:X\to\mathbb{R}$ is well-defined, 2. The derivative $F_2'(u)$ is linear and bounded (hence $F_2'(u)\in X^*$), 3. $F_2$ is differentiable, and 4. $u\mapsto F'_2(u)$ is continuous as a map from $X$ to $X^*$. - $2F_2(u)=\int_{T(M)} \|\nabla(\P u)\|^2\ d\ell=\int_{T(M)} \|\nabla(u\circ T^{-1})\|^2\ d\ell=\int_{T(M)} \|(DT^{-1})^\top\cdot(\nabla u)\circ T^{-1}\|^2\ d\ell$. 
By compactness of $M$ and the fact that $T$ is a $C^\infty$ diffeomorphism onto $T(M)$, one may find a $C<\infty$ such that the previous expression is bounded above by $C\int\|\nabla u\|^2\ d\ell\le C\|u\|_X^2<\infty$. Thus, $F_2:X\to \mathbb{R}$ is well-defined. - $$\begin{aligned} F'_2(u)v&=&\lim_{h\to 0}\frac{F_2(u+hv)-F_2(u)}{h}\\ &=&\lim_{h\to 0}\frac{\int_{T(M)}|\nabla(\P u)+h\nabla (\P v)|^2\ d\ell-\int_{T(M)}|\nabla (\P u)|^2\ d\ell}{2h}\\ &=&\lim_{h\to 0}\frac{\int_{T(M)}|\nabla(\P u)|^2+2h(\nabla(\P u)\cdot\nabla (\P v))+h^2|\nabla (\P v)|^2 -|\nabla (\P u)|^2\ d\ell}{2h}\\ &=&\lim_{h\to 0}\frac{\int_{T(M)} 2h(\nabla(\P u)\cdot\nabla (\P v))+h^2|\nabla (\P v)|^2\ d\ell}{2h}\\ &=&\int_{T(M)} \nabla(\P u)\cdot\nabla (\P v)\ d\ell.\end{aligned}$$ $F'_2(u)$ is clearly linear in $v$ and bounded because $\int_{T(M)} \nabla(\P u)\cdot\nabla (\P v)\ d\ell\le \|\nabla(\P u)\|_2\cdot\|\nabla (\P v)\|_2\le C\|u\|_X\|v\|_X$, where $C$ is the constant from part (i); thus $F_2'(u)\in X^*$. - $F_2$ is differentiable since $$\begin{aligned} |F_2(u+v)-F_2(u)-F_2'(u)v|&=&\left|(1/2)\int_{T(M)}|\nabla(\P v)|^2\ d\ell\right|\\ &\le& (C/2)\int_M |\nabla v|^2\ d\ell\\ &\le& (C/2)\|v\|_X^2\to 0\mbox{ as }\|v\|_X\to 0.\end{aligned}$$ - Finally, let $u,v,w\in X$, then $$\begin{aligned} |(F'_2(u)-F'_2(v))w|&=&\left|\int_{T(M)} (\nabla(\P u)-\nabla(\P v))\cdot\nabla(\P w)\ d\ell\right|\\ &\le& \|\nabla(\P(u-v))\|_2\cdot\|\nabla(\P w)\|_2\\ &\le& C\|\nabla(u-v)\|_2\cdot\|\nabla w\|_2\\ &\le& C\|u-v\|_X\cdot\|w\|_X.\end{aligned}$$ Thus $\|F'_2(u)-F'_2(v)\|_{X^*}=\sup_{\|w\|_X=1}|(F'_2(u)-F'_2(v))w|\to 0$ as $\|u-v\|_X\to 0$, and $F'_2:X\to X^*$ is continuous. \[lemmaattain\] $F$ attains its minimum on the constraint set $\mathcal{C}=\{u\in X: G(u)=0\}$. Let $I=\inf_{u\in X}\{F(u):G(u)=0\}$. Select a sequence $u_j\in\mathcal{C}$ such that $F(u_j)\to I$ and $F(u_j)\le I+1$ for all $j\ge 0$. By the Poincaré inequality (e.g.
p163 [@mcowen]), the norm $\|\cdot\|_X$ is equivalent to $\|\nabla(\cdot)\|_2$, so using the form of $F$, the $u_j$ are uniformly bounded in norm in $X$. By standard arguments using Rellich compactness (e.g. Thm. 8.4.2 [@jost]), one can find a subsequence $u_{j_k}\in X$ and a function $\bar{u}\in X$ such that $u_{j_k}\to \bar{u}$ in $L^2(M)$. For the weak derivatives, we develop a Cauchy sequence. We first demonstrate that there is a $c>0$ such that $\|\nabla ( u_{j_k}-u_{j_l})\|\le (1/c)\|\nabla(\mathcal{P}u_{j_k}-\mathcal{P}u_{j_l})\|$. $$\begin{aligned} \|\nabla(\mathcal{P}u_{j_k}-\mathcal{P}u_{j_l})\|^2&=&\int_{T(M)}|\nabla(u_{j_k}\circ T^{-1})-\nabla(u_{j_l}\circ T^{-1})|^2\ d\ell\\ &=&\int_{T(M)}|\nabla(u_{j_k}-u_{j_l})\circ T^{-1}\cdot DT^{-1}(x)|^2\ d\ell\\ &=&\int_{M}|\nabla(u_{j_k}-u_{j_l})\cdot DT^{-1}(Tx)|^2\ d\ell\\ &\ge& c^2\int_M |\nabla(u_{j_k}-u_{j_l})|^2\ d\ell,\end{aligned}$$ where $c=\inf_{x\in M, v\in\mathbb{R}^d} |DT^{-1}(Tx)\cdot v|/|v|>0$ as $T$ is a $C^\infty$ diffeomorphism and $M$ is compact. Now, $$\begin{aligned} \nonumber\lefteqn{(1+c^2)\|\nabla u_{j_k}-\nabla u_{j_l}\|^2}\\ \nonumber&\le& \|\nabla u_{j_k}-\nabla u_{j_l}\|^2+\|\nabla (\P u_{j_k})-\nabla (\P u_{j_l})\|^2\\ \nonumber&=&2\left(\|\nabla u_{j_k}\|^2+\|\nabla (\P u_{j_k})\|^2\right)+2\left(\|\nabla u_{j_l}\|^2+\|\nabla (\P u_{j_l})\|^2\right)-\left(\|\nabla(u_{j_k}+u_{j_l})\|^2+\|\nabla(\P(u_{j_k}+u_{j_l}))\|^2\right)\\ \label{expansion} &\le&2\left(\|\nabla u_{j_k}\|^2+\|\nabla (\P u_{j_k})\|^2\right)+2\left(\|\nabla u_{j_l}\|^2+\|\nabla (\P u_{j_l})\|^2\right)-2I\|(u_{j_k}+u_{j_l})\|^2\end{aligned}$$ By construction, as $j_k,j_l\to\infty$, the first two terms of (\[expansion\]) both converge to $4I$, and the final term of (\[expansion\]) converges to $-8I$, since $u_{j_k},u_{j_l}\to\bar{u}$ in $L^2$ and $\|\bar{u}\|_2=1$ force $\|u_{j_k}+u_{j_l}\|^2\to 4$. Thus the RHS of (\[expansion\]) converges to 0, the $u_{j_k}$ form a Cauchy sequence in $W^{1,2}$, and converge to $\bar{u}$ in $W^{1,2}$.
Because $F$ and $G$ are both[^4] $C^1$ functionals on $X$, we can use the method of Lagrange multipliers, and by Lemma \[lemmaattain\] the minimiser $\bar{u}$ satisfies the Euler-Lagrange equation $F'(\bar{u})v=\mu G'(\bar{u})v$ for some $\mu\in\mathbb{R}$ and all $v\in X$. By the constructions in the proof of Lemma \[functionallemma\] (ii), this equation is $$\label{weakeigeneqn} \int_M (\nabla \bar{u}\cdot\nabla v)\ d\ell+\int_{T(M)}(\nabla(\P \bar{u})\cdot \nabla(\P v))\ d\ell=-2\mu \int_M \bar{u}v\ d\ell\quad\mbox{for all $v\in X$}.$$ If we set $\lambda=\mu$, we have exactly the statement (\[weakeqn\]). In fact, putting $v=\bar{u}$ we get $I=F(\bar{u})=(1/2)\int_M |\nabla \bar{u}|^2 +|\nabla(\P \bar{u})|^2\ d\ell=-\lambda\int_M \bar{u}^2\ d\ell=-\lambda$. Thus we could have defined $\lambda$ by the Rayleigh quotient $$-\lambda=\inf_{u\in X}\left(\frac{\int_M |\nabla u|^2\ d\ell +\int_{T(M)}|\nabla(\P u)|^2\ d\ell}{2\int_M u^2\ d\ell}\right).$$ From now on we denote $(\lambda,\bar{u})$ by $(\lambda_1,u_1)$ and search for other solution pairs. Note that $-\lambda_1=F(u_1)\ge 0$, but that $u_1\equiv 1$ yields $F(u_1)=0$ by volume-preservation of $T$; thus $\lambda_1=0$. \[orthogonality\] If $(\lambda_1,u_1)$ and $(\lambda_2,u_2)$ are solution pairs for (\[weakeqn\]) with $\lambda_1\neq\lambda_2$ then $\langle u_1,u_2\rangle =0$; that is $u_1,u_2$ are orthogonal in the $L^2$ inner product. Put $u=u_1, v=u_2$ in (\[weakeqn\]), then put $u=u_2, v=u_1$ in (\[weakeqn\]) and subtract the two equations to get $(\lambda_1-\lambda_2)\int_M u_1u_2\ d\ell=0$ One may now follow the standard procedure for constructing a sequence of eigenvalues $0=\lambda_1> \lambda_2>\cdots$ (e.g. 
[@mcowen] pp212–213) by first defining $$\label{rayleigh2} -\lambda_2=\inf_{u\in X,\langle u,u_1\rangle=0}\frac{\int_M |\nabla u|^2\ d\ell+\int_{T(M)}|\nabla(\P u)|^2\ d\ell}{2\int_M u^2\ d\ell},$$ and then inductively adding the constraint $\langle u,u_2\rangle=0$ in the next infimum to define $\lambda_3$ and so on. The functions $u_1,u_2,\ldots$ constructed in this way are scaled to form an orthonormal set in $L^2$. \[infinitelemma\] The sequence $\lambda_n$ tends to $-\infty$ and for each $n$, the dimension of the solution space is finite. Let $u_n,u_m$ be solutions to (\[rayleigh2\]) corresponding to $\lambda_n,\lambda_m$ obtained inductively as above. By (\[weakeqn\]), setting $u=u_n, v=u_m$, we have $$\label{infinite} (1/2)\left(\int_M \nabla u_n\cdot\nabla u_m\ d\ell+\int_{T(M)}\nabla(\P u_n)\cdot\nabla(\P u_m)\ d\ell\right)=-\lambda_n\int u_nu_m\ d\ell.$$ By Lemma \[orthogonality\] we see that the RHS of (\[infinite\]) is 0 if $n\neq m$ and $-\lambda_n$ if $n=m$. Thus, $$\|u_n\|_X^2=\int_M|\nabla u_n|^2 + u_n^2\ d\ell\le 2\cdot(1/2)\left(\int_M|\nabla u_n|^2 \ d\ell+ \int_{T(M)}|\nabla (\P u_n)|^2\ d\ell\right)+1=-2\lambda_n+1.$$ By a standard argument (e.g. [@mcowen] p213), suppose for contradiction that $\lambda_n\nrightarrow-\infty$; then $\|u_n\|_X$ is uniformly bounded in $n$ and by Rellich compactness one finds a Cauchy subsequence in $L^2$ and derives a contradiction with the $L^2$ pairwise orthogonality of members of this subsequence. Because $\lambda_n\to-\infty$, each $\lambda_n$ occurs only finitely many times and therefore the solution space for each $\lambda_n$ is finite-dimensional. Ellipticity and strong solutions {#sec:ellipticity} -------------------------------- We have established the existence of a set of solutions $(u_1,\lambda_1), (u_2,\lambda_2),\ldots,$ of (\[weakeqn\]) and now wish to make a link between the solutions of (\[weakeqn\]) and solutions of the strong formulation (\[strongeqn\]). The property of ellipticity of $\hat{\triangle}$ will be crucial.
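The variational eigenproblem above is straightforward to explore numerically. The following is a minimal sketch (an illustration of ours, not from the paper): on a discretized 2-torus with the volume-preserving shear $T(x,y)=(x+y,y)$, the operator $\P$ is a permutation of grid values, the discretized $\hat{\triangle}=\frac{1}{2}(\triangle+\P^*\triangle\P)$ is a symmetric matrix, and its largest eigenvalue is $\lambda_1=0$ (constant eigenfunction) while $\lambda_2\approx-1$ is attained by functions of $y$ alone, the direction the shear leaves unmixed. The grid size and the choice of map are arbitrary.

```python
import numpy as np

N = 32                       # grid points per dimension on the torus [0, 2*pi)^2
h = 2*np.pi / N

# 1D periodic second-difference Laplacian, then the flat 2D Laplacian via Kronecker sums
L1 = (np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1) - 2*np.eye(N)) / h**2
Lap = np.kron(L1, np.eye(N)) + np.kron(np.eye(N), L1)

# volume-preserving shear T(x,y) = (x+y, y), so (P f)(x,y) = f(T^{-1}(x,y)) = f(x-y, y)
idx = np.arange(N*N)
i, j = idx // N, idx % N
P = np.zeros((N*N, N*N))
P[idx, ((i - j) % N)*N + j] = 1.0        # permutation matrix representing P

H = 0.5 * (Lap + P.T @ Lap @ P)          # discretized dynamic Laplacian (symmetric)
lam = np.linalg.eigvalsh(H)              # eigenvalues in ascending order

print(lam[-1], lam[-2])                  # ~ 0 (constant mode) and ~ -1
```

In this example the $\lambda_2$-eigenspace consists of the modes $\cos y,\sin y$, whereas modes varying in $x$ are penalised twice, once by $\triangle$ and once by $\P^*\triangle\P$, which is the mechanism by which $\hat{\triangle}$ detects sets that remain coherent under $T$.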
Suppose we have a second order differential operator $$\label{2ndorderop} L=\sum_{i,j=1}^d a_{ij}(x)\frac{\partial^2}{\partial x_i\partial x_j}+\sum_{i=1}^d b_i(x)\frac{\partial}{\partial x_i}+c(x),\quad x\in \mathring{M},$$ with coefficient functions $a_{ij}, b_i, c$ that are $C^\infty$ on $M$. We will say that $a_{ij}$ satisfies uniform ellipticity if there is a $\gamma>0$ such that $$\label{ellipticity} \sum_{i,j=1}^d a_{ij}(x)\xi_i\xi_j\ge \gamma|\xi|^2,\qquad\mbox{for all $x\in M$, $\xi\in \mathbb{R}^d$}.$$ \[ellipticlemma\] $\hat{\triangle}$ satisfies uniform ellipticity. We have $L=\hat{\triangle}=\triangle+\P^*\triangle\P$. Clearly $\triangle$ is elliptic, with $a_{ij}(x)=\delta_{ij}$ for all $x\in M$. We show that $\P^*\triangle\P$ is elliptic and the result follows. In fact, since $\P^*$ is merely composition with $T$, we only need show that $\triangle\P$ is elliptic. Let $x_1,\ldots,x_d$ denote the standard coordinate system on $M$ in which $\triangle f=\sum_{i=1}^d \frac{\partial^2 f}{\partial x_i^2}$. Let $f:M\to\mathbb{R}$ be $C^2$; denoting $T^{-1}(x_1,\ldots,x_d)=(T^{-1}_1(x_1,\ldots,x_d),\ldots,T^{-1}_d(x_1,\ldots,x_d))$, and applying the chain rule for partial differentiation, one has $$\begin{aligned} \lefteqn{ \frac{\partial^2}{\partial x_i\partial x_j}\left(f\circ T^{-1}\right)}\\ &=&\left[\frac{\partial T^{-1}_1}{\partial x_j},\ldots,\frac{\partial T^{-1}_d}{\partial x_j}\right]\cdot H(f)\circ T^{-1}\cdot\left[\frac{\partial T^{-1}_1}{\partial x_i},\ldots,\frac{\partial T^{-1}_d}{\partial x_i}\right]^\top+(\nabla f)\circ T^{-1}\cdot\left[\frac{\partial^2 T^{-1}_1}{\partial x_i\partial x_j},\ldots,\frac{\partial^2 T^{-1}_d}{\partial x_i\partial x_j}\right]^\top,\end{aligned}$$ where $H(f)$ is the Hessian for $f$.
As $\triangle(\P f)=\sum_{i=1}^d \frac{\partial^2}{\partial x_i^2}(f\circ T^{-1})$, the representation of $\triangle (\P f)$ in the form (\[2ndorderop\]) is $$\label{2ndorderopspecific} \sum_{i,k,l=1}^d \frac{\partial T^{-1}_k}{\partial x_i}\frac{\partial T^{-1}_l}{\partial x_i}[H(f)]_{kl}\circ T^{-1}+\sum_{i,k=1}^d \frac{\partial^2 T^{-1}_k}{\partial x_i^2}[\nabla f]_k\circ T^{-1}.$$ Since $T$ is a $C^\infty$ diffeomorphism, both $a_{kl}=\sum_{j=1}^d\frac{\partial T^{-1}_k}{\partial x_j}\frac{\partial T^{-1}_l}{\partial x_j}$ and $b_i=\sum_{j=1}^d\frac{\partial^2 T^{-1}_i}{\partial x_j^2}$ are $C^\infty$ and bounded as functions of $x\in T(M)$ for each $i,k,l=1,\ldots,d$. In order to show that $a_{kl}$ is uniformly elliptic, we note that $a_{kl}(x)$ is the inner product of the $k^{th}$ and $l^{th}$ rows of the Jacobian matrix $D(T^{-1})(x)$. Thus, for each $x\in T(M)$, the matrix $[a_{kl}(x)]$ is a Gram matrix, formed from the $d$ (linearly independent, because $T$ is a diffeomorphism) rows of $D(T^{-1})(x)$, denoted $r_1,\ldots,r_d$. The matrix $[a_{kl}(x)]$ is symmetric and therefore positive definite if and only if all of its eigenvalues are positive (e.g. Theorem 7.2.1 [@hornjohnson]). Using the structure of the Gram matrix, we know $[a_{kl}(x)]$ is positive semidefinite and is nonsingular if and only if $\{r_1,\ldots,r_d\}$ are linearly independent (e.g. Theorem 7.2.10 [@hornjohnson]). Thus, $[a_{kl}(x)]$ is positive definite and satisfies $\sum_{k,l=1}^d a_{kl}(x)\xi_k\xi_l\ge \gamma(x)|\xi|^2$ for some $\gamma(x)>0$ for every $x\in M$. By compactness of $M$, we can set $\gamma=\min_{x\in M}\gamma(x)$. We show that solutions of (\[weakeqn\]) are in $C^\infty(M)$ and satisfy (\[strongeqn\])–(\[strongbc\]).
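The positive definiteness of the Gram matrix $[a_{kl}]$ established in Lemma \[ellipticlemma\] can be checked numerically for a concrete volume-preserving map; the sketch below (illustrative, not from the paper) uses a Chirikov-type nonlinear shear on the torus, for which $\det DT=1$, and samples the smallest eigenvalue of $DT^{-1}(DT^{-1})^\top$ over a grid. The shear strength and grid resolution are arbitrary choices.

```python
import numpy as np

a = 0.9   # shear strength (illustrative); T(x,y) = (x + y, y + a*sin(x + y)), det DT = 1

def DTinv(x, y):
    c = a*np.cos(x + y)
    DT = np.array([[1.0, 1.0],
                   [c,   1.0 + c]])       # Jacobian of T at (x, y); determinant is 1
    return np.linalg.inv(DT)

# a_kl is the Gram matrix of the rows of DT^{-1}; uniform ellipticity requires its
# smallest eigenvalue to be bounded away from 0 over the (compact) domain
gamma = min(
    np.linalg.eigvalsh(DTinv(x, y) @ DTinv(x, y).T).min()
    for x in np.linspace(0, 2*np.pi, 60)
    for y in np.linspace(0, 2*np.pi, 60)
)
print(0 < gamma < 1)   # True: positive definite at every sampled point
```

The smallest eigenvalue of the Gram matrix equals $1/\sigma_{\max}(DT)^2$, so boundedness of $DT$ on the compact domain is exactly what keeps $\gamma$ away from zero, mirroring the compactness argument in the proof.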
By the arguments used to obtain Corollary 8.4.1 [@jost] or the discussion on p.214 [@GilbargTrudinger], provided that our second-order differential operator $\hat{\triangle}$ is uniformly elliptic and that $T$ and $M$ are $C^\infty$, one has that a solution $u$ of (\[weakeqn\]) is in fact $C^\infty$ on $M$. Following the arguments in §8.4–8.5 [@jost], because $u\in C^\infty(M)$ we can apply Green’s first identity to the first term on the LHS of (\[weakeqn\]) to obtain: $$\label{green1} \int_M \nabla v\cdot \nabla u\ d\ell=-\int_M v (\triangle u)\ d\ell+\int_{\partial M} v (\nabla u)\cdot \mathbf{n}\ d\ell_{d-1}.$$ Now, the second term on the LHS of (\[weakeqn\]): denoting $\tilde{\mathbf{n}}(y)$ to be the outward unit normal at $y\in T(M)$, and using Green’s first identity and change of variables under $T$: $$\begin{aligned} \int_{T(M)} \nabla (\P v)\cdot \nabla (\P u)\ d\ell&=&-\int_{T(M)} \P v\cdot \triangle \P u\ d\ell+\int_{\partial T(M)} \P v \left[\nabla (\P u)\right]\cdot \tilde{\mathbf{n}}\ d\ell_{d-1}\\ &=&-\int_M v\cdot(\P^*\triangle\P)u\ d\ell+\int_{\partial T(M)} \P v \left[\nabla (\P u)\right]\cdot \tilde{\mathbf{n}}\ d\ell_{d-1}.\end{aligned}$$ We now manipulate the second term above using the chain rule for differentiation, change of variables under $T$, and volume-preservation of $T$: $$\begin{aligned} \nonumber \int_{\partial T(M)} \P v \left[\nabla (\P u)\right]\cdot \tilde{\mathbf{n}}\ d\ell_{d-1}&=&\int_{\partial T(M)} v\circ T^{-1} \left[\left((\nabla u)\circ T^{-1}\right)^\top\cdot DT^{-1}\right]\cdot \tilde{\mathbf{n}}\ d\ell_{d-1}\\ \label{halfwayeqn}&=&\int_{\partial M} v \left[\left(\nabla u\right)^\top \cdot (DT^{-1}\circ T)\right]\cdot (\tilde{\mathbf{n}}\circ T)|\det DT_{|\mathcal{T}_x(\partial M)}|\ d\ell_{d-1}.\end{aligned}$$ Note that $\tilde{\mathbf{n}}(Tx)=(DT(x)^{-1})^\top \mathbf{n}(x)/\|(DT(x)^{-1})^\top \mathbf{n}(x)\|$. By Lemma \[sl\], $|\det DT_{|\mathcal{T}_x(\partial M)}|=\|(DT(x)^{-1})^\top \mathbf{n}(x)\|$.
Thus, $$(\ref{halfwayeqn})=\int_{\partial M} v \left[\left(\nabla u\right)^\top \cdot(DT)^{-1} \right]\cdot \left((DT)^{-1}\right)^\top\cdot\mathbf{n}\ d\ell_{d-1},$$ and we arrive at the transformed version of (\[weakeqn\]): $$\begin{aligned} \label{weakeqnsmooth1} \lefteqn{(1/2)\int_M v\cdot(\triangle+\P^*\triangle\P)u\ d\ell=\lambda \int_M v\cdot u\ d\ell}\\ \label{weakeqnsmooth2}&&\quad+(1/2)\int_{\partial M} v (\nabla u)\cdot \mathbf{n}\ d\ell_{d-1}+(1/2)\int_{\partial M} v \left[\left(\nabla u\right)^\top \cdot(DT)^{-1} \right]\cdot \left((DT)^{-1}\right)^\top\cdot\mathbf{n}\ d\ell_{d-1}, \forall v\in W^{1,2}.\end{aligned}$$ By considering all $v\in W^{1,2}_0$ (the closure of $C^\infty_0(\mathring{M})\cap W^{1,2}(M)$ with respect to $\|\cdot\|_{W^{1,2}(M)}$) in (\[weakeqnsmooth1\])–(\[weakeqnsmooth2\]) we see that $(1/2)(\triangle+\P^*\triangle\P)u=\lambda u$ on $\mathring{M}$ (let $f=(1/2)(\triangle+\P^*\triangle\P)u-\lambda u$ and WLOG suppose $f(x)>0$ at some $x\in\mathring{M}$. Necessarily, $f(x)>0$ for $x$ in some open $O\subset \mathring{M}$ and consider a bump function $v$ positive in a ball contained in $O$ and zero outside $O$ to derive a contradiction). Now (\[weakeqnsmooth2\]) implies that $$\int_{\partial M} v (\nabla u)\cdot \mathbf{n}\ d\ell_{d-1}+\int_{\partial M} v \left[\nabla u \cdot (DT)^{-1}\cdot \left((DT)^{-1}\right)^\top\right]\cdot\mathbf{n}\ d\ell_{d-1}=0\quad \forall v\in W^{1,2}.$$ Again using the fact that $u\in C^\infty(M)$, by an argument on $\partial M$ analogous to the parenthetical argument above, it follows that (\[strongbc\]) holds. Proof of Theorem \[objthm\] --------------------------- A key component to the proof of Theorem \[objthm\] is the fact that the Laplacian commutes with isometries. 
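This commutation can be sanity-checked numerically before proceeding. The sketch below (an illustration of ours, not part of the proof) compares a finite-difference Laplacian of $f\circ\Phi$ against $(\triangle f)\circ\Phi$ for an affine isometry $\Phi(x)=Qx+b$ with $Q$ a rotation; the test function, angle, and step size are arbitrary.

```python
import numpy as np

theta = 0.73                               # arbitrary rotation angle
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
b = np.array([1.0, -2.0])
Phi = lambda p: Q @ p + b                  # affine isometry Phi(x) = Qx + b

f = lambda p: np.exp(np.sin(p[0])) * np.cos(2.0*p[1])   # arbitrary smooth test function

def lap(g, p, h=1e-3):
    # 5-point central-difference approximation to the flat Laplacian of g at p
    e1, e2 = np.array([h, 0.0]), np.array([0.0, h])
    return (g(p+e1) + g(p-e1) + g(p+e2) + g(p-e2) - 4.0*g(p)) / h**2

p0 = np.array([0.4, 1.1])
lhs = lap(lambda p: f(Phi(p)), p0)   # Laplacian of (f∘Phi), evaluated at p0
rhs = lap(f, Phi(p0))                # Laplacian of f, evaluated at Phi(p0)
print(abs(lhs - rhs))                # small: the two agree up to O(h^2)
```

Replacing $Q$ by a non-orthogonal matrix breaks the agreement, since the cross terms in the chain rule no longer cancel; orthogonality of $Q$ is exactly what the trace computation in the lemma uses.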
\[isometrylemma\] $$\label{isometryeqn} \triangle(f\circ \Phi_{t_0})=(\triangle f)\circ\Phi_{t_0}.$$ By (\[2ndorderopspecific\]), and the fact that $\Phi_{t_0}$ is affine, the LHS of (\[isometryeqn\]) is $$\begin{aligned} \sum_{i,k,l=1}^d Q(t_0)_{ki}\cdot[H(f)]_{kl}\circ \Phi_{t_0} \cdot Q(t_0)_{li} &=&\operatorname{Tr}(Q(t_0)^\top\cdot [H(f)]\circ \Phi_{t_0}\cdot Q(t_0))\\ &=&\operatorname{Tr}([H(f)]\circ \Phi_{t_0}),\end{aligned}$$ using orthogonality of $Q(t_0)$, to obtain exactly the RHS of (\[isometryeqn\]). We first show equivalence of (\[origevalprob\]) and (\[transfevalprob\]). $$\begin{aligned} \dot{\hat{\triangle}}&=&\triangle+\P^*_{\dot{T}}\triangle \P_{\dot{T}}\\ &=&\triangle+(\P_{\Phi_{t_1}}\circ \P\circ \P_{\Phi_{t_0}}^{-1})^*\triangle(\P_{\Phi_{t_1}}\circ \P\circ \P_{\Phi_{t_0}}^{-1})\\ &=&\triangle+\P_{\Phi_{t_0}}\P^*\triangle\P \P_{\Phi_{t_0}}^{-1},\end{aligned}$$ where we have used Lemma \[isometrylemma\] and the fact that $\P_{\Phi_{t_0}}$ and $\P_{\Phi_{t_1}}$ are unitary operators. Now, if $\hat{\triangle}f=\lambda f$, $$\begin{aligned} \dot{\hat{\triangle}}\P_{\Phi_{t_0}}f&=&(\triangle \P_{\Phi_{t_0}}+\P_{\Phi_{t_0}}\P^*\triangle\P)f\\ &=&\P_{\Phi_{t_0}}(\triangle +\P^*\triangle\P)f\\ &=&\lambda\P_{\Phi_{t_0}}f,\end{aligned}$$ as required, where we have again used Lemma \[isometrylemma\]. We now demonstrate equivalence of (\[strongorigbc\]) and (\[strongtransfbc\]). We calculate each of the components of (\[strongtransfbc\]). First, note that $\nabla (\P_{\Phi_{t_0}}f)(x)=\left[(\nabla f)\circ\Phi_{t_0}^{-1}(x)\right]^\top\cdot Q(t_0)^{-1}$. Next, $D\dot{T}(x)=Q(t_1)\cdot DT(\Phi^{-1}_{t_0}(x))\cdot Q(t_0)^{-1}$, so $D\dot{T}(x)^{-1}=Q(t_0)\cdot DT(\Phi^{-1}_{t_0}(x))^{-1}\cdot Q(t_1)^{-1}$ and $\left(D\dot{T}(x)^{-1}\right)^\top=Q(t_1)\cdot \left(DT(\Phi^{-1}_{t_0}(x))^{-1}\right)^\top\cdot Q(t_0)^{-1}$. 
Thus, for $x\in \partial(\Phi_{t_0}(M))$, $$\begin{aligned} \lefteqn{(\ref{strongtransfbc})}\\ &=&(\nabla f)\circ\Phi_{t_0}^{-1}(x)\cdot Q(t_0)^{-1}\cdot[Q(t_0)\mathbf{n}(\Phi_{t_0}^{-1}x)+Q(t_0)\cdot DT(\Phi_{t_0}^{-1}x)^{-1}\cdot \left(DT(\Phi_{t_0}^{-1}x)^{-1}\right)^\top \mathbf{n}(\Phi_{t_0}^{-1}x)]\\ &=&(\nabla f)\circ\Phi_{t_0}^{-1}(x)\cdot[\mathbf{n}(\Phi_{t_0}^{-1}x)+DT(\Phi_{t_0}^{-1}x)^{-1}\cdot \left(DT(\Phi_{t_0}^{-1}x)^{-1}\right)^\top \mathbf{n}(\Phi_{t_0}^{-1}x)],\end{aligned}$$ which is exactly condition (\[strongorigbc\]) evaluated at $\Phi_{t_0}^{-1}x\in \partial M$. Proof of Theorem \[analcvgce\] {#sec:analcvgce} ============================== The crux of the proof of Theorem \[analcvgce\] is linking the diffusion operators $\mathcal{D}_{M,\epsilon}, \mathcal{D}_{T(M),\epsilon}$ with $\triangle$. This linking is possible because of the symmetry of the smoothing kernel $q_\epsilon$. At small scales (small $\epsilon$), $\mathcal{D}_{M,\epsilon}, \mathcal{D}_{T(M),\epsilon}$ are close to the identity operator, and because of the spatial symmetry of $q_\epsilon$, the next dominant term depends on second order derivatives. \[nddifflemma\] Let $M$ be a connected, compact Riemannian manifold of vanishing curvature, and $f:M \to\mathbb{R}$ be $C^3$. Let $q:M\to\mathbb{R}^+$ be a nonnegative density with compact support, with mean the origin, and covariance matrix $c\cdot I$, where $I$ is the $d\times d$ identity matrix. We scale $q$ to form $q_\epsilon(x)=q(x/\epsilon)/\epsilon^d$, and define $\D f(x)=\int_M q_\epsilon(x-y)f(y)\ d\ell(y)$. Then $$\label{diffform1}\lim_{\epsilon\to 0} \frac{(\D-I)f(x)}{\epsilon^2}=(c/2)\triangle f(x),$$ for each $x\in \mathring{M}$. Using Taylor’s theorem, we expand $f$ in an $\epsilon$-ball about $x$: $$\label{taylor} f(x+y)=\sum_{|\alpha|=0}^2\frac{D^\alpha f(x)}{\alpha !} y^\alpha+\sum_{|\alpha|=3} R_\alpha(x+y)y^\alpha,$$ where $R_\alpha(x+y)$ is the remainder term.
The notation used is $\alpha=(\alpha_1,\ldots,\alpha_d)$, where $\alpha_i$ is the number of derivatives in coordinate direction $x_i$; $|\alpha|$ denotes the sum of the elements of $\alpha$ and $\alpha !=\prod_{i=1}^d \alpha_i !$. If $\partial M\neq \emptyset$, then put $\epsilon'=\inf_{z\in\partial M}\dist(x,z)$. In the following we assume that $\epsilon\le \epsilon'$. $$\begin{aligned} \nonumber\D f(x)&=&\int_M q_\epsilon(x-y)f(y)\ d\ell(y)\\ \nonumber&=&\int_M q_\epsilon(y)f(x+y)\ d\ell(y) \\ \label{Reqn}&=&\int_M q_\epsilon(y)\left[\sum_{|\alpha|=0}^2\frac{D^\alpha f(x)}{\alpha !} y^\alpha+\sum_{|\alpha|=3} R_\alpha(x+y)y^\alpha\right]\ d\ell(y)\end{aligned}$$ The terms in the first sum of order $|\alpha|=0, 1, 2$, respectively, are: $f(x)$, 0, $\sum_{|\alpha|=2} m_\alpha(q_\epsilon) D^\alpha f(x)/\alpha !$, where $m_\alpha(q_\epsilon)=\int_M q_\epsilon(y) y^\alpha\ d\ell(y)$ denotes the tensor of $\alpha$-moments of $q_\epsilon$. The order 1 term is zero because of the assumption on the mean (the $|\alpha|=1$ moments) of $q$. We note that $m_\alpha(q_\epsilon)=\int_M q_\epsilon(y) y^\alpha\ d\ell(y)=\int_M q(y/\epsilon)/\epsilon^d y^\alpha\ d\ell(y)=\int_M q(z)\epsilon^{|\alpha|}z^\alpha\ d\ell(z)=\epsilon^{|\alpha|}m_\alpha(q)$. We can further simplify the order 2 term as $\sum_{|\alpha|=2} m_\alpha(q_\epsilon) D^\alpha f(x)/\alpha !=(c/2)\epsilon^2\triangle f(x)$ using the fact that the covariance matrix of $q$ is $c\cdot I$.
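As a numerical aside (not part of the proof), the limit in Lemma \[nddifflemma\] can be checked in the simplest flat case: the circle with a uniform kernel $q$ on $[-1,1]$, for which $c=1/3$, so that $(\D f-f)/\epsilon^2\to f''/6$. The test function and evaluation point below are arbitrary.

```python
import numpy as np

# kernel q = uniform density on [-1, 1]: mean zero, covariance c = 1/3, so
# (D_eps f)(x) is the average of f over [x - eps, x + eps], and
# (D_eps f - f)/eps^2 should approach (c/2) f'' = f''/6 as eps -> 0
c = 1.0/3.0
f, fpp = np.sin, lambda x: -np.sin(x)

def D_eps(x, eps, n=4000):
    # midpoint-rule average of f over [x - eps, x + eps]
    y = x - eps + (np.arange(n) + 0.5) * (2.0*eps/n)
    return f(y).mean()

x0 = 0.7
errs = [abs((D_eps(x0, eps) - f(x0))/eps**2 - (c/2)*fpp(x0))
        for eps in (0.2, 0.1, 0.05)]
print(errs)   # decreasing toward 0 as eps shrinks
```

For this kernel the odd third moments vanish, so (consistent with Remark \[4thorderremark\]) the error in fact shrinks like $\epsilon^2$ rather than $\epsilon$, which is visible in the factor-of-four drop between successive entries of `errs`.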
Rearranging (\[Reqn\]), we have $$\D f(x)-f(x)-(c/2)\epsilon^2\triangle f(x) =\int_M q_\epsilon(y)\sum_{|\alpha|=3}R_\alpha(x+y)y^\alpha\ d\ell(y).$$ For $|\alpha|=3$ and $y\in B_\epsilon(0)$, one has $|R_\alpha(x+y)|\le (1/\alpha !)\max_{|\alpha|=3}\max_{z\in B_\epsilon(x)}|D^\alpha f(z)|=:C(x)$, so $$|\D f(x)-f(x)-(c/2)\epsilon^2\triangle f(x)|\le C(x)\epsilon^3 \sum_{|\alpha|=3} m_\alpha(q).$$ By Lemma \[nddifflemma\], we have $\D f(x)=f(x)+\epsilon^2(c/2)\triangle f(x)+O(\epsilon^3)$, where $O(\epsilon^3)$ means the error term is of the form $\epsilon^3 \mathcal{R}(x)$. Therefore $\P \D f(x)=\P f(x)+\epsilon^2(c/2)\P \triangle f(x)+O(\epsilon^3)$. Since $T$ is $C^3$ we may again apply Lemma \[nddifflemma\] to obtain $$\begin{aligned} \D\P\D f(x)&=&\D\left(\P f(x)+\epsilon^2(c/2)\P \triangle f(x)+O(\epsilon^3)\right)\\ &=&\P f(x)+\epsilon^2((c/2)\P\triangle f(x)+(c/2)\triangle\P f(x))+O(\epsilon^3)\end{aligned}$$ Finally, $$\begin{aligned} \D^*\P^*\D^*\D\P\D f(x)&=&\P^*\P f(x)+\epsilon^2(c/2)(\P^*\P\triangle+\P^*\triangle \P+\P^*\triangle \P+\triangle\P^*\P)f(x)+O(\epsilon^3)\\ &=&f(x)+\epsilon^2 c(\triangle+\P^*\triangle \P)f(x)+O(\epsilon^3)\end{aligned}$$ \[4thorderremark\] It is reasonably natural for $q$ to have additional symmetry, so that $m_\alpha(q)=0$ for $|\alpha|=3$. In this case, the error term in the above proof is $O(\epsilon^4)$. For example, the uniform diffusion on a unit ball considered in Example \[unifexample\] has $m_\alpha(q)=0$ for all $|\alpha|$ odd. [10]{} M.R. Allshouse and J.-L. Thiffeault. Detecting coherent structures using braids. Physica D, 241(2):95–105, 2012. L. Arnold. Random Dynamical Systems. Springer, 1998. S. Balasuriya and C.K.R.T. Jones. Diffusive draining and growth of eddies. , 8(4/5):241–251, 1999. I. Chavel. Eigenvalues in Riemannian Geometry, volume 115 of [*Pure and Applied Mathematics*]{}. Academic Press, Orlando, 1984. I. Chavel. Isoperimetric Inequalities, volume 145 of [*Cambridge Tracts in Mathematics*]{}. Cambridge University Press, 2001. J. Cheeger.
A lower bound for the smallest eigenvalue of the [L]{}aplacian. In [*Problems in Analysis*]{}, pages 195–199. Princeton University Press, 1970. M. Dellnitz, G. Froyland, and O. Junge. The algorithms behind [GAIO]{}: Set oriented numerical methods for dynamical systems. In B. Fiedler, editor, [*Ergodic Theory, Analysis, and Efficient Simulation of Dynamical Systems*]{}, pages 145–174. Springer, 2001. H. Federer and W.H. Fleming. Normal and integral currents. Annals of Mathematics, 72:458–520, 1960. G. Froyland. An analytic framework for identifying finite-time coherent sets in time-dependent dynamical systems. Physica D, 250:1–19, 2013. G. Froyland, C. Horenkamp, V. Rossi, N. Santitissadeekorn, and A. Sen Gupta. Three-dimensional characterization and tracking of an [A]{}gulhas [R]{}ing. Ocean Modelling, 52–53:69–75, 2012. G. Froyland, C. Horenkamp, V. Rossi, and E. van Sebille. Studying an Agulhas ring’s long-term pathway and decay with finite-time coherent sets. To appear in *Chaos*. G. Froyland and O. Junge. On fast computation of finite-time coherent sets using radial basis functions. *Chaos*, 25:087409, 2015. G. Froyland and K. Padberg-Gehle. Almost-invariant and finite-time coherent sets: directionality, duration, and diffusion. In Wael Bahsoun, Chris Bose, and Gary Froyland, editors, [*Ergodic Theory, Open Dynamics, and Coherent Structures*]{}, volume 70 of [ *Proceedings in Mathematics and Statistics*]{}, chapter 9, pages 171–216. Springer, 2014. G. Froyland, N. Santitissadeekorn, and A. Monahan. Transport in time-dependent dynamical systems: Finite-time coherent sets. Chaos, 20:043116, 2010. D. Gilbarg and N.S. Trudinger. Elliptic Partial Differential Equations of Second Order. Classics in Mathematics. Springer, Berlin, revised third printing of second edition, 2001. D.S. Grebenkov and B.-T. Nguyen. Geometrical structure of [L]{}aplacian eigenfunctions. SIAM Review, 55(4):601–667, 2014. G. Haller. A variational theory of hyperbolic [L]{}agrangian coherent structures. Physica D, 240:574–598, 2011. G. Haller and F.J. Beron-Vera. Geodesic theory of transport barriers in two-dimensional flows. Physica D, 241:1680–1702, 2012. R.A. Horn and C.R. Johnson. Matrix Analysis. Cambridge University Press, 1985. J. Jost. Partial Differential Equations.
Number 214 in Graduate Texts in Mathematics. Springer, New York, 2002. M. Ledoux. A simple analytic proof of an inequality by [P. Buser]{}. Proceedings of the American Mathematical Society, 121(3):951–959, 1994. T. Ma and E. Bollt. Differential geometry perspective of shape coherence and curvature evolution by finite-time nonhyperbolic splitting. SIAM Journal on Applied Dynamical Systems, 13(3):1106–1136, 2014. J.A.J. Madrid and A.M. Mancho. Distinguished trajectories in time dependent vector fields. Chaos, 19(1):013111, 2009. V.G. Mazya. Classes of domains and imbedding theorems for function spaces. In [*Soviet Math. Dokl*]{}, volume 1, pages 882–885, 1960. R.C. McOwen. Partial Differential Equations: Methods and Applications. Prentice-Hall, Upper Saddle River, 1996. I. Mezi[ć]{}, S. Loire, V.A. Fonoberov, and P. Hogan. A new mixing diagnostic and gulf oil spill movement. Science, 330(6003):486–489, 2010. B.A. Mosovsky and J.D. Meiss. Transport in transitory dynamical systems. SIAM Journal on Applied Dynamical Systems, 10(1):35–65, 2011. R. Mundel, E. Fredj, H. Gildor, and V. Rom-Kedar. New [L]{}agrangian diagnostics for characterizing fluid flow mixing. Physics of Fluids, 26:126602, 2014. J. M. Ottino. The Kinematics of Mixing: Stretching, Chaos, and Transport. Cambridge University Press, 1989. R.T. Pierrehumbert and H. Yang. Global chaotic mixing on isentropic surfaces. Journal of the Atmospheric Sciences, 50(15):2462–2480, 1993. C. Truesdell and W. Noll. The Non-Linear Field Theories of Mechanics. Springer, 3rd edition, 2004. S. Ulam. Problems in Modern Mathematics. Interscience, 1964. [^1]: Frequently, *coherent structures* are co-dimension 1 objects, while full-dimensional objects are called *coherent sets*. [^2]: To weaken the smoothness assumption on $F(x,\cdot)$, but still obtain smooth flow maps, see [@arnold] Appendix B.3. [^3]: In [@F13] there is an additional normalisation term required for non-Lebesgue reference measures and non-Lebesgue-preserving $T$. In the present paper, as $T$ is volume preserving and volume is our reference measure, we eschew this normalisation term. [^4]: Showing $G$ is a $C^1$ functional follows identically to the arguments above for $F$.
--- abstract: 'We consider 3+1-dimensional fluids with $U(1)^3$ anomalies. We use Ward identities to constrain low-momentum Euclidean correlation functions and obtain differential equations that relate two and three-point functions. The solution to those equations yields, among other things, the chiral magnetic conductivity. We then compute zero-frequency functions in hydrodynamics and show that the consistency of the hydrodynamic theory also fixes the anomaly-induced conductivities.' author: - Kristan Jensen bibliography: - 'anomalous\_refs.bib' title: 'Triangle Anomalies, Thermodynamics, and Hydrodynamics' --- *Introduction.*—The chiral magnetic effect (CME) [@Kharzeev:2007tn; @Fukushima:2008xe; @Kharzeev:2009pj] and chiral vortical effect (CVE) [@Son:2009tf; @Kharzeev:2010gr] represent remarkable implications of anomalies in quantum field theory for macroscopic transport. In a fluid with a $U(1)^3$ anomaly, there will be currents directed along a magnetic field $B^{\mu}$ or the vorticity $w^{\mu}$. The former is the CME and the latter the CVE. Both effects probe the violation of parity and charge conjugation and so may be measured by studying spatial and charge asymmetries [@Kharzeev:2010gr] in off-axis heavy ion collisions at RHIC or the LHC. It has also been shown that the hydrodynamic description of a relativistic fluid with anomalies must be modified [@Son:2009tf] from its textbook treatment [@LL6]. There are additional transport coefficients, describing the response of the currents to magnetic fields and vorticity. These coefficients are fixed by demanding a local version of the second law of thermodynamics, namely the existence of an entropy current whose divergence is positive semi-definite [@LL6]. Despite the absence of a clear understanding of this constraint in field theory, the result for the anomaly-induced transport matches results at weak [@Fukushima:2008xe] and strong coupling [@Banerjee:2008th; @Erdmenger:2008rm]. 
This yields a hydrodynamic description of the CME and CVE. We endeavor to reproduce the constraints on anomaly-induced transport without recourse to the entropy current. We will do so by employing properties of equilibrium quantum field theory. In particular, we will study theories which have finite static correlation length, $\lambda$. This has the practical implication that real-space, zero-frequency correlation functions fall off exponentially at long distance. The Fourier transformed functions are then analytic at zero frequency and small momenta $k\lambda\ll 1$. Together with some Ward identities, we relate the $U(1)^3$ anomaly coefficient to transport. The resulting calculation makes a few points clear. First, only the small $B^{\mu}$ and small $w^{\mu}$ parts of the CME and CVE are fixed by the anomaly. Second, anomaly-constrained transport follows from a covariant and gauge-invariant description of thermodynamics. Finally, we see that, among other things, the entropy current enforces properties of the equilibrium theory (see  and ) that are not manifest in the hydrodynamic description. *Note:* We explore a number of questions related to equilibrium thermodynamics and the role of the entropy current in a companion paper [@Jensen:2012jh]. While this work was in progress we were also made aware of [@Banerjee:2012iz] which has overlap with the content of this Letter. *Correlation functions at $T,\mu\neq 0$.*—In real-time finite temperature field theory there are different definitions for correlation functions. We employ the closed-time-path (CTP) formalism, in which time is extended to a closed contour which extends from $t_1 :(-\infty,\infty)$, and then doubles back as $t_2:(\infty,-\infty)$. See [@Wang:1998wg] for a review.
The only ingredient we need is that there is a CTP generating functional $W_{CTP}$ which depends on two sets of sources: $J_r = (J_1+J_2)/2$, $J_a = J_1-J_2$, where the $J_i$ are independent sources introduced on each segment of the time contour. The fully retarded functions, which are those computed in hydrodynamics [@Moore:2010bu], are obtained by varying $W_{CTP}$ with respect to the $r$ sources and one $a$ source. The $r$ source couples to $a$-type operators, whereas the $a$ source couples to $r$-type operators. Insertions of $U(1)$ currents, labeled by the index $A$, and stress tensor densities come from varying $W_{CTP}$ with respect to background fields as $$\langle \J^{\mu,A}_{r/a}(x)\rangle = \frac{\delta W_{CTP}}{\delta A^{A}_{\mu,a/r}(x)}, \,\,\,\, \langle \T^{\mu\nu}_{r/a}(x)\rangle = \frac{2\,\delta W_{CTP}}{\delta g_{\mu\nu,a/r}(x)}.$$ The currents and stress tensor are related to the densities by $\J^{\mu,A}=\sqrt{-g}J^{\mu,A}, \T^{\mu\nu}=\sqrt{-g}T^{\mu\nu}$. In this work we consider $ra$ and $raa$ functions in momentum space. We notate these as $$\begin{aligned} G_R^{I,J}(q)& = \langle \mathcal{O}^I_r(q)\mathcal{O}^J_a(-q)\rangle, \\ G_R^{I,J,K}(q_1,q_2)& = \langle \mathcal{O}^I_r(q_1)\mathcal{O}^J_a(q_2)\mathcal{O}^K_a(-q_1-q_2)\rangle,\end{aligned}$$ where coordinate space functions are related to their momentum space cousins by $$f(x_1,..,x_n) = \int d^dq_1..d^dq_n e^{i (q_1\cdot x_1 + .. q_n\cdot x_n)}f(q_1,..,q_n).$$ We further specialize to zero frequency $ra..a$ functions, i.e. we take $q_i=(0,\k_i)$. These (and all other CTP functions) are proportional to the corresponding Euclidean time-ordered functions obtained by variation of the Euclidean generating functional $W_E$ [@Evans:1991ky; @*Evans:1995ug]. Because of this we henceforth neglect the $r$ and $a$ indices. Additionally, since bosonic derivatives of $W_E$ commute, the CTP functions of bosonic fields satisfy $$\begin{aligned} \label{E:recip} \langle .. \mathcal{O}^I(\k_1).. 
\mathcal{O}^J(\k_2) .. \rangle = \langle .. \mathcal{O}^J(\k_2) .. \mathcal{O}^I(\k_1).. \rangle.\end{aligned}$$ The diffeomorphism invariance and anomalous variation of $W_{CTP}$ lead to the Ward identities [@Herzog:2009xv], $$\begin{aligned} \label{E:Jcons} \nabla_{\mu} \J^{\mu,A} &= -\frac{1}{24}C^{ABC}\epsilon^{\mu\nu\rho\sigma}F_{\mu\nu}^BF_{\rho\sigma}^C, \\ \label{E:Tcons} \nabla_{\mu} \T^{\mu\nu} & = F^{\nu\rho,A}\left(\J_{\rho}^A-\frac{C^{ABC}}{6}\epsilon_{\rho}^{\phantom{\rho}\sigma\alpha\beta}A_{\sigma}^BF_{\alpha\beta}^C\right),\end{aligned}$$ where $C^{ABC}$ is the symmetric anomaly coefficient and $\epsilon^{0123}=1$. Eqs. \[E:Jcons\] and \[E:Tcons\] encode local gauge and diffeomorphism covariance. There are additional Ward identities for the Euclidean theory at nonzero temperature due to the global topology of $\mathbb{R}^3\times\mathbb{S}^1$ [@Jensen:2011xb]. Constant $A_0$ and $h_{00}$ may be gauged away at the price of redefining the temperature and chemical potentials. These are shifted as $$\begin{aligned} \label{E:TmuP} T' = \frac{T}{\sqrt{-g_{00}}}, \qquad \mu'^{A} = \frac{A_0^A}{\sqrt{-g_{00}}}.\end{aligned}$$ The $\mu^A$ depend on $g_{00}$ because they are defined through the Wilson line of $A^A$ around the time circle. At small momenta $|\k_i| \lambda\ll 1$, the correlation functions of the current and stress tensor are heavily constrained. The two-point functions with parity-violating terms to $O(k)$ may be parametrized as $$\begin{aligned} \begin{split} \label{E:G2pt} G_R^{iA,jB}(\k) &=-i \epsilon^{ijk}k_k \left(\sigma_1^{AB}-\frac{C^{ABC}A_0^C}{3}\right), \\ G_R^{iA,0j}(\k) & =-i \epsilon^{ijk}k_k \sigma_2^A, \\ G_R^{0i,0j}(\k) & = \alpha_3 \delta^{ij} - i \epsilon^{ijk}k_k \sigma_3, \end{split}\end{aligned}$$ for some functions $\sigma_m$. By \[E:recip\], $\sigma_1^{AB}=\sigma_1^{BA}$. The second term in $G_{R}^{iA,jB}$ comes from the Bardeen-Zumino polynomial, which encodes the anomalous dependence of the $\J^{\mu,A}$ on gauge fields [@Bardeen:1984pm].
The three-point functions with parity-violating terms to $O(k)$ are similarly constrained. Demanding that the three-point functions are consistent with \[E:recip\], we find that they take the form $$\begin{aligned} \begin{split} \label{E:G3pt} G_R^{iA,jB,0C}(\k_1,\k_2) & = {-} i \epsilon^{ijk}((k_1)_k \Sigma^{0,ABC}_{1}{-}(k_2)_k\Sigma_{1}^{0,BAC}) , \\ G_R^{iA,jB,00}(\k_1,\k_2) & ={-} i \epsilon^{ijk}((k_1)_k \Sigma^{00,AB}_{1}{-}(k_2)_k\Sigma_{1}^{00,BA}) , \\ G_R^{iA,0j,0B}(\k_1,\k_2) &={-} i \epsilon^{ijk} ((k_1)_k\Sigma^{0,AB}_{2,1} {+} (k_2)_k \Sigma^{0,AB}_{2,2}), \\ G_R^{iA,0j,00}(\k_1,\k_2) & = {-} i \epsilon^{ijk} ((k_1)_k\Sigma^{00,A}_{2,1} {+} (k_2)_k \Sigma^{00,A}_{2,2}), \\ G_R^{0i,0j,0A}(\k_1,\k_2) & = \alpha_3^{0,A}\delta^{ij} {-} i \epsilon^{ijk}(k_{1}-k_2)_k \Sigma^{0,A}_3, \\ G_R^{0i,0j,00}(\k_1,\k_2) & = \alpha_3^{00}\delta^{ij} {-} i \epsilon^{ijk}(k_1{-}k_2)_k \Sigma^{00}_3. \end{split}\end{aligned}$$ Imposing \[E:Jcons\] fixes half of the $\Sigma$’s. For example, we may compute $i(k_1)_iG_R^{iA,jB,0C}$ directly from \[E:G3pt\] or by variation of \[E:Jcons\]. Setting the two equal gives $$\begin{aligned} i (k_1)_i G_R^{iA,jB,0C}(\k_1,\k_2) &=\epsilon^{jkl}(k_1)_k(k_2)_l \Sigma_{1}^{0,ABC} \\ \nonumber &=\frac{\delta^2\langle \partial_{\mu} \J^{a\mu,A}(\k_1)\rangle}{\delta A_j^B({-}\k_2)\delta A_0^C(\k_1{+}\k_2)} \\ \nonumber &={-}\frac{1}{24} \frac{\delta^2 C^{ABC}\epsilon^{\mu\nu\rho\sigma}(F^B_{\mu\nu}F^C_{\rho\sigma})(\k_1)}{\delta A_j^B({-}\k_2)\delta A_0^C(\k_1{+}\k_2)}\\ \nonumber & = \epsilon^{jkl}(k_1)_k(k_2)_l \frac{C^{ABC}}{3}.\end{aligned}$$ By applying the same method to any $G_R$ with a $\J^{iA}$ insertion we thereby find $$\begin{aligned} \label{E:Sigma1} \Sigma_{1}^{0,ABC} = \frac{C^{ABC}}{3},\,\, \Sigma_{1}^{00,AB} =\Sigma_{2,2}^{0,AB}=\Sigma_{2,2}^{00,A}=0.\end{aligned}$$ Imposing \[E:Tcons\] fixes the remaining $\Sigma$’s.
For example, $$\begin{aligned} i(k_2)_jG_R^{iA,0j,0B}(\k_1,\k_2) &= - \epsilon^{ikl}(k_1)_k(k_2)_l \Sigma_{2,1}^{0,AB} \\ \nonumber &= \frac{\delta^2\langle\partial_{\mu} \T^{\mu 0}(\k_2)\rangle}{\delta A_i^A({-}\k_1)\delta A_0^B(\k_1{+}\k_2)} \\ \nonumber & = i (k_1{+}k_2)_k G_R^{kB,iA}({-}\k_1),\end{aligned}$$ which by \[E:G2pt\] and \[E:recip\] gives $$\label{E:Sigma2} \Sigma_{2,1}^{0,AB}=\sigma_1^{AB}.$$ Applying this method to the other three-point functions in \[E:G3pt\] yields the remaining $\Sigma$’s $$\label{E:Sigma3} \Sigma_{2,1}^{00,A} = 2\sigma_2^A, \,\, \Sigma_3^{0,A} = 2 \sigma_2^A, \,\, \Sigma_3^{00}=4\sigma_3.$$ By the discussion around \[E:TmuP\] we may evaluate the two-point functions \[E:G2pt\] in a background with constant $A_0^A$ and $g_{00}$. To $O(k)$ we find $$\begin{aligned} \begin{split} \label{E:G2ptS} G_{R,S}^{iA,jB}(\k) &= - i \epsilon^{ijk}k_k\left(\sqrt{-g_{00}}\sigma_1'^{AB}- \frac{C^{ABC}A_0^C}{3}\right), \\ G_{R,S}^{iA,0j}(\k) & = - i \epsilon^{ijk}k_k \sigma_2'^A, \\ G_{R,S}^{0i,0j}(\k) &= -\frac{\alpha_3'\delta^{ij}}{g_{00}}-i\epsilon^{ijk}k_k\frac{\sigma_3'}{\sqrt{-g_{00}}}, \end{split}\end{aligned}$$ where the prime indicates that a quantity is evaluated at temperature $T'$ and chemical potentials $\mu'^A$, and the subscript $S$ that the correlator is evaluated in the presence of background fields. Differentiating \[E:G2ptS\] with respect to $A_0^A$ and $g_{00}$ leads to three-point functions with zero momentum insertions of $\J^{0,A}$ and $\T^{00}$. Comparing these functions with the three-point functions \[E:G3pt\] and using (\[E:Sigma1\]), (\[E:Sigma2\]), and (\[E:Sigma3\]), the two agree only if the six equations $$\begin{aligned} \label{E:eqsSigma} \frac{\partial \sigma_1^{AB}}{\partial\mu^C} &= C^{ABC}, \,\,\,\, \frac{\partial\sigma_2^A}{\partial\mu^B} = \sigma_1^{AB}, \,\,\,\,\frac{\partial\sigma_3}{\partial\mu^A}= 2\sigma_2^A, \\ \label{E:eqsE} E_m & = T \frac{\partial \sigma_m}{\partial T} + \mu^A\frac{\partial \sigma_m}{\partial \mu^A} - m \sigma_m = 0,\end{aligned}$$ are satisfied.
These equations uniquely fix the $\sigma_m$ up to integration constants. We have $$\begin{aligned} \begin{split} \label{E:sigmas} \sigma_1^{AB} &= C^{ABC}\mu^C + f_1^{AB}T,\qquad f_1^{AB}=f_1^{BA},\\ \sigma_2^A & = \frac{1}{2}C^{ABC}\mu^B\mu^C + f_1^{AB}\mu^B T + f_2^A T^2, \\ \sigma_3 & = \frac{1}{3}C^{ABC}\mu^A\mu^B\mu^C + f_1^{AB}\mu^A\mu^B T \\ & \phantom{=\,\,}+ 2 f_2^A \mu^A T^2 + f_3 T^3. \end{split}\end{aligned}$$ We may also consider the behavior of the integration constants $f_m$ under CPT. By a hydrodynamic argument employed in [@Bhattacharya:2011tra], we find that the $f_1^{AB}$ and $f_3$ are CPT-violating, while the $f_2^A$ are CPT-preserving. We can also directly establish this result by studying the transformation of the two-point functions \[E:G2pt\] under CPT. *Hydrodynamics with sources.*—It is instructive to reproduce \[E:sigmas\] from hydrodynamics. In this section we begin with equilibria at constant $T$, $\mu^A$, and vanishing sources. We take the fluid rest frame to be $u^{\mu}= v^{\mu}/\sqrt{-v^2}$, with $v^{\mu}$ a constant timelike vector. In these states the stress tensor and current are $$\label{E:TJequil} \langle T^{\mu\nu}\rangle = \epsilon u^{\mu}u^{\nu}+P\Delta^{\mu\nu}, \qquad \langle J^{\mu,A}\rangle = \rho^A u^{\mu},$$ with $P$ and $\epsilon$ the pressure and energy density, obeying $$dP = s dT + \rho^A d\mu^A, \qquad \epsilon +P = s T + \mu^A \rho^A,$$ and $\Delta^{\mu\nu}=g^{\mu\nu}+u^{\mu}u^{\nu}$ is a projector satisfying $\Delta^{\mu\nu} u_{\nu} = 0, \Delta^2=\Delta$. In hydrodynamics we study long-wavelength fluctuations around equilibrium states. Those fluctuations may be described by promoting $T,$ $\mu^A$, and the $u^{\mu}$ to spacetime fields (the hydrodynamic variables) and expanding the stress tensor and current in gradients thereof – this is the derivative expansion [@Bhattacharyya:2008jc]. We also turn on slowly varying background gauge fields $A^A$ and metric perturbations $g=\eta+h$.
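The anomaly-fixed solutions \[E:sigmas\] can be verified symbolically. The following sketch (in Python with sympy, written for a single $U(1)$ charge so that all flavor indices collapse to one; the multi-charge case works component by component) checks that the $\sigma_m$ solve both \[E:eqsSigma\] and \[E:eqsE\]:

```python
import sympy as sp

# Single-U(1) version of the anomaly-constrained coefficients: the flavor
# indices A, B, C are collapsed to one charge.  C is the anomaly coefficient
# and f1, f2, f3 are the undetermined integration constants of the text.
T, mu, C, f1, f2, f3 = sp.symbols('T mu C f1 f2 f3')

sigma = {
    1: C*mu + f1*T,
    2: sp.Rational(1, 2)*C*mu**2 + f1*mu*T + f2*T**2,
    3: sp.Rational(1, 3)*C*mu**3 + f1*mu**2*T + 2*f2*mu*T**2 + f3*T**3,
}

# Chemical-potential relations:
assert sp.expand(sp.diff(sigma[1], mu) - C) == 0
assert sp.expand(sp.diff(sigma[2], mu) - sigma[1]) == 0
assert sp.expand(sp.diff(sigma[3], mu) - 2*sigma[2]) == 0

# Euler-type conditions E_m = (T d/dT + mu d/dmu - m) sigma_m = 0:
for m, s in sigma.items():
    assert sp.expand(T*sp.diff(s, T) + mu*sp.diff(s, mu) - m*s) == 0
```

The Euler conditions simply express that $\sigma_m$ is homogeneous of degree $m$ in $(T,\mu)$, which is why each integration constant multiplies the appropriate power of $T$.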
We take the sources to be $O(1)$, so that the field strengths $F$ and connection coefficients $\Gamma$ are $O(\partial)$ in the derivative expansion. This is the scaling required to study the response of a fluid to sources. In more general states we have the decomposition $$\begin{aligned} \nonumber \langle T^{\mu\nu}\rangle &= \mathcal{E} u^{\mu} u^{\nu} + \mathcal{P} \Delta^{\mu\nu} + q^{\mu}u^{\nu}+q^{\nu}u^{\mu} + \tau^{\mu\nu}, \\ \label{E:TJdecomp} \langle J^{\mu,A}\rangle & = \mathcal{N}^Au^{\mu} + \nu^{\mu,A} +\frac{C^{ABC}}{6\sqrt{-g}} \epsilon^{\mu\nu\rho\sigma}A^B_{\nu}F^C_{\rho\sigma},\\ \nonumber \qquad u^{\mu} q_{\mu} &= u^{\mu} \nu^A_{\mu} = u^{\mu}\tau_{\mu\nu} = \Delta^{\mu\nu}\tau_{\mu\nu}=0.\end{aligned}$$ The decomposition \[E:TJdecomp\] has some redundancy. Fixing $\langle T\rangle$ and $\langle J^A\rangle$, we may redefine the hydrodynamic variables by $O(\partial)$ quantities to impose a choice of hydrodynamic frame. In the rest of this work, we perturb the equilibrium state \[E:TJequil\] by sources which are static in the fluid rest frame, supposing that the perturbed fluid remains in a time-independent equilibrium. In such a state the one-point functions \[E:TJdecomp\] and hydrodynamic variables will be local functions of sources – the non-locality is on the order of the static screening lengths, which vanish in the hydrodynamic approximation. We then perform a change of frame such that $T$, $\mu^A$, and $v^{\mu}$ take on their form for equilibria with constant background fields, $$\label{E:TmuEquil} T = \frac{T_{\rm eq}}{\sqrt{-v^2}}, \qquad \mu^A = u^{\mu}A_{\mu}^A,\qquad v^{\mu}=v^{\mu}_{\rm eq},$$ where $T_{\rm eq}$ and $v^{\mu}_{\rm eq}$ are the temperature and velocity field of the source-free equilibrium state. This is the covariant version of \[E:TmuP\]. It remains to express $\mathcal{E}$, $\mathcal{P}$, $\mathcal{N}$, $q^{\mu}$, $\nu^{\mu}$, and $\tau^{\mu\nu}$ in terms of the sources. These are the constitutive relations.
To proceed we compute derivatives of $T$, $\mu^A$, and the fluid velocity. We find that the relations \[E:Dx\] $$\begin{aligned} \partial_{\mu}T &= -T a_{\mu}, &\partial_{\mu}\mu^A &= -\mu^A a_{\mu}+E^A_{\mu}, \\ \nabla_{\mu}u_{\nu}&=-u_{\mu}a_{\nu}+\omega_{\mu\nu},&E^A_{\mu}&=F^A_{\mu\nu}u^{\nu},\end{aligned}$$ are identically satisfied with $$\begin{aligned} a^{\mu}=u^{\nu}\nabla_{\nu}u^{\mu}, \,\,\,\, \omega^{\mu\nu}=\frac{\Delta^{\mu\rho}\Delta^{\nu\sigma}}{2}(\nabla_{\rho}u_{\sigma}-\nabla_{\sigma}u_{\rho}).\end{aligned}$$ The independent tensors with one derivative are listed in Table \[T:FOtensors\]. The pseudovectors are $\tilde{v}_1^{\mu,A}=B^{\mu,A}$, the magnetic field, and $\tilde{v}_2^{\mu}=w^{\mu}$, the vorticity of the fluid. To $O(\partial)$ we then write $$\begin{aligned} \begin{split} \label{E:constitutive} \mathcal{E} & = \epsilon, \qquad \mathcal{P} = P, \qquad \mathcal{N} = \rho,\qquad \tau^{\mu\nu}=0, \\ q^{\mu} &= \gamma_i v_i^{\mu}+\tilde{\gamma}_i \tilde{v}_i^{\mu}, \qquad \nu^{\mu,A} = \delta^A_i v_i^{\mu} + \tilde{\delta}^A_i \tilde{v}_i^{\mu}, \end{split}\end{aligned}$$ where the $\gamma$’s, $\tilde{\gamma}$’s, $\delta$’s, and $\tilde{\delta}$’s are functions of $T$ and $\mu^A$.

                                        $1$                                                                        $2$
  ------------------------------------- -------------------------------------------------------------------------- ----------------------------------------------------------------------------------
  vectors $(v_i^{\mu})$                 $E^A_{\mu}$                                                                $a_{\mu}$
  pseudovectors $(\tilde{v}_i^{\mu})$   $\frac{1}{2\sqrt{-g}}\epsilon^{\mu\nu\rho\sigma}u_{\nu}F^A_{\rho\sigma}$   $\frac{1}{\sqrt{-g}}\epsilon^{\mu\nu\rho\sigma}u_{\nu}\partial_{\rho}u_{\sigma}$

  : \[T:FOtensors\]The independent first-order tensors.

We continue by treating the Ward identities \[E:Jcons\] and \[E:Tcons\] as equations of motion. Ordinarily, we solve for the hydrodynamic variables in the presence of external fields. In this instance, the conservation equations lead to differential equations for the coefficients in the constitutive relations.
We note that the gradients of the first-order tensors satisfy some simple relations $$\begin{aligned} \begin{split} \label{E:useful} \nabla_{\mu}a^{\mu} & = u^{\mu}u^{\nu}R_{\mu\nu} + \omega^{\mu\nu}\omega_{\nu\mu}, \\ u^{\nu}\nabla_{\nu}a^{\mu}& = u^{\mu}a^2-\omega^{\mu\nu}a_{\nu}, \\ u^{\nu}\nabla_{\nu}E^{\mu,A}&=u^{\mu}a^{\nu}E^A_{\nu}-\omega^{\mu\nu}E^A_{\nu}, \\ \nabla_{\mu}B^{\mu,A} & = B^{A}_{\mu}a^{\mu}-E^A_{\mu}w^{\mu}, \\ u^{\nu}\nabla_{\nu}B^{\mu,A} &= u^{\mu}B^A_{\nu}a^{\nu}-\omega^{\mu\nu}B^A_{\nu},\\ \nabla_{\mu}w^{\mu} &= 2 a_{\mu}w^{\mu}, \qquad u^{\nu}\nabla_{\nu}w^{\mu} = u^{\mu}a_{\nu}w^{\nu}, \\ \epsilon^{\mu\nu\rho\sigma}u_{\nu}B_{\rho}^Aw_{\sigma}&=2\omega^{\mu\nu}B_{\nu}^A, \,\,\,\, \epsilon^{\mu\nu\rho\sigma}u_{\nu}w_{\rho}\omega_{\sigma\tau}=0, \end{split}\end{aligned}$$ where $R_{\mu\nu}$ is the Ricci tensor. Applying \[E:constitutive\] and \[E:useful\] to the conservation equations leads to a sum of coefficients, each of which multiplies an independent second-order tensor. Each such coefficient must vanish, which leads to a number of equations, $$\begin{aligned} \begin{split} \label{E:consEqns} \gamma_1^A=\gamma_2=\delta_1^{AB}=\delta_2^A &= 0, \\ \frac{\partial\tilde{\delta}_1^{AB}}{\partial\mu^C}-C^{ABC}=\frac{\partial\tilde{\delta}_2^A}{\partial\mu^B}-\tilde{\delta}_1^{AB}&=0, \\ \frac{\partial\tilde{\gamma}_1^A}{\partial\mu^B}-\tilde{\delta}_1^{AB}=\frac{\partial\tilde{\gamma}_2}{\partial\mu^A}-\tilde{\delta}_2^{A}-\tilde{\gamma}_1^A&=0, \\ T\frac{\partial\tilde{\delta}^{AB}_1}{\partial T}+\mu^C\frac{\partial\tilde{\delta}_1^{AB}}{\partial\mu^C}-\tilde{\delta}_1^{AB}&=0, \\ T\frac{\partial\tilde{\delta}^{A}_2}{\partial T}+\mu^B\frac{\partial\tilde{\delta}_2^{A}}{\partial\mu^B}-2\tilde{\delta}_2^{A}&=0, \\ T\frac{\partial\tilde{\gamma}^{A}_1}{\partial T}+\mu^B\frac{\partial\tilde{\gamma}_1^{A}}{\partial\mu^B}-2\tilde{\gamma}_1^{A}&=0, \\ T\frac{\partial\tilde{\gamma}_2}{\partial T}+\mu^A\frac{\partial\tilde{\gamma}_2}{\partial\mu^A}-3\tilde{\gamma}_2&=0.
\end{split}\end{aligned}$$ When \[E:consEqns\] holds, the Ward identities \[E:Jcons\] and \[E:Tcons\] are exactly solved for the first-order constitutive relations \[E:constitutive\]. We compute the $ra..a$ functions by varying one-point functions (which we view as expectation values of $r$ operators) with respect to external fields [@Moore:2010bu] (which we view as $a$-type sources). Those variations are easy to perform in this frame. We obtain zero-frequency $n$-point functions by directly varying \[E:TJdecomp\] and \[E:constitutive\]. For instance, we find the following two-point functions to $O(k)$, $$\begin{aligned} \begin{split} G_R^{iA,jB}(\k) &= -i \epsilon^{ijk}k_k \tilde{\delta}_1^{AB}, \\ G_R^{iA,0j}(\k) &= -i \epsilon^{ijk}k_k \tilde{\delta}_2^A,\\ G_R^{0i,jA}(\k) &= - i \epsilon^{ijk}k_k \tilde{\gamma}_1^A, \\ G_R^{0i,0j}(\k) &= P\delta^{ij}- i \epsilon^{ijk}k_k \tilde{\gamma}_2. \end{split}\end{aligned}$$ By \[E:recip\] we then have $\tilde{\delta}_2^A=\tilde{\gamma}_1^A$. Matching to \[E:G2pt\] then gives $$\sigma_1^{AB}=\tilde{\delta}_1^{AB}, \qquad \sigma_2^A = \tilde{\delta}_2^A, \qquad \sigma_3 = \tilde{\gamma}_2,$$ so that the constraints \[E:consEqns\] precisely reproduce \[E:eqsSigma\] and \[E:eqsE\] for the $\sigma$’s. *Discussion.*—In this Letter we have used Ward identities to constrain zero-frequency correlation functions. For a $3+1$-dimensional theory with $U(1)^3$ anomalies, the $O(k)$ parts of three-point functions of the stress tensor and currents are determined by conservation in terms of the anomaly coefficients and the $O(k)$ parts of two-point functions. The background-field relations \[E:TmuP\] lead to differential equations that relate two-point functions to three-point functions. As a result the $O(k)$ terms in two- and three-point functions of $\T$ and $\J$ are uniquely fixed up to integration constants. The result matches the hydrodynamic [@Son:2009tf; @Neiman:2010zi] and holographic [@Amado:2011zx] calculations up to the additional CPT-violating integration constants $f_1^{AB}$ in \[E:sigmas\]. Zero-frequency correlation functions encode the thermodynamic response of a fluid.
In this instance, a fluid may be subjected to a magnetic field $B^{\mu}$ or vorticity $w^{\mu}$ and remain in equilibrium. When parity is not a symmetry, there may be charge and momentum currents directed along $B^{\mu}$ and $w^{\mu}$. This calculation is telling us that the $O(B)$ and $O(w)$ response of those currents is fixed by a consistent description of thermodynamics in the presence of background fields. One significant question remains: what is the role of the CPT-preserving integration constant $f_2^A$ in \[E:sigmas\]? At weak [@Landsteiner:2011cp] and strong [@Landsteiner:2011iq] coupling, it has been found to be proportional to the mixed anomaly coefficient. Is that result fixed by Ward identities? The hydrodynamic calculation sheds some light on this question. By exactly solving the conservation equations to $O(\partial^2)$ and imposing \[E:TmuEquil\] at the outset, we have implicitly enforced the Ward identities for gauge/diffeomorphism invariance on the $O(k)$ parts of all zero-frequency $n$-point functions of $\T$ and $\J$. At the end of the day, $f_2^A$ is left unfixed by the zero-frequency conditions \[E:eqsSigma\] and \[E:eqsE\]. Three logical possibilities remain: (i.) $f_2^A$ is fixed by another zero-frequency condition, (ii.) $f_2^A$ is fixed by a finite-frequency property like the Onsager relations, or (iii.) $f_2^A$ is unfixed. *Acknowledgments.*—It is a pleasure to thank M. Kaminski, Z. Komargodski, P. Kovtun, K. Landsteiner, R. Meyer, A. Ritz, D. Son, and especially A. Yarom for stimulating discussions. This work was initiated at the Perimeter Institute for Theoretical Physics and was supported in part by NSERC, Canada.
--- abstract: 'The discovery of high redshift quasars represents a challenge to theories of the origin of supermassive black holes. Here, two evolutionary scenarios are considered. The first one concerns massive black holes in the local universe, which in a large majority have been formed by the growth of seeds as their host galaxies are assembled, in accordance with the hierarchical picture. In the second scenario, seeds with masses around 100-150 $M_\odot$ grow by accretion of gas forming a non-steady massive disk, whose existence is supported by the detection of huge amounts of gas and dust in high-z quasars. These models of non-steady self-gravitating disks explain quite well the observed “Luminosity-Mass” relation of quasars at high-z, also indicating that these objects do not radiate at the so-called Eddington limit.' title: | [**Supermassive Black Holes in the Early Universe**]{}\ J.A. de Freitas Pacheco\ Université de la Côte d’Azur\ Observatoire de la Côte d’Azur - Laboratoire Lagrange\ 06304 Nice Cedex - France\ --- Introduction ============ The cosmological nature of quasars (QSOs) was established in the early sixties (Schmidt 1963). An immediate consequence of the implied large distances for these objects was the realization that QSOs were among the most powerful energy sources in the universe. Their luminosities are typically around $10^{46} erg.s^{-1}$, but the emission of some QSOs may exceed that value by one or two orders of magnitude. Edwin Salpeter was one of the first to propose that a supermassive black hole (SMBH) in a state of accretion could provide the necessary energy to explain the luminosities of QSOs (Salpeter 1964). While a large majority of the scientific community presently accepts that accreting SMBHs are the engines powering QSOs, a series of questions still remains to be answered. For instance, how are these SMBHs formed? If they grow by accretion, what are the seeds and where do they come from?
What is the gas accretion geometry: spherical, or that of an inspiraling disk? In the latter case, what are the viscous mechanisms responsible for the transfer of angular momentum? In the local universe the presence of SMBHs at the center of elliptical galaxies or in the bulges of spiral galaxies seems to be a well-established fact (Kormendy & Richstone 1995; Richstone et al. 1998; Kormendy & Gebhardt 2001). The black hole mass $M$ is well correlated with either the stellar mass or the luminosity of the host bulge (Kormendy & Richstone 1995; Magorrian et al. 1998; Marconi & Hunt 2003; Haring & Rix 2004; Graham 2007) but, in particular, a tight correlation exists between the SMBH mass and the central projected stellar velocity dispersion $\sigma$ (Ferrarese & Merrit 2000; Gebhardt et al. 2000; Merrit & Ferrarese 2001; Tremaine et al. 2002). The mechanism (or mechanisms) responsible for establishing the M-$\sigma$ relation is (are) not yet well determined, but several scenarios have been put forward in the past years to explain the origin of such a relation. Self-regulated growth of black holes by feedback effects, produced either by outflows or by UV radiation from QSOs, which also affect the star formation activity, is a possible mechanism able to reproduce the M-$\sigma$ relation (Silk & Rees 1998; Sazonov et al. 2005). A relation between these physical quantities can also be obtained from the picture developed by Burkert & Silk (2001), in which black holes grow at the expense of a viscous accretion disk whose gas reservoir beyond the BH influence radius also feeds the formation of stars. These investigations seem to point to a well-defined road leading to the formation of SMBHs: the growth of “seeds” by accretion inside the host galaxy.
This picture is consistent with the fact that the present BH mass density agrees with the accreted (baryonic) mass density derived from the bolometric luminosity function of quasars (Soltan 1982; Small & Blandford 1992; Hopkins, Richards & Hernquist 2007) and with a negligible amount of accreted dark matter (Peirani & de Freitas Pacheco 2008). Seeds could be intermediate-mass ($10^3-10^4~M_\odot$) black holes formed during the collapse of primordial gas clouds (Haehnelt & Rees 1993; Eisenstein & Loeb 1995; Koushiappas, Bullock & Dekel 2004) or during the core collapse of relativistic star clusters formed in star-bursts, which may have occurred in the early evolution of galaxies (Shapiro 2004). Here, as will be discussed later, seeds are assumed to be black holes with masses around 100-500 $M_\odot$ originating from the first generation of stars, which are supposed to be quite massive due to the absence of metals, the main contributors to the cooling of the gas. The different correlations between the black hole mass and the dynamic or the photometric properties of the host galaxy suggest a gradual growth of the seed as the host galaxy itself is assembled. However, this scenario seems to be inconsistent with the fact that more than 40 bright QSOs have by now been discovered at high redshift (Wu et al. 2015). The three QSOs having the highest redshift are J1061+3922 at z = 6.61, J1120+0641 at z = 7.08 and J1342+0928 at z = 7.54. The latter corresponds to an age of the universe of only 0.69 Gyr. Since most of these high redshift QSOs are associated with SMBHs having masses around $10^8-10^9 ~ M_\odot$, their growth was probably not gradual but rather fast, in order that they could shine so early in the history of the universe.
Thus, a possible way out is to admit the existence of two evolutionary paths leading to the formation of SMBHs: one in which seeds grow intermittently as their host galaxies are assembled, and another in which seeds grow very fast on a timescale of less than 1 Gyr. These two possibilities will be discussed in the next sections of this article. Spherical Accretion and the Eddington limit =========================================== Many authors still consider spherical accretion processes in their investigations, in which the mass inflow rate is controlled by the Eddington luminosity. In this case, it seems judicious to recall the physical assumptions that permit the derivation of the Eddington limit. The Euler equation describing a spherically symmetric inflow under the influence of gravitation and radiation pressure is $$\label{euler} V\frac{dV}{dr}+\frac{1}{\rho}\left[\frac{d(P+P_r)}{dr}\right]+\frac{GM}{r^2}=0$$ In the above equation $V$ is the radial flow velocity, $P$ and $P_r$ are respectively the gas and the radiation pressure, $\rho$ is the gas density, $G$ is the gravitational constant and $M$ is the mass of the central object. The radial gradient due to the radiation pressure is given by $$\frac{dP_r}{dr}=-\frac{1}{c}\int^{\infty}_{0}\kappa_{\nu}\phi_{\nu}d\nu = -\frac{\kappa}{c}\phi$$ where $\kappa$ is a suitable frequency average of the total absorption coefficient (including scattering) and $\phi$ is the total radiative flux. If the accreting gas envelope is highly ionized, the absorption of photons is essentially due to the Thomson scattering and, in this case, $$\kappa = \frac{\sigma_T}{\mu m_H}\rho$$ where $\sigma_T = 6.65\times 10^{-25}~cm^2$ is the Thomson cross-section, $\mu$ is the mean molecular weight and $m_H$ is the proton mass. If the medium is optically thin and the radiation comes essentially from deep inside the envelope, then the radiative flux in the outer regions is simply $$\phi = \frac{L}{4\pi r^2}$$ Combine eqs.
(2), (3) and (4) and substitute them into the Euler equation to obtain $$V\frac{dV}{dr}+\frac{1}{\rho}\frac{dP}{dr}= -\frac{GM}{r^2}+\frac{\sigma_T L}{4\pi\mu m_H c r^2}$$ Define now the Eddington luminosity as $$\label{eddington} L_E = \frac{4\pi GM\mu m_H c}{\sigma_T} = 1.76 \times 10^{38}\left(\frac{M}{M_\odot}\right)~erg.s^{-1}$$ In this case, eq. (5) can be rewritten as $$\label{euler2} V\frac{dV}{dr}+\frac{1}{\rho}\frac{dP}{dr}= -\frac{GM(1-\Gamma)}{r^2}$$ where $\Gamma = L/L_E$. Assume that the gas equation of state is given by $P = K\rho^{\gamma}$ and define also the adiabatic sound velocity as $a^2 = \gamma(P/\rho)$. Under these conditions, using the mass conservation equation to express the mass density gradient, after some algebra, eq. \[euler2\] can be recast as $$\label{euler3} V\left(1-\frac{a^2}{V^2}\right)\frac{dV}{dr} = \frac{2a^2}{r}-\frac{GM(1-\Gamma)}{r^2}$$ The “critical” point of the flow in the usual mathematical sense corresponds to the point where both sides of eq. \[euler3\] vanish. Hence, in order to have the continuity of the flow through the critical point, two conditions must be simultaneously satisfied, namely $$\label{criticalvelocity} V_* = a_*$$ and $$\label{criticalradius} r_* = \frac{GM(1-\Gamma)}{2a_*^2}$$ for the critical velocity and the critical radius, respectively. These relations imply that the critical and the sonic points of the flow coincide (this is not always the case) and that the luminosity radiated from inside must be [**[smaller]{}**]{} than the Eddington value in order that the critical radius be positive. Note that once the critical point is surpassed, the left side of eq. \[euler3\] is negative, requiring that the right side also be negative or, equivalently, that $\Gamma < 1$. In other words, the spherical accretion of an optically thin envelope requires sub-Eddington conditions; otherwise the inflow cannot be established.
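For reference, the numerical prefactor in eq. \[eddington\] can be checked directly. The following Python sketch works in cgs units; the mean molecular weight $\mu = 1.4$ is our assumption, chosen because it reproduces the constant quoted in the text (pure ionized hydrogen, $\mu = 1$, would give the more familiar $1.26\times 10^{38}$):

```python
import math

# Evaluate the Eddington luminosity prefactor of eq. [eddington], cgs units.
G       = 6.674e-8    # gravitational constant [cm^3 g^-1 s^-2]
c       = 2.998e10    # speed of light [cm s^-1]
m_H     = 1.673e-24   # proton mass [g]
sigma_T = 6.65e-25    # Thomson cross-section [cm^2]
M_sun   = 1.989e33    # solar mass [g]
mu      = 1.4         # mean molecular weight (assumed value)

L_E = 4*math.pi*G*M_sun*mu*m_H*c/sigma_T   # erg/s per solar mass
print(f"L_E = {L_E:.2e} (M/M_sun) erg/s")
```

Running this gives $L_E \simeq 1.76\times 10^{38}\,(M/M_\odot)$ erg s$^{-1}$, matching the quoted value.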
As we will see later, this requirement is weakened when the inflow geometry is modified as, for instance, in the case of an accretion disk. As mentioned previously, many authors assume that the central black hole accretes mass with the envelope radiating near the Eddington limit. Since the Eddington luminosity is proportional to the mass of the black hole (see eq. 6), the result is an exponential growth of the mass with a timescale $$\tau_E = \frac{\eta}{(1-\eta)}\frac{c\sigma_T}{4\pi\mu m_H G} = 3.22\times 10^8\frac{\eta}{(1-\eta)}~yr$$ where $\eta$ is the accretion efficiency. Such a short timescale is often considered as an argument to explain the presence of SMBHs at high-z. However, as we have seen above, the Eddington limit is derived under conditions in which the envelope is optically thin and the opacity is due only to the Thomson scattering. The optical depth of the envelope is given by $$\tau(r,\infty) = \int^{\infty}_r \frac{\sigma_T}{\mu m_H}\rho(r')dr' = 7.8\times 10^{-6}\left(\frac{M}{M_\odot}\right)n_{\infty}$$ where the gravitational radius was taken as the lower limit of the integral in order to obtain the numerical value on the right side of the equation. For an optically thin envelope the condition $\tau < 1$ must be satisfied, imposing an upper bound on the black hole mass, namely $$\frac{M}{M_\odot} < \frac{1.3\times 10^5}{n_{\infty}}$$ where $n_{\infty}$ is the gas particle density far from the influence radius of the black hole. Hence, if accreting spherically, SMBHs at high-z with masses around $10^8-10^9 ~ M_\odot$ will necessarily have an optically thick envelope and a different inflow regime. An optically thick envelope reduces the distance to the critical point and also reduces the accretion rate with respect to the optically thin case. In particular, when the radiation field is quite important, a second critical point may exist in the flow besides the hydrodynamical one, according to Nobili et al. (1991).
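Both numerical values quoted above follow from the same constants. A Python sketch in cgs units ($\mu = 1.4$ is again our assumption; the $\eta/(1-\eta)$ efficiency factor of the growth timescale is left out):

```python
import math

# e-folding timescale of Eddington-limited growth (without the eta/(1-eta)
# factor) and the optically-thin mass bound implied by tau < 1.
G, c    = 6.674e-8, 2.998e10      # cgs
m_H     = 1.673e-24               # proton mass [g]
sigma_T = 6.65e-25                # Thomson cross-section [cm^2]
mu      = 1.4                     # mean molecular weight (assumed value)
yr      = 3.156e7                 # seconds per year

tau_E = c*sigma_T/(4*math.pi*mu*m_H*G)/yr
print(f"tau_E ~ {tau_E:.2e} yr")

# tau = 7.8e-6 (M/M_sun) n_inf < 1 gives the mass bound:
M_max = 1.0/7.8e-6                # in M_sun, for n_inf = 1 cm^-3
print(f"M/M_sun < {M_max:.1e} / n_inf")
```

This reproduces the quoted $3.2\times 10^8$ yr timescale and the $1.3\times 10^5/n_\infty$ mass bound.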
Accretion flows affected by radiation effects have been investigated by many authors in the past years (Maraschi et al. 1974; Flammang 1982; Milosavljevic et al. 2009). The radiation from the accreting envelope is essentially due to the free-free emission. For the optically thin case, the resulting luminosity is proportional to the square of the accretion rate and inversely proportional to the BH mass, i.e., $L \sim {\dot M^2}/M$. The flow becomes nearly self-regulated when the optical depth of the infalling matter is greater than unity and, under these conditions, the luminosity approaches the Eddington limit (Milosavljevic et al. 2009). However, some authors claim that super-Eddington luminosities are possible if the black hole is embedded in a very dense gas that decreases the importance of radiation pressure effects (Pacucci et al. 2015). Super-Eddington accretion rates were also found in some radiation-hydrodynamics simulations, but these were based on a one-dimensional geometry and particular conditions of the ambient gas (Inayoshi et al. 2016). However, it is not certain whether such extreme conditions are sustainable considering the violent environments of the first galaxies, where the medium is affected by the star formation activity and supernovae. If the BH is moving with respect to the gas, the situation is rather different. After flowing past the BH, the gas develops a conically shaped shock in which it loses the momentum component perpendicular to the shock front. After compression in the shock, gas particles within a certain impact parameter will fall into the BH. One determining factor describing the subsequent motion of the gas is the angular momentum. If the infalling gas has a specific angular momentum $J$ that exceeds $2r_gc$, where $r_g = 2GM/c^2$ is the gravitational radius, the centrifugal forces will become important before the gas reaches the horizon.
In this case, the gas will be thrown into near-circular orbits, and only after viscous stresses have transported away the excess angular momentum will the gas cross the BH horizon (Shvartsman 1971). In fact, the formation (or not) of a disk requires two conditions: the disk radius must be larger than the last stable circular orbit (equivalent to the condition $J > 2r_gc$) and must be smaller than the typical dimension of the shock cone, e.g., $l_s \approx 2r_g(c/u_{\infty})^2$, where $u_{\infty}$ is the BH velocity with respect to the gas. If the gas is highly turbulent, the velocity of eddies having a scale $k$ is given roughly by $$V_t \sim V_0\left(\frac{k}{k_0}\right)^q$$ In the case of a Kolmogorov spectrum, $q = 1/3$. However, it is more probable that the turbulent energy is dissipated mainly through shock waves and, in this case, the spectrum is steeper, with $q \sim 1$ (Kaplan 1954), a situation which will be assumed here. For our rough estimates, we adopt typical values for the turbulence observed in our Galaxy, e.g., $V_0 \approx 10~ km.s^{-1}$, $k_0 \approx 10~ pc$ (Kaplan & Pikel’ner 1970). The specific angular momentum associated with eddies is $J \sim V_tk$, and the specific angular momentum of the accreted gas corresponds to eddies of the order of twice the scale of the capture impact parameter. Thus, the first condition for disk formation requires $$M > 720\left(\frac{u_{\infty}}{50~km.s^{-1}}\right)^4 ~M_\odot$$ whereas the second requires $$M < 3.5\times 10^6\left(\frac{u_{\infty}}{50~km.s^{-1}}\right)^3~ M_\odot$$ The conclusion of this brief analysis is that a disk can be formed after the shock front only if the BH mass is in the range $10^3$ to $10^6$ $M_\odot$. Notice that these values depend strongly on the black hole velocity.
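The two mass bounds above can be reproduced to order of magnitude under explicit conventions, which are our guesses rather than the author's exact choices: Bondi-Hoyle capture radius $r_c = 2GM/u_\infty^2$, eddy scale $k = 2r_c$, spectrum $V_t = V_0(k/k_0)$ with $q = 1$, disk radius $J^2/GM$, and the conditions $J > 2r_gc$ and $J^2/GM < l_s$. With these choices the upper bound comes out at the quoted $3.5\times 10^6~M_\odot$, while the lower bound lands within a factor of about two of the quoted $720~M_\odot$, as expected for such rough estimates:

```python
# Rough reconstruction of the disk-formation mass window, cgs units.
# The capture radius, eddy scale and disk radius below are assumed
# conventions consistent with the text, not the author's exact ones.
G, c  = 6.674e-8, 2.998e10
M_sun = 1.989e33
pc    = 3.086e18

V0 = 10e5            # turbulence normalization, 10 km/s
k0 = 10*pc           # outer turbulence scale, 10 pc
u  = 50e5            # BH velocity relative to the gas, 50 km/s

# J = V0*k**2/k0 with k = 2*r_c = 4GM/u**2.  The condition J > 2*r_g*c = 4GM/c
# gives the lower bound; J**2/(GM) < l_s, i.e. J < 2GM/u, gives the upper:
M_min = u**4*k0/(4*G*V0*c)/M_sun
M_max = u**3*k0/(8*G*V0)/M_sun
print(f"{M_min:.0f} M_sun < M < {M_max:.1e} M_sun")
```

Both bounds scale with $u_\infty$ exactly as in the displayed equations ($u_\infty^4$ and $u_\infty^3$, respectively).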
Taking into account the restricted range of BH masses that allows the presence of a disk, the necessity of an adequate balance between the mass flow across the shock front and the flow through the disk controlled by viscous forces, as well as the variety of instabilities present in the flow after the shock, the formation of a disk under these circumstances is rather uncertain. If a disk is not formed inside the accretion cone, the radiated luminosity is only a small fraction of the accretion power. Intermittent Growth of Black Holes ================================== As we have seen previously, the correlations between the black hole mass and the photometric or dynamical properties of the host galaxy suggest that the former grows as the latter is assembled. The different physical processes involved in the growth of black holes inside galaxies require a numerical treatment or, in other words, an appeal to cosmological simulations. In fact, there are different reasons justifying such an approach: first, a significant volume of the universe can be probed; second, the dynamics of dark matter and the hydrodynamics of the gas, including physical processes like heating, cooling and the ionization of different elements, can be taken into account self-consistently. Moreover, it is possible to study environmental effects on the galaxies themselves as well as the chemical evolution of the interstellar and intergalactic gas, including the effects of supernovae and the turbulent diffusion of heavy metals. The simulations make it possible to test star formation recipes and models for the growth of seeds, and to investigate the influence of black holes on the environment when they are in an active phase. Cosmological Simulations ------------------------ The results described in this section were all derived from simulations performed at the Observatoire de la Côte d’Azur (Nice) in past years.
Details of the code and some results can be found, for instance, in the papers by Filloux et al. (2010, 2011) or Durier & de Freitas Pacheco (2011). For the sake of completeness, a short summary of the main features of the code is presented here. All the simulations were performed in the context of the $\Lambda$CDM cosmology, using the parallel TreePM-SPH code GADGET-2 in a formulation that conserves energy and entropy despite the use of fully adaptive smoothed particle hydrodynamics (SPH) (Springel 2005). Initial conditions are established according to the algorithm COSMICS (Bertschinger 1995), and the evolution of the structures is followed in the redshift range $60 \geq z \geq 0$. Ionization equilibrium, taking into account collisional and radiative processes, was included following Katz, Weinberg & Hernquist (1996), as well as the contribution of the ionizing radiation background. The contributions of cooling processes such as collisional excitation of HI, HeI and HeII levels, radiative recombination, free-free emission and inverse Compton scattering were also included, using the results by Sutherland & Dopita (1993). An interpolation procedure was adopted to take into account the enhancement of the cooling as the medium is enriched by metals. The cooling functions computed by those authors are adequate for highly ionized gases and for $T \geq 10^4~K$. At high redshifts ($100 > z > 20$) and inside neutral gas clouds, a residual electron fraction of about $n_e/n_H \approx 0.005$ is present (Peebles 1993), which is enough to act as a catalyst in the chemical reactions producing molecular hydrogen. $H_2$ cooling due to the excitation of molecular rotational levels was introduced by using the results of Galli & Palla (1998). After the appearance of the first stars, the gas is enriched by trace elements like O, C, Si and Fe, responsible for a supplementary cooling mechanism.
The UV background with $h\nu < 13.6~eV$ is unable to ionize hydrogen (and oxygen) in neutral gas clouds, but it can ionize carbon, silicon and iron, which are mostly singly ionized under these conditions. These ions have fine-structure levels that can be excited by collisions either with electrons or with atomic hydrogen, constituting an important cooling mechanism at low temperatures, which was included in the code. The UV radiation from young massive stars able to ionize the nearby gas was computed for the different ionization species of hydrogen and helium, representing not only an additional (local) source of ionization but also of heating. Feedback processes like the return of mass to the interstellar medium, supernova heating and chemical enrichment were all taken into account. The return of mass to the interstellar medium was computed by assuming that the initial mass function (IMF) of stars with metallicities $[Z/H] < -2.0$ is of the form $\zeta(m) \propto m^{-2}$, while more metal-rich stars are formed with a Salpeter IMF, e.g., $\zeta(m) \propto m^{-2.35}$. Stars in the mass range $40-80~M_\odot$ leave a $10~M_\odot$ black hole as a remnant, whereas a $1.4~M_\odot$ neutron star is left if progenitors are in the mass range $9-40~M_\odot$, or a white dwarf remnant otherwise. The mass lost by the “stellar particle” is redistributed according to the SPH kernel among the neighboring gas particles, and velocities are adjusted in order to conserve the total momentum in the cell. Moreover, the removed gas (except that ejected by supernovae, as discussed below) keeps its original chemical composition, contributing to the chemical budget of the medium. In fact, AGB stars, planetary nebulae and WR stars enrich the medium in He, C and N, but these contributions are not taken into account in the present version of the code. Supernova explosions are supposed to inject both thermal and mechanical energy into the interstellar medium.
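The remnant prescription above can be summarized in a small helper (a sketch using the mass ranges from the text; the 0.6 $M_\odot$ white dwarf mass is our assumed typical value, not stated in the text):

```python
def remnant_mass(m_star):
    """Remnant mass (Msun) left by a star of initial mass m_star (Msun),
    following the prescription described in the text."""
    if 40.0 <= m_star <= 80.0:
        return 10.0   # black hole remnant
    if 9.0 <= m_star < 40.0:
        return 1.4    # neutron star remnant
    return 0.6        # white dwarf (assumed typical mass, our choice)
```

The mass returned to the interstellar medium by a "stellar particle" is then simply the initial mass minus the remnant mass, redistributed over the SPH neighbors.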
Past investigations have shown that, when heated, the nearby gas cools quite rapidly and the injected thermal energy is simply radiated away. However, when energy is injected in the form of kinetic energy, the star formation process is affected (Navarro & White 1993). In the present simulations, supernova explosions were supposed to inject essentially mechanical energy into the interstellar medium through a “piston” mechanism, represented by the momentum carried by the ejected stellar envelope. The distance $D_p$ covered by such a “piston”, ejected with a typical velocity $V_{ej} \sim 3000~km.s^{-1}$ in a time interval $\Delta t$, is $D_p = V_{ej}\Delta t$. Under this assumption, a “stellar cell” is defined including all gas particles inside a spherical volume of radius $D_p$ that will be affected by the “piston”. The released energy is redistributed non-uniformly among these particles: the closest gas particles are expected to receive more energy than the farthest ones. This was achieved by assigning to each gas particle $j$ a distance-dependent weight $w_j(r) = A/r_{ij}^n$, where $r_{ij}$ is the distance from the “$i$-stellar” particle (host of the SN explosion) to the $j$-gas particle inside the cell. The normalization constant is defined by $A = 1/\sum_j r_{ij}^{-n}$, so that the weights sum to unity. Different values of the exponent ($n$ = 2, 4) were tested. Supernovae not only contribute to the energy budget of the interstellar medium but also inject heavy metals, leading to a progressive chemical enrichment of galaxies as well as of their nearby environment. Such a progressive enrichment was treated by an adequate algorithm able to simulate the turbulent diffusion of metals through the medium. In the code, BHs are represented by collisionless particles that can grow in mass according to specific rules that mimic accretion or merging with other BHs. Possible recoils due to a merging event and the consequent emission of gravitational waves were neglected.
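The weighting scheme can be sketched as follows (our illustration; the particle distances in the example are hypothetical):

```python
def sn_energy_weights(distances, n=2):
    """Normalized weights w_j = A / r_ij**n, with A = 1 / sum_j r_ij**(-n),
    so the weights sum to unity and closer particles receive more energy."""
    inv = [r ** -n for r in distances]
    norm = sum(inv)
    return [w / norm for w in inv]

# Three gas particles at 1, 2 and 4 (arbitrary units) from the SN host:
w = sn_energy_weights([1.0, 2.0, 4.0], n=2)
# The weights sum to 1, and the closest particle gets the largest share.
```

With $n = 4$ instead of $n = 2$ the energy deposition is even more concentrated on the nearest neighbors, which is the effect the exponent test in the text explores.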
BHs are assumed to merge if they come within a distance comparable to or less than the mean inter-particle separation. Seeds are supposed to have been formed from the first (very massive) stars and are assumed to have a mass of about 100 $M_\odot$. An auxiliary algorithm finds potential minima, where seeds are inserted in the redshift interval $15-20$. A fraction of the energy released during the accretion process is re-injected into the medium along two opposite “jets” aligned with the rotation axis of the disk, modeled by cones with an aperture angle of $20^\circ$ and extending up to distances of about 300 kpc. The adopted expression for the power of the jets is essentially that given by the simulations of Koide et al. (2002). Properties of simulated SMBHs ----------------------------- One of the main aspects of the growth of seeds by gas accretion is the fact that masses do not increase continuously. In the hierarchical model, galaxies are assembled in the filaments of the cosmic web or at the junctions of filaments where clusters are formed. In filaments, galaxies may capture fresh gas that will feed their central black holes. Gas may also come from merging events. However, from time to time the gas in the central region is exhausted and the growth stops until a new episode of gas capture is able to replenish the vicinity of the black hole. In fact, the amount of gas in the central regions of the host galaxy is controlled by the capture processes and by internal processes like star formation and feedback from supernovae and from the black hole itself. In figure 1 the individual growth of some simulated black holes is shown. It is possible to verify that there are periods during which the black hole mass remains constant (the case of a “dormant” black hole) and periods of gas accretion during which the black hole mass increases. In such a phase, the galaxy has an active nucleus, being associated with an AGN or a QSO.
Notice that only at $z \leq 4$ do some black holes with masses greater than $10^7~M_\odot$ appear. During the activity phase, the associated accretion disk is quite luminous, and the luminosity depends essentially on the accretion rate. For a given redshift it is possible to compute the total luminosity due to all active black holes and, consequently, to estimate the comoving luminosity density. Such a luminosity density can be compared with observational data, permitting a test of the robustness of the simulations. Figure 2 compares the luminosity density evolution derived from the simulations with the data by Hopkins et al. (2007). The agreement is quite satisfactory, suggesting that the main physical aspects of the growth process are reasonably taken into account in the simulations. Another successful comparison concerns the relation between the present SMBH mass and the central velocity dispersion of galaxies projected along the line of sight. This is done in figure 3. Black squares represent the masses of SMBHs at $z = 0$ derived from the simulations, while red squares represent data taken from the literature. There is a good agreement between simulated and observed data, but some objects seem to have a higher black hole mass than the stellar velocity dispersion of their host galaxies would imply. In particular, this is the case for NGC 5252 and Cygnus A, as can be seen in the figure. In our proposed scenario, these objects have not evolved intermittently but rather on a very short timescale in the early evolutionary phases of the universe, being presently the relics of such an active past. If some SMBHs seem to have masses above those expected from the $M - \sigma$ relation, as figure 3 suggests, there are other arguments indicating that these objects indeed followed a different evolutionary path. At a given redshift, the simulations permit computing the mass distribution of SMBHs.
This is shown in figure 4, which indicates that no SMBHs with masses above $10^7 ~ M_\odot$ are present at $z = 5$, in agreement with the evolution of the individual black holes shown in figure 1. This means that the evolutionary path in which the BH mass grows as the host galaxy is assembled, which is probably the origin of the $M - \sigma$ relation, is unable to explain the existence of very massive BHs in the early universe or the SMBHs present today in bright galaxies like NGC 5252 or Cygnus A. In the next sections an alternative evolutionary path will be examined. The early formation of SMBHs ============================ As shown in the previous section, the coeval evolution of seeds and host galaxies is not able to explain the existence of bright QSOs at high redshift and the fact that in the local universe some objects have masses higher than expected from the simulated $M - \sigma$ relation. Could a single accretion disk form a SMBH on a timescale of about 1 Gyr? The answer to this question involves several related problems. The disk must be quite massive in order to provide enough gas to form a $10^9 ~M_\odot$ black hole, and the angular momentum transfer must be very efficient in order to maintain the high accretion rate necessary to provide the observed luminosities as well as a short timescale for the growth of the seed. In fact, numerical simulations suggest that after the merger of two galaxies, a considerable amount of gas settles into the central region of the resulting object. The gas loses angular momentum on a timescale comparable to the dynamical timescale (Mihos & Hernquist 1996; Barnes 2002), forming circumnuclear self-gravitating disks with masses in the range $10^6 - 10^9 ~ M_\odot$ and dimensions of about 100 - 500 pc.
Massive accretion disks are, in general, self-gravitating in their early evolutionary phases, a situation that modifies the usual dynamics of disks controlled only by the gravitational force of the central body. In fact, models of non-steady self-gravitating accretion disks were computed by Montesinos & de Freitas Pacheco (2011, hereafter MP11) satisfying the aforementioned requirements or, in other words, they are luminous enough and they permit the growth of seeds on a short timescale, consistent with observations of bright QSOs at high $z$. Some aspects of the work by those authors will be briefly reviewed below, followed by the presentation of new results. As mentioned before, the very early formation of a massive disk in the central region of a galaxy requires the presence of a large amount of gas. In fact, infrared sky surveys have discovered huge amounts of molecular gas (CO) in intermediate- and high-$z$ QSOs (Downes et al. 1999; Bertoldi et al. 2003; Weiss et al. 2007). In particular, the detection of CO emission in the quasar J1148+5251 at $z = 6.42$ permitted an estimate of the molecular hydrogen mass present in the central region of the host galaxy, which amounts to $M(H_2) \sim 10^{10}~M_\odot$ (Walter et al. 2009). Moreover, at least in the case of the quasar J1319+0950 ($z = 6.13$), there is robust evidence that the gas is rotating (Shao et al. 2017), suggesting the presence of a gaseous disk. More recently, observations of J1342+0928 ($z = 7.54$) indicate important amounts of gas and dust, revealed by the infrared continuum and by the \[CII\] line emission (Venemans et al. 2017). All these observations support the idea that some massive galaxies in their early evolutionary phases had large amounts of gas in their central regions, which could have formed the massive accretion disks required by our model.
If observations seem to support the scenario in which massive disks fed the seeds of SMBHs, the other question concerns the accretion timescale defining the growth of those seeds. The accretion rate is fixed by the mechanism of angular momentum transfer and depends on the gas viscosity mechanism. Presently there is no adequate physical theory able to describe the gas viscosity in the presence of turbulent flows or of magnetic fields. The angular momentum transfer is generally described by the formalism introduced almost forty-five years ago by Shakura & Sunyaev (1973), in which the viscosity is due to subsonic turbulence and is parametrized by the relation $\eta = {\alpha}Hc_s$, where $\eta$ is the viscosity, $\alpha \leq 1$ is a free parameter of the theory, $c_s$ is the sound velocity and $H$ is the vertical scale height of the disk. $H$ is supposed to be of the same order as the typical (isotropic) turbulence scale. However, disks based on such a formalism are, in general, thermally unstable, as demonstrated long ago by Piran (1978). It is well known that self-gravitating disks may also be unstable but, in some cases, such an instability can be a source of turbulence in the flow (Duschl & Britsch 2006). Simulations of the gas inflow in the central regions of galaxies, induced by the gravitational potential either of the stellar nucleus or of the SMBH, reveal the appearance of highly supersonic turbulence with velocities of the order of the virial value (Regan & Haehnelt 2009; Levine et al. 2008; Wise, Turk & Abel 2008). No fragmentation is observed in such a gas despite its being isothermal and gravitationally unstable. This behavior can be explained if an efficient angular momentum transfer suppresses fragmentation. On the contrary, if the angular momentum transfer is inefficient, the turbulence decays and triggers global instabilities which regenerate a turbulent flow.
Thus, one could expect that the flow will be self-regulated by such a mechanism. In this case, the flow must be characterized by a critical Reynolds number ${\cal R}$, determined by the viscosity below which the flow becomes unstable (de Freitas Pacheco & Steiner 1976). This critical viscosity is given by $$\label{visco} \eta = \frac{2\pi r V_{\phi}}{{\cal R}}$$ where $r$ is the radial coordinate and $V_{\phi}=r\Omega$ is the azimuthal velocity of the flow. Another difference between “$\alpha$”-disks and “critical viscosity” models is the local balance of energy, which is fixed by equating the dissipation rate of turbulent energy to the radiated and advected energy rates or, in other words, $$\label{balance} T_{r\phi}\frac{d\Omega}{d\lg r} = \nabla\cdot F_{rad} + \varepsilon_{adv}$$ In the above equation, the left side represents the rate of turbulent energy dissipated per unit volume, the first term on the right side gives the rate per unit volume of radiated energy and, finally, the last term represents the rate per unit volume of advected energy. In eq. \[balance\], $T_{r\phi}$ is the $(r,\phi)$ component of the stress tensor, $\Omega$ is the angular flow velocity, $F_{rad}$ is the radiative flux and $\varepsilon_{adv}$ is the rate per unit volume of advected energy. In the “$\alpha$”-disk model the considered stress component is given by $$T_{r\phi}=\alpha\rho Hc_s\left(\frac{d\Omega}{d\lg r}\right)$$ while in the “critical viscosity” model the stress is given by $$T_{r\phi}=\frac{2\pi}{{\cal R}}\rho r^2\Omega\left(\frac{d\Omega}{d\lg r}\right)$$ The difference between the heating rates in the two models implies that the temperature distribution along the disk is not the same and that the expected radiated spectrum of each disk model is also different, as will be discussed later. In non-steady self-gravitating disk models, the dynamics of the disk evolves because the mass distribution changes with time, as does the mass of the central black hole.
Near the BH the velocity is approximately Keplerian while, beyond the transition region where self-gravitation dominates, the rotational velocity decreases with distance more slowly than $1/\sqrt{r}$. Along the vertical axis, the disk is supposed to be in hydrostatic equilibrium. The vertical scale height varies as the disk evolves. At early phases, the disk is geometrically thick in the central regions due to radiation pressure effects. At late phases, the vertical scale height increases with distance, approaching the behavior displayed by canonical non-self-gravitating models. Additional details, including a description of the numerical code used to solve the hydrodynamic equations, can be found in MP11, as mentioned previously. Figure 5, adapted from MP11, shows some examples of models characterized by different values of the seed mass (100 or 1500 $M_\odot$) and of the critical Reynolds number (500, 1000 or 1500). Notice that an initial black hole of 100 $M_\odot$ can grow up to $3 \times 10^8 ~M_\odot$ on a timescale of only $10^8$ yr if the critical Reynolds number is 500 (black curve). Inspection of figure 5 shows that for the same initial mass, if the Reynolds number is increased (red and green curves), the rate of growth decreases. This can be explained by the fact that the accretion rate is inversely proportional to the viscous timescale, namely, $t_{vis}^{-1} \approx \eta/r^2 \approx \Omega/{\cal R}$. This is an immediate consequence of the fact that, as the Reynolds number increases, it becomes more difficult to generate turbulence. Therefore the angular momentum transfer is less efficient, reducing the accretion rate. It is important to emphasize that while in the inner parts of the disk the gas flows inwards, the outer parts expand as a consequence of the transfer of angular momentum from the inside to the outside. Hence, only about 50% of the initial mass of the disk is in fact accreted by the black hole.
In the region where the sign of the radial velocity changes (the “stagnation” point), a torus-like structure is formed, supporting the scenario of the so-called “unified model” of AGNs. The models by MP11 indicate that a substantial fraction of the expanding gas remains neutral, with a temperature in the range 100 - 2000 K most of the time. In the case of our own galaxy, such a behavior could be related to the molecular “ring” of 2 pc radius observed around Sgr $A^*$ (Gusten et al. 1987). The physical conditions prevailing in the outskirts of the disk are favorable to star formation and could explain the presence of massive early-type stars located in two rotating thin disks detected in the central region of the Milky Way (Genzel et al. 2003; Paumard et al. 2006). Further tests of the model -------------------------- At the very beginning of the disk evolution, the accretion rate (and the luminosity) increases very rapidly and then remains more or less constant during most of the growth process. In the final phases, the accretion rate decays very fast, once half of the disk mass has been captured by the central black hole. Such a behavior can be seen in figures 2 and 3 of the paper by MP11. Depending on the initial disk mass and on the critical Reynolds number, the activity phase corresponding to the luminosity maximum lasts from about $2\times 10^7$ up to $3\times 10^8$ years. Despite the fact that the accretion rate (and the luminosity) varies very little during the active phase, the spectral distribution of the radiation emitted by the disk evolves. Such a spectral evolution is due to time variations of the radial optical depth profile as well as of the radial temperature distribution, as mentioned earlier. There is a continuous shift of the emission maximum toward longer wavelengths, a consequence of the decreasing average disk temperature as a function of time.
In general, for wavelengths $\lambda \geq 0.15~\mu m$ the spectral intensity can be well represented by a power law, that is, $I_{\lambda} \propto \lambda^{-\alpha}$, where the power index is in the range $0.9 < \alpha < 1.3$, in agreement with values derived from most quasar spectra. The modeling of the spectral emission of the disk permits an estimate of the bolometric correction. Usually, the luminosity at a given wavelength is derived from observations of monochromatic fluxes and luminosity distances, which depend on the redshift. The bolometric luminosity can then be computed by adopting an adequate correction. Nemmen & Brotherton (2010) have estimated the bolometric correction for luminosities at $\lambda = 0.30~\mu m$ based on the models by Hubeny et al. (2001). The grid of models by the latter authors assigns to each annulus of the disk an effective temperature and gravity, which are used to compute the emergent spectrum of an equivalent stellar atmosphere defined by those parameters. The sum of the radiation from all annuli gives the resulting spectrum of the disk. However, the effective temperature formula adopted by those authors is adequate for a steady disk whose dynamics is dominated by the central black hole. In the case of non-steady self-gravitating disks the situation is rather different, because both the local gravity and the effective temperature vary with time. Fortunately, the bolometric corrections for the monochromatic luminosities at $\lambda = 0.30~\mu m$ or at $\lambda = 3.6~\mu m$ do not vary much during the active phase, and a suitable average correction can be defined. The adopted procedure to estimate such a bolometric correction requires an adequate choice of the representative parameters of the disk, since the seeds must be able to grow on timescales of less than 1 Gyr and form SMBHs with masses larger than $5\times 10^8 ~ M_\odot$.
Figure 6 shows the surface “M-age-${\cal R}$” derived from a grid of models where the seed mass was fixed at 100 $M_\odot$. It is worth mentioning that in such a plot the parameter “age” means the timescale required for the seed to accrete 50% of the initial disk mass, the same definition adopted by MP11. Inspection of figure 6 indicates that Reynolds numbers in the range $1000 < {\cal R} < 2500$ are required in order to satisfy those constraints. Then, bolometric corrections at $\lambda$ = 0.30 $\mu m$ and at $\lambda$ = 3.6 $\mu m$ were computed for a series of models characterized by ${\cal R}$ = 2200, a seed mass equal to 100 $M_\odot$ and different initial disk masses, corresponding to about twice the final black hole masses. After averaging the results from the different models, the corrections are simply given by $$\log L_{bol} = \log \lambda L_{\lambda} + 0.83 \;\;\;\; {\rm for}\;\; \lambda = 0.30~\mu m$$ and $$\log L_{bol} = \log \lambda L_{\lambda} + 0.92 \;\;\;\; {\rm for}\;\; \lambda = 3.6~\mu m$$ where the luminosities are given in $erg.s^{-1}$. In figure 7 the bolometric correction for $\lambda$ = 0.30 $\mu m$ derived from these models is compared with the correction adopted by Nemmen & Brotherton (2010), based on steady, non-self-gravitating disk models. It should be emphasized that the present bolometric luminosities derived either from UV or from infrared monochromatic luminosities are in very good agreement when the corrections above are applied. The present disk model can also be tested by comparing theoretical predictions and data in the diagram of bolometric luminosity versus black hole mass. Data on high-redshift QSOs ($z \geq 6.0$), including masses, monochromatic luminosities at $\lambda$ = 0.30 or 3.6 $\mu m$ and redshifts, compiled by Trakhtenbrot et al. (2017), were used in the calculations.
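The averaged corrections above can be applied directly; a minimal helper (our wrapper around the two relations quoted in the text; luminosities in erg/s):

```python
import math

# Averaged bolometric corrections from the text, keyed by band (microns).
BOL_CORR = {0.30: 0.83, 3.6: 0.92}

def bolometric_luminosity(lambda_l_lambda, band_um):
    """log L_bol = log(lambda * L_lambda) + correction; inputs/outputs in erg/s."""
    return 10.0 ** (math.log10(lambda_l_lambda) + BOL_CORR[band_um])
```

Since the corrections are additive in the logarithm, they amount to multiplying the monochromatic luminosity $\lambda L_\lambda$ by a constant factor of $10^{0.83} \approx 6.8$ at 0.30 $\mu m$ and $10^{0.92} \approx 8.3$ at 3.6 $\mu m$.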
Black hole masses were estimated from the width of the MgII lines, and monochromatic UV luminosities were derived from the best-fit model of the MgII emission line complex. Then, using the derived bolometric corrections, the bolometric luminosity of each object was estimated and plotted as a function of the black hole mass in figure 8. The “Mass-Luminosity” relation derived from our models can be adequately represented by the fit $$\label{luminosity} \frac{L_{bol}}{L_\odot} = 1.41\left(\frac{500}{{\cal R}}\right)\left(\frac{M_{seed}}{100M_\odot}\right)^{0.52}\left(\frac{M}{M_\odot}\right)^{1.5}$$ which depends essentially on the seed mass and on the critical Reynolds number. Such a relation for $M_{seed} = 100~M_\odot$ and ${\cal R}$ = 2200 is shown in figure 8 as a red line. Figure 8 also shows the expected relation for the Eddington-limited luminosity (see eq. \[eddington\]), frequently used to estimate the mass of the black hole. Notice that the theoretical “M-L” relation approaches the Eddington limit for black hole masses greater than $10^{10}~M_\odot$. It is worth mentioning that identifying the bolometric luminosity with the Eddington limit leads to an underestimate of the black hole mass, as can be seen in the plot, where the majority of the data points lie below the expected Eddington limit line. On the other hand, the theoretical “L-M” relation displayed in figure 8 shows clearly that our disk models radiate below the Eddington limit. It should be emphasized that in the case of accretion disks, the balance between gravity and radiation pressure along the vertical axis must be considered locally.
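The fitted relation of eq. \[luminosity\] is straightforward to evaluate; a sketch (our helper; masses in $M_\odot$, luminosity in $L_\odot$, with defaults matching the red line of figure 8):

```python
def lbol_fit(m_bh, reynolds=2200.0, m_seed=100.0):
    """Fitted Mass-Luminosity relation:
    L_bol/Lsun = 1.41 (500/R) (M_seed/100 Msun)^0.52 (M/Msun)^1.5"""
    return 1.41 * (500.0 / reynolds) * (m_seed / 100.0) ** 0.52 * m_bh ** 1.5
```

The $M^{1.5}$ scaling is steeper than the linear $M$ dependence of the Eddington luminosity, which is why the theoretical relation approaches the Eddington limit only at the highest masses.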
The disk is locally stable if the radiative flux along the vertical direction is not greater than a critical flux limit given by $$\label{critical} F_{rad} = \frac{\sqrt{3}}{4\pi}\left(\frac{m_Hc}{\sigma_T}\right)\left(\frac{\tau_s}{\tau_{ff}}\right)^{1/2}g_z$$ In the above equation, $\tau_s$ is the optical depth due to electron scattering, $\tau_{ff}$ is the optical depth due to free-free absorption and $g_z$ is the local vertical gravitational acceleration. The condition expressed by eq. \[critical\] is valid in the inner regions of the disk, where electron scattering dominates over free-free absorption and where radiation pressure effects are more important. When the vertical radiation flux is higher than the critical value, the hydrostatic equilibrium is destroyed and outflows can be generated. Three-dimensional radiation magneto-hydrodynamical simulations were performed by Jiang et al. (2017), who studied the evolution of an accretion disk with a torus centered on a $5\times 10^8~M_\odot$ black hole. The radiation pressure in the internal regions of the disk may reach values up to $10^6$ times the gas pressure under certain conditions, producing outflows. In those simulations, the angular momentum transfer is controlled by magnetohydrodynamic turbulence, which is not the case in our models. The present accretion disk models can also be tested in the diagram of black hole mass versus age (figure 9). This plot is simply the projection of the surface displayed in figure 6 onto the “M-age” plane. Since the age parameter defined above is not directly accessible from observations, the age of the universe derived from the observed redshift of each QSO was used to plot the objects listed by Trakhtenbrot et al. (2017). The age of the universe represents a robust upper limit to the age parameter of the model.
Theoretical predictions shown in figure 9 (solid lines) were computed for the same value of the seed mass ($M_{seed}$ = 150 $M_\odot$) and for two different critical Reynolds numbers: ${\cal R}$ = 1800 and ${\cal R}$ = 2500. These two values enclose in the plot most of the observed high-$z$ QSOs, strongly constraining this fundamental parameter of the model. Conclusions =========== Present astronomical data are not in contradiction with a scenario in which two different evolutionary paths exist for the formation of SMBHs from small-mass seeds. In the first evolutionary path, seeds having masses around 100 $M_\odot$ grow intermittently, following the gradual assembly of the host galaxy according to the hierarchical picture. In this case, the coeval evolution of the host galaxy and the seed must be investigated by cosmological simulations. This procedure is justified by the complexity of the physical mechanisms involved in the growth process. As we have previously seen, these numerical experiments are able to reproduce the observed luminosity density of QSOs and the observed correlations between the black hole mass at $z = 0$ and the properties of the host galaxy, such as the stellar luminosity or the central projected stellar velocity dispersion. Despite these successful results, these simulations are unable to form SMBHs with masses around $10^9~M_\odot$ at high redshift, unless the masses of the seeds are dramatically increased up to $10^5 - 10^6 ~ M_\odot$. Although this could be a possibility, and despite some studies along these lines, such an alternative seems to be unrealistic. The existence of bright QSOs at $z \approx 6 - 7$ and the difficulty for cosmological simulations to form these objects point toward a new direction, that is, the possibility of a very fast growth of seeds fed by massive accretion disks. This picture is supported by the observations of large amounts of gas and dust in QSOs at high $z$, as discussed before.
Models of non-steady self-gravitating disks in which the angular momentum transfer is controlled by turbulent viscosity were developed by Montesinos & de Freitas Pacheco (2011). These models have demonstrated that seeds can grow on timescales of the order of 1 Gyr or even less, and are able to explain the main features of QSOs observed at high-z. Further investigations of these “critical-viscosity” disks permitted an estimation of the bolometric correction that should be applied to monochromatic luminosities measured at 0.30 $\mu m$ and 3.6 $\mu m$. These corrections permitted a comparison of existing data with theoretical predictions in the “Luminosity-Mass” diagram. Such a plot strongly suggests that accretion disks radiate below the so-called Eddington limit, which means that black hole masses derived from such a limit are underestimated. Another useful diagram permitting the comparison of model predictions with data is the “Mass-age” plot. Here it is necessary to recall the remark made before: the age derived from the redshift is the age of the universe at that moment, representing only a robust upper limit to the disk age. Nevertheless, despite such limitations, both diagrams make it possible to constrain the two important parameters of the model, namely the mass of the seeds and the critical Reynolds number. The former is probably in the range 100-150 $M_\odot$, while the latter should be in the interval $1800 < {\cal R} < 2500$. Finally, it is worth mentioning that some SMBHs in the local universe ($z = 0$) have masses above those expected from simulations. This is the case of NGC 5252 and Cygnus A, as already mentioned, but may also be the case of NGC 3115 and probably of NGC 4594. This last object is more uncertain, since its estimated mass is only 4.4 times greater than that expected from the simulated “M-$\sigma$” relation.
These objects are probably the remnants of a rapid growth that occurred in the early evolutionary phases of the universe, and not the consequence of a coeval evolution involving the seed and the host galaxy.

Barnes J.E., 2002, Month.Not.Roy.Ast.Soc., 333, 481
Bertoldi F., Cox P., Neri R. et al., 2003, Astron.&Astrophys., 409, L47
Bertschinger E., arXiv:astro-ph/9506070
Burkert A. and Silk J., 2001, Astrophys.J., 554, L151
de Freitas Pacheco J.A. and Steiner J.E., 1976, Astrophys.Sp.Sci., 39, 487
Durier F. and de Freitas Pacheco J.A., 2011, Int.J.Mod.Phys., E20, 44
Downes D., Neri R., Wiklind T., Wilner D.J. and Shaver P.A., 1999, Astrophys.J., 513, L1
Duschel W.J. and Britsch M., 2006, Astrophys.J., 653, L92
Eisenstein D.J. and Loeb A., 1995, Astrophys.J., 443, 11
Ferrarese L. and Merritt D., 2000, Astrophys.J., 539, L9
Filloux Ch., Durier F., de Freitas Pacheco J.A. and Silk J., 2010, Int.J.Mod.Phys., D19, 1233
Filloux Ch., de Freitas Pacheco J.A., Durier F. and de Araújo J.N.C., 2011, Int.J.Mod.Phys., D20, 2399
Flammang R.A., 1982, Month.Not.Roy.Ast.Soc., 199, 833
Galli D. and Palla F., 1998, Astron.&Astrophys., 335, 403
Gebhardt K., Bender R., Bower G. et al., 2000, Astrophys.J., 539, L13
Genzel R., Schodel R., Ott T. et al., 2003, Astrophys.J., 594, 813
Graham A.W., 2007, Month.Not.Roy.Ast.Soc., 379, 711
Gusten R., Genzel R., Wright M.C.H. et al., 1987, Astrophys.J., 318, 124
Haehnelt M.G. and Rees M.J., 1993, Month.Not.Roy.Ast.Soc., 263, 168
Haring N. and Rix H.-W., 2004, Astrophys.J., 604, L89
Hopkins P.F., Richards G.T. and Hernquist L., 2007, Astrophys.J., 654, 731
Hubeny I., Blaes O., Krolik J.H. and Agol E., 2001, Astrophys.J., 559, 680
Inayoshi K., Haiman Z. and Ostriker J.P., 2016, Month.Not.Roy.Ast.Soc., 459, 3738
Jiang Y.-F., Stone J. and Davis S.W., 2017, arXiv:1709.02845
Kaplan S.A., 1954, Dokl.Akad.Nauk.SSSR, 94, 33
Kaplan S.A.
and Pikel’ner S.B., 1970, in The Interstellar Medium, Cambridge, Harvard University Press
Katz N., Weinberg D.H. and Hernquist L., 1996, Astrophys.J.Supp., 105, 19
Koide S., Shibata K., Kudoh T. and Meier D.L., 2002, Science, 295, 1688
Kormendy J. and Gebhardt K., 2001, in AIP Conf.Proc. 586, 20th Texas Symposium on Relativistic Astrophysics, ed. J.C. Wheeler & H. Martel, NY, 363
Kormendy J. and Richstone D., 1995, Ann.Rev.Astron.&Astrophys., 33, 581
Koushiappas S.M., Bullock J.S. and Dekel A., 2004, Month.Not.Roy.Ast.Soc., 354, 292
Levine R., Gnedin N.Y., Hamilton A.J.S. and Kravtsov A.V., 2008, Astrophys.J., 678, 154
Magorrian J. et al., 1998, Astron.J., 115, 2285
Maraschi L., Reina C. and Treves A., 1974, Astron.&Astrophys., 35, 389
Marconi A. and Hunt L.K., 2003, Astrophys.J., 589, L21
Merritt D. and Ferrarese L., 2001, Month.Not.Roy.Ast.Soc., 320, L30
Mihos C. and Hernquist L., 1996, Astrophys.J., 464, 641
Milosavljevic M., Couch S.M. and Bromm V., 2009, Astrophys.J., 696, L146
Montesinos M. and de Freitas Pacheco J.A. (MF11), 2011, Astron.&Astrophys., 526, A146
Navarro J.F. and White S.D.M., 1993, Month.Not.Roy.Ast.Soc., 265, 271
Nemmen R.S. and Brotherton M.S., 2010, Month.Not.Roy.Ast.Soc., 408, 1598
Nobili L., Turolla R. and Zampieri L., 1991, Astrophys.J., 383, 250
Pacucci F., Volonteri M. and Ferrara A., 2015, Month.Not.Roy.Ast.Soc., 452, 1922
Paumard T., Genzel R., Martins F. et al., 2006, Jour.Phys.Conf.Ser., 54, 199
Peebles P.J.E., 1993, in Principles of Physical Cosmology, Princeton University Press, p. 165
Peirani S. and de Freitas Pacheco J.A., 2008, Phys.Rev.D, 77, 064023
Piran T., 1978, Astrophys.J., 221, 652
Regan J.A. and Haehnelt M.G., 2009, Month.Not.Roy.Ast.Soc., 396, 343
Richstone D., Ajhar E.A., Bender R. et al., 1998, Nature, 395, 14
Salpeter E.E., 1964, Astrophys.J., 140, 79
Sazonov S.Yu., Ostriker J.P., Ciotti L. and Sunyaev R.A., 2005, Month.Not.Roy.Ast.Soc., 358, 168
Schmidt M., 1963, Nature, 197, 1040
Shao Y., Wang R., Jones G.C.
et al., 2017, Astrophys.J., 845, art. 138, 7pp
Shakura N.I. and Sunyaev R.A., 1973, Astron.&Astrophys., 24, 337
Shapiro S.L., 2004, Astrophys.J., 613, 1213
Shvartsman V.F., 1971, Sov.Astron. (AJ), 15, 377
Silk J. and Rees M.J., 1998, Astron.&Astrophys., 331, L1
Small T.A. and Blandford R.D., 1992, Month.Not.Roy.Ast.Soc., 259, 725
Springel V., 2005, Month.Not.Roy.Ast.Soc., 364, 1105
Soltan A., 1982, Month.Not.Roy.Ast.Soc., 200, 115
Sutherland R.S. and Dopita M.A., 1993, Astrophys.J.Supp., 88, 253
Trakhtenbrot B., Volonteri M. and Natarajan P., 2017, Astrophys.J., 836, L1
Tremaine S., Gebhardt K., Bender R. et al., 2002, Astrophys.J., 574, 740
Venemans B.P., Walter F., Decarli R. et al., 2017, arXiv:1712.01886
Walter F., Riechers D., Cox P. et al., 2009, Nature, 457, 699
Weiss A., Downes D., Walter F. and Henkel C., 2007, ASP Conference Series, 375, 25
Wu X.-B., Wang F., Fan X. et al., 2015, Nature, 518, 512
--- bibliography: - 'References.bib' --- \ Danijel Grahovac$^1$[^1], Nikolai N. Leonenko$^2$[^2]\ \ **Abstract:** Multifractal analysis of stochastic processes deals with the fine scale properties of the sample paths and seeks a global scaling property that would enable extracting the so-called spectrum of singularities. In this paper we establish bounds on the support of the spectrum of singularities. To do this, we prove a theorem that complements the famous Kolmogorov continuity criterion. The nature of these bounds helps us identify the quantities truly responsible for the support of the spectrum. We then draw several conclusions from this. First, specifying global scaling in terms of moments is incomplete due to possibly infinite moments, both of positive and negative order. For the case of ergodic self-similar processes, we show that negative order moments and their divergence do not affect the spectrum. On the other hand, infinite positive order moments make the spectrum nontrivial. In particular, we show that a self-similar stationary increments process with a nontrivial spectrum must be heavy-tailed. This shows that for determining the spectrum it is crucial to capture the divergence of moments. We show that the partition function is capable of doing this, and we also propose a robust variant of this method for negative order moments. Introduction ============ The notion of multifractality first appeared in the setting of measures. The importance of scaling relations was first stressed in the work of Mandelbrot in the context of turbulence modeling ([@mandelbrot1972; @mandelbrot1974]). Later, the notion was extended to functions and to the study of their fine scale properties (see [@muzy1993multifractal; @jaffard1997multifractal1; @jaffard1996old]). In this setting, multifractal analysis deals with the local scaling properties of functions, characterized by the Hausdorff dimension of the sets of points having the same Hölder exponent.
The Hausdorff dimension of these sets, as a function of the Hölder exponent, yields the so-called spectrum of singularities (or multifractal spectrum). The function is called multifractal if its spectrum is nontrivial, in the sense that it is not a one-point set. However, from a practical point of view, it is impossible to numerically determine the spectrum directly from the definition. Frisch and Parisi ([@frisch1985fully]) were the first to propose the idea of determining the spectrum from certain average quantities, as a numerically attainable alternative. In order to relate this global scaling property to the local one based on the Hölder exponents, one needs a “multifractal formalism” to hold. This is not always the case, and there has been extensive research on this topic (see [@jaffard1997multifractal1; @riedi1995improved; @jaffard1997multifractal2; @jaffard2000frisch; @riedi1999multifractal]). To overcome the problem, one can go the other way around and seek different definitions of global and local scaling properties that would always be related by a certain type of multifractal formalism (see [@jaffard2006wavelet] for an overview in the context of measures and functions). Many authors argue that wavelets provide the best way to specify the multifractal formalism, both theoretically and numerically (see e.g. [@jaffard2006wavelet; @bacry1993singularity]). For stochastic processes, the local scaling properties can be generalized immediately by simply applying the definition for a function to the sample paths. As a global property, the extension is not so straightforward. In [@MFC1997MMAR], the authors present a theory of multifractal stochastic processes and define the scaling property in terms of the process moments. The underlying idea is to define a scaling property more general than the well-known self-similarity. However, this can lead to a discrepancy.
For example, $\alpha$-stable Lévy processes with $0<\alpha<2$ are known to be self-similar with index $1/\alpha$. On the other hand, it follows from [@jaffard1999] that the sample paths of these processes exhibit multifractal features in the sense of a nontrivial spectrum. The goal of this paper is to contribute to the multifractal theory of stochastic processes by exhibiting limitations of the existing definitions and proposing methods to overcome them. The issue of infinite moments has so far been discussed mostly as a problem of the estimation methods for determining the spectrum, and has been a major criticism of the partition function method. To the best of our knowledge, our results are the first that link heavy tails of self-similar processes with their path irregularities in this sense. It is an intriguing fact that in this case, naive estimation of infinite moments will yield the correct spectrum. The bounds on the support of the spectrum that we derive can be used to easily detect a trivial spectrum. We do this for the class of Hermite processes. Although these bounds are very general, we later restrict our attention to stationary increments processes. We consider only $\mathbb{R}$-valued stochastic processes, and our treatment is intended to be probabilistic. The paper is organized as follows. In the next section we formally state different definitions of multifractal stochastic processes and recall some implications between them. We also discuss the multifractal formalism and different estimation methods. In Section \[sec3\] we derive general bounds that determine the support of the multifractal spectrum and relate these bounds to the moment scaling properties. We show implications of these results for self-similar stationary increments processes. Section \[sec4\] provides examples of stochastic processes from the perspective of the different definitions. We show how the results of Section \[sec3\] apply to each example.
In Section \[sec5\] we propose a simple modification of the partition function method that overcomes divergences of negative order moments. We illustrate the advantages of this modification on simulated data. The Appendix contains some general facts about the processes considered in Section \[sec4\]. Definitions of the multifractal stochastic processes {#sec2} ==================================================== In this section we provide an overview of different scaling relations that are usually referred to as multifractality. Examples of processes that satisfy these properties are given in Section \[sec4\]. All the processes considered in this paper are assumed to be measurable, separable, nontrivial (in the sense that they are not a.s. constant) and stochastically continuous at zero, meaning that for every $\varepsilon>0$, $P(|X(h)|>\varepsilon)\to 0$ as $h\to 0$. The best known scaling relation in the theory of stochastic processes is self-similarity. A stochastic process $\{X(t), t \geq 0\}$ is said to be self-similar if for any $a>0$, there exists $b>0$ such that $$\{X(at)\} \overset{d}{=} \{b X(t)\},$$ where the equality is in the sense of finite-dimensional distributions. If $\{X(t)\}$ is self-similar, nontrivial and stochastically continuous at $0$, then $b$ must be of the form $a^H$, $a>0$, for some $H\geq 0$, i.e. $$\{X(at)\} \overset{d}{=} \{a^H X(t)\}.$$ A proof can be found in [@embrechts2002]. These weak assumptions are assumed to hold for every self-similar process considered in the paper.
The exponent $H$ is usually called the Hurst parameter or index, and we say $\{X(t)\}$ is $H$-ss, and $H$-sssi if it also has stationary increments.\ Following [@MFC1997MMAR], the definition of a multifractal that we present first is motivated by generalizing the scaling rule of self-similar processes in the following manner: \[defD\] A stochastic process $\{X(t)\}$ is multifractal if $$\label{mfdefgeneral} \{X(ct)\} \overset{d}{=} \{M(c) X(t)\},$$ where for every $c>0$, $M(c)$ is a random variable independent of $\{X(t)\}$ whose distribution does not depend on $t$. When $M(c)$ is non-random, then $M(c)=c^H$ and the definition reduces to $H$-self-similarity. The scaling factor $M(c)$ should satisfy the following property: $$\label{multiplicativeproperty} M(ab) \overset{d}{=} M_1(a) M_2(b),$$ for every choice of $a$ and $b$, where $M_1$ and $M_2$ are independent copies of $M$. This is sometimes called log-infinite divisibility, and a motivation for this property can be found in [@MFC1997MMAR]. In [@bacry2008continuous], the authors show that \[mfdefgeneral\] implies \[multiplicativeproperty\].\ However, instead of Definition \[defD\], scaling is usually specified in terms of moments. The idea of extracting the scaling properties from average-type quantities, like the $L^p$ norm, dates back to the work of Frisch and Parisi ([@frisch1985fully]). \[defM\] A stochastic process $\{X(t)\}$ is multifractal if there exist functions $c(q)$ and $\tau(q)$ such that $$\label{mfdefEq} E|X(t)-X(s)|^q=c(q) |t-s|^{\tau(q)}, \quad \text{for all } t,s \in \mathfrak{T}, q \in \mathfrak{Q},$$ where $\mathfrak{T}$ and $\mathfrak{Q}$ are intervals on the real line with positive length and $0\in \mathfrak{T}$. The function $\tau(q)$ is called the scaling function. The set $\mathfrak{Q}$ can also include negative reals. The definition can also be based on the moments of the process instead of the increments; if the increments are stationary, these definitions coincide. It is clear that if $\{X(t) \}$ is $H$-sssi then $\tau(q)=Hq$.
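The identity $\tau(q)=Hq$ for an $H$-sssi process can be verified in closed form for Brownian motion ($H=1/2$), whose increment over lag $dt$ is Gaussian with variance $dt$ and $E|N(0,s^2)|^q = s^q\, 2^{q/2}\Gamma((q+1)/2)/\sqrt{\pi}$. The following is a minimal sketch of ours, not code from the paper:

```python
import math

# For Brownian motion, E|X(t+dt) - X(t)|^q = c(q) dt^(q/2), so tau(q) = q/2.
# We recover tau(q) as the slope of log E|increment|^q against log dt.

def abs_moment_bm(dt, q):
    """Exact E|X(t+dt) - X(t)|^q for standard Brownian motion."""
    sigma = math.sqrt(dt)
    return sigma**q * 2**(q / 2) * math.gamma((q + 1) / 2) / math.sqrt(math.pi)

def tau_from_two_scales(q, dt1, dt2):
    """tau(q) from moments at two lags: slope in log-log coordinates."""
    return (math.log(abs_moment_bm(dt2, q)) - math.log(abs_moment_bm(dt1, q))) \
        / (math.log(dt2) - math.log(dt1))

tau_2 = tau_from_two_scales(2.0, 0.1, 0.01)  # expect H*q = 1.0
tau_4 = tau_from_two_scales(4.0, 0.1, 0.01)  # expect H*q = 2.0
```

The recovered exponents are linear in $q$, as they must be for a self-similar process.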
One can also show that $\tau(q)$ must be concave. Strict concavity can hold only over a finite time horizon, since otherwise $\tau(q)$ would be linear. This is not considered to be a problem for practical purposes (see [@MFC1997MMAR] for details). Since the scaling function is linear for self-similar processes, every departure from linearity can be attributed to multifractality. However, for this reasoning to make sense, one must assume moment scaling to hold, as otherwise self-similarity and multifractality are not complementary notions. The drawback of involving moments in the definition is that they can be infinite. This narrows the applicability of the definition and, as we show later, can hide the information about the singularity spectrum. It is easy to see that, under stationary increments, the defining property \[mfdefgeneral\] together with \[multiplicativeproperty\] implies multifractality in the sense of Definition \[defM\]. Indeed, \[multiplicativeproperty\] implies that $E|M(c)|^q$ must be of the form $c^{\tau(q)}$, and from $X(t) \overset{d}{=} M(t) X(1)$ the claim follows. One has to assume finiteness of the moments involved in order for such statements to make sense. Also notice that both definitions imply $X(0)=0$ a.s., which will be used throughout the paper. There exist many variations of Definition \[defM\]. Some processes, like the classical multiplicative cascade, obey the definition only for a small range of values of $t$ or for asymptotically small $t$. The stationarity of increments can also be imposed. When referring to multifractality we will make clear which definition we mean. However, we exclude the case of self-similar processes from the preceding definitions.\ Definition \[defM\] provides a simple criterion for detecting the multifractal property of a data set. Consider a stationary increments process $X(t)$ defined for $t \in [0,T]$ and suppose $X(0)=0$.
Divide the interval $[0,T]$ into $\lfloor T / \Delta t \rfloor$ blocks of length $\Delta t$ and define the partition function (sometimes also called the structure function): $$\label{partitionfun} S_q(T,\Delta t) = \frac{1}{\lfloor T / \Delta t \rfloor} \sum_{i=1}^{\lfloor T / \Delta t \rfloor} \left| X ( i \Delta t) - X ( (i-1) \Delta t) \right|^q.$$ If $\{ X(t) \}$ is multifractal with stationary increments, then $E S_q(T,\Delta t)= E |X (\Delta t) |^q = c(q) {\Delta t}^{\tau(q)}$. So, $$\label{linearrelation} \ln E S_q(T, \Delta t)=\tau(q) \ln {\Delta t} + \ln c(q).$$ One can also see $S_q(T, \Delta t)$ as the empirical counterpart of the left-hand side of \[linearrelation\]. As follows from \[linearrelation\], it makes sense to consider $\tau(q)$ as the slope of the linear regression of $\ln S_q(T, \Delta t)$ on $\ln {\Delta t}$. In practice, one should first check that relation \[linearrelation\] is valid. See [@FCM1997multifractalityDEM; @anh2010simulation] for more details on this methodology. It was shown in [@GL] that a large class of processes behaves as if the relation \[linearrelation\] holds even though there is no exact moment scaling \[mfdefEq\]. Suppose that the process is sampled at equidistant time points. We can assume these are the time points $1,\dots,T$ (see [@GL]). By choosing points $0\leq {\Delta t}_1 < \cdots < {\Delta t}_N \leq T$ and $q_j > 0$, $j=1,\dots,M$, based on the sample $X_1,\dots,X_T$ we can calculate $$\label{points} \left\{ S_{q_j} (n, \Delta t_i) \ : \ i=1,\dots, N, j=1,\dots,M \right\}.$$ Suppose that it is checked that for fixed $q$ the points $(\ln \Delta t_i , \ln S_q(T, \Delta t))$, $i=1,\dots,n$ behave approximately linearly.
Using the well-known formula for the slope of the linear regression line, we can define the empirical scaling function: $$\label{tauhat} \hat{\tau}_{N,T}(q) = \frac{\sum_{i=1}^{N} \ln {\Delta t_i} \ln S_q(n,\Delta t_i) - \frac{1}{N} \sum_{i=1}^{N} \ln {\Delta t_i} \sum_{j=1}^{N} \ln S_q(n,\Delta t_j) }{ \sum_{i=1}^{N} \left(\ln {\Delta t_i}\right)^2 - \frac{1}{N} \left( \sum_{i=1}^{N} \ln {\Delta t_i} \right)^2 },$$ where $N$ is the number of time points chosen in the regression. For reference, we state the following property as a definition. \[defE\] A stochastic process $\{X(t)\}$ is (empirically) multifractal if it has stationary increments and the empirical scaling function is non-linear. Although the definition follows naturally from the moment scaling relation \[mfdefEq\], it is not very common in the literature. Usually one tries to estimate the scaling function by using only the smallest time scale available. For example, for the cascade process on the interval $[0,T]$ the smallest interval is usually of the length $2^{-j}T$ for some $j$. One can then estimate the scaling function at point $q$ as $$\label{tauhatalternative} \frac{\log_2 S_q(T,2^{-j}T)}{-j}.$$ Estimator \[tauhat\] estimates the scaling function across different time scales and is therefore more general than \[tauhatalternative\]. Spectrum of singularities ------------------------- The preceding definitions involve “global” properties of the process. Alternatively, one can base the definition on the “local” scaling properties, such as the roughness of the process sample paths measured by the pointwise Hölder exponents. There are different approaches to developing the notion of a multifractal function.
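The two estimation steps above, the partition function and the regression-based empirical scaling function, can be sketched as follows. The implementation details are ours (not the authors'), and we try it on a simulated Gaussian random walk, for which $\tau(q)=q/2$:

```python
import math
import random

def partition_function(path, dt, q):
    """S_q(T, dt): average of |X(i*dt) - X((i-1)*dt)|^q over the blocks."""
    n = (len(path) - 1) // dt
    return sum(abs(path[i * dt] - path[(i - 1) * dt])**q
               for i in range(1, n + 1)) / n

def empirical_scaling_function(path, dts, q):
    """tau_hat(q): least-squares slope of ln S_q(T, dt) against ln dt."""
    xs = [math.log(dt) for dt in dts]
    ys = [math.log(partition_function(path, dt, q)) for dt in dts]
    n = len(dts)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return (sxy - sx * sy / n) / (sxx - sx * sx / n)

# Simulated random walk with unit-variance Gaussian steps: a discrete
# approximation of Brownian motion, for which tau(2) should be close to 1.
random.seed(1)
T = 1 << 14
path = [0.0]
for _ in range(T):
    path.append(path[-1] + random.gauss(0.0, 1.0))

tau_hat_2 = empirical_scaling_function(path, [1, 2, 4, 8, 16, 32, 64], 2.0)
```

For a path this long the regression slope at $q=2$ falls close to the theoretical value $\tau(2)=1$.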
First, we say that a function $f: [0,\infty) \to \mathbb{R}$ is $C^{\gamma}(t_0)$ if there exists a constant $C>0$ such that for all $t$ in some neighborhood of $t_0$ $$|f(t)-f(t_0) | \leq C |t - t_0|^{\gamma}.$$ Alternatively, one can define $f$ to be Hölder continuous at the point $t_0$ if $|f(t)-P_{t_0} (t) | \leq C |t - t_0|^{\gamma}$ for some polynomial $P_{t_0}$ of degree at most $\lfloor \gamma \rfloor$. The two definitions coincide if $\gamma<1$. We therefore use the former one in this paper, as in many cases we consider only functions for which $\gamma<1$ at every point. For more details see [@riedi1999multifractal]. The pointwise Hölder exponent of the function $f$ at $t_0$ is then $$\label{pointwiseHolder} H(t_0)= \sup \left\{ \gamma : f \in C^{\gamma}(t_0) \right\}.$$ Consider the sets $S_h=\{ t : H(t)=h \}$ of points where $f$ has Hölder exponent $h$. These sets are usually fractal in the sense that they have non-integer Hausdorff dimension. Define $d(h)$ to be the Hausdorff dimension of $S_h$, with the convention that the dimension of an empty set is $-\infty$. The function $d(h)$ is called the spectrum of singularities (also the multifractal or Hausdorff spectrum). We will refer to the set of $h$ such that $d(h)\neq - \infty$ as the support of the spectrum. A function $f$ is said to be multifractal if the support of its spectrum contains an interval with non-empty interior. This is naturally extended to stochastic processes: \[defL\] A stochastic process $\{X(t)\}$ on some probability space $(\Omega, \mathcal{F}, P )$ is multifractal if for (almost) every $\omega \in \Omega$, $t \mapsto X(t,\omega)$ is a multifractal function. When considered for a stochastic process, the Hölder exponents are random variables and the $S_h$ are random sets. However, in many cases the spectrum is deterministic ([@balanca2013]).
Multifractal formalism ---------------------- The multifractal formalism relates local and global scaling properties by connecting the singularity spectrum with the scaling function via the Legendre transform: $$\label{formalism} d(h)= \inf_q \left( hq - \tau(q) +1\right).$$ When $d(h)=-\infty$, $h$ is not a Hölder exponent, hence the convention that $\dim_{H}(\emptyset)=-\infty$. Since the Legendre transform is concave, the spectrum is always a concave function, provided the multifractal formalism holds. If the multifractal formalism holds, the spectrum can be estimated as the Legendre transform of the estimated scaling function. Substantial work has been done to investigate when this formalism holds. The validity of the formalism depends on which definition of $\tau$ one uses. Since it ensures that the spectrum can be estimated from computable global quantities, it is a desirable property of the object considered. This is the reason many authors seek different definitions of global and local scaling properties that would always be related by a certain type of multifractal formalism. The range of validity of the multifractal formalism is known to be narrow when the scaling function is based on the process increments ([@muzy1993multifractal]). It has been shown that a large class of processes can produce a nonlinear scaling function and that this behaviour is influenced by the heavy tail index ([@GL]). These nonlinearities are not connected with the spectrum, except in models that possess some scaling property. In many examples, negative order moments can also produce concavity, since in many models they are infinite. As we will show for the example of self-similar stationary increments processes, divergence of the negative order moments has nothing to do with the spectrum. Thus the estimated nonlinearity is merely an artefact of the estimation method. We propose a simple modification of the partition function that makes it more robust.
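The Legendre transform \[formalism\] is straightforward to evaluate numerically over a grid of $q$. In the sketch below, the concave toy scaling function is our own illustrative assumption (not a model from the paper), while the linear case $\tau(q)=Hq$ shows how the formalism reproduces the one-point spectrum of a self-similar process:

```python
# Numerical Legendre transform: d(h) = inf_q ( h*q - tau(q) + 1 ),
# approximated by a minimum over a finite grid of q values.

def legendre_spectrum(tau, h, qs):
    return min(h * q - tau(q) + 1.0 for q in qs)

qs = [i / 100.0 for i in range(-1000, 1001)]  # grid for q in [-10, 10]

tau_toy = lambda q: 0.5 * q - 0.125 * q * q   # assumed concave toy tau
tau_mono = lambda q: 0.5 * q                  # linear tau of an H-ss process

d_toy_half = legendre_spectrum(tau_toy, 0.5, qs)    # spectrum peak: d = 1
d_mono_half = legendre_spectrum(tau_mono, 0.5, qs)  # h = H: d = 1
d_mono_off = legendre_spectrum(tau_mono, 0.7, qs)   # h != H: -inf in the limit
```

For the linear case the infimum at $h \neq H$ decreases without bound as the $q$-grid widens, which is the numerical counterpart of $d(h)=-\infty$ off the single point $h=H$.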
On the other hand, nonlinearity that comes from diverging positive order moments is crucial in estimating the spectrum with \[formalism\]. For self-similar processes, the increments-based partition function can capture these nonlinearities correctly. Wavelets are considered to be the best approach to defining multifractality. This is usually done by basing the definition of the partition function on the wavelet decomposition of the process (see e.g. [@riedi1999multifractal; @audit2002wavelet]), which leads to different wavelet-based methods for multifractal analysis. However, this type of definition is also sensitive to diverging moments, as noted in [@gonccalves2005diverging], where a wavelet-based estimator of the tail index is proposed. Scaling based on the wavelet coefficients is also unable to yield a full spectrum of singularities. In [@jaffard2004wavelet], a formalism based on wavelet leaders has been proposed. This in some sense resembles the method we propose in Section \[sec5\], although our motivation comes from the results given in the next section. On the other hand, one can also replace the definition of the spectrum to achieve a multifractal formalism. For other definitions of local scaling, such as the one based on the so-called coarse Hölder exponents, see e.g. [@riedi1999multifractal; @CFM1997large]. The choice of the range over which the infimum in \[formalism\] is taken can also be a subject of discussion. From the statistical point of view, moments of negative order are not usually investigated. Sometimes $\tau(q)$ is calculated only for $q>0$ and can therefore yield only the left (increasing) part of the spectrum. For more details see [@riedi1999multifractal; @jaffard1999].
Bounds on the support of the spectrum {#sec3} ===================================== The fractional Brownian motion (FBM) is a Gaussian process $\{B_H(t)\}$ which starts at zero, has zero expectation for every $t$, and has the covariance function $$E B_H(t) B_H(s) = \frac{1}{2} \left( |t|^{2H} + |s|^{2H} - |t-s|^{2H} \right), \quad H \in (0,1).$$ If $H=1/2$, FBM is the standard Brownian motion (BM). FBM is $H$-sssi and has a trivial spectrum consisting of only one point, i.e. $d(H)=1$ and $d(h)=-\infty$ for $h\neq H$. So there is no doubt that FBM is self-similar and not multifractal in the sense of all the definitions considered. However, some self-similar processes have a nontrivial spectrum. Our goal in this section is to identify the property of the process that makes the spectrum nontrivial. We do this by deriving bounds on the support of the spectrum. The lower bound is a consequence of the well-known Kolmogorov continuity theorem. For the upper bound we prove a sort of complement of this theorem. Before we proceed, we fix the following notation for a general process $\{X(t), t \in [0,T]\}$. We denote the range of finite moments as $\mathfrak{Q}=(\underline{q},\overline{q})$, i.e. $$\label{qLU} \begin{aligned} \overline{q} &= \sup \{ q >0 : E|X(T)|^q < \infty \},\\ \underline{q} &= \inf \{ q <0 : E|X(T)|^q < \infty \}. \end{aligned}$$ If $\{X(t)\}$ is multifractal in the sense of Definition \[defM\] with the scaling function $\tau$, define $$\label{H-+} \begin{aligned} H^- &= \sup \left\{ \frac{\tau(q)}{q} - \frac{1}{q} : q \in (0, \overline{q}) \ \& \ \tau(q)>1 \right\},\\ \widetilde{H^+} &= \inf \left\{ \frac{\tau(q)}{q} - \frac{1}{q} : q \in (\underline{q},0) \ \& \ \tau(q)<1 \right\}. \end{aligned}$$ The lower bound --------------- Using the well-known Kolmogorov criterion it is easy to derive the lower bound on the support of the spectrum. The proof of the following theorem can be found in [@karatzas1991brownian Theorem 2.8].
\[thm:Kolmogorov-Chentsov\] Suppose that a process $\{X(t), t \in [0,T]\}$ satisfies $$\label{kolmomcrit} E|X(t)-X(s)|^{\alpha} \leq C |t-s|^{1+\beta},$$ for some positive constants $\alpha,\beta,C$. Then there exists a modification $\{\tilde{X}(t), t \in [0,T]\}$ of $\{X(t)\}$, which is locally Hölder continuous with exponent $\gamma$ for every $\gamma \in (0,\beta/\alpha)$. This means that there exists some a.s. positive random variable $h(\omega)$ and constant $\delta>0$ such that $$P \left( \omega : \sup_{|t-s|<h(\omega), \ s,t \in [0,T]} \frac{|\tilde{X}(t,\omega)-\tilde{X}(s,\omega)|}{|t-s|^{\gamma}} \leq \delta \right)=1.$$ \[prop1\] Suppose $\{X(t), t \in [0,T]\}$ is multifractal in the sense of Definition \[defM\]. If for some $q>0$, $E|X(T)|^q<\infty$ and $\tau(q)>1$, then there exists a modification of $\{X(t)\}$ which is locally Hölder continuous with exponent $\gamma$ for every $$\gamma \in \left(0,\frac{\tau(q)}{q} - \frac{1}{q} \right).$$ In particular, there exists a modification such that for almost every sample path, $$H^- \leq H(t) \quad \text{ for each } t \in [0,T],$$ where $H(t)$ is defined by \[pointwiseHolder\] and $H^-$ by \[H-+\]. This is a simple consequence of Theorem \[thm:Kolmogorov-Chentsov\], since Definition \[defM\] implies $$E|X(t)-X(s)|^q=c(q) |t-s|^{1+(\tau(q)-1)}.$$ Fixing $s$ in the definition of the local Hölder exponent gives the pointwise Hölder exponent. In the sequel we always work with the modification from Proposition \[prop1\]. We can conclude that the spectrum $d(h)=-\infty$ for $h \in (0,H^-)$. This way we can establish an estimate for the left endpoint of the interval where the spectrum is defined. It also follows that if the process is $H$-sssi and has finite moments of every positive order, then $H^-=H\leq H(t)$. Thus, when moment scaling holds, path irregularities are closely related to infinite moments of positive order. We make this point stronger later.
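As a quick illustration of the lower bound: for an $H$-sssi process with all positive moments finite, $\tau(q)=Hq$, so $H^- = \sup\{H - 1/q : Hq>1\}$ approaches $H$ as $q$ grows. A sketch of ours computing $H^-$ of \[H-+\] over a finite grid (illustrative values only):

```python
# H^- = sup{ tau(q)/q - 1/q : q > 0, tau(q) > 1 }, approximated on a grid.
# For tau(q) = H*q (e.g. FBM, all positive moments finite) the supremum
# tends to H, so the spectrum vanishes on (0, H).

def lower_bound(tau, qs):
    """Grid approximation of H^- over positive q with tau(q) > 1."""
    vals = [tau(q) / q - 1.0 / q for q in qs if q > 0 and tau(q) > 1.0]
    return max(vals) if vals else None

H = 0.7
tau_fbm = lambda q: H * q                 # scaling function of an H-sssi process
qs = [0.1 * k for k in range(1, 10001)]   # q in (0, 1000]

h_minus = lower_bound(tau_fbm, qs)        # approaches H = 0.7 from below
```

The grid value equals $H - 1/q_{\max}$, confirming that the left endpoint of the support is $H$ itself when no positive moment diverges.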
Theorem \[thm:Kolmogorov-Chentsov\] is valid for general stochastic processes. Although the moment condition is appealing, the condition needed for the proof of Theorem \[thm:Kolmogorov-Chentsov\] can be stated in a different form. If we assume stationarity of the increments, other forms can also be derived. Some of them may seem strange at the moment but will prove to be useful later on. \[lemma:kolconditions\] Suppose that $\{X(t), t \in [0,T]\}$ is a stochastic process. Then there exists a modification of $\{X(t)\}$ which is a.s. locally Hölder continuous of order $\gamma>0$ if any of the following holds: (i) for some $\eta>1$ it holds that for every $s\in [0,T)$ and $C>0$ $$P \left( \left| X(s+t) - X(s) \right| \geq C t^{\gamma} \right) = O(t^{\eta}), \quad \text{ as } t \to 0,$$ (ii) for some $m \in \mathbb{N}$, $\eta>1$ it holds that for every $s\in [0,T)$ and $C>0$ $$P \left(\max_{l=1,\dots,m} \left| X(s+lt) - X(s+(l-1)t) \right| \geq C t^{\gamma} \right) = O(t^{\eta}), \quad \text{ as } t \to 0$$ (iii) for some $m \in \mathbb{N}$, $\alpha>0$ and $\beta > \alpha \gamma + 1$ it holds that for every $s\in [0,T)$ $$E \left[ \max_{l=1,\dots,m} \left| X(s+lt) - X(s+(l-1)t) \right| \right]^{\alpha} = O \left( t^{ \beta} \right), \quad \text{ as } t \to 0.$$ If $\{X(t)\}$ has stationary increments, it is enough to consider only $s=0$. That $(i)$ is sufficient is obvious from the proof of Theorem \[thm:Kolmogorov-Chentsov\]; see [@karatzas1991brownian Theorem 2.8]. Since $m$ is fixed, it is easy to see that $(ii)$ implies $(i)$. That $(iii)$ implies $(ii)$ follows from Chebyshev’s inequality. The upper bound --------------- It is commonly considered that the negative order moments determine the right part of the spectrum. We show that this is only partially true, as it depends on whether the negative order moments are finite.
To establish the bound on the right endpoint of the spectrum, one needs to show that sample paths are nowhere Hölder continuous of some order $\gamma$, i.e. that a.s. $t \mapsto X_t \notin C^{\gamma}(t_0)$ for each $t_0\in [0,T]$. To show this we first use a criterion based on the negative order moments, similar to . The resulting theorem can be seen as a complement to the Kolmogorov-Chentsov theorem. We then apply this to moment scaling multifractals to get an estimate for the support of the spectrum. \[thm:ComplementKolmogorov-Chentsov\] Suppose that a process $\{X(t), t \in [0,T]\}$ defined on some probability space $(\Omega,\mathcal{F}, P)$ satisfies $$\label{kolmomcritnega} E|X(t)-X(s)|^{\alpha} \leq C |t-s|^{1+\beta},$$ for all $t,s \in [0,T]$ and for some constants $\alpha<0$, $\beta<0$ and $C>0$. Then, for $P$-a.e. $\omega\in \Omega$ it holds that for each $\gamma > \beta/\alpha$ the path $t \mapsto X_t(\omega)$ is nowhere Hölder continuous of order $\gamma$. It suffices to prove the statement for an arbitrary fixed $\gamma > \beta/\alpha$. Indeed, this would give events $\Omega_{\gamma}$, $P(\Omega_{\gamma})=0$ such that for $\omega \in \Omega \backslash \Omega_{\gamma}$, $t \mapsto X_t(\omega)$ is nowhere Hölder continuous of order $\gamma$. If $\Omega_0$ is the union of $\Omega_{\gamma}$ over all $\gamma \in (\beta/\alpha, \infty) \cap \mathbb{Q}$, then $\Omega_0 \in \mathcal{F}$, $P(\Omega_0)=0$ and $\Omega \backslash \Omega_0$ would fit the statement of the theorem. For notational simplicity, we assume $T=1$. For $j,k\in \mathbb{N}$ define the set $$M_{jk} := \bigcup_{t\in [0,1]} \bigcap_{h \in [0,1/k]} \left\{ \omega \in \Omega : |X_{t+h}(\omega) - X_t(\omega)| \leq j h^{\gamma} \right\}.$$ It is clear that if $\omega \notin M_{jk}$ for every $j,k \in \mathbb{N}$, then $t \mapsto X_t(\omega)$ is nowhere Hölder continuous of order $\gamma$.
As there are countably many sets $M_{jk}$, it is enough to fix arbitrary $j,k \in \mathbb{N}$ and show that $M_{jk} \subset A$ for some $A\in \mathcal{F}$ such that $P(A)=0$. Suppose $n>2 k$ and $\omega \in M_{jk}$. Then there is some $t \in [0,1]$ such that $$\label{doktm2pom1} |X_{t+h}(\omega)-X_t(\omega) | \leq j h^{\gamma}, \quad \text{ for all } h\in [0,1/k].$$ Take $i \in \{1,\dots,n\}$ such that $$\label{doktm2pom2} \frac{i-1}{n} \leq t < \frac{i}{n}.$$ Then since $n>2 k$ we have $$0 \leq \frac{i}{n} - t < \frac{i+1}{n} - t \leq \frac{i+1}{n} - \frac{i-1}{n} = \frac{2}{n},$$ and from it follows $$|X_{\frac{i+1}{n}}(\omega)-X_{\frac{i}{n}}(\omega) | \leq |X_{\frac{i+1}{n}}(\omega) - X_t( \omega)| + |X_t( \omega)-X_{\frac{i}{n}}(\omega) | \leq 2 j n^{-\gamma}.$$ Put $A_i^{(n)}=\left\{ |X_{\frac{i+1}{n}}-X_{\frac{i}{n}} | \leq 2 j n^{-\gamma} \right\}$. Since $\omega$ was arbitrary it follows that $$M_{jk} \subset \bigcup_{i=1}^n A_i^{(n)}.$$ Using Chebyshev’s inequality for $\alpha<0$ and the assumption of the theorem we get $$\label{condinproof} \begin{aligned} P(A_i^{(n)}) &\leq \frac{E |X_{\frac{i+1}{n}}-X_{\frac{i}{n}} |^{\alpha} }{(2j)^{\alpha} n^{-\gamma \alpha}} \leq C (2j)^{-\alpha} n^{\gamma \alpha - 1 - \beta},\\ P \left(\bigcup_{i=1}^n A_i^{(n)} \right) &\leq \sum_{i=1}^n P(A_i^{(n)}) \leq C (2j)^{-\alpha} n^{-(\beta - \gamma \alpha)}. \end{aligned}$$ If we set $$A = \bigcap_{n > k} \bigcup_{i=1}^n A_i^{(n)},$$ then $A \in \mathcal{F}$ and $M_{jk} \subset A$. Since $\gamma> \beta / \alpha$, it follows that $\beta- \gamma \alpha>0$ and hence $P(A)=0$. This proves the theorem. \[prop2\] Suppose $\{X(t), t \in [0,T]\}$ is multifractal in the sense of Definition \[defM\].
If for some $q<0$, $E|X(T)|^q<\infty$ and $\tau(q)<1$, then almost every sample path of $\{X(t)\}$ is nowhere Hölder continuous of order $\gamma$ for every $$\gamma \in \left(\frac{\tau(q)}{q} - \frac{1}{q}, \ +\infty \right).$$ In particular, for almost every sample path, $$H(t) \leq \widetilde{H^+} \quad \text{ for each } t \in [0,T].$$ Definition \[defM\] implies $$E|X(t)-X(s)|^q=c(q) |t-s|^{1+(\tau(q)-1)}.$$ Since $q<0$ and $\tau(q)-1<0$, the statement follows from Theorem \[thm:ComplementKolmogorov-Chentsov\]. This proposition shows that $d(h)=-\infty$ for $h \in (\widetilde{H^+},\infty)$. Recall that $\widetilde{H^+}$ is defined in . \[remark2\] Statements like the ones in Propositions \[prop1\] and \[prop2\] are stronger than saying, for example, that for every $t \in [0,T]$, $H(t) \leq C$ almost surely. Indeed, an application of Fubini's theorem would yield that for almost every path, $H(t) \leq C$ for almost every $t$. If we put $h=C + \delta$, then the Lebesgue measure of the set $S_h=\{ t : H(t)=h \}$ is zero a.s. This, however, does not imply that $d(h)=-\infty$ and hence it is impossible to say anything about the spectrum of almost every sample path. On the other hand, it is clear that statements of this type are implied by Propositions \[prop1\] and \[prop2\]. As an example of this weaker type of bound, consider $\{X(t), t \in [0,T]\}$ multifractal in the sense of Definition \[defM\]. If for some $q<0$, $E|X(t)|^q<\infty$, then for every $t \in [0,T]$ $$H(t) \leq \frac{\tau(q)}{q} \text{ a.s.}$$ Indeed, let $\delta>0$ and suppose $C>0$. Since $q<0$, by Chebyshev's inequality $$P \left( \left| X(t+\varepsilon) - X(t) \right| \leq C \varepsilon^{\frac{\tau(q)}{q}+\delta} \right) \leq \frac{E \left| X(t+\varepsilon) - X(t) \right|^q }{C^q \varepsilon^{\tau(q)+\delta q}} = \frac{c(q)}{C^q \varepsilon^{\delta q}} \to 0,$$ as $\varepsilon \to 0$.
We can choose a sequence $(\varepsilon_n)$ that converges to zero such that $$P \left( \left| X(t+\varepsilon_n) - X(t) \right| \leq C \varepsilon_n^{\frac{\tau(q)}{q}+\delta} \right) \leq \frac{1}{2^n}.$$ Now, by the Borel-Cantelli lemma $$\frac{\left| X(t+\varepsilon_n) - X(t) \right|}{\varepsilon_n^{\frac{\tau(q)}{q}+\delta}} \to \infty \ \ a.s., \text{ as } n \to \infty.$$ Thus for arbitrary $\delta>0$ it holds that for every $t$, $H(t) \leq \frac{\tau(q)}{q}+\delta$ a.s. However, this result does not allow us to say anything about the spectrum. Consider for the moment the FBM. The range of finite moments is $(-1,\infty)$ and $\tau(q)=Hq$ for $q \in (-1,\infty)$, so we have $\widetilde{H^+}=H+1$. Thus, the best we can say from Proposition \[prop2\] is that $d(h)=-\infty$ for $h > H+1$. However, we know that $d(h)=-\infty$ for $h > H$. If the bound $\widetilde{H^+}$ could be taken as an infimum over all negative order moments, we would get exactly the right endpoint of the support of the spectrum. The fact that the bound derived in Proposition \[prop2\] is not sharp enough for some examples suggests that negative order moments may not be the right paradigm to explain the spectrum. We therefore provide more general conditions that do not depend on the finiteness of moments. The first of them is obvious from the proof of Theorem \[thm:ComplementKolmogorov-Chentsov\], Equation . \[lemma2\] Suppose that $\{X(t), t \in [0,T]\}$ is a stochastic process. Then almost every sample path of $\{X(t)\}$ is nowhere Hölder continuous of order $\gamma>0$ if for every $s\in [0,T]$ and $C>0$ $$P \left( \left| X(s+t) - X(s) \right| \leq C t^{\gamma} \right) = O(t^{\eta}), \quad \text{ as } t \to 0,$$ for some $\eta>1$. If the increments are stationary it is enough to take $s=0$. \[thm3\] Let $\{X(t), t \in [0,T]\}$ be a stochastic process defined on some probability space $(\Omega,\mathcal{F}, P)$.
Suppose that for some $\gamma>0$, $\eta>1$, $m \in \mathbb{N}$ it holds that for every $s\in [0,T]$ and $C>0$ $$\label{thm3condition} P \left( \max_{l=1,\dots,m} \left| X(s+lt) - X(s+(l-1)t) \right| \leq C t^{\gamma} \right) = O \left( t^{\eta} \right), \quad \text{as } t \to 0.$$ Then, for $P$-a.e. $\omega\in \Omega$ the path $t \mapsto X_t(\omega)$ is nowhere Hölder continuous of order $\gamma$. In the stationary increments case it is enough to consider $s=0$. The first part of the proof goes exactly as in the proof of Theorem \[thm:ComplementKolmogorov-Chentsov\]. Fix $j,k \in \mathbb{N}$ and take $n \in \mathbb{N}$ such that $$n > (m+1) k.$$ If $\omega \in M_{jk}$, then there is some $t \in [0,1]$ and $i \in \{1,\dots,n\}$ such that and hold. The choice of $n$ ensures that for $l \in \{1,\dots,m\}$ $$0< \frac{i+l-1}{n}-t < \frac{i+l}{n}-t = \frac{i-1}{n}-t + \frac{l+1}{n} \leq \frac{l+1}{n} \leq \frac{1}{k}.$$ It follows from that for each $l \in \{1,\dots,m\}$ $$|X_{\frac{i+l}{n}}(\omega)-X_{\frac{i+l-1}{n}}(\omega) | \leq j \left( \frac{l+1}{n} \right)^{\gamma} + j \left( \frac{l}{n} \right)^{\gamma} \leq 2 j \left( \frac{m+1}{n} \right)^{\gamma}.$$ Denote $$\begin{aligned} A_{i,l}^{(n)} &=\left\{ |X_{\frac{i+l}{n}}-X_{\frac{i+l-1}{n}} | \leq 2 j \left( \frac{m+1}{n} \right)^{\gamma} \right\},\\ A_{i}^{(n)} &= \bigcap_{l=1}^m A_{i,l}^{(n)}.\end{aligned}$$ It then follows that $$M_{jk} \subset \bigcup_{i=1}^n A_i^{(n)}.$$ From the assumption we have $$\begin{aligned} P(A_i^{(n)}) &= P \left( \max_{l=1,\dots,m} |X_{\frac{i+l}{n}}-X_{\frac{i+l-1}{n}} | \leq 2 j (m+1)^{\gamma} \left( \frac{1}{n} \right)^{\gamma} \right) \leq C n^{-\eta},\\ P \left(\bigcup_{i=1}^n A_i^{(n)} \right) &\leq \sum_{i=1}^n P(A_i^{(n)}) \leq C_1 n^{-(\eta - 1)}.\end{aligned}$$ Now setting $$A = \bigcap_{n > k} \bigcup_{i=1}^n A_i^{(n)} \in \mathcal{F},$$ it follows that $P(A)=0$, since $\eta>1$. Theorem \[thm3\] enables one to avoid using moments in deriving the bound.
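The probability in the condition of Theorem \[thm3\] is easy to evaluate numerically in simple cases. For standard Brownian motion the $m$ increments over step $t$ are i.i.d. $N(0,t)$, so it equals $(2\Phi(Ct^{\gamma-1/2})-1)^m \approx (\sqrt{2/\pi}\,C)^m t^{m(\gamma-1/2)}$, and any $\eta>1$ is attainable by taking $m$ large. A short numerical check of this decay rate (a sketch assuming SciPy; the values $C=1$, $\gamma=3/4$, $m=8$ are arbitrary illustration parameters):

```python
from math import log

from scipy.stats import norm

C, gamma, m = 1.0, 0.75, 8  # arbitrary illustration parameters

def p_max(t):
    """P(max of m i.i.d. |N(0,t)| increments <= C * t**gamma)."""
    x = C * t ** (gamma - 0.5)
    return (2.0 * norm.cdf(x) - 1.0) ** m

# empirical power of t from two small scales; should approach m*(gamma - 1/2)
t1, t2 = 1e-4, 1e-6
slope = log(p_max(t1) / p_max(t2)) / log(t1 / t2)
print(slope)  # close to m*(gamma - 0.5) = 2.0
```

With these parameters the empirical exponent is close to $m(\gamma-1/2)=2>1$, matching the requirement $\eta>1$.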
As an example, we consider how Theorem \[thm3\] can be applied in the simple case when $\{X(t)\}$ is the BM. Since $\{X(t)\}$ is $1/2$-sssi we have $$P \left( \max_{l=1,\dots,m} \left| X(lt) - X((l-1)t) \right| \leq C t^{\gamma} \right) = P \left( \max_{l=1,\dots,m} \left| X(l) - X(l-1) \right| \leq C t^{\gamma-1/2} \right).$$ Due to the independence of the increments, $$P \left( \max_{l=1,\dots,m} \left| X(l) - X(l-1) \right| \leq C t^{\gamma-1/2} \right) \leq C_1 t^{m(\gamma-1/2)}.$$ This holds for every $\gamma>1/2$ and $m \in \mathbb{N}$ and by taking $m>1/(\gamma-1/2)$ we conclude $d(h)=-\infty$ for $h>1/2$. Before we proceed to apply these results, we state the following simple corollary that expresses the criterion in terms of negative order moments, but now moments of the maximum of increments. This is a generalization of Theorem \[thm:ComplementKolmogorov-Chentsov\] that enables bypassing infinite negative order moments under very general conditions. From this criterion we derive, in the next subsection, strong statements about $H$-sssi processes. \[newComplementKC\] Suppose that a process $\{X(t), t \in [0,T]\}$ defined on some probability space $(\Omega,\mathcal{F}, P)$ satisfies $$\label{thmmomentcond} E \left[ \max_{l=1,\dots,m} \left| X(s + lt) - X(s + (l-1)t) \right| \right]^{\alpha} \leq C t^{1+\beta},$$ for all $t,s \in [0,T]$ and for some $\alpha<0$, $\beta<0$, $m \in \mathbb{N}$ and $C>0$. Then, for $P$-a.e. $\omega\in \Omega$ it holds that for each $\gamma > \beta/\alpha$ the path $t \mapsto X_t(\omega)$ is nowhere Hölder continuous of order $\gamma$.
This follows directly from Chebyshev's inequality for negative order moments and Theorem \[thm3\] $$\begin{aligned} &P \left( \max_{l=1,\dots,m} \left| X(s+lt) - X(s+(l-1)t) \right| \leq C t^{\gamma} \right)\\ &\quad \leq C^{-\alpha} t^{-\gamma \alpha} E \left[ \max_{l=1,\dots,m} \left| X(s+lt) - X(s+(l-1)t) \right| \right]^{\alpha} = O \left( t^{-\alpha \gamma+1+\beta} \right),\end{aligned}$$ since $1+\beta- \alpha \gamma >1$. The case of the self-similar stationary increments processes ------------------------------------------------------------ In this subsection we refine our results for the case of $H$-sssi processes by using Corollary \[newComplementKC\]. These results can also be viewed in the light of the classical papers [@vervaat1985sample; @takashima1989sample]. To be able to do this, we need to make sure that the moment in can indeed be made finite by choosing $m$ large enough. We state this condition explicitly for reference. \[C1\] Suppose $\{X(t),t\geq 0 \}$ is a stationary increments process. For every $\alpha<0$ there is $m_0 \in \mathbb{N}$ such that $$E \left[ \max_{l=1,\dots,m_0} \left| X(l) - X(l-1) \right| \right]^{\alpha} < \infty.$$ One way of verifying Condition \[C1\] is given in the following lemma, which is weak enough to cover all the examples considered later. Recall the definition of the range of finite moments $\underline{q}$ and $\overline{q}$ given in . \[lemma:condition1\] Suppose $\{X(t),t\geq 0 \}$ is a stationary increments process which is ergodic in the sense that if $E |f(X_1)| < \infty$ for some measurable $f$, then $$\frac{\sum_{l=1}^m f(X_l-X_{l-1}) }{m} \overset{a.s.}{\to} E f(X_1), \ \text{ as } m \to \infty.$$ Suppose also that $\underline{q}<0$. Then Condition \[C1\] holds. Let $r<0$ be such that $E|X(1)-X(0)|^{r} <\infty$.
Then $$\begin{aligned} \inf_{l\in \mathbb{N}} |X(l) - X(l-1)|^r &= \lim_{m\to \infty} \min_{l=1,\dots,m} |X(l) - X(l-1)|^r \\ & \leq \lim_{m\to \infty} \frac{\sum_{l=1}^m |X(l) - X(l-1)|^r }{m} = E |X(1)-X(0)|^r=:M, \ \text{ a.s.}\end{aligned}$$ So, $$\inf_{l\in \mathbb{N}} |X(l) - X(l-1)|^{\alpha} = \left( \inf_{l\in \mathbb{N}} |X(l) - X(l-1)|^{r}\right)^{\frac{\alpha}{r}} \leq M^{\frac{\alpha}{r}}, \ \text{ a.s.}$$ and $\inf_{l\in \mathbb{N}} |X(l) - X(l-1)|^{\alpha} $ is bounded and thus has finite expectation. Given $\alpha<0$ we can choose $m_0$ such that $$\begin{aligned} &\left[ \max_{l=1,\dots,m_0} \left| X(l) - X(l-1) \right| \right]^{\alpha} = \left[ \frac{1}{\max_{l=1,\dots,m_0} \left| X(l) - X(l-1) \right|} \right]^{-\alpha}\\ &=\left[ \min_{l=1,\dots,m_0} \frac{1}{\left| X(l) - X(l-1) \right|} \right]^{-\alpha} = \min_{l=1,\dots,m_0} |X(l) - X(l-1)|^{\alpha} \leq M^{\frac{\alpha}{r}}, \ \text{ a.s.}\end{aligned}$$ which implies the statement. Two examples may provide insight into how far the assumptions of Lemma \[lemma:condition1\] are from Condition \[C1\]. If $X(t)=tX$ for some random variable $X$, then $\max_{l=1,\dots,m} \left| X(l) - X(l-1) \right|=|X|$ and thus Condition \[C1\] depends on the range of finite moments of $X$. For the second example, suppose $X(l)-X(l-1)$ is an i.i.d. sequence such that $P(|X(1)-X(0)| \leq x) \sim 1/ \ln (1/x)$ as $x \to 0$. This implies, in particular, that $E|X(1)-X(0)|^r=\infty$ for any $r<0$. Moreover, $$E \left[ \max_{l=1,\dots,m} \left| X(l) - X(l-1) \right| \right]^{\alpha} = - \int_0^{\infty} \alpha y^{\alpha-1} P \left(\max_{l=1,\dots,m} \left| X(l) - X(l-1) \right| \leq y \right) dy = \infty,$$ for every $\alpha<0$ and $m \in \mathbb{N}$, since near the origin the integrand behaves like $-\alpha y^{\alpha-1} (\ln (1/y))^{-m}$, which is not integrable. Thus Condition \[C1\] does not hold. We are now ready to prove a general theorem about the $H$-sssi processes.
\[thm:boundsHsssi\] Suppose $\{X(t), t \in [0,T]\}$ is an $H$-sssi stochastic process such that Condition \[C1\] holds and $H-1/\overline{q}\geq 0$. Then, for almost every sample path, $$H-\frac{1}{\overline{q}} \leq H(t) \leq H \quad \text{ for each } t \in [0,T].$$ Moreover, $d(H)=1$ a.s. By the argument at the beginning of the proof of Theorem \[thm:ComplementKolmogorov-Chentsov\] it is enough to take arbitrary $\gamma>H$. Given $\gamma$ we take $\alpha<1/(H-\gamma)<0$ which implies $\gamma > H -1/\alpha$. Due to Condition \[C1\], we can choose $m_0\in \mathbb{N}$ such that $E \left[ \max_{l=1,\dots,m_0} \left| X(lt) - X((l-1)t) \right| \right]^{\alpha} < \infty$. Self-similarity then implies that $$E \left[ \max_{l=1,\dots,m_0} \left| X(lt) - X((l-1)t) \right| \right]^{\alpha} = t^{H \alpha} E \left[ \max_{l=1,\dots,m_0} \left| X(l) - X(l-1) \right| \right]^{\alpha} = C t^{1+ (H \alpha-1)}.$$ The claim now follows immediately from Corollary \[newComplementKC\] with $\beta=H \alpha - 1$ since $\gamma > \beta/\alpha$. That $H(t) \geq H-1/\overline{q}$ follows from Proposition \[prop1\]. Since $E|X(t)|^q<\infty$ for some $q<0$, it follows from Remark \[remark2\] that for every $t\in [0,T]$, $H(t)\leq H$ a.s. On the other hand, taking $0<q<\overline{q}$ we can get that for $\delta>0$ and $C>0$ $$P \left( \left| X(t+\varepsilon) - X(t) \right| \geq C \varepsilon^{H-\delta} \right) \leq \frac{E \left| X(t+\varepsilon) - X(t) \right|^q }{C^q \varepsilon^{Hq-\delta q}} = \frac{c(q)}{C^q \varepsilon^{-\delta q}} \to 0,$$ as $\varepsilon \to 0$. The same arguments as in Remark \[remark2\] imply that for every $t\in [0,T]$, $H(t)\geq H$ a.s. By Fubini's theorem it follows that a.s., for almost every $t\in [0,T]$, $H(t)=H$. Thus the set $S_H=\{ t : H(t)=H \}$ has full Lebesgue measure and so $d(H)=1$. A simple consequence of the preceding is the following statement. \[cor:Hsssi\] Suppose that Condition \[C1\] holds.
An $H$-sssi process with all positive order moments finite must have a trivial spectrum, i.e. $d(h)=-\infty$ for $h\neq H$. This applies to FBM, but also to all Hermite processes, such as the Rosenblatt process (see Section \[sec4\]). Thus, under very general conditions a self-similar stationary increments process with a nontrivial spectrum must be heavy-tailed. This shows clearly how infinite moments can affect path properties when scaling holds. The following simple result shows this more precisely. Suppose $\{X(t)\}$ is $H$-sssi. If $\gamma<H$ and $d(\gamma)\neq - \infty$, then $E|X(1)|^{q}=\infty$ for $q>1/(H-\gamma)$. Suppose, on the contrary, that $E|X(1)|^{q}<\infty$ for some $q>1/(H-\gamma)$. Then for $\varepsilon>0$ we can apply Chebyshev's inequality to get $$P \left( |X(t)| \geq C t^{\gamma} \right) = P \left( |X(1)| \geq C t^{\gamma-H} \right) \leq \frac{E \left|X(1)\right|^{\frac{1}{H-\gamma}+\varepsilon} }{t^{-1-\varepsilon(H-\gamma)}}=O(t^{1+\varepsilon(H-\gamma)}).$$ By Theorem \[thm:Kolmogorov-Chentsov\] and Lemma \[lemma:kolconditions\] this implies $d(\gamma)=-\infty$, which is a contradiction. The case of the multifractal processes {#subsec3.4} -------------------------------------- Our next goal is to show that in the definition of $\widetilde{H^+}$ one can essentially take the infimum over all $q<0$. At the moment this makes no sense as $\tau$ from Definition \[defM\] may not be defined in this range. It is therefore necessary to redefine the meaning of the scaling function. Hence we work with the more general Definition \[defD\]. In the next section we will see, through the example of the log-normal cascade process, that when the multifractal process has all negative order moments finite, the bound derived in Proposition \[prop2\] is sharp. In general this would not be the case for any multifractal in the sense of Definition \[defD\].
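An elementary fact used repeatedly in this context is the range of finite negative order moments of a Gaussian: for a standard normal $Z$, $E|Z|^q = 2^{q/2}\,\Gamma((q+1)/2)/\sqrt{\pi}$ is finite precisely for $q>-1$, since the density is bounded and positive at the origin. A quick numerical confirmation of the closed form (a sketch assuming SciPy; the test orders $q$ are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma
from scipy.stats import norm

def abs_moment_formula(q):
    """Closed form E|Z|^q for standard normal Z, valid for q > -1."""
    return 2.0 ** (q / 2.0) * Gamma((q + 1.0) / 2.0) / np.sqrt(np.pi)

def abs_moment_quad(q):
    """E|Z|^q by numerical integration; integrable at 0 only for q > -1."""
    f = lambda z: z ** q * norm.pdf(z)
    # split at 1 to help quad with the endpoint singularity when q < 0
    return 2.0 * (quad(f, 0.0, 1.0)[0] + quad(f, 1.0, np.inf)[0])

for q in (-0.5, 1.0, 2.0):
    print(q, abs_moment_formula(q), abs_moment_quad(q))
```

For $q \leq -1$ the integral diverges at the origin, which is exactly what makes the negative order moments of Gaussian-built multifractal models infinite.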
Take for example a multifractal random walk (MRW), which is a compound process $X(t)=B(\theta(t))$ where $B$ is BM and $\theta$ is the independent cascade process, say a log-normal cascade (see [@bacry2003]). By the multifractality of the cascade, for $t<1$, $\theta(t) \overset{d}{=} M(t) \theta(1)$, and the multifractality of the MRW implies $X(t) \overset{d}{=} (M(t) \theta(1))^{1/2} B(1)$. Now by independence of $B$ and $\theta$, if $E|B(1)|^q=\infty$ then $E|X(t)|^q=\infty$. Since $B(1)$ is Gaussian, moments will be infinite for $q \leq -1$. We thus provide a more general bound which only has a restriction on the moments of the random factor from Definition \[defD\]. Therefore, if the process satisfies Definition \[defD\] and if the random factor $M$ is multifractal by Definition \[defM\] with scaling function $\tau$, we define $$H^+ = \min \left\{ \frac{\tau(q)}{q} - \frac{1}{q} : q < 0 \ \& \ E|M(t)|^q<\infty \right\}.$$ \[cor:upperboundMF\] Suppose $\{X(t), t \in [0,T]\}$ is defined on some probability space $(\Omega,\mathcal{F}, P)$, has stationary increments and Condition \[C1\] holds. Suppose also that it is multifractal by Definition \[defD\] and the random factor $M$ satisfies Definition \[defM\] with the scaling function $\tau(q)$. If $E|M(T)|^{q} < \infty$ for $q<0$, then for $P$-a.e.
$\omega\in \Omega$ it holds that for each $$\gamma > \frac{\tau(q)}{q}-\frac{1}{q}$$ the path $t \mapsto X_t(\omega)$ is nowhere Hölder continuous of order $\gamma$.\ In particular, for almost every sample path, $$H(t) \leq H^+ \quad \text{ for each } t \in [0,T].$$ By Condition \[C1\], for $m$ large enough it follows from the multifractal property that $$E \left[ \max_{l=1,\dots,m} \left| X(lt) - X((l-1)t) \right| \right]^{q} = E |M(t)|^{q} E \left[ \max_{l=1,\dots,m} \left| X(l) - X(l-1) \right| \right]^{q} = C t^{1 + \tau(q)-1}.$$ The claim now follows from Corollary \[newComplementKC\] with $\alpha=q$ and $\beta=\tau(q)-1$ and by the argument at the beginning of the proof of Theorem \[thm:ComplementKolmogorov-Chentsov\]. In summary, we provide bounds on the support of the multifractal spectrum. We show that the lower bound can be derived using positive order moments and link the non-existing moments with the path properties for the case of $H$-sssi processes. In general, negative order moments are not appropriate for explaining the right part of the spectrum. To derive an upper bound on the support of the spectrum, we use negative order moments of the maximum of increments. This circumvents the possible non-existence of the negative order moments, which is a property of the distribution itself. Examples {#sec4} ======== In this section we list several examples of stochastic processes and investigate whether Definitions \[defD\]–\[defL\] hold. We show how the results of Section \[sec3\] apply in these cases and also discuss how the multifractal formalism could be achieved. Definitions and further details on the processes considered are given in the Appendix. Self-similar processes ---------------------- It follows from Theorem \[thm:boundsHsssi\] and Corollary \[cor:Hsssi\] that if an $H$-sssi process is ergodic with finite moments of every positive order, then the spectrum is simply $$d(h) = \begin{cases} 1, & \text{if } \ h=H\\ -\infty, & \text{otherwise}.
\end{cases}$$ This applies to all Hermite processes, e.g. BM, FBM and the Rosenblatt process. Indeed, Hermite processes have all positive order moments finite and the increments are ergodic (see e.g. [@samorodnitsky2007long Section 7]). We now discuss heavy tailed examples of $H$-sssi processes. ### Stable processes Suppose $\{ X(t)\}$ is an $\alpha$-stable Lévy motion. $\{ X(t)\}$ is $1/\alpha$-sssi and moment scaling holds but makes sense only for a range of finite moments, that is for $\mathfrak{Q}=(-1,\alpha)$ in Definition \[defM\]. For this range of $q$, $\tau(q)=q/\alpha$ and the process is self-similar. Due to infinite moments beyond order $\alpha$ the empirical scaling function will asymptotically behave for $q>0$ as $$\tau_{\infty}(q)= \begin{cases} \frac{q}{\alpha}, & \text{if } 0<q\leq \alpha,\\ 1, & \text{if } q>\alpha. \end{cases}$$ See [@GL] for the precise result. Non-linearity points to multifractality in the sense of Definition \[defE\]. The spectrum of singularities is given by ([@jaffard1999]): $$d(h) = \begin{cases} \alpha h, & \text{if } \ h \in [0, 1/\alpha],\\ -\infty, & \text{if } \ h \in (1/\alpha,+\infty]. \end{cases}$$ Hence the spectrum is nontrivial and supported on $[0, 1/\alpha]$. These are exactly the bounds given in Theorem \[thm:boundsHsssi\], as in this case $H=1/\alpha$ and $\overline{q}=\alpha$. We stress that even self-similar processes can have multifractal paths and that this is closely related to infinite moments. We now discuss which form of the scaling function would yield the multifractal spectrum via the Legendre transform. This depends strongly on the range of $q$ over which the infimum in the Legendre transform is taken. For example, if we take the infimum over $q \in [0,\alpha]$, then we get the correct spectrum from Definitions \[defM\] and \[defE\]. Since in practice $\alpha$ is unknown, one can take the infimum over $q \in [0,+\infty)$. In this case Definition \[defE\] yields the formalism, i.e.
$$d(h)= \inf_{q \in [0,\infty)} \left( hq - \tau_{\infty}(q) +1\right).$$ So even though the moments beyond order $\alpha$ are infinite, estimating infinite moments with the partition function can lead to the correct spectrum of singularities. ### Linear fractional stable motion In the same manner we treat linear fractional stable motion (LFSM) (see the Appendix for the definition). Dependence introduces a new parameter in the scaling relations and the spectrum. The LFSM $\{ X(t)\}$ is $H$-sssi and thus not multifractal in the sense of Definition \[defD\]. For the range of finite moments $\mathfrak{Q}=(-1,\alpha)$, Definition \[defM\] holds with $\tau(q)=Hq$. In this sense the process is self-similar. As follows from the results of [@GLT] (see also [@HeydeSly2008]), the empirical scaling function asymptotically behaves for $q>-1$ as $$\label{LFSMtau} \tau_{\infty}(q)= \begin{cases} Hq, & \text{if } 0<q\leq \alpha,\\ 1+q(H-\frac{1}{\alpha}), & \text{if } q>\alpha. \end{cases}$$ The combined influence of infinite moments and dependence produces concavity, pointing to multifractality in the sense of Definition \[defE\]. In [@balanca2013], the spectrum was established for $\alpha \in [1,2)$, $H\in (0,1)$ and the long-range dependence case $H>1/\alpha$: $$\label{LFSMspec} d(h) = \begin{cases} \alpha (h-H)+1, & \text{if } \ h \in [H-\frac{1}{\alpha}, H],\\ -\infty, & \text{otherwise}. \end{cases}$$ It is known that in the case $H<1/\alpha$ sample paths are nowhere bounded, which explains the assumptions. Also, increments of the LFSM are ergodic (see e.g. [@cambanis1987ergodic]). Since $\alpha=\overline{q}$ is the tail index, Theorem \[thm:boundsHsssi\] gives sharp bounds on the support of the spectrum. One can easily check that the multifractal formalism cannot be achieved with any of the definitions considered, except the empirical one.
Indeed, it holds that $$d(h)= \inf_{q \in [0,\infty)} \left( hq - \tau_{\infty}(q) +1\right).$$ It is a curiosity that if we naively estimate the scaling function using non-existing moments, we obtain the correct spectrum. ### Inverse stable subordinator The inverse stable subordinator $\{X(t)\}$ is a non-decreasing $\alpha$-ss stochastic process, for some $\alpha \in (0,1)$. However, applying the results of the previous section is not straightforward, as it has non-stationary increments. Yet we can prove that it has a trivial spectrum supported only at the point $\alpha$. To derive the lower bound we use Theorem \[thm:Kolmogorov-Chentsov\]. First recall that $(a+b)^{\alpha} \leq a^{\alpha}+ b^{\alpha}$ for $a,b\geq0$ and $\alpha \in (0,1)$. Taking $a=t-s$, $b=s$ when $t\geq s$ and $a=t$, $b=s-t$ when $t<s$ gives that $|t^{\alpha} - s^{\alpha}| \leq |t-s|^{\alpha}$. Since $\{X(t)\}$ has finite moments of every positive order we have for arbitrary $q>0$ and $t,s>0$ $$E|X(t)-X(s)|^q = |t^{\alpha} - s^{\alpha}|^q E |X(1)|^q \leq E|X(1)|^q |t-s|^{1 + \alpha q - 1}.$$ By Theorem \[thm:Kolmogorov-Chentsov\] there exists a modification which is a.s. locally Hölder continuous of order $\gamma< \alpha-1/q$. Since $q$ can be taken arbitrarily large, we can get the modification such that a.s. $H(t) \geq \alpha$ for every $t \in [0,T]$. For the upper bound we use Theorem \[thm3\]. Given $\gamma>\alpha$ we choose $m \in \mathbb{N}$ such that $m>1/(\gamma - \alpha)$. If $\{Y(t)\}$ is the corresponding stable subordinator, from the property $\{ X(t) \leq a \} = \{ Y(a) \geq t \}$ we have for every $t_1<t_2$ and $a>0$ $$\{X(t_2)-X(t_1) \leq a\}= \{ Y_{X(t_1)+a} \geq t_2 \} = \{ Y_{X(t_1)+a} -t_1 \geq t_2 - t_1\}.$$ By [@bertoin1998levy Theorem 4, p.
77], for every $t_1>0$, $P(Y_{X(t_1)}>t_1)=1$, thus on this event $$\{ Y_{X(t_1)+a} -t_1 \geq t_2 - t_1\} \subset \{ Y_{X(t_1)+a} - Y_{X(t_1)} \geq t_2 - t_1\}.$$ Now, by the strong Markov property and the stationarity of increments of $\{Y(t)\}$, for $t$ small enough we have $$\begin{aligned} &P \left( \max_{l=1,\dots,m} \left| X(s+lt) - X(s+(l-1)t) \right| \leq C t^{\gamma} \right) \\ &= P \left( X(s+t) - X(s) \leq C t^{\gamma}, \dots, X(s+mt) - X(s+(m-1)t) \leq C t^{\gamma}\right)\\ &\leq P \left( Y_{X(s)+C t^{\gamma}} - Y_{X(s)} \geq t, \dots, Y_{X(s+(m-1)t)+Ct^{\gamma}} - Y_{X(s+(m-1)t)} \geq t \right)\\ &\leq \left( P \left( Y(Ct^{\gamma}) \geq t \right) \right)^m = \left( P \left( Y(1) \geq C^{-\frac{1}{\alpha}} t^{1- \frac{\gamma}{\alpha}} \right) \right)^m \leq \left( C_1 t^{\gamma-\alpha} \right)^m,\end{aligned}$$ by the regular variation of the tail. Due to the choice of $m$, $m(\gamma-\alpha)>1$. This property of the first-passage process has been noted in [@bertoin1998levy p. 96]. Lévy processes -------------- Suppose $\{X(t), t \geq 0\}$ is a Lévy process. Lévy processes in general do not satisfy the moment scaling of the form . The only such examples are the BM and the $\alpha$-stable Lévy process. It was shown in [@GL] that the data from these processes will behave as if obeying the scaling relation . If $X(1)$ has zero mean and a heavy-tailed distribution with tail index $\alpha$ and if $\Delta t_i$ in is of the form $T^{\frac{i}{N}}$ for $i=1,\dots,N$, then for every $q>0$ as $T,N \to \infty$ the empirical scaling function will asymptotically behave as $$\label{tau} \tau_{\infty}(q)= \begin{cases} \frac{q}{\alpha}, & \text{if } 0<q\leq \alpha \ \& \ \alpha\leq 2,\\ 1, & \text{if } q>\alpha \ \& \ \alpha\leq 2,\\ \frac{q}{2}, & \text{if } 0<q\leq \alpha \ \& \ \alpha> 2,\\ \frac{q}{2}+\frac{2(\alpha-q)^2 (2\alpha+4q-3\alpha q)}{\alpha^3 (2-q)^2}, & \text{if } q>\alpha \ \& \ \alpha> 2. \end{cases}$$ See [@GL] and [@GJLT] for the proof and more details.
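For the heavy-tailed branch $\alpha\leq 2$ the formula above coincides with the asymptotic scaling function of the stable case, and the Legendre-type transform $d(h)=\inf_{q\geq 0}\left(hq-\tau_{\infty}(q)+1\right)$ can be evaluated numerically, recovering $d(h)=\alpha h$ on $(0,1/\alpha]$. A small sketch assuming NumPy (the grid resolution and the choice $\alpha=1.5$ are arbitrary):

```python
import numpy as np

def tau_inf(q, alpha):
    """Asymptotic empirical scaling function, heavy-tailed branch alpha <= 2."""
    q = np.asarray(q, dtype=float)
    return np.where(q <= alpha, q / alpha, 1.0)

def legendre_spectrum(h, alpha, q_grid=np.linspace(0.0, 20.0, 4001)):
    """d(h) = inf_q (h*q - tau_inf(q) + 1), approximated over a grid of q >= 0."""
    return np.min(h * q_grid - tau_inf(q_grid, alpha) + 1.0)

alpha = 1.5
for h in (0.1, 0.3, 1.0 / alpha):
    print(h, legendre_spectrum(h, alpha))  # close to alpha * h
```

The infimum is attained at the kink $q=\alpha$, so a grid fine enough to resolve that point suffices.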
This shows that estimating the scaling function under infinite moments is influenced by the value of the tail index $\alpha$ and will yield a concave shape of the scaling function. The local regularity of Lévy processes was established in [@jaffard1999] and extended in [@balanca2013] under weaker assumptions. Denote by $\beta$ the Blumenthal-Getoor (BG) index of a Lévy process, i.e. $$\beta=\inf \left\{ \gamma\geq 0 : \int_{|x|\leq 1} |x|^{\gamma} \pi (dx) < \infty \right\},$$ where $\pi$ is the corresponding Lévy measure. If $\sigma$ is the Brownian component of the characteristic triplet, define $$\beta' = \begin{cases} \beta , & \text{if } \ \sigma=0,\\ 2, & \text{if } \ \sigma\neq 0.\\ \end{cases}$$ The multifractal spectrum of a Lévy process is given by $$\label{spectrumLevyP} d(h) = \begin{cases} \beta h, & \text{if } \ h \in [0, 1/\beta'),\\ 1, & \text{if } \ h = 1/\beta',\\ -\infty, & \text{if } \ h \in (1/\beta',+\infty]. \end{cases}$$ Thus most Lévy processes have a non-trivial spectrum. Moreover, the estimated scaling function and the spectrum are not related, as they depend on different parts of the Lévy measure. The behaviour of the estimated scaling function is governed by the tail index, which depends on the behaviour of the Lévy measure at infinity, since for $q>0$, $E|X(1)|^q<\infty$ is equivalent to $\int_{|x|>1} |x|^q \pi (dx) < \infty$. On the other hand, the spectrum is determined by the behaviour of $\pi$ around the origin, i.e. by the BG index. The discrepancy happens as there is no exact scaling in the sense of or . If there is an exact scaling property, like in the case of the stable process, the spectrum can be estimated correctly. It is therefore important to check the validity of relation from the data. This may be problematic as it is hard to distinguish exact scaling from the asymptotic one exhibited by a large class of processes. As there is no exact moment scaling, Propositions \[prop1\] and \[prop2\] generally do not hold.
Thus, in order to establish bounds on the support of the spectrum we use other criteria from Section \[sec3\]. We present two analytically tractable examples to illustrate the use of these criteria. ### Inverse Gaussian Lévy process The inverse Gaussian Lévy process is a subordinator such that $X(1)$ has an inverse Gaussian distribution $IG(\delta, \lambda)$, $\delta>0, \lambda\geq 0$, given by the density $$f(x)=\frac{\delta}{\sqrt{2\pi}} {\mathop{}\!\mathrm{e}}^{\delta \lambda} x^{-3/2} \exp \left\{ -\frac{1}{2} \left( \frac{\delta^2}{x} + \lambda^2 x \right) \right\}, \quad x>0.$$ The expression for the cumulant reveals that for each $t$, $X(t)$ has the $IG(t\delta,\lambda)$ distribution. The Lévy measure is absolutely continuous with density given by $$g(x)=\frac{\delta}{\sqrt{2\pi}} x^{-3/2} \exp \left\{ -\frac{\lambda^2 x }{2} \right\}, \quad x>0,$$ thus the BG index is $\beta=1/2$. See [@eberlein2004generalized] for more details. The inverse Gaussian distribution has finite moments of every order, and for every $q \in \mathbb{R}$ we can express them as $$\begin{aligned} E |X(1)|^q &= \int_0^{\infty} x^q f(x) dx = \frac{\delta}{\sqrt{2\pi}} {\mathop{}\!\mathrm{e}}^{\delta \lambda} \left( \frac{2}{\lambda^2} \right)^{q-1/2} \int_0^{\infty} x^{q-3/2} \exp \left\{ -x - \frac{\delta^2 \lambda^2}{4 x} \right\} dx\\ &=\frac{\delta}{\sqrt{2\pi}} {\mathop{}\!\mathrm{e}}^{\delta \lambda} \left( \frac{2}{\lambda^2} \right)^{q-1/2} K_{-q+\frac{1}{2}} ( \delta \lambda ) 2 \left( \frac{\delta \lambda}{2} \right)^{q-\frac{1}{2}} = \sqrt{\frac{2}{\pi}} {\mathop{}\!\mathrm{e}}^{\delta \lambda} \delta^{q+\frac{1}{2}} \lambda^{-q+\frac{1}{2}} K_{-q+\frac{1}{2}} ( \delta \lambda ),\end{aligned}$$ where we have used [@olver2010 Equation 10.32.10] and $K_{\nu}$ denotes the modified Bessel function of the second kind.
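The Bessel-function expression for $E|X(1)|^q$ can be verified by integrating $x^q f(x)$ numerically. A sketch assuming SciPy (the parameter values $\delta=1$, $\lambda=2$ and the orders $q$ are arbitrary choices; the small lower integration cutoff only avoids floating-point overflow in $x^{-3/2}$, the integrand being super-exponentially small there):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv  # modified Bessel function K_nu

delta, lam = 1.0, 2.0  # arbitrary IG(delta, lambda) parameters

def ig_density(x):
    """IG(delta, lambda) density, as in the text."""
    return (delta / np.sqrt(2.0 * np.pi) * np.exp(delta * lam)
            * x ** (-1.5) * np.exp(-0.5 * (delta ** 2 / x + lam ** 2 * x)))

def moment_quad(q):
    """E|X(1)|^q by direct numerical integration."""
    return quad(lambda x: x ** q * ig_density(x), 1e-8, np.inf)[0]

def moment_bessel(q):
    """Closed form sqrt(2/pi) e^{dl} delta^{q+1/2} lam^{1/2-q} K_{1/2-q}(dl)."""
    return (np.sqrt(2.0 / np.pi) * np.exp(delta * lam)
            * delta ** (q + 0.5) * lam ** (0.5 - q) * kv(0.5 - q, delta * lam))

for q in (-1.0, 0.0, 0.7, 2.0):
    print(q, moment_quad(q), moment_bessel(q))
```

For $q=0$ the closed form reduces to $1$ (using $K_{1/2}(z)=\sqrt{\pi/(2z)}\,{\mathrm{e}}^{-z}$), confirming that $f$ is a probability density.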
This implies that $$E |X(t)|^q = \sqrt{\frac{2}{\pi}} {\mathop{}\!\mathrm{e}}^{t \delta \lambda} t^{q+\frac{1}{2}} \delta^{q+\frac{1}{2}} \lambda^{-q+\frac{1}{2}} K_{-q+\frac{1}{2}} ( t \delta \lambda ) \sim C t^{q+\frac{1}{2}} t^{-|-q+\frac{1}{2}|}, \ \text{ as } t \to 0,$$ where we have used $K_{\nu}(z) \sim \frac{1}{2} \Gamma(\nu) (\frac{1}{2} z )^{-\nu}$ as $z \to 0$ (for $\nu>0$) and $K_{-\nu}(z)=K_{\nu}(z)$. For any choice of $\gamma>0$, condition (i) of Lemma \[lemma:kolconditions\] cannot be fulfilled, so the best we can say is that the lower bound is $0$, in accordance with . Since negative order moments are finite, Lemma \[lemma2\] yields the sharp upper bound on the spectrum. Indeed, given $\gamma>1/\beta=2$, we have for $q<1/(2-\gamma)<0$ $$P \left( |X(t)| \leq C t^{\gamma} \right) \leq \frac{E|X(t)|^q}{t^{\gamma q}} \leq C_1 t^{-q (\gamma-2)}.$$ It follows that the upper bound is $2$, which is exactly the reciprocal of the BG index. ### Tempered stable subordinator The positive tempered stable distribution is obtained by exponentially tilting the Lévy density of the totally skewed $\alpha$-stable subordinator, $0<\alpha<1$. The tempered stable subordinator is a Lévy process $\{X(t)\}$ such that $X(1)$ has the positive tempered stable distribution given by the cumulant function $$\Phi(\theta) = \log E \left[ {\mathop{}\!\mathrm{e}}^{-\theta X(1)} \right] = \delta \lambda - \delta \left( \lambda^{1/\alpha} + 2 \theta \right)^{\alpha},\quad \theta \geq 0,$$ where $\delta$ is the scale parameter of the stable distribution and $\lambda$ is the tilt parameter. In this case the BG index is equal to $\alpha$ (see [@schoutens2003levy] for more details).
We use Lemma \[lemma2\] for $\gamma>1/\alpha$ to get $$P \left( |X(t)| \leq C t^{\gamma} \right) \leq {\mathop{}\!\mathrm{e}}^{-1} E \left[ {\mathop{}\!\mathrm{e}}^{-\frac{X(t)}{Ct^{\gamma}}} \right] = {\mathop{}\!\mathrm{e}}^{-1} {\mathop{}\!\mathrm{e}}^{t \Phi (C^{-1} t^{-\gamma}) } = O( {\mathop{}\!\mathrm{e}}^{-c t^{1-\gamma\alpha} }), \ \text{ as } t \to 0.$$ As this decays faster than any power of $t$ as $t\to 0$, the upper bound $1/\alpha$, the reciprocal of the BG index, follows. Multiplicative cascade ---------------------- Although the precise meaning of multifractality is ambiguous, some models are usually studied in this sense. One of the first models of this kind is the multiplicative cascade. Cascades are actually measures, but can be used to construct non-negative increasing multifractal processes. Discrete cascades satisfy only discrete scaling invariance, in the sense that Definition \[defM\] is valid only for discrete time points. Another drawback of these processes is the non-stationarity of increments. In [@bacry2003], a class of measures having continuous scaling invariance, called multifractal random measures, has been constructed, generalizing the earlier cascade models. We will refer to a process obtained from these measures simply as the cascade. Since this is a notable example of a theoretically well developed multifractal process, we analyze it in view of the results of the preceding section. Furthermore, we consider only one cascade process, the log-normal cascade. One can use cascades as subordinators to BM to build more general models called log-infinitely divisible multifractal processes (see [@bacry2003; @ludena2008lp] and the references therein). The following properties hold for the log-normal cascade $\{ X(t)\}$ with parameter $\lambda^2$ ([@bacry2008continuous]). $\{ X(t)\}$ satisfies Definition \[defD\] with the random factor $M(c)=c {\mathop{}\!\mathrm{e}}^{2\Gamma_c}$, where $\Gamma_c$ is a Gaussian random variable, and can therefore be considered a true multifractal.
Moment scaling holds with $$\tau(q)=q(1+2 \lambda^2)-2 \lambda^2 q^2.$$ Increments of $\{ X(t) \}$ are heavy-tailed with tail index equal to $2/\lambda^2$, and moments of every negative order are finite provided $\lambda^2<1/2$ (see [@bacry2013lognormal Proposition 5]). Although the asymptotic behaviour of the scaling function defined by is unknown, there are results for the estimator defined by . Fixed domain asymptotic properties of this estimator for the multiplicative cascade have been established in [@ossiander2000statistical], where it was shown that as $j\to \infty$ the estimator tends a.s. to $$\label{LGNtau} \tau_{\infty}(q)= \begin{cases} h_0^{-} q, & \text{if } q\leq q_0^{-},\\ q(1+2 \lambda^2)-2 \lambda^2 q^2, & \text{if } q_0^{-} < q < q_0^{+},\\ h_0^{+} q, & \text{if } q\geq q_0^{+}, \end{cases}$$ where $$\begin{aligned} \label{q0+-} q_0^{+} &= \inf \{ q \geq 1 : q \tau'(q)-\tau(q) + 1 \leq 0 \}=\frac{1}{\sqrt{2 \lambda^2}},\\ q_0^{-} &= \sup \{ q \leq 0 : q \tau'(q)-\tau(q) + 1 \leq 0 \}=-\frac{1}{\sqrt{2 \lambda^2}}\end{aligned}$$ and $h_0^{+}=\tau'(q_0^{+})$, $h_0^{-}=\tau'(q_0^{-})$. So the estimator is consistent for a certain range of $q$, while outside this interval the so-called linearization effect occurs. Similar results have been established in the mixed asymptotic framework ([@bacry2010multifractal]); see also [@ludena2014estimating] for a different method. The spectrum of the log-normal cascade is supported on the interval $\left[ 1 + 2 \lambda^2 - 2 \sqrt{2\lambda^2}, 1 + 2 \lambda^2 + 2 \sqrt{2\lambda^2}\right]$ and given by $$d(h)= \inf_{q \in (-\infty,\infty)} \left( hq - \tau(q) +1\right) = 1- \frac{(h-1 - 2 \lambda^2)^2}{8 \lambda^2},$$ and the multifractal formalism holds ([@barral2002multifractal]). Condition $\tau(q)>1$ of Proposition \[prop1\] yields $q \in (1,1/(2\lambda^2))$.
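For concreteness, the linearized limit $\tau_\infty$ is easy to tabulate; the sketch below (illustrative) encodes $\tau$ and $\tau_\infty$ for a given $\lambda^2$ and verifies that $q \tau'(q)-\tau(q)+1 = 1-2\lambda^2 q^2$ vanishes exactly at $q_0^{\pm}=\pm 1/\sqrt{2\lambda^2}$:

```python
import numpy as np

def tau(q, lam2):
    """Scaling function of the log-normal cascade with parameter lam2."""
    return q * (1 + 2 * lam2) - 2 * lam2 * q ** 2

def tau_prime(q, lam2):
    return (1 + 2 * lam2) - 4 * lam2 * q

def tau_inf(q, lam2):
    """A.s. limit of the estimated scaling function (linearization effect)."""
    q0 = 1.0 / np.sqrt(2 * lam2)  # q_0^+; by symmetry q_0^- = -q0
    if q >= q0:
        return tau_prime(q0, lam2) * q    # h_0^+ q
    if q <= -q0:
        return tau_prime(-q0, lam2) * q   # h_0^- q
    return tau(q, lam2)

lam2 = 0.025
q0 = 1.0 / np.sqrt(2 * lam2)
```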
We then get that $$H^-=1+2\lambda^2 - 2 \sqrt{2 \lambda^2}.$$ This is exactly the left endpoint of the interval on which the spectrum of the cascade is defined, in accordance with Proposition \[prop1\]. This maximal lower bound is achieved for $q=1/\sqrt{2 \lambda^2}=q^+_0$. If $q^-$ is the point at which the maximal lower bound $H^-$ is achieved, then $$\left( \frac{\tau(q)}{q} - \frac{1}{q} \right)'= \frac{1}{q^2} \left( q \tau'(q) - \tau(q) + 1 \right)$$ must be equal to $0$ at $q^-$. This is exactly as defined in . Although the range of finite moments is not relevant for computing $H^-$ in this case, in general it can depend on $\overline{q}$. Since all negative order moments are finite, we get that $$\widetilde{H^+}=H^+=1+2\lambda^2 + 2 \sqrt{2 \lambda^2},$$ achieved for $q=-1/\sqrt{2 \lambda^2}$. Thus again the bound from Proposition \[prop2\] is sharp, giving the right endpoint of the interval on which the spectrum is defined. Multifractal random walk ------------------------ With this example we want to show that we may have $\widetilde{H^+} \neq H^+$ and that the definition of the scaling function needs to be adjusted to avoid infinite moments of negative order. The multifractal random walk (MRW) driven by the log-normal cascade is a subordinated process $X(t)=B(\theta(t))$, where $B$ is a BM and $\theta$ is an independent cascade process (see [@bacry2003]). Multifractal properties of this process are inherited from those of the underlying cascade. $\{ X(t)\}$ satisfies Definition \[defD\] with the random factor $M(c)=c^{1/2} {\mathop{}\!\mathrm{e}}^{\Gamma_c}$, where $\Gamma_c$ is a Gaussian random variable, and the scaling function is given by $$\tau(q)=q\left( \frac{1}{2} + \lambda^2 \right)-\frac{ \lambda^2}{2} q^2.$$ The range of finite moments is $(-1, 1/\lambda^2)$ as explained in Subsection \[subsec3.4\].
The spectrum is defined on the interval $\left[ 1/2 + \lambda^2 - \sqrt{2\lambda^2}, 1/2 + \lambda^2 + \sqrt{2\lambda^2}\right]$ and given by $$d(h)= \inf_{q \in (-\infty,\infty)} \left( hq - \tau(q) +1\right) = 1- \frac{(h-1/2 - \lambda^2)^2}{2 \lambda^2}.$$ The random factor $M(c)$ is the source of multifractality and has the same scaling function, but all of its negative order moments are finite. Thus we get $$\begin{aligned} H^{-} &= 1/2 + \lambda^2 - \sqrt{2\lambda^2},\\ \widetilde{H^+} &= \frac{3}{2}+\frac{3 \lambda^2}{2},\\ H^+ &= 1/2 + \lambda^2 + \sqrt{2\lambda^2}.\end{aligned}$$ $H^{-}$ and $H^+$ give the sharp bounds, while $\widetilde{H^+}$ is affected by the divergence of the negative order moments. This shows that when the multifractal process has infinite negative order moments, one should specify scaling in terms of the random factor. Robust version of the partition function {#sec5} ======================================== In Section \[sec3\], using Corollary \[newComplementKC\], we managed to avoid the problematic infinite moments of negative order and prove results like Theorem \[thm:boundsHsssi\] and Corollary \[cor:upperboundMF\]. When the scaling function is estimated from data, spurious concavity may appear for negative values of $q$ due to the effect of diverging negative order moments. We use the idea of Corollary \[newComplementKC\] to develop a more robust version of the partition function. Instead of using plain increments in the partition function , we can use the maximum of some fixed number $m$ of increments of the same length. This makes negative order moments finite over a reasonable range and prevents divergences. The underlying idea also resembles the wavelet leaders method, where leaders are formed as the maximum of the wavelet coefficients over some time scale (see [@jaffard2004wavelet]). Since $m$ is fixed, this does not affect the true scaling.
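The support endpoints above can be recovered numerically from the Legendre transform $d(h)=\inf_q(hq-\tau(q)+1)$; a small grid-based sketch (illustrative parameter $\lambda^2=0.025$):

```python
import numpy as np

lam2 = 0.025
tau = lambda q: q * (0.5 + lam2) - 0.5 * lam2 * q ** 2  # MRW scaling function

q = np.linspace(-60.0, 60.0, 6001)
h = np.linspace(0.0, 1.5, 751)
# Legendre transform d(h) = inf_q (h q - tau(q) + 1) evaluated on a grid
d = (h[:, None] * q[None, :] - tau(q)[None, :] + 1.0).min(axis=1)

support = h[d >= 0.0]                      # h with non-trivial spectrum
H_minus = 0.5 + lam2 - np.sqrt(2 * lam2)   # analytic left endpoint
H_plus = 0.5 + lam2 + np.sqrt(2 * lam2)    # analytic right endpoint
```

The grid support matches the analytic interval $[1/2+\lambda^2-\sqrt{2\lambda^2},\, 1/2+\lambda^2+\sqrt{2\lambda^2}]$ up to discretization error.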
The same idea can be used for $q>0$, as Lemma \[lemma:kolconditions\] indicates this condition can also explain the spectrum. It is important to stress that the estimation of the scaling function makes sense only if the underlying process is known to possess a scaling property of the type . Suppose $\{X(t)\}$ has stationary increments and $X(0)=0$. Divide the interval $[0,T]$ into $\lfloor T / (m \Delta t) \rfloor$ blocks, each consisting of $m$ increments of length $\Delta t$, and define the modified partition function: $$\label{modifiedpartitionfun} \widetilde{S}_q(T,\Delta t) = \frac{1}{\lfloor T / (m \Delta t) \rfloor} \sum_{i=1}^{\lfloor T / (m \Delta t) \rfloor} \max_{l=1,\dots,m} \left| X ( (i-1) m \Delta t+l \Delta t) - X ( (i-1) m \Delta t+ (l-1) \Delta t) \right|^q.$$ One can see $\widetilde{S}_q(T,\Delta t)$ as a natural estimator of . Analogously, we define the modified scaling function as in by using $\widetilde{S}_q(n,\Delta t_i)$: $$\label{modifiedtauhat} \widetilde{\tau}_{N,T}(q) = \frac{\sum_{i=1}^{N} \ln {\Delta t_i} \ln \widetilde{S}_q(n,\Delta t_i) - \frac{1}{N} \sum_{i=1}^{N} \ln {\Delta t_i} \sum_{j=1}^{N} \ln \widetilde{S}_q(n,\Delta t_j) }{ \sum_{i=1}^{N} \left(\ln {\Delta t_i}\right)^2 - \frac{1}{N} \left( \sum_{i=1}^{N} \ln {\Delta t_i} \right)^2 }.$$ One can alter the definition only for $q<0$, although there is not much difference between the two forms when $q>0$. To illustrate how this modification makes the scaling function more robust, we present several examples comparing and . We generate sample paths of several processes and estimate the scaling function by both methods. We also estimate the spectrum numerically using . Results are shown in Figures \[Fig1\]-\[Fig4\]. Each figure shows the scaling function estimated using the standard definition and using , together with the estimated spectrum. We also add plots of the scaling function that would yield the correct spectrum via the multifractal formalism, and the true spectrum of the process.
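The modified partition function is simple to implement; a minimal sketch (assuming a 1-d array of equally spaced observations of the path, illustrative rather than the authors' code):

```python
import numpy as np

def S_q(x, dt, q):
    """Standard partition function: mean of |increments at lag dt|^q."""
    inc = np.abs(np.diff(x[::dt]))
    return np.mean(inc ** q)

def S_q_mod(x, dt, q, m=20):
    """Modified partition function: mean of (block maxima of m increments)^q."""
    inc = np.abs(np.diff(x[::dt]))
    nblocks = len(inc) // m
    blocks = inc[: nblocks * m].reshape(nblocks, m)
    return np.mean(blocks.max(axis=1) ** q)

rng = np.random.default_rng(0)
bm = np.cumsum(rng.standard_normal(100_000))  # Brownian path at integer times
```

For $q<0$, replacing increments by block maxima damps the near-zero increments that make the plain sample moment explode.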
For the BM (Figure \[Fig1\]) and the $\alpha$-stable Lévy process (Figure \[Fig2\]) we generated sample paths of length $10000$, using $\alpha=1$ for the latter. The LFSM (Figure \[Fig3\]) was generated using $H=0.9$ and $\alpha=1.2$ with path length $15784$ (see [@stoev2004simulation] for details on the simulation algorithm used). Finally, the MRW of length $10000$ was generated with $\lambda^2=0.025$ (Figure \[Fig4\]). For each case we take $m=20$ in the modified partition function . In all the examples considered, the modified scaling function is capable of yielding the correct spectrum of the process via the multifractal formalism. As opposed to the standard definition, it is unaffected by diverging negative order moments. Moreover, it captures the divergence of positive order moments, which determines the shape of the spectrum. Appendix {#appendix .unnumbered} ======== We provide a brief overview of the classes of stochastic processes used throughout the paper. The Hermite process $\{Z_H^{(k)}(t), t \geq 0 \}$ with $H \in (1/2,1)$ and $k \in \mathbb{N}$ can be defined as $$Z_H^{(k)} (t) = C(H,k) \int_{\mathbb{R}^k}^{'} \int_0^t \left( \prod_{j=1}^k (s-y_j)_{+}^{-( \frac{1}{2} + \frac{1-H}{k})} \right) {\mathop{}\!\mathrm{d}}s {\mathop{}\!\mathrm{d}}B(y_1) \cdots {\mathop{}\!\mathrm{d}}B(y_k), \quad t \geq 0,$$ where $\{B(t) \}$ is the standard BM and the integral is taken over $\mathbb{R}^k$ excluding the hyperplanes $y_i=y_j$, $i \neq j$. The constant $C(H,k)$ is chosen such that $E [Z_H^{(k)}(1) ]^2=1$, and $(x)_{+}=\max (x,0)$. Hermite processes are $H$-sssi. For $k=1$ one gets the FBM and for $k=2$ the Rosenblatt process. See e.g. [@embrechts2002] for more details. A Lévy process is a process with stationary and independent increments starting from $0$. Given an infinitely divisible distribution, there exists a Lévy process such that $X(1)$ has this distribution. Moreover, its characteristic function can be uniquely represented by the Lévy-Khintchine formula.
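The BM case of the experiment above can be reproduced end-to-end: estimate the modified scaling function at $q=2$ by log-log regression of the partition function on the scale, and compare with the exact value $\tau(2)=1$ for BM (a sketch; the path length, dyadic scales, and $m=20$ mirror the setup described here):

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.cumsum(rng.standard_normal(100_000))   # Brownian motion at integer times

def S_q_mod(x, dt, q, m=20):
    """Modified partition function with block maxima of m increments."""
    inc = np.abs(np.diff(x[::dt]))
    nb = len(inc) // m
    return np.mean(inc[: nb * m].reshape(nb, m).max(axis=1) ** q)

q = 2.0
dts = np.array([1, 2, 4, 8, 16, 32])
# slope of log S vs log dt estimates tau(q); for BM, tau(q) = q/2, so tau(2) = 1
slope = np.polyfit(np.log(dts), np.log([S_q_mod(x, dt, q) for dt in dts]), 1)[0]
```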
See [@bertoin1998levy] and [@schoutens2003levy] for more details. The $\alpha$-stable Lévy process is a process such that $X(1)$ has a stable distribution with stability index $0<\alpha<2$. In general, a random variable $X$ has an $\alpha$-stable distribution with index of stability $\alpha \in (0,2)$, scale parameter $\sigma \in (0,\infty)$, skewness parameter $\beta\in [-1,1]$ and shift parameter $\mu \in \mathbb{R}$, denoted by $X \sim S_{\alpha}(\sigma,\beta,\mu)$, if its characteristic function has the following form $$E \exp \{ i\zeta X \} = \begin{cases} \exp\left\{-\sigma^{\alpha}|\zeta|^{\alpha}\left( 1-i\beta\,{\rm sign}(\zeta)\tan{\frac{\pi\alpha}{2}}\right)+i\zeta\mu \right\}, & \text{if } \alpha\ne1,\\ \exp\left\{-\sigma|\zeta|\left(1+i\beta\frac{2}{\pi}{\rm sign}(\zeta)\ln{|\zeta|}\right)+i\zeta\mu\right\}, & \text{if } \alpha=1, \end{cases} \quad \zeta \in \mathbb{R}.$$ The stable Lévy process is $1/\alpha$-sssi. Linear fractional stable motion (LFSM) is an example of a process with heavy-tailed and dependent increments. LFSM can be defined through the stochastic integral $$X(t)=\frac{1}{C_{H,\alpha}} \int_{\mathbb{R}} \left( (t-u)_{+}^{H-1/\alpha} - (-u)_{+}^{H-1/\alpha} \right) dL_{\alpha}(u),$$ where $\{L_{\alpha}\}$ is a strictly $\alpha$-stable Lévy process, $\alpha \in (0,2)$, $0<H<1$ and $(x)_{+}=\max (x,0)$. The constant $C_{H,\alpha}$ is chosen such that the scale parameter of $X(1)$ equals $1$, i.e. $$C_{H,\alpha} = \left( \int_{\mathbb{R}} \left| (1-u)_{+}^{H-1/\alpha} - (-u)_{+}^{H-1/\alpha} \right|^{\alpha} du \right)^{1/ \alpha}.$$ It is then called the standard LFSM. The LFSM is $H$-sssi. Setting $\alpha=2$ in the definition reduces the LFSM to the FBM. By analogy to the FBM, the case $H>1/\alpha$ is referred to as long-range dependence and the case $H<1/\alpha$ as negative dependence. The parameter $\alpha$ governs the tail behaviour of the marginal distributions, implying in particular that $E|X(t)|^q=\infty$ for $q \geq \alpha$.
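For simulation purposes, symmetric $\alpha$-stable variates ($\beta=0$, $\sigma=1$, $\mu=0$) can be generated with the Chambers-Mallows-Stuck method; a sketch (the general skewed case requires a slightly more involved formula):

```python
import numpy as np

def sym_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for S_alpha(1, 0, 0), 0 < alpha <= 2."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)  # uniform angle
    w = rng.exponential(1.0, size)                # unit exponential
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(1)
steps = sym_stable(1.2, 100_000, rng)
path = np.cumsum(steps)  # path of a symmetric 1.2-stable Levy process at integer times
```

For $\alpha=1$ the formula reduces to $\tan U$, i.e. the standard Cauchy distribution.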
For more details see [@samorodnitsky1994]. A Lévy process $\{Y(t)\}$ such that $Y(1) \sim S_{\alpha}(\sigma,1,0)$, $0<\alpha<1$, is referred to as a stable subordinator. It is nondecreasing and $1/\alpha$-sssi. The inverse stable subordinator $\{ X(t)\}$ is defined as $$X(t) = \inf \left\{ s >0 : Y(s) >t \right\}.$$ It is $\alpha$-ss with dependent, non-stationary increments and corresponds to the first passage time of the stable subordinator strictly above level $t$. For more details see [@meerschaert2013inverse] and references therein. [^1]: dgrahova@mathos.hr [^2]: LeonenkoN@cardiff.ac.uk
--- abstract: 'Recent works using deep learning to solve the Traveling Salesman Problem (TSP) have focused on learning construction heuristics. Such approaches find TSP solutions of good quality but require additional procedures such as beam search and sampling to improve solutions and achieve state-of-the-art performance. However, few studies have focused on improvement heuristics, where a given solution is improved until reaching a near-optimal one. In this work, we propose to learn a local search heuristic based on 2-opt operators via deep reinforcement learning. We propose a policy gradient algorithm to learn a stochastic policy that selects 2-opt operations given a current solution. Moreover, we introduce a policy neural network that leverages a pointing attention mechanism, which, unlike previous works, can be easily extended to more general k-opt moves. Our results show that the learned policies can improve even over random initial solutions and approach near-optimal solutions at a faster rate than previous state-of-the-art deep learning methods.' author: - Paulo Roberto de Oliveira da Costa - Jason Rhuggenaath - Yingqian Zhang - Alp Akcay title: 'Learning 2-opt Heuristics for the Traveling Salesman Problem via Deep Reinforcement Learning' --- Introduction ============ The Traveling Salesman Problem (TSP) is a well-known combinatorial optimization problem. In the TSP, given a set of locations (nodes) in a graph, we need to find the shortest tour that visits each location exactly once and returns to the departing location. The TSP has been shown to be NP-hard [@papadimitriou1977euclidean] even in its Euclidean formulation, i.e., with nodes as points in 2D space. Classic approaches to solve the TSP can be classified into exact and heuristic methods. The former have been extensively studied using integer linear programming [@applegate2006traveling]; such methods are guaranteed to find an optimal solution but are often too computationally expensive to be used in practice.
The latter are based on handcrafted (meta-)heuristics that exploit properties of the problem to construct approximate solutions requiring less computational time, such as heuristics based on edge swaps like $k$-opt [@helsgaun2009general]. Nevertheless, designed heuristics require specialized knowledge and their performance is often limited by algorithmic design decisions. Recent works in machine learning and deep learning have focused on learning heuristics for combinatorial optimization problems [@bengio2018machine; @lombardi2018boosting]. For the TSP, both supervised learning [@vinyals2015pointer; @joshi2019efficient] and reinforcement learning [@Bello2017WorkshopT; @wu2019learning; @kool2018attention; @deudon2018learning; @khalil2017learning] methods have been proposed. The idea is that a machine learning method could potentially learn better heuristics by extracting useful information directly from data, rather than having an explicitly programmed behavior. Most approaches to the TSP have focused on learning construction heuristics, i.e., methods that can generate a solution given a set of input nodes. These methods employed sequence representations [@vinyals2015pointer; @Bello2017WorkshopT], graph neural networks [@khalil2017learning; @joshi2019efficient] and attention mechanisms [@kool2018attention; @deudon2018learning; @wu2019learning], resulting in high-quality solutions. However, construction methods still require additional procedures such as beam search, classical improvement heuristics and sampling to achieve such results. This limitation hinders their applicability, as one must revert to handcrafted improvement heuristics and search algorithms for state-of-the-art performance. Thus, learning improvement heuristics that can search for high-quality solutions remains relevant. That is, we focus on methods in which a given solution is improved sequentially until reaching a (local) optimum.
Here the idea is that if we can learn a policy to improve a solution, we can use it to obtain better solutions from a construction heuristic or even from random solutions. Recently, a deep reinforcement learning method [@wu2019learning] has been proposed for this task, achieving near-optimal results using node swap and 2-opt moves. However, its architecture has an output fixed by the number of possible moves and the TSP size, which makes it less favorable for extension to more general $k$-opt [@helsgaun2009general] moves and for learning general policies independent of the number of nodes. In this work, we propose a deep reinforcement learning algorithm trained via Policy Gradient to learn improvement heuristics based on 2-opt moves. Our architecture is based on a pointer attention mechanism [@vinyals2015pointer] that outputs nodes sequentially for action selection. We introduce a reinforcement learning formulation to learn a stochastic policy over the next promising solutions, incorporating the search's history by keeping track of the current best-visited solution. Our results show that we can learn policies for the Euclidean TSP that achieve near-optimal solutions even when starting from solutions of poor quality. Moreover, our approach can achieve better results than previous deep learning methods based on construction [@vinyals2015pointer; @joshi2019efficient; @kool2018attention; @deudon2018learning; @khalil2017learning; @Bello2017WorkshopT] and improvement [@wu2019learning] heuristics. Compared to [@wu2019learning], our method can be easily adapted to general $k$-opt and requires fewer samples to achieve state-of-the-art performance. In addition, policies trained on small instances can be reused on larger instances of the TSP. Lastly, our method outperforms other effective heuristics such as Google's OR-Tools [@ortools] and is close to optimal solutions computed by Concorde [@applegate2006traveling].
Related Work ============ Exact approaches for the TSP, such as linear programming, may require a large amount of computational time to find optimal solutions. For this reason, designing fast heuristics for the TSP is necessary. However, classical heuristics require specialized knowledge and may have sub-optimal handcrafted design rules. Therefore, methods that can automatically learn good heuristics have the potential to be more effective than handcrafted ones. In machine learning, early works for the TSP have focused on Hopfield networks [@hopfield1985neural] and deformable template models [@angeniol1988self]. However, the performance of these approaches has not been on par with classical heuristics [@la2012comparison]. Recent deep learning methods have achieved high performance learning construction heuristics for the TSP. Pointer Networks (PtrNet) [@vinyals2015pointer] introduced a sequence model coupled with an attention mechanism trained to output TSP tours using solutions generated by Concorde [@applegate2006traveling]. In [@Bello2017WorkshopT], the PtrNet was further extended to learn without supervision using Policy Gradient, trained to output a distribution over node permutations. Other approaches encoded instances via graph neural networks. A *structure2vec* (S2V) [@khalil2017learning] model was trained to output the ordering of partial tours using Deep Q-Learning (DQN). Later, graph attention modules [@deudon2018learning] showed that a hybrid approach using 2-opt local search on top of tours produced via Policy Gradient improves performance. Graph attention was extended in [@kool2018attention], training via REINFORCE [@williams1992simple] with a greedy rollout baseline, resulting in lower optimality gaps.
Recently, the supervised approach was revisited using Graph Convolution Networks (GCN) [@joshi2019efficient] to learn probabilities of edges occurring on a TSP tour, achieving state-of-the-art results up to 100 nodes whilst also combining with search heuristics. Crucial to the previous methods are additional procedures such as beam search, classical improvement heuristics and sampling, needed to achieve good solutions. However, little attention has been paid to learning policies that search for improved solutions. A recent approach [@wu2019learning], based on the transformer architecture, employed a Graph Attention Network (GAT) [@velickovic2018graph] coupled with 2-opt and node swap operations. The limitations of this approach are related to its fixed output embeddings. These are vectors whose fixed dimension is defined by the squared number of nodes. This choice makes the model specific to an instance size, and expanding to general $k$-opt [@helsgaun2009general] moves requires increasing the dimension of the output vector. Moreover, the approach requires more samples than construction methods to achieve similar results. In contrast, we encode edge information using graph convolutions and use classical sequence encoding to learn tour representations. We decode these representations via a pointing attention mechanism [@vinyals2015pointer] to learn a stochastic policy for the action selection task. Our approach resembles classical 2-opt heuristics [@hansen2006first] and can outperform previous deep learning methods in solution quality and sample efficiency. Background ========== Travelling Salesman Problem --------------------------- We focus on the 2D Euclidean TSP. Given an input graph, represented as a sequence of $n$ locations in a two dimensional space $X = \{x_i\}^n_{i=1}$ where $x_i \in [0, 1]^2$, we are concerned with finding a permutation of the nodes, i.e.
a tour $S = (s_1, \dots, s_n)$, that visits each node once (except the starting node) and has the minimum total length (cost). We define the cost of a tour as the sum of the distances (edges) between consecutive nodes in $S$ as $$L(S)= \left\|x_{s_n}-x_{s_1} \right\|_2 + \sum_{i=1}^{n-1}\left\| x_{s_i}-x_{s_{i+1}}\right\|_2\,,$$ where $ \left\|\cdot\right\|_2$ denotes the $\ell_2$ norm. $k$-opt Heuristic for the TSP ----------------------------- Improvement heuristics enhance feasible solutions through a search procedure. A procedure starts at an initial solution $S_0$ and replaces a previous solution $S_t$ by a better solution $S_{t+1}$. Local search methods such as the effective Lin-Kernighan-Helsgaun (LKH) [@helsgaun2009general] heuristic perform well for the TSP. The procedure searches for $k$ edge swaps ($k$-opt moves) in which edges are replaced by new edges resulting in a shorter tour. A simpler version [@lin1973effective] considers 2-opt (Figure \[fig:operator\]) and 3-opt moves as alternatives, as these balance solution quality and the $O(n^k)$ complexity of the moves. Moreover, sequential pairwise operators such as $k$-opt moves can be decomposed into simpler $l$-opt ones, where $l<k$. For instance, 3-opt sequential operations can be decomposed into one, two or three 2-opt operations [@helsgaun2009general]. However, in local search algorithms, the quality of the initial solution usually affects the quality of the final solution, i.e. local search methods can easily get stuck in local optima [@hansen2006first]. ![TSP solution before a 2-opt move (left), and the output sequence after a 2-opt move (right). Replaced edges are represented by dashed lines. Note that the sequence $s_i, \dots, s_j$ is inverted.[]{data-label="fig:operator"}](2opt_new.pdf){width="50.00000%"} To avoid local optima, different metaheuristics have been proposed, including Simulated Annealing and Tabu Search. These work by accepting worse solutions to allow more exploration of the search space.
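To make the tour cost and the 2-opt operation concrete, a minimal sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def tour_cost(points, tour):
    """L(S): total Euclidean length of the closed tour."""
    p = points[tour]
    return float(np.sum(np.linalg.norm(p - np.roll(p, -1, axis=0), axis=1)))

def two_opt_move(tour, i, j):
    """Reverse the segment s_i..s_j (0-based positions, i < j),
    replacing edges (s_{i-1}, s_i) and (s_j, s_{j+1})."""
    new = tour.copy()
    new[i:j + 1] = new[i:j + 1][::-1]
    return new

points = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])  # unit square
crossing = np.array([0, 2, 1, 3])             # tour with crossing edges
uncrossed = two_opt_move(crossing, 1, 2)      # reverses (2, 1) -> (0, 1, 2, 3)
```

Reversing the segment removes the crossing and shortens the tour from $2+2\sqrt{2}\approx 4.83$ to $4$.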
In general, allowing a larger exploration of the search space leads to better solution quality. However, metaheuristics still require expert knowledge and may have sub-optimal rules in their design. To tackle this limitation, we propose to combine machine learning and 2-opt operators to learn a stochastic policy that sequentially replaces a TSP solution with one in its neighborhood. Our policy iterates over feasible solutions and the best solution (minimum cost) is returned at the end. The idea behind our method is that by taking future improvements into account we can (potentially) find better solutions than handcrafted heuristics. Reinforcement Learning Formulation ================================== Our formulation considers the task of solving the TSP via 2-opt moves as a Markov Decision Process (MDP), detailed below. In our MDP, a given solution (tour) at step $t$ is an observation $S_t$ and the proposed *policy gradient neural architecture* (Section \[sec:PGN\]) is used as a function approximator for the stochastic policy $\pi_{\theta}(A_t\mid \Bar{S}_t)$, where action $A_t$ is selected given a state $\Bar{S}_t = (S_t, S^\prime_t) $. Each state $\Bar{S}_t$ is represented as a tuple of the current solution $S_t$ and the best solution $S^\prime_t$ (minimum cost) seen up to $t$, where $\theta$ represents the trainable parameters of our *policy* network. Each $A_t$ corresponds to a 2-opt move in which nodes are sampled sequentially. Our architecture also encompasses a *value* network that outputs value estimates $V_\phi(\bar{S}_t)$, with $\phi$ as learnable parameters. We assume that TSP samples are drawn from the same distribution $\mathcal{S}$ and use Policy Gradient to learn the actions of an agent optimizing the parameters of the policy and value networks (Section \[sec:PGO\]). **States** $S_t$ represents a solution to a TSP instance at search step $t$, i.e. a tour.
A state is composed of the tuple $\Bar{S_t} = (S_t, S^\prime_t)$, where $S^\prime_t$ is the lowest cost solution encountered up to step $t$, defined as $$S^{\prime}_{t} = \begin{cases} S_{t}, & \text{if $L(S_{t}) < L(S^{\prime}_{t-1}) $},\\ S^{\prime}_{t-1} , & \text{otherwise}\,, \end{cases}$$ where $S^{\prime}_{0}= S_{0}$ is an initial solution. **Actions** Actions correspond to 2-opt operations that change the current solution $S_t$ to a solution $S_{t+1}$. We model actions as tuples $ A_t = (a_1, a_2)$ where $a_1,a_2 \in \{1,\dots,n\}$, $a_1 \ne a_2$, correspond to index positions of solution $S_t = (s_1,\dots,s_n)$. **Transitions** Transitioning to a next state $\Bar{S}_{t+1}$ is defined from state-action pairs $(\Bar{S}_t, A_t)$. That is, given $A_t=(a_1=i, a_2=j)$, transitioning to the next state defines a deterministic change to solution $S_t = (\dots,s_i, \dots, s_j, \dots)$, resulting in a new solution $S_{t+1} = (\dots,s_{i-1},s_j, \dots, s_i,s_{j+1}, \dots) $ and state $\Bar{S}_{t+1} = (S_{t+1}, S^\prime_{t+1})$. **Rewards** Similar to [@wu2019learning], we attribute rewards to actions that can improve upon the current best-found solution over a number of time steps. Thus, we define our reward function as $R_{t} = L(S^{\prime}_t) - \min \Big(L(S^{\prime}_t), L(S_{t+1})\Big)$. Since this reward function automatically results in the agent favoring swaps of very long edges with short edges, we clip rewards to 1 to assign similar rewards in those cases. **Environment** Our environment runs for a maximum number of steps $\mathbb{T}$. Within each run, we define episodes over a number of steps $T \leq \mathbb{T}$, after which a new episode starts from the last state seen in the previous episode. This ensures that the agent has access to poor quality solutions at $t=0$, and high quality solutions as $t$ grows.
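The reward above fits in a few lines (we read "clip rewards to 1" as an upper clip at 1; that reading, and the function name, are our assumptions):

```python
def reward(best_cost, new_cost, clip=1.0):
    """R_t = L(S'_t) - min(L(S'_t), L(S_{t+1})), with an assumed upper clip."""
    return min(max(best_cost - new_cost, 0.0), clip)
```

The inner `max` reproduces the `min` inside the definition: when the new solution is worse than the incumbent, the reward is zero.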
In our experiments, treating the environment as continuous and bootstrapping [@mnih2016asynchronous] resulted in lower quality policies under the same conditions. **Returns** Our objective is to maximize the expected return $G_t$, which is the cumulative reward starting at time step $t$ and finishing at $T$, at which point no future rewards are available, i.e. $G_t =\sum_{t^\prime=t}^{T-1} \gamma^{t^\prime-t} R_{t^\prime} $, where $\gamma \in (0, 1]$ is a discount factor. Policy Gradient Neural Architecture {#sec:PGN} =================================== Our neural network, represented in Figure \[fig:nn\], follows the general encoder-decoder architecture. Our encoder maps each component of a state $\Bar{S}_t = (S_t, S^\prime_t) $ independently[^1]. That is, each encoding unit reads node coordinates $X = (x_1, \dots, x_n)$, where $x_i$ are the coordinates associated with node $s_i$ in solution $S$. The encoder then transforms the inputs to a set of node representations $Z = (z_1, \dots, z_n)$ that embed the graph topology. Later, we map these representations to a learned sequential embedding $O = (o_1, \dots, o_n)$ that encodes the position of each node in a solution. Given node and sequence embeddings from $\Bar{S}$, the *policy* decoder is autoregressive and samples output actions $A = (a_1, \dots, a_k)$ one element at a time, where each $a_i$ corresponds to an index position of the input and $k$ denotes the number of nodes to output, i.e., $k=2$ for 2-opt. The *value* decoder operates on the same representations but generates real-valued outputs to estimate state values. We detail each component of the architecture in the subsequent sections. ![image](architecure.pdf){width="100.00000%"} Encoder ------- The purpose of our encoder is to obtain a representation for each node in the input graph given its topological structure and its position in a given solution.
To accomplish this objective, we incorporate elements from Graph Convolution Networks (GCN) [@kipf2016semi] and sequence embedding via Recurrent Neural Networks (RNN) [@hochreiter1997long] to build node representations. Furthermore, we use edge information to build a more informative encoding of the TSP graph; incorporating the neighboring edge information accelerates the convergence of the algorithm. ### Embedding Layer We input two-dimensional coordinates $x_i \in {[0,1]}^2$, $ \forall i \in (1,\ldots, n) $, which are embedded to $d$-dimensional features as[^2] $$x^0_i = W_x x_i\,,$$ where $W_x\in \mathbb{R}^{d \times 2} $. We use the Euclidean distances $e_{i,j}$ between coordinates $x_i$ and $x_j$ as input to add edge information and weight the node feature matrix. To avoid scaling the inputs to different magnitudes, we adopt symmetric normalization [@kipf2016semi] as $$\tilde{e}_{i,j}= \frac{e_{i,j}}{\sqrt{\sum_{j=1}^n e_{i,j} \sum_{i=1}^n e_{i,j}}}\,.$$ The normalized edges are then used in combination with the GCN layers to create richer node representations from each node's neighboring topology. ### Graph Convolutional Layers In the GCN layers, we denote as $x_i^{\ell}$ the node feature vector at GCN layer $\ell$ associated with node $i$. We define the node feature at the subsequent layer by combining features from nodes in the neighborhood $\mathcal{N}(i)$ of node $i$ as $$\label{eq:4} x_i^{\ell+1} = x_i^{\ell} + \text{ReLU} \Big(\sum\nolimits_{j\in \mathcal{N}(i)} \tilde{e}_{i,j} W_g^\ell x_j^{\ell} \Big),$$where $W^{\ell}_g \in \mathbb{R}^{d \times d} $, $\text{ReLU}$ is the rectified linear unit, and the neighborhood $\mathcal{N}(i)$ of node $i$ corresponds to the remaining $n-1$ nodes of the complete TSP graph. At the input to these layers we have $\ell=0$, and after $\mathbb{L}$ layers we arrive at representations $z_i = x_i^{\mathbb{L}}$ that combine node features with the additional edge feature representation.
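A NumPy sketch of the symmetric edge normalization and the residual GCN update of (\[eq:4\]); this illustrates the equations only and is not the authors' PyTorch implementation (biases are omitted, as in the text):

```python
import numpy as np

def normalize_edges(E):
    """e~_ij = e_ij / sqrt(sum_j e_ij * sum_i e_ij): symmetric normalization."""
    row = E.sum(axis=1, keepdims=True)   # sum over j for each row i
    col = E.sum(axis=0, keepdims=True)   # sum over i for each column j
    return E / np.sqrt(row * col)

def gcn_layer(X, E_norm, W):
    """x_i^{l+1} = x_i^l + ReLU(sum_{j in N(i)} e~_ij W x_j^l)."""
    A = E_norm.copy()
    np.fill_diagonal(A, 0.0)             # N(i) excludes the node itself
    agg = A @ (X @ W.T)                  # weighted aggregation of W x_j
    return X + np.maximum(agg, 0.0)      # residual connection + ReLU
```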
### Sequence Embedding Layers Next, we use the node embeddings $z_i$ to learn a sequence representation of the input and encode a tour. Due to symmetry, a tour over nodes $(1,\ldots,n)$ has the same cost as the tour $(n,\ldots,1)$. Therefore, we read the sequence in both orders to explicitly encode the symmetry of a solution. To accomplish this objective, we employ two Long Short-Term Memory (LSTM) networks [@hochreiter1997long] as our RNN functions, computed using hidden vectors from the previous node in the tour and the current node embedding, resulting in $$\label{eq:5} (h^{\rightarrow}_i, c^{\rightarrow}_i) = \text{RNN}(z_i^{\rightarrow}, (h_{i-1}^{\rightarrow},c_{i-1}^{\rightarrow}) ), \quad i \in (1,\ldots,n)$$ $$\label{eq:7} (h^{\leftarrow}_i, c^{\leftarrow}_i) = \text{RNN}(z_i^{\leftarrow}, (h_{i+1}^{\leftarrow},c_{i+1}^{\leftarrow}) ), \quad i \in (n,\ldots,1)$$ where in (\[eq:5\]) a forward RNN goes over the embedded nodes from left to right, in (\[eq:7\]) a backward RNN goes over the nodes from right to left, and $h_i, c_i \in \mathbb{R}^d$ are hidden vectors. Our representation reconnects back to the first node in the tour, ensuring we construct a sequential representation of the complete tour, i.e. $(h^{\rightarrow}_0, c^{\rightarrow}_0) = \text{RNN}(z_n, 0)$ and $(h^{\leftarrow}_{n+1}, c^{\leftarrow}_{n+1}) = \text{RNN}(z_1, 0)$. Afterwards, we combine the forward and backward representations to form unique node representations in a tour as $o_{i} = \text{tanh}(W_fh^{\rightarrow}_i + W_bh^{\leftarrow}_i)$, and a tour representation $h_n = h^{\rightarrow}_n + h^{\leftarrow}_n$, where $h_i, o_i \in \mathbb{R}^d$ and $W_f, W_b \in \mathbb{R}^{d \times d}$. ### Dual Encoding In our formulation, a state $\Bar{S} = (S, S^\prime) $ is represented as a tuple of the current solution $S$ and the best solution seen so far, $S^\prime$. For that reason, we encode both $S$ and $S^\prime$ with independent instances of the aforementioned encoding layers (Figure \[fig:nn\]).
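The bidirectional scans of (\[eq:5\])–(\[eq:7\]) and the combination $o_i = \tanh(W_f h^{\rightarrow}_i + W_b h^{\leftarrow}_i)$ can be sketched as follows; for brevity the LSTM cell is replaced here by a plain $\tanh$ RNN cell, so this is a structural illustration only, not the model used in the paper:

```python
import numpy as np

def rnn_scan(Z, W, U, reverse=False):
    """Stand-in for the LSTM scans: h_i = tanh(W z_i + U h_prev),
    run left-to-right (forward) or right-to-left (backward)."""
    n, d = len(Z), W.shape[0]
    order = range(n - 1, -1, -1) if reverse else range(n)
    h, H = np.zeros(d), [None] * n
    for i in order:
        h = np.tanh(W @ Z[i] + U @ h)
        H[i] = h
    return H

def combine(Hf, Hb, Wf, Wb):
    """o_i = tanh(W_f h_i^forward + W_b h_i^backward)."""
    return [np.tanh(Wf @ f + Wb @ b) for f, b in zip(Hf, Hb)]
```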
With a slight abuse of notation, we define the sequential representation of $S^\prime$ after the encoding layers as $h^{\prime}_n \in \mathbb{R}^d$. Policy Decoder -------------- We aim to learn the parameters of a stochastic policy $\pi_\theta (A \mid \Bar{S})$ that, given a state $\Bar{S}$, assigns high probabilities to moves that reduce the cost of a tour. Following [@Bello2017WorkshopT], our architecture uses the chain rule to factorize the probability of a $k$-opt move as $$\pi_{\theta}(A\mid \Bar{S}) = \prod_{i=1}^{k} p_\theta\left(a_i \mid a_{<i}\,, \Bar{S}\right), \label{eqn:prob}$$ and then uses individual softmax functions to represent each term on the RHS of (\[eqn:prob\]), where $a_i$ corresponds to node positions in a tour, $a_{<i}$ corresponds to previously sampled nodes, and $k=2$. During training, we divide by $k$ to normalize loss values. At each output step $i$, we map the tour embedding vectors to the following *query* vector $$q_i = \tanh( W_q q_{i-1} + W_o o_{i-1}), \label{eqn:query}$$ where $W_q, W_o \in \mathbb{R}^{d \times d} $ are learnable parameters and $o_0 \in \mathbb{R}^{d}$ is a fixed parameter initialized from a uniform distribution $\mathcal{U}(\frac{-1}{\sqrt{d}}, \frac{1}{\sqrt{d}})$. Our initial query vector $q_0$ receives the tour representations from $S$ and $S^\prime$ as $h_{\Bar{s}} = W_s h_n \| W_{s^{\prime}} h^\prime_n$ and a *max pooling* graph representation $z_g = \max( z_1,\ldots, z_n )$ from $S$ to form $$q_0 = h_{\Bar{s}} + z_g, \label{eqn:q_0}$$ where learnable parameters $W_s, W_{s^\prime} \in \mathbb{R}^{\frac{d}{2} \times d}$, and $\cdot\|\cdot$ represents the concatenation operation. Similar to [@vinyals2015pointer; @Bello2017WorkshopT; @deudon2018learning], our query vectors $q_i$ interact with a set of $n$ vectors to define a pointing distribution over the action space.
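A sketch of this pointing interaction, including the masking that enforces $a_i > a_{i-1}$ (formalized in the pointing-mechanism subsection below); names and dimensions are illustrative, not the authors' code:

```python
import numpy as np

def pointing_distribution(O, q, K, Q, v, prev_a, C=10.0):
    """u_j = v^T tanh(K o_j + Q q) for positions j > prev_a (else -inf),
    then a softmax over the clipped logits C * tanh(u)."""
    u = np.array([v @ np.tanh(K @ o + Q @ q) for o in O])
    logits = C * np.tanh(u)
    logits[:prev_a + 1] = -np.inf        # only positions after a_{i-1} are feasible
    logits -= logits[prev_a + 1:].max()  # stabilize the softmax
    probs = np.exp(logits)               # exp(-inf) -> 0 masks infeasible positions
    return probs / probs.sum()
```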
Once the first node is sampled, the query vector is updated with the sequential representation of the previously sampled node in order to select the subsequent nodes. ### Pointing Mechanism We use a pointing mechanism to predict a distribution over node outputs given encoded actions (nodes) and a state representation (query vector). Our pointing mechanism is parameterized by two learned attention matrices $K \in \mathbb{R}^{d \times d}$ and $Q \in \mathbb{R}^{d \times d}$ and a vector $v \in \mathbb{R}^{d}$, $$u^i_j=\begin{cases} v^T \tanh(K o_j+ Qq_i) & \text{if $j > a_{i-1}$ }\\ - \infty , & \text{otherwise}\,, \end{cases}$$ where $$p_\theta\left(a_i \mid a_{<i}, \Bar{S}\right) = \text{softmax}(C\tanh(u^i))$$ predicts a distribution over the set of $n$ actions, given a query vector $q_i$ with $u^i \in \mathbb{R}^{n}$. We mask the probabilities of nodes prior to the current $a_i$, as we only need to consider choices with $a_i > a_{i-1}$ due to symmetry. This yields a smaller action space for our model, as we only consider the $n(n-1)/2$ possible moves that are feasible permutations of the input. We clip logits in $[-C, +C]$ [@Bello2017WorkshopT], where $C \in \mathbb{R}$ is a parameter that controls the entropy of $u^i$. Value Decoder ------------- Similar to the policy decoder, our value decoder works by reading tour representations from $S$ and $S^\prime$ and a graph representation from $S$. That is, given embeddings $Z$, the value decoder reads the outputs $z_i$ for each node in the tour and the sequence hidden vectors $h_n, h^\prime_n$ to estimate the value of a state as $$V_\phi(\Bar{S}) = W_r~\text{ReLU}\Big(W_z\Big(\frac{1}{n}\sum_{i=1}^n z_i + h_v\Big)\Big)\,,$$ with $ h_{v} = W_v h_n \| W_{v^{\prime}} h^\prime_n$.
Here, $W_z \in \mathbb{R}^{h \times h}$ and $W_r \in \mathbb{R}^{h \times 1}$ are learned parameters that map the state representation to a real-valued output, and $W_v, W_{v^{\prime}} \in \mathbb{R}^{\frac{d}{2} \times d}$ map the tours to a combined value representation. Similar to [@wu2019learning], we use a *mean pooling* operation to combine node representations $z_i$ into a single graph representation. This vector is then combined with the tour representation $h_v$ to estimate current state values. [Algorithm \[alg:training\]: training procedure. It begins by initializing the policy and critic parameters $\theta$ and $\phi$.] Policy Gradient Optimization {#sec:PGO} ============================ In our formulation, we maximize the expected rewards given a state $\Bar{S}$, defined as $$J(\theta \mid \Bar{S}) = \mathbb{E}_{\pi_\theta}[G_t\mid \Bar{S}]\,.$$ Thus, we define the total training objective over a distribution $\mathcal{S}$ of TSP solutions as $$J(\theta) = \mathbb{E}_{\mathcal{S}}[J(\theta \mid \Bar{S})]. \label{eq:exp_JS}$$ To optimize our policy, we resort to the policy gradient learning rule, which provides an unbiased gradient estimate w.r.t. the model’s parameters $\theta$. During training, we draw $B$ i.i.d. graphs and approximate the gradient in (\[eq:exp\_JS\]), indexed at $t=0$, as $$\nabla J(\theta) \approx \frac{1}{B}\frac{1}{T}\Big[\sum_{b=1}^{B} \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta (A^b_t \mid \Bar{S}^b_t) (G^b_t - V_\phi(\Bar{S}^b_t))\Big] \label{eq:polgrad}$$ and define $ \mathcal{A}^b_t = G^b_t - V_\phi(\Bar{S}^b_{t}) $. To avoid premature convergence to a sub-optimal policy [@mnih2016asynchronous], we add an entropy bonus $$H(\theta) = \frac{1}{B} \sum_{b=1}^{B} \sum_{t=0}^{T-1} H (\pi_\theta (\cdot \mid \Bar{S}^b_t))\,, \label{eq:entropy}$$ with $ H (\pi_\theta (\cdot \mid \Bar{S}^b_t)) = -\mathbb{E}_{\pi_\theta}[ \log \pi_\theta (\cdot \mid \Bar{S}^b_t)] $, and similarly to (\[eq:polgrad\]) we normalize the loss values in (\[eq:entropy\]) by dividing by $k$.
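A single-episode NumPy sketch of the combined objective of (\[eq:polgrad\])–(\[eq:mse\]) — Monte Carlo returns, advantage $G_t - V_\phi(\Bar{S}_t)$, entropy bonus, and value MSE. This illustrates the loss structure only; in an autodiff framework the advantage would additionally be detached when computing the policy term:

```python
import numpy as np

def combined_loss(log_probs, entropies, values, rewards,
                  gamma=0.99, beta_V=0.5, beta_H=0.0045):
    """Scalar loss combining the policy gradient term with advantage
    G_t - V(S_t), the value MSE, and the entropy bonus."""
    T = len(rewards)
    returns = np.zeros(T)
    G = 0.0
    for t in range(T - 1, -1, -1):      # Monte Carlo returns, no bootstrapping
        G = rewards[t] + gamma * G
        returns[t] = G
    advantages = returns - values       # A_t = G_t - V(S_t)
    policy_loss = -(log_probs * advantages).mean()
    value_loss = ((returns - values) ** 2).mean()
    entropy_bonus = entropies.mean()
    return policy_loss + beta_V * value_loss - beta_H * entropy_bonus
```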
Moreover, we increase the length of an episode after a number of epochs, i.e. at epoch $e$, $T$ is replaced by $T_e$. The value network is trained on a mean squared error objective between its predictions and Monte Carlo estimates of the returns, formulated as an additional objective $$\mathcal{L}(\phi) = \frac{1}{B}\frac{1}{T}\Big[\sum_{b=1}^{B} \sum_{t=0}^{T-1} \left\| G^b_t - V_\phi(\Bar{S}^b_t)\right\|^2_2\Big]\,. \label{eq:mse}$$ We then combine the previous objectives and optimize them via stochastic gradient descent updates using Adaptive Moment Estimation (ADAM) [@kingma2015adam], with $\beta_H, \beta_V$ the weights of (\[eq:entropy\]) and (\[eq:mse\]), respectively. Our model is close to REINFORCE [@williams1992simple], but with periodic episode-length updates, and to an Advantage Actor-Critic (A2C) [@mnih2016asynchronous] that bootstraps only from terminal states. This is beneficial in our case: at the start, the agent learns how to behave over short episodes, easing credit assignment, and later refines its policy over longer horizons. The complete algorithm is depicted in Algorithm \[alg:training\]. Experiments and Results ======================= We conduct extensive experiments to investigate the performance of our proposed method. We consider three benchmark tasks: Euclidean TSP with 20, 50, and 100 nodes (TSP20, TSP50, and TSP100, respectively). For all tasks, node coordinates are drawn uniformly at random in the unit square $[0, 1]^2$. Experimental Settings --------------------- All our experiments use a similar set of hyperparameters. We use a batch size of $B=512$ for TSP20 and TSP50, and $B=256$ for TSP100 due to GPU memory constraints. For this reason, we generate 10 random mini-batches for TSP20 and TSP50 and 20 mini-batches for TSP100 in each epoch. TSP20 trains for 200 epochs as convergence is faster for smaller problems, whereas TSP50 and TSP100 train for 300 epochs.
We use the same $\gamma = 0.99$, an $\ell_2$ penalty of $1 \times 10^{-5}$, and a learning rate $\lambda = 0.001$, with $\lambda$ decaying by a factor of $0.98$ at each epoch. Loss weights are $\beta_V = 0.5$, $\beta_H = 0.0045$ for TSP20 and TSP50, and $\beta_H = 0.0018$ for TSP100. $\beta_H$ decays by a factor of 0.9 after every epoch for stable convergence. In all tasks, $d=128$, $\mathbb{L}=3$, and we employ one RNN block. The episode-length updates are $T_1 = 8, T_{100} = 10 , T_{150} = 20 $ for TSP20; $T_1 = 8, T_{100} = 10 , T_{200} = 20$ for TSP50; and $T_1 = 4, T_{100} = 8 , T_{200} = 10$ for TSP100. $C=10$ is used during training and testing. The vector $v$ is initialized as $\mathcal{U}(\frac{-1}{\sqrt{d}}, \frac{1}{\sqrt{d}})$ and the remaining parameters are initialized according to PyTorch’s defaults. We train on a single RTX 2080Ti GPU, generating random initial solutions on the fly at each epoch. Each epoch takes an average time of 2m 01s, 3m 05s, and 7m 16s for TSP20, TSP50, and TSP100, respectively. Due to GPU memory capacity, we employ mixed-precision training [@jia2018highly] for TSP50 and TSP100. Similar to [@wu2019learning], we train for a maximum step limit of 200. During testing, we run our policy for 500, 1,000, and 2,000 steps to showcase that the policy generalizes to larger horizons than the ones trained upon. Our implementation will be made available online. Experimental Results and Analysis --------------------------------- ![Optimality gap during training, with its exponential moving average.[]{data-label="fig:training"}](opt_gap.pdf){width="\textwidth"} ![Best found tour cost over 2,000 test steps.[]{data-label="fig:testing"}](perf.pdf){width="\textwidth"} ![Tour cost of 2-opt heuristics for 1,000 steps.[]{data-label="fig:boxplot"}](boxplot.pdf){width="\textwidth"} We learn policies for TSP20, TSP50, and TSP100, and depict the optimality gap and its exponential moving average on a log scale in Figure \[fig:training\].
In the figure, the optimality gap is averaged over 256 validation graphs and 200 steps (the same as in training). We observe that instances with fewer nodes yield lower optimality gaps, as instances with many nodes are harder to solve. Moreover, we observe that regularly increasing the episode length leads to improved performance. In Figure \[fig:testing\], we show the best found tour cost for 512 test instances over 2,000 steps using the best performing policies on validation data. Here, we note that the optimality gap drops quickly at the start of a run, while later steps fine-tune the best tour as rewards become harder to obtain. Moreover, the results show that the learned policies can be seen as a solver requiring only random initial solutions. To showcase this, we compare the learned policies with the original 2-opt *first improvement* (FI) and *best improvement* (BI) heuristics, which select the first and best cost-reducing 2-opt operation, respectively, and are the inspiration for our learned policies. Since simple local search methods can easily get stuck in local optima, we also include a version of the heuristics using *restarts*. That is, similar to [@wu2019learning], we restart the local search from a random solution as soon as we reach a local optimum. We run all heuristics and learned policies for a maximum of 1,000 steps over 512 instances starting from the same solutions. The boxplots in Figure \[fig:boxplot\] show that our policies (TSP100-Policy) have a lower median and interquartile range than the other heuristics based on 2-opt, which supports our initial hypothesis that considering future rewards in the choice of 2-opt moves is beneficial. Moreover, we point out that our method does not scan the neighborhood before picking the next solution, i.e. it avoids the worst-case $O(n^2)$ complexity of selecting the next solution.
### Comparison to Classical Heuristics, Exact and Learning Methods Our comparison results are reported on the same 10,000 instances for each TSP size as in [@kool2018attention]. We report optimal results obtained by Concorde [@applegate2006traveling] and compare against the Nearest, Random, and Farthest Insertion construction heuristics based on their optimality gaps reported in [@kool2018attention]. Additionally, we compare to the vehicle routing solver of OR-Tools [@ortools], which includes 2-opt and LKH as improvement heuristics [@Bello2017WorkshopT]. Furthermore, we compare to recent state-of-the-art deep learning methods based on construction heuristics, including supervised [@vinyals2015pointer; @joshi2019efficient] and reinforcement [@kool2018attention; @deudon2018learning; @khalil2017learning; @Bello2017WorkshopT] learning methods. We note, however, that supervised learning is not ideal for combinatorial optimization problems due to the lack of optimal labels for larger problems. We present the optimality gaps as reported in [@kool2018attention; @joshi2019efficient; @wu2019learning] using greedy, sampling, and search decoding, and refer to the methods by their neural network architecture. We also compare to the learned improvement heuristic of [@wu2019learning]. We focus our attention on GAT [@kool2018attention] and GAT-T [@wu2019learning] (GAT-Transformer), representing the best performing construction and improvement heuristics, respectively. Our results are summarized in Table \[table:comparison\]. In Table \[table:comparison\], we observe that with only 500 steps our method outperforms traditional construction heuristics, construction deep learning methods based on greedy decoding, and OR-Tools, achieving $0.01\%$, $0.36\%$, and $1.84\%$ optimality gaps for TSP20, TSP50, and TSP100, respectively. Moreover, we outperform GAT-T [@wu2019learning] while requiring half the number of steps (500 vs 1,000).
We note that with 500 steps, our method also outperforms all previous reinforcement learning methods using sampling or search, including GAT [@deudon2018learning] applying 2-opt local search on top of generated tours. Our method only falls short of the supervised learning method GCN [@joshi2019efficient], which uses beam search and a shortest-tour heuristic. However, GCN [@joshi2019efficient], similar to sampling in GAT [@kool2018attention], uses a beam width of 1,280. Increasing the number of samples (steps) increases the performance of our method considerably. When augmenting the number of samples to 1,000 (280 samples short of GCN [@joshi2019efficient] and GAT [@kool2018attention]), we outperform all previous methods that do not employ further local search improvement, and perform on par with GAT-T [@wu2019learning] on TSP50, which uses 5,000 samples ($5\times$ as many). For TSP100, sampling 1,000 steps results in a lower optimality gap ($1.26\%$) than all compared methods. Lastly, increasing the sample size to 2,000 2-opt moves results in even lower gaps: $0.00\%$, $0.12\%$, and $0.87\%$ for TSP20, TSP50, and TSP100, respectively.
[@llc|ccc|ccc|ccc@]{} & & & & &\ & & & Cost & Gap & Time & Cost & Gap & Time & Cost & Gap & Time\ &Concorde [@applegate2006traveling] & Solver & $3.84$ & $0.00 \%$ &(1m)& $5.70$ & $0.00 \%$ &(2m)& $7.76$ & $0.00 \%$ & (3m)\ &OR-Tools [@ortools] & S & $3.85$ & $0.37 \%$ & & $5.80$ & $1.83 \%$ & & $7.99$ & $2.90 \%$ &\ &Nearest Insertion & G & $4.33$ & $12.91 \%$ & (1s) & $6.78$ & $19.03 \%$ & (2s) & $9.46$ & $21.82 \%$ & (6s)\ &Random Insertion & G & $4.00$ & $4.36 \%$ & (0s) & $6.13$ & $7.65 \%$ & (1s) & $8.52$ & $9.69 \%$ & (3s)\ &Farthest Insertion & G & $3.93$ & $2.36 \%$ & (1s) & $6.01$ & $5.53 \%$ & (2s) & $8.35$ & $7.59 \%$ & (7s)\ &PtrNet [@vinyals2015pointer] & SL & $3.88$ & $1.15 \%$ & & $7.66$ & $34.48 \%$ & &\ &GCN [@joshi2019efficient] & SL & $3.86$ & $0.60 \%$ & (6s) & $5.87$ & $3.10 \%$ & (55s) & $8.41$ & $8.38 \%$ & (6m)\ &PtrNet [@Bello2017WorkshopT] & RL & $3.89$ & $1.42 \%$ & & $5.95$ & $4.46 \%$ & & $8.30$ & $6.90 \%$ &\ &S2V [@khalil2017learning] & RL & $3.89$ & $1.42 \%$ & & $5.99$ & $5.16 \%$ & & $8.31$ & $7.03 \%$ &\ &GAT [@deudon2018learning] & RL,T & $3.85$ & $0.42 \%$ & (4m) & $5.85$ & $2.77 \%$ & (26m) & $8.17$ & $5.21 \%$ & (3h)\ &GAT [@kool2018attention] & RL & $3.85$ & $0.34 \%$ & (0s) & $5.80$ & $1.76 \%$ & (2s) & $8.12$ & $4.53 \%$ & (6s)\ &GCN [@joshi2019efficient] & SL,B & $3.84$ & $0.10 \%$ & (20s) & $5.71$ & $0.26 \%$ & (2m) & $7.92$ & $2.11 \%$ & (10m)\ &GCN [@joshi2019efficient] & SL,BS & $3.84$ & $0.01 \%$ & (12m) & $\mathbf{5.70}$ & $\mathbf{0.01} \%$ & (18m) & $7.87$ & $1.39 \%$ & (40m)\ &PtrNet [@Bello2017WorkshopT] & RL,S & & $5.75$ & $0.95 \%$ & & $8.00$ & $3.03 \%$ &\ &GAT [@deudon2018learning] & RL,S & $3.84$ & $0.11 \%$ & (5m) & $5.77$ & $1.28 \%$ & (17m) & $8.75$ & $12.70 \%$ & (56m)\ &GAT [@deudon2018learning] & RL,S,T & $3.84$ & $0.09 \%$ & (6m) & $5.75$ & $1.00 \%$ & (32m) & $8.12$ & $4.64 \%$ & (5h)\ &GAT {1280} [@kool2018attention] & RL,S & $3.84$ & $0.08 \%$ & (5m) & $5.73$ & $0.52 \%$ & (24m) & $7.94$ & $2.26 
\%$ & (1h)\ &GAT-T {1000} [@wu2019learning] & RL & $3.84$ & $0.03 \%$ & (12m) & $5.75$ & $0.83 \%$ & (16m) & $8.01$ & $3.24 \%$ & (25m)\ &GAT-T {3000} [@wu2019learning] & RL & $3.84$ & $0.00 \%$ & (39m) & $5.72$ & $0.34 \%$ & (45m) & $7.91$ & $1.85 \%$ & (1h)\ &GAT-T {5000} [@wu2019learning] & RL & $3.84$ & $0.00 \%$ & (1h) & $5.71$ & $0.20 \%$ & (1h) & $7.87$ & $1.42 \%$ & (2h)\ &Ours {500} & RL & $3.84$ & $0.01 \%$ & (5m)& $5.72$ & $0.36 \%$ & (7m) & $7.91$ & $1.84 \%$ & (10m)\ &Ours {1000} & RL & $\mathbf{3.84}$ & $\mathbf{0.00 \%}$ & (10m) & $5.71$ & $0.21 \%$ & (13m) & $7.86$ & $1.26\%$ & (21m)\ &Ours {2000} & RL& $\mathbf{3.84}$ & $\mathbf{0.00} \%$ & (15m) & $5.70$ & $0.12 \%$ & (29m)& $\mathbf{7.83}$ & $\mathbf{0.87\%}$ & (41m)\ ### Testing Learned Policies on Larger Instances Since we are interested in learning general policies that can solve the TSP regardless of its size, we test the performance of our policies when trained on TSP50 instances (TSP50-Policy) and applied to larger TSP100 instances. Results in Table \[table:train50\] show that we are able to extract general enough information to still perform well on 100 nodes. Similar to the policy trained on 100 nodes, our 50-node policy can outperform previous reinforcement learning construction approaches while requiring fewer samples. With 1,000 samples, our TSP50 policy performs similarly to GAT-T [@wu2019learning] using 3,000 samples, reaching a $1.86\%$ optimality gap. These results are closer to optimal than previous learning methods without further local search improvement, as in GCN [@joshi2019efficient]. When increasing to 2,000 steps, we outperform previous deep learning and classical heuristic methods, reaching within $1.37\%$ of the optimal solutions.
  -------   --------------------------   --------------------------
            TSP100-Policy                TSP50-Policy
  Steps     Cost       Gap               Cost       Gap
  500       $7.91$     $1.84\%$          $7.98$     $2.78\%$
  1000      $7.86$     $1.26\%$          $7.91$     $1.86\%$
  2000      $7.83$     $0.87\%$          $7.87$     $1.37\%$
  -------   --------------------------   --------------------------

  : Performance of policies trained on 50 and 100 nodes on TSP100 instances.[]{data-label="table:train50"}

### Running Times and Sample Efficiency Comparing running times is difficult due to varying hardware and implementations among different approaches. In Table \[table:comparison\], we report the running times to solve 10,000 instances as reported in [@kool2018attention; @joshi2019efficient; @wu2019learning], and our running times using the available GPU. We focus on learning methods, as classical heuristics and solvers are efficiently implemented using multi-threaded CPUs and can be run much faster than learning methods. We point out that our method cannot compete in speed with greedy methods, as we start from poor solutions and require sampling to find improved solutions. This is neither surprising nor discouraging, as one can see greedy construction heuristics as a way to generate initial solutions for an improvement heuristic like ours. We note, however, that while sampling 1,000 steps, our method is faster than GAT-T [@wu2019learning] even though we use a less powerful GPU (RTX 2080Ti vs Tesla V100). Moreover, our method requires fewer samples to achieve superior performance. The comparison to GAT [@kool2018attention] is not so straightforward, as they employ a GTX 1080Ti over 1,280 samples. For this reason, we run GAT [@kool2018attention] using the hardware at hand and report running times while sampling the same number of solutions in Table \[table:kool\]. As can be observed, our method is slower than the construction model on TSP20 and TSP50 when sampling 2,000 solutions. However, on TSP100 our method runs faster than GAT [@kool2018attention].
Moreover, if we consider only running times, our method can produce shorter tours in less time.

  ----------------------------------   --------------------   ---------------------   ---------------------
                                       TSP20                  TSP50                   TSP100
                                       Cost      Time         Cost      Time          Cost      Time
  GAT {500} [@kool2018attention]       $3.839$   **(3m)**     $5.727$   (10m)         $7.955$   (27m)
  Ours {500}                           $3.836$   (5m)         $5.716$   **(7m)**      $7.907$   **(10m)**
  GAT {1,000} [@kool2018attention]     $3.838$   **(4m)**     $5.725$   (14m)         $7.947$   (42m)
  Ours {1,000}                         $3.836$   (10m)        $5.708$   **(13m)**     $7.861$   **(21m)**
  GAT {2,000} [@kool2018attention]     $3.838$   **(5m)**     $5.722$   **(22m)**     $7.939$   (1h13m)
  Ours {2,000}                         $3.836$   (15m)        $5.703$   (29m)         $7.832$   **(41m)**
  ----------------------------------   --------------------   ---------------------   ---------------------

  : Performance of GAT [@kool2018attention] vs our method with the same number of samples. []{data-label="table:kool"}

Conclusions and Future Work =========================== We introduced a novel deep reinforcement learning approach for approximating an improvement heuristic for the 2D Euclidean Traveling Salesman Problem. We proposed graph and sequence embeddings to learn local search policies using 2-opt operators. Our experimental results show that we are able to outperform state-of-the-art deep learning construction and improvement heuristics. As future work, we will explore expanding the model to consider $k$-opt operations dynamically. Moreover, we intend to explore general improvement heuristics that can be applied to a large number of combinatorial problems. Angeniol, B., Vaubois, G.D.L.C., Le Texier, J.Y.: Self-organizing feature maps and the travelling salesman problem. Neural Networks **1**(4), 289–293 (1988) Applegate, D.L., Bixby, R.E., Chvatal, V., Cook, W.J.: The traveling salesman problem: a computational study. Princeton University Press (2006) Bello, I., Pham, H.: Neural combinatorial optimization with reinforcement learning.
In: Proceedings of the 5th International Conference on Learning Representations (ICLR) (2017) Bengio, Y., Lodi, A., Prouvost, A.: Machine learning for combinatorial optimization: a methodological tour d’horizon. arXiv preprint arXiv:1811.06128 (2018) Deudon, M., Cournut, P., Lacoste, A., Adulyasak, Y., Rousseau, L.M.: Learning heuristics for the TSP by policy gradient. In: Proceedings of the 15th International Conference on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research (CPAIOR). pp. 170–181 (2018) Hansen, P., Mladenovi[ć]{}, N.: First vs. best improvement: An empirical study. Discrete Applied Mathematics **154**(5), 802–817 (2006) Helsgaun, K.: General k-opt submoves for the Lin–Kernighan TSP heuristic. Mathematical Programming Computation **1**(2-3), 119–163 (2009) Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Computation **9**(8), 1735–1780 (1997) Hopfield, J.J., Tank, D.W.: Neural computation of decisions in optimization problems. Biological Cybernetics **52**(3), 141–152 (1985) Jia, X., Song, S., He, W., Wang, Y., Rong, H., Zhou, F., Xie, L., Guo, Z., Yang, Y., Yu, L., et al.: Highly scalable deep learning training system with mixed-precision: Training ImageNet in four minutes. arXiv preprint arXiv:1807.11205 (2018) Joshi, C.K., Laurent, T., Bresson, X.: An efficient graph convolutional network technique for the travelling salesman problem. arXiv preprint arXiv:1906.01227 (2019) Khalil, E., Dai, H., Zhang, Y., Dilkina, B., Song, L.: Learning combinatorial optimization algorithms over graphs. In: Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS). pp. 6348–6358 (2017) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: Proceedings of the 3rd International Conference on Learning Representations (ICLR) (2015) Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks.
In: 5th International Conference on Learning Representations, [ICLR]{} 2017, Conference Track Proceedings (2017) Kool, W., van Hoof, H., Welling, M.: Attention, learn to solve routing problems! In: Proceedings of the 7th International Conference on Learning Representations (ICLR) (2019) La Maire, B.F., Mladenov, V.M.: Comparison of neural networks for solving the travelling salesman problem. In: 11th Symposium on Neural Network Applications in Electrical Engineering. pp. 21–24. IEEE (2012) Lin, S., Kernighan, B.W.: An effective heuristic algorithm for the traveling-salesman problem. Operations Research **21**(2), 498–516 (1973) Lombardi, M., Milano, M.: Boosting combinatorial problem modeling with machine learning. In: Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI). pp. 5472–5478 (2018) Mnih, V., Badia, A.P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., Silver, D., Kavukcuoglu, K.: Asynchronous methods for deep reinforcement learning. In: Proceedings of the 33rd International Conference on Machine Learning (ICML). pp. 1928–1937 (2016) Papadimitriou, C.H.: The Euclidean travelling salesman problem is NP-complete. Theoretical Computer Science **4**(3), 237–244 (1977) Perron, L., Furnon, V.: OR-Tools, <https://developers.google.com/optimization/> Veli[č]{}kovi[ć]{}, P., Cucurull, G., Casanova, A., Romero, A., Li[ò]{}, P., Bengio, Y.: [Graph Attention Networks]{}. In: Proceedings of the 6th International Conference on Learning Representations (ICLR) (2018) Vinyals, O., Fortunato, M., Jaitly, N.: Pointer networks. In: Proceedings of the 29th Conference on Neural Information Processing Systems (NIPS). pp. 2692–2700 (2015) Williams, R.J.: Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning **8**(3-4), 229–256 (1992) Wu, Y., Song, W., Cao, Z., Zhang, J., Lim, A.: Learning improvement heuristics for solving the travelling salesman problem.
arXiv preprint arXiv:1912.05784 (2019) [^1]: Search step index $t$ is omitted in subsequent definitions to avoid notation clutter. Network parameters are shared for all steps $t$. [^2]: In the definitions, bias terms are omitted unless otherwise specified.
--- author: - 'M. Chabot, T. Tuna, K. Béroff, T. Pino, A. Le Padellec, P. Désequelles, G. Martinet, V. O. Nguyen-Thi, Y. Carpentier, F. Le Petit, E. Roueff,' - 'V. Wakelam' date: 'Received 18 May 2010 / Accepted 3 August 2010' subtitle: title: ' Statistical universal branching ratios for cosmic ray dissociation, photodissociation, and dissociative recombination of the C$_{\rm n=2-10}$, C$_{\rm n=2-4}$H and C$_3$H$_2$ neutral and cationic species' --- Introduction ============ Carbon clusters and highly unsaturated hydrocarbons are among the molecules most often observed in the Interstellar Medium (ISM) (http://astrochemistry.net). They are present in diffuse clouds, in dense clouds [@1999ApJ...518..740B; @2001ApJ...552..168F; @2000ApJ...542..870D], in circumstellar envelopes [@2002ApJ...580L.157C], protostar envelopes [@2008ApJ...672..371S], planetary nebulae [@2001ApJ...546L.123C] and in photon dominated regions (PDR). They are also present around evolved (carbon) stars [@1988Sci...241.1319H], in Titan's ionosphere, and in comet comae. Since their discovery in the early 1970s, many studies have been devoted to those species [@VanOrden1998], mostly on their structural and spectroscopic properties. The spectroscopic studies were driven mainly by the dramatically increasing performance of observational capabilities, while the structural studies were also motivated by the increasing use of carbon$-$based materials. The origin of carbon clusters and highly unsaturated hydrocarbons, and their abundances in ISM objects, is still a puzzling question. It is thought that they may come from gas$-$phase synthesis as well as be released into the gas phase from carbonaceous solid materials that are present in the ISM. Therefore, the detailed investigation of their origin and abundances requires both homogeneous and heterogeneous chemical reactions to be explored.
Whatever the carbon reservoir, those molecules are processed in the gas phase by neutral$-$neutral and ion$-$neutral collisions, as well as by cosmic rays (CR), ultraviolet (UV) photons, and dissociative recombination (DR) through collisions with thermal electrons. These last three mechanisms lead to highly electronically excited species that may undergo fragmentation. The detailed dynamics and the fragmentation channels must thus be investigated to enable their incorporation into various astrochemical models. In gas-phase chemistry, chemical networks for carbon cluster and highly unsaturated hydrocarbon synthesis have been proposed for a large variety of astrophysical conditions. Nevertheless, despite the strong theoretical and experimental efforts in structure and spectroscopy, many of the reaction rates still remain uncertain and may consequently limit the confidence in chemical models. It is convenient to write the reaction rate coefficient for a reaction A+B $ \rightarrow$ C+D as the product of a total reaction rate coefficient $k_{tot}$ and a branching ratio factor (BR) $$\rm k[A+B \rightarrow C+D] = k_{tot}[A+B \rightarrow (AB)^\ast] \times BR[(AB)^\ast \rightarrow C+D] .$$ The reliability of the reaction rate may be improved when the total rate and BR are independently predicted. Most of the reaction rates are sensitive to the surrounding medium (temperature and photon spectrum, for instance). In Eq. (1) this dependency is implicitly included in the total rate, whereas BRs are assumed to be constant. This assumption of constant BRs may, however, be inappropriate for some reactions, for example neutral$-$neutral or ion$-$neutral reactive collisions where energy barriers can be present. For electronic excitation mechanisms followed by dissociation that occur in the ISM, the intermediate complex AB is appropriately described by an ensemble of molecules prepared in various highly excited electronic states.
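As a minimal numerical illustration of Eq. (1), channel-specific rates follow from a total rate coefficient and a set of BRs that sum to unity; the channel labels and numbers below are hypothetical, not measured values:

```python
def channel_rates(k_tot, branching_ratios):
    """Split a total rate coefficient k_tot into per-channel rates
    k = k_tot * BR (Eq. 1), checking that the BRs sum to unity."""
    total = sum(branching_ratios.values())
    assert abs(total - 1.0) < 1e-9, "branching ratios must sum to unity"
    return {channel: k_tot * br for channel, br in branching_ratios.items()}

# Hypothetical example: a total rate of 1e-9 cm^3 s^-1 split 70/30
# between two fragmentation channels.
rates = channel_rates(1.0e-9, {"channel A": 0.7, "channel B": 0.3})
```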
It has been pointed out several times that this physical situation fits the requirements of statistical theory very well. For small species, non-statistical mechanisms must also be considered, because the density of states is generally not high enough, even when high-energy excitations are involved. Non-statistical behavior may also arise in rare photodissociation situations, where a dissociative state is resonant with a discrete line from a local photon source [@1988rcia.conf...49V]. In the first part of this paper we report BRs measured after electronic excitation in a high-velocity collision experiment. In the second part, by comparing the different processes responsible for electronic excitation in the ISM, we introduce the idea of statistical universal BRs for cosmic-ray-induced dissociation, photodissociation, and dissociative recombination. We then propose to correct the BRs of current online databases (OSU [^1], UMIST [^2]) when they do not result from measurements or detailed calculations. In the last part we examine the influence of the new branching ratios on two typical astrochemical models, aimed at simulating dense clouds and the Horse Head PDR.

Fragmentation of C$_{\rm n=2-10}$, C$_{\rm n=2-4}$H and C$_3$H$_2$ molecules that are electronically excited by high$-$velocity collision
==========================================================================================================================================

Experimental set-up
-------------------

The experimental set-up has been described in detail in previous papers [@2002NIMPB.197..155C; @2008JChPh.128l4312T], so only a brief description is given here. The Tandem MP accelerator (15 MV) at the Institut de Physique Nucléaire d'Orsay produced molecular cation beams with energies of several MeV [@1996NIMPA.382..348W].
After magnetic analysis and collimation, the singly charged molecules collided at high velocity (a few a.u.) with a single target atom. Half a meter downstream, a parallel-plate electrostatic deflector analyzed parents and fragments according to their q/m ratio. A set of silicon detectors in a dedicated chamber intercepted all the trajectories. It is because of their high velocity that parents and fragments can be detected with silicon detectors: this type of detector measures the kinetic energy of a particle, i.e., for high-energy molecular beams of constant velocity, the mass of the detected molecule. Moreover, silicon detectors are 100% efficient and, owing to the kinematics, small detectors can cover 100% of the solid angle of fragment emission. All the detectors operated in coincidence, event by event. To obtain branching ratios for neutral species, the grid technique [@PhysRevLett.26.602; @Larsson1995403] may be used when the number of channels is small. In this method a grid of known transmission is placed in front of the detector, and the dissociation branching ratios are linked to the recorded mass spectra through a set of linear equations. In the present work, because high-energy molecular beams were used, the shape of the current signal from the silicon detectors could be analyzed to resolve the pile-up of several neutral fragments [@2002NIMPB.197..155C]. This technique alone was, however, insufficient to obtain the full information for the hydrogenated species; the grid technique was therefore combined with the signal-shape analysis to fully resolve the fragmentation of the neutral and cationic C$_{\rm n}$, C$_{\rm n}$H, and C$_3$H$_2$ species [@2008JChPh.128l4312T].

High$-$velocity collision (HVC) processes
-----------------------------------------

### The excitation processes

![Internal energy distributions following HVC-charge transfer (see text). Thick solid line: C$_2$H, dotted line: C$_3$H, broken line: C$_4$H, thin solid line: C$_3$H$_2$.
Peak energies and widths of the distributions carry errors on the order of a few eV. Details on the extraction method are given in [@2008JChPh.128l4312T].[]{data-label="fig1"}](15010fg1.ps){width="0.7\linewidth"} ![Internal energy distributions following HVC-excitation (see text). Solid line: C$_2$H$^+$, dotted line: C$_3$H$^+$, broken line: C$_4$H$^+$, dot-dashed line: C$_3$H$_2^+$. Peak energies and widths of the distributions carry errors on the order of a few eV. Details on the extraction method are given in [@2008JChPh.128l4312T].[]{data-label="fig2"}](15010fg2.ps){width="0.7\linewidth"} During the fast ($\sim 10^{-16}$ s) collision between a molecule X and an atom, charge transfer (HVC-CT) may occur: $$\rm X^+ + He \rightarrow X^* + He^+$$ Owing to the high initial velocity of the transferred electron, electronically excited states as well as the ground state are populated [@2006JPhB...39.2593C]. So far, no calculations have been performed to predict the internal energy distribution associated with this process. Nevertheless, internal energy distributions may be deduced for the present species from the experimental multiplicity distributions, i.e. the probabilities associated with a given number of emitted fragments [@2008JChPh.128l4312T]. Some internal energy distributions are reported in Fig. \[fig1\]. In addition, excitation (HVC-E) may occur during the collision: $$\rm X^+ + He \rightarrow X^{+*} + He$$ Note that in this velocity regime only electronic excitations take place [@1998NIMPB.146...29W]. Calculations of the internal energy following HVC-E have been performed with the independent atom and electron model [@1993PhRvA..48.4784W], using probabilities doubly differential in energy and impact parameter calculated within the Classical Trajectory Monte Carlo (CTMC) theory [@1999Maynard]. These calculated distributions agree very well with the energy distributions shown in Fig.
\[fig2\], which are deduced from the experimental multiplicity distributions. Ionization also occurs in HVC. In the present experiments with cationic beams it leads to multiply charged species, which are generally also electronically excited [@PhysRevLett.104.043401]. We do not report on the fragmentation of these multiply charged species, as this is beyond the scope of this paper.

### Fragmentation of electronically excited species by high$-$velocity collision

![Theoretical fragmentation BRs as a function of internal excitation energy for a C$_7$ carbon cluster. Calculations were performed with the MMMC statistical theory [@2006IJMSp.252..126D].[]{data-label="fig3"}](15010fg3.ps){width="0.85\linewidth"} ![Calculated BRs for C$_7$ with various internal energy distributions. The BRs were obtained by convolving the theoretical curves of Fig. \[fig3\] with the Gaussian energy distribution $\rm p(E) = \frac{1}{\sigma\sqrt{2\pi}}\exp(-\frac{1}{2}(\frac{E-E_0}{\sigma})^2)$. The BRs are normalized to the multiplicity probability. Solid lines: $\sigma$ = 2 eV; dotted lines: $\sigma$ = 5 eV. a) BRs for two-fragment breakup: triangles up: C$_4$/C$_3$, squares: C$_5$/C$_2$, circles: C$_6$/C. b) BRs for three-fragment breakup: hexagons: C$_3$/C$_3$/C, circles: C$_3$/C$_2$/C$_2$, triangles down: C$_4$/C$_2$/C and C$_5$/C/C.[]{data-label="fig4"}](15010fg4_bis.eps){width="0.5\linewidth"} The statistical hypothesis has often been proposed and applied to calculate the fragmentation of finite-size systems. It stipulates that all accessible micro-states are populated with equal probability, the conservation of energy and momentum defining which micro-states are accessible. Their ensemble forms the so-called phase space. Theoretical predictions then rely on the enumeration of individual states or on integration over the phase space. Two physical situations are presumably close to statistical behavior.
In the first one, the system has enough time to explore the whole phase space, whatever the bottlenecks in this random exploration. In the second, the system has no time to explore the whole phase space, but is prepared in such a large set of initial states that the sampling of phase space is presumably achieved by the entrance channels. The fragmentation of molecules electronically excited by HVC processes clearly corresponds to the second scenario and can therefore generally be interpreted in the frame of a statistical theory. For example, a statistical theory of fragmentation, the Microcanonical Metropolis Monte Carlo (MMMC) [@1995ZPhyD..35...27G], has been applied to the carbon clusters C$_{\rm n}$ and found to agree very well with the experimental measurements [@2004PhRvL..93f3401M]. For C$_{\rm n}$, C$_{\rm n}$H, and C$_3$H$_2$, the creation of each fragment costs roughly the same amount of energy (4-7 eV [^3]). The fragmentation BRs therefore exhibit specific energy domains associated with a given multiplicity (see Fig. \[fig3\]). Normalizing the branching ratios to the multiplicity probabilities largely removes the role of the input energy in the BRs. To illustrate this point, we present in Fig. \[fig4\] the calculated BRs for C$_7$, obtained by convolving the theoretical curves of Fig. \[fig3\] with Gaussian shapes centered at energies from 0 to 15 eV. The variations are found to be small, i.e. within $\pm$ 10$\%$. This shows that, as long as phase-space scrambling is achieved, the normalized BRs are mostly insensitive to the internal energy distribution, unlike the multiplicity distributions.
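The convolution used for Fig. \[fig4\] can be illustrated numerically. The sketch below is our own toy example, not the MMMC curves themselves: it averages a step-like energy-resolved BR over the Gaussian p(E) from the figure caption. A Gaussian centered on the step returns a BR near 0.5, and the channel BRs of one multiplicity still sum to 1 after the convolution.

```python
import math

def p(e, e0, sigma):
    # Gaussian internal-energy distribution from the Fig. 4 caption.
    return math.exp(-0.5 * ((e - e0) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def convolve_br(br_of_e, e0, sigma, e_min=0.0, e_max=30.0, n=3000):
    """Midpoint-rule average of an energy-resolved BR over p(E)."""
    de = (e_max - e_min) / n
    num = norm = 0.0
    for i in range(n):
        e = e_min + (i + 0.5) * de
        w = p(e, e0, sigma)
        num += w * br_of_e(e) * de
        norm += w * de
    return num / norm

# Toy step-like curves standing in for calculated BR(E) curves:
# below 8 eV one two-fragment channel dominates, above it the other.
br_a = lambda e: 1.0 if e < 8.0 else 0.0
br_b = lambda e: 0.0 if e < 8.0 else 1.0

br_a_mean = convolve_br(br_a, e0=8.0, sigma=2.0)  # ~0.5 for a centered Gaussian
```

Shifting `e0` well below or above the step drives the averaged BR toward 1 or 0, which is the multiplicity-distribution sensitivity the normalization removes.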
Experimental results and analysis
---------------------------------

![Experimental branching ratios of neutral C$_3$H$_2$ produced in the HVC-CT reaction at a fixed number of emitted fragments (N$_{\rm f}$); from left to right, N$_{\rm f}$ = 2, 3, 4.[]{data-label="fig5"}](15010fg5.ps){width="1\linewidth"} ![Experimental branching ratios of C$_3$H$_2^+$ produced in the HVC-E reaction at a fixed number of emitted fragments (N$_{\rm f}$); from left to right, N$_{\rm f}$ = 2, 3, 4.[]{data-label="fig6"}](15010fg6.ps){width="1\linewidth"}

  N$_{\rm f}$   Proba ($\pm$ err)
  ------------- -------------------
  1             0.16 (0.05)
  2             0.43 (0.07)
  3             0.28 (0.08)
  4             0.08 (0.03)
  5             0.05 (0.01)

  : Probability of dissociation of C$_3$H$_2$ produced in the HVC-CT reaction as a function of the number of emitted fragments (N$_{\rm f}$). N$_{\rm f}$ = 1 corresponds to non-dissociative HVC-CT. \[Tab1\]

  N$_{\rm f}$   Proba ($\pm$ err)
  ------------- -------------------
  2             0.34 (0.02)
  3             0.43 (0.03)
  4             0.19 (0.08)
  5             0.05 (0.01)

  : Probability of dissociation of (C$_3$H$_2^+$)$^*$ produced in the HVC-E reaction as a function of the number of emitted fragments (N$_{\rm f}$). Non-dissociative excited species are not detected in the experiment. \[Tab2\]

Results for the fragmentation of C$_3$H$_2$ (multiplicity distribution: Table \[Tab1\]; BRs: Fig. \[fig5\]) and of C$_3$H$_2^+$ (multiplicity distribution: Table \[Tab2\]; BRs: Fig. \[fig6\]) have recently been obtained. A detailed physical discussion of these species will be presented elsewhere. Note that the detachment of one or two hydrogen atoms is the most favorable channel. Fragmentation results for the C$_{\rm n}$ and C$_{\rm n}$H species can be found elsewhere [@2004PhRvL..93f3401M; @2006IJMSp.252..126D; @2008JChPh.128l4312T]; a subset of these results is shown in Figs. \[fig7\] to \[fig9\]. Briefly, the most favorable channels were found to be H production for C$_{\rm n}$H and C$_3$ production for C$_{\rm n}$. These channels are indeed the most exothermic ones.
For C$_{\rm n}$, C$_3$ is a magic number because of shell closure [@VanOrden1998].

Application of HVC fragmentation branching ratios in astrochemical reaction networks
====================================================================================

Statistical fragmentation relevance in the context of ISM chemistry
-------------------------------------------------------------------

![Comparison between two-fragment breakup after dissociative recombination (DR) (blue diamonds) and HVC-CT (red hexagons - this work). The DR-BRs are adapted from @2004PCCP....6..949E for C$_2$H; @Angelova20047 for C$_3$H and C$_3$H$_2$; @Angelova2004195 for C$_4$H; and @heber:022712 for C$_4$. The parentheses in the Y-axis labels indicate that the hydrogen may be located on either of the two fragments in the DR experiments.[]{data-label="fig7"}](15010fg7.ps){width="0.9\linewidth"}

Three physical processes create transient, electronically excited molecular species in the ISM [@1984inch.book.....D]: HVC with cosmic rays and secondary electrons [@1968ApJ...152..971S], photoabsorption in the various local radiation fields [@1984inch.book.....D], and recombination of molecular ions with the thermal electrons of the local plasma [@2003guberman]. The latter two processes generally dominate, except perhaps in dark clouds, where CR-induced secondary electrons may contribute significantly [@1983ApJ...267..603P; @1987ApJ...323L.137G]. Because of energy and momentum conservation, the internal energy deposited in the electronic recombination process should be close to the ionization potential (IP) of the neutral species. In hydrocarbon molecules the IP is always higher than the dissociation energy (DE), so even for small molecules fragmentation occurs rapidly compared with radiative de-excitation.
According to the IP values [8 to 12 eV; @VanOrden1998] and DE values [4 to 7 eV; @2006IJMSp.252..126D; @2008JChPh.128l4312T], the channels leading to two fragments are expected to be the most populated. Statistical fragmentation may be invoked for the dissociative recombination process because there are always many electronically excited states close to the IP in carbon and hydrocarbon molecules [@2008CP....343..292V]. As a result, a rather efficient sampling of the phase space should take place in the entrance channels, and dissociative recombination BRs should be governed by statistical fragmentation, as in HVC. Figure 7 shows a comparison between HVC-CT and DR results for C$_4$, C$_2$H, C$_3$H, C$_4$H, and C$_3$H$_2$. Some of these results (C$_3$H and C$_4$) are not individual BRs but sums of BRs: resolving the breaking of the C-H bond is not always achieved in DR experiments at storage-ring facilities, owing to the limitations of the grid method (see the experimental section). The DR data agree with the HVC-CT data within $\pm$ 10$\%$ on average. This small discrepancy may arise because the internal energy distributions are quite different for HVC-CT (broad distributions) and DR (narrow distribution around the IP). An exception is C$_2$H, for which the agreement is poor; this may reflect non-statistical fragmentation in a triatomic molecule this small.\
![Comparison between two-fragment breakup after photodissociation (black triangles) and HVC-CT (red hexagons - this work). The BRs for photodissociation are adapted from the measurements of [@2000JPC].[]{data-label="fig8"}](15010fg8_bis.eps){width="0.9\linewidth"} ![Comparison between two-fragment breakup after photodissociation (triangles) and HVC-E (hexagons - this work).
The BRs for photodissociation are adapted from [@1986ZPhyD...3..309G].[]{data-label="fig9"}](15010fg9.ps){width="0.9\linewidth"}

In the ISM, molecules are photodissociated and photoionized by the absorption of UV photons produced by nearby stars. The standard interstellar radiation field (ISRF) embedding the gas in the Galaxy has been determined by Draine [@1978ApJS...36..595D]. In star-forming regions, the UV flux is the sum of the ISRF and the emission spectra of the nearby stars. UV photons are absorbed in the continuum by dust and via discrete lines of the most important molecules; photoabsorption rates thus depend on the optical depth. In the interior of dense clouds, UV photons are produced by the excitation of molecular hydrogen into the Lyman and Werner electronic states by CR secondary electrons [@1983ApJ...267..603P]. Given the high density of electronically excited states of carbon and hydrocarbon molecules in the 6-11 eV energy range [@2008CP....343..292V], many electronic transitions are expected to occur, resulting in a wide distribution of prepared excited states. Fragmentation is then expected, as for HVC and DR, to be governed by statistical behavior. Figure 8 presents the fragmentation BRs for the C$_4$, C$_5$, and C$_6$ carbon clusters obtained by photodissociation, together with those from HVC-CT. In the reported photodissociation experiments [@2000JPC], the incident photon energy was varied up to 6 eV, with some contribution from multi-photon absorption; the horizontal bars on these measurements correspond to the variation of the BRs over the different photon energies. The photodissociation and HVC-CT BRs agree within $\pm$ 10$\%$. Note that these photodissociation experiments compared favorably with statistical calculations [@2000JPC]. Figure 9 presents the same comparison for the cationic C$_n^+$ (n = 5 to 10) species.
Photodissociation was produced by single-photon and runaway multiphoton absorption [@1991JChPh..95.4719S; @1986ZPhyD...3..309G]; those results therefore have to be considered with caution. Within these restrictions, the BRs from the photodissociation and HVC-E processes still agree quite well. In view of the above comparisons, a universality of the statistical BRs may be proposed. To account for the particularities of the different electronic processes, an error bar (1 $\sigma$) of $\pm$ 10$\%$ seems reasonable.

New BRs for astrochemistry databases
------------------------------------

  Reactants          Total rate (10$^{3}$ s$^{-1}$) (OSU-01-2007)   Products          HVC-SUBR   OSU-01-2007-BR
  ------------------ ---------------------------------------------- ----------------- ---------- ----------------
  C$_2$H + cr        5.0                                            C$_2$ + H         0.81       1.00
                                                                    C + CH            0.19       0.00
  C$_3$H + cr        5.0                                            C$_3$ + H         0.65       1.00
                                                                    C$_2$H + C        0.33       0.00
                                                                    C$_2$ + CH        0.02       0.00
  C$_4$ + cr         1.0                                            C$_3$ + C         0.77       1.00
                                                                    C$_2$ + C$_2$     0.23       0.00
  C$_3$H$_2$ + cr    5.0                                            C$_3$H + H        0.46       1.00
                                                                    C$_3$ + H$_2$     0.24       0.00
                                                                    C$_2$H$_2$ + C    0.23       0.00
                                                                    C$_2$H + CH       0.04       0.00
                                                                    C$_2$ + CH$_2$    0.03       0.00
  C$_4$H + cr        5.00                                           C$_4$ + H         0.58       1.00
                                                                    C$_3$H + C        0.26       0.00
                                                                    C$_3$ + CH        0.00       0.00
                                                                    C$_2$H + C$_2$    0.16       0.00
  C$_5$ + cr         1.00                                           C$_4$ + C         0.13       1.00
                                                                    C$_3$ + C$_2$     0.87       0.00
  C$_6$ + cr         1.00                                           C$_5$ + C         0.09       1.00
                                                                    C$_4$ + C$_2$     0.11       0.00
                                                                    C$_3$ + C$_3$     0.80       0.00
  C$_7$ + cr         1.00                                           C$_6$ + C         0.01       1.00
                                                                    C$_5$ + C$_2$     0.19       0.00
                                                                    C$_4$ + C$_3$     0.80       0.00
  C$_8$ + cr         1.00                                           C$_7$ + C         0.03       1.00
                                                                    C$_6$ + C$_2$     0.01       0.00
                                                                    C$_5$ + C$_3$     0.90       0.00
                                                                    C$_4$ + C$_4$     0.06       0.00
  C$_9$ + cr         1.00                                           C$_8$ + C         0.00       1.00
                                                                    C$_7$ + C$_2$     0.06       0.00
                                                                    C$_6$ + C$_3$     0.66       0.00
                                                                    C$_5$ + C$_4$     0.28       0.00

  : New HVC-SUBR for CR reactions. Total rates and BRs of OSU-01-2007 (http://www.physics.ohio-state.edu/$\sim$eric/research.html) are also reported. We do not report on C$_{10}$ because this reaction is not included in OSU-01-2007.
\[tab3\]

  Reactants              Total rate (10$^{-7}$ cm$^{3}$ s$^{-1}$) (OSU-01-2007)   Products          HVC-SUBR   OSU-01-2007-BR
  ---------------------- -------------------------------------------------------- ----------------- ---------- ----------------
  C$_3$H$^+$ + e$^-$     3.0                                                      C$_3$ + H         0.65       0.50
                                                                                  C$_2$H + C        0.33       0.50
                                                                                  C$_2$ + CH        0.02       0.00
  C$_3$H$_2^+$ + e$^-$   3.6                                                      C$_3$H + H        0.46       0.42
                                                                                  C$_3$ + H$_2$     0.23       0.42
                                                                                  C$_2$H$_2$ + C    0.24       0.08
                                                                                  C$_2$H + CH       0.04       0.00
                                                                                  C$_2$ + CH$_2$    0.04       0.08
  C$_4$H$^+$ + e$^-$     3.00                                                     C$_4$ + H         0.58       0.40
                                                                                  C$_3$H + C        0.26       0.15
                                                                                  C$_3$ + CH        0.00       0.15
                                                                                  C$_2$H + C$_2$    0.16       0.30
  C$_5^+$ + e$^-$        3.00                                                     C$_4$ + C         0.13       0.50
                                                                                  C$_3$ + C$_2$     0.87       0.50
  C$_6^+$ + e$^-$        20.00                                                    C$_5$ + C         0.09       0.50
                                                                                  C$_4$ + C$_2$     0.11       0.50
                                                                                  C$_3$ + C$_3$     0.80       0.00
  C$_7^+$ + e$^-$        23.00                                                    C$_6$ + C         0.01       0.43
                                                                                  C$_5$ + C$_2$     0.19       0.43
                                                                                  C$_4$ + C$_3$     0.80       0.14
  C$_8^+$ + e$^-$        20.00                                                    C$_7$ + C         0.03       0.50
                                                                                  C$_6$ + C$_2$     0.01       0.50
                                                                                  C$_5$ + C$_3$     0.90       0.00
                                                                                  C$_4$ + C$_4$     0.06       0.00
  C$_9^+$ + e$^-$        20.00                                                    C$_8$ + C         0.00       0.50
                                                                                  C$_7$ + C$_2$     0.06       0.50
                                                                                  C$_6$ + C$_3$     0.66       0.00
                                                                                  C$_5$ + C$_4$     0.28       0.00
  C$_{10}^+$ + e$^-$     20.00                                                    C$_9$ + C         0.01       0.50
                                                                                  C$_8$ + C$_2$     0.01       0.50
                                                                                  C$_7$ + C$_3$     0.70       0.00
                                                                                  C$_6$ + C$_4$     0.03       0.00
                                                                                  C$_5$ + C$_5$     0.25       0.00

  : New HVC-SUBR for DR reactions. Total rates and BRs of OSU-01-2007 (http://www.physics.ohio-state.edu/$\sim$eric/research.html) are also reported. We do not report on the DR of C$_2$H$^+$ and C$_4^+$ because OSU-01-2007 used experimental measurements.
\[tab4\]

  Reactants            Total rate (10$^{-9}$ s$^{-1}$) (OSU-01-2007)   Products          HVC-SUBR   OSU-01-2007-BR
  -------------------- ------------------------------------------------ ----------------- ---------- ----------------
  C$_2$H + h$\nu$      1.0                                             C$_2$ + H         0.81       1.00
                                                                       C + CH            0.19       0.00
  C$_3$H + h$\nu$      1.0                                             C$_3$ + H         0.65       1.00
                                                                       C$_2$H + C        0.33       0.00
                                                                       C$_2$ + CH        0.02       0.00
  C$_4$ + h$\nu$       0.4                                             C$_3$ + C         0.77       0.50
                                                                       C$_2$ + C$_2$     0.23       0.50
  C$_3$H$_2$ + h$\nu$  2.9                                             C$_3$H + H        0.46       0.66
                                                                       C$_3$ + H$_2$     0.23       0.34
                                                                       C$_2$H$_2$ + C    0.24       0.00
                                                                       C$_2$H + CH       0.04       0.00
                                                                       C$_2$ + CH$_2$    0.04       0.00
  C$_4$H + h$\nu$      2.0                                             C$_4$ + H         0.58       0.50
                                                                       C$_3$H + C        0.26       0.00
                                                                       C$_3$ + CH        0.00       0.00
                                                                       C$_2$H + C$_2$    0.16       0.50
  C$_5$ + h$\nu$       0.01                                            C$_4$ + C         0.13       0.00
                                                                       C$_3$ + C$_2$     0.87       1.00
  C$_6$ + h$\nu$       1.0                                             C$_5$ + C         0.09       1.00
                                                                       C$_4$ + C$_2$     0.11       0.00
                                                                       C$_3$ + C$_3$     0.80       0.00
  C$_7$ + h$\nu$       1.0                                             C$_6$ + C         0.01       1.00
                                                                       C$_5$ + C$_2$     0.19       0.00
                                                                       C$_4$ + C$_3$     0.80       0.00
  C$_8$ + h$\nu$       1.0                                             C$_7$ + C         0.03       1.00
                                                                       C$_6$ + C$_2$     0.01       0.00
                                                                       C$_5$ + C$_3$     0.90       0.00
                                                                       C$_4$ + C$_4$     0.06       0.00
  C$_9$ + h$\nu$       1.0                                             C$_8$ + C         0.00       1.00
                                                                       C$_7$ + C$_2$     0.06       0.00
                                                                       C$_6$ + C$_3$     0.66       0.00
                                                                       C$_5$ + C$_4$     0.28       0.00
  C$_{10}$ + h$\nu$    1.14                                            C$_9$ + C         0.01       0.82
                                                                       C$_8$ + C$_2$     0.01       0.17
                                                                       C$_7$ + C$_3$     0.70       0.00
                                                                       C$_6$ + C$_4$     0.03       0.01
                                                                       C$_5$ + C$_5$     0.25       0.00

  : New HVC-SUBR for photodissociation reactions. Total rates and BRs of OSU-01-2007 (http://www.physics.ohio-state.edu/$\sim$eric/research.html) are also reported. \[tab5\]

We propose here to use the present complete set of BRs obtained with HVC as statistical universal BRs (HVC-SUBR) for C$_n$, C$_n$H, and C$_3$H$_2$ whenever they are missing or guessed in ISM databases. Table \[tab3\] presents the CR HVC-SUBR for C$_n$, C$_n$H, and C$_3$H$_2$ together with the OSU-01-2007 CR-BRs. We do not report the UMIST database values because its BRs are almost the same as those of OSU-01-2007. When reactions were not considered in OSU-01-2007, we did, however, use the UMIST06 rates.
Note that the most recent version of the OSU database (osu.2009) has the same BRs as OSU-01-2007 for these species. All CR-BRs in OSU-01-2007 derive from the pioneering estimates of @1984ApJS...56..231L. In most cases a zero-level statistical behavior was used to predict the BRs, i.e. all the dissociation flux goes to the most exothermic channel only. This is consistent with the HVC-SUBR obtained for the C$_n$H species, but not for the C$_n$ species: C$_{n-1}$/C and C$_{n-2}$/C$_2$ were assumed to be the most exothermic channels, whereas the most exothermic channel is always C$_{n-3}$/C$_3$. In general, the HVC-SUBR lead to a much wider dispersion over the fragmentation channels. Remarkably, C$_3$ production by CR is strongly enhanced. Table \[tab4\] presents the new DR HVC-SUBR for C$_n^+$, C$_n$H$^+$, and C$_3$H$_2^+$ together with the OSU-01-2007 BRs and reaction rates. We do not report on C$_2$H$^+$ and C$_4^+$ because OSU-01-2007 used experimental DR-BRs [@2004PCCP....6..949E; @heber:022712]. For C$_3$H$^+$, in spite of existing partial measurements [@Angelova20047], the OSU-01-2007 database uses the same probability for the emission of C and of H. Although @1984ApJS...56..231L assumed 100% H emission, equiprobability was later proposed on the basis of more detailed calculations. The HVC-SUBR indeed lie between these two extremes. For C$_4$H$^+$, OSU-01-2007 uses the experimental DR-BRs [@Angelova2004195] for the breaking of the C-C bonds and, without any available information on the ratio between the emission of C and of CH, set that ratio equal to 1. The HVC-SUBR show that CH emission is actually very unlikely. For C$_3$H$_2^+$, OSU-01-2007 uses the experimental DR-BRs [@Angelova20047] for the C-C bonds. For the missing experimental information, namely the proportion between H and H$_2$ emission when the carbon skeleton stays intact and the proportion between C and CH when it is broken, OSU-01-2007 again assumed equal probability.
As for C$_3$H$^+$, the HVC-SUBR show that CH emission is unlikely. For the proportions of H and H$_2$ evaporation, equal probability is not too far from the HVC-SUBR. Remarkably, the HVC-SUBR give an H$_2$/H ratio (35%) in qualitative agreement with a recent experiment on the neutral-neutral reaction C + C$_2$H$_2$ $\rightarrow$ C$_3$H$_2^*$. For the C$_n^+$ species, as for CR, the HVC-SUBR predict an enhanced production of C$_3$ by DR, together with a wider dispersion in the masses of the carbon-cluster daughters than in OSU-01-2007. Table \[tab5\] presents the new photodissociation HVC-SUBR for C$_n$, C$_n$H, and C$_3$H$_2$ together with the OSU-01-2007 BRs and reaction rates. Note that the OSU database was developed mainly for cold dark clouds, so references for photodissociation BRs are scarce. The OSU-01-2007 photodissociation BRs for C$_2$H and C$_3$H follow the zero-level statistical picture: only H emission is allowed. For C$_4$H, equal probabilities for the emission of C$_2$ and of H were assumed. For all C$_n$H species, the HVC-SUBR predict H emission to be by far the dominant channel. For C$_3$H$_2$, OSU-01-2007 allows H or H$_2$ emission; emission of C should also be an open channel. For the C$_n$ series, as for the two previous processes, OSU-01-2007 assumed exclusive C emission, with the exception of C$_{10}$, for which RRKM statistical calculations were performed [@1995IJMSI.149..321B]. Again, the HVC-SUBR predict a strong enhancement of C$_3$ production compared with OSU-01-2007.

Effects of the new branching ratios on chemical model predictions for dense clouds
==================================================================================

Dense cloud model
-----------------

To test the effect of the new branching ratios on chemical model predictions for dense clouds, we used the Nahoon chemical model developed by V. Wakelam. This model follows the time-dependent chemistry of gas-phase species at a fixed temperature and density.
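The time-dependent 0D approach can be illustrated with a deliberately minimal toy: a single species with a constant production rate and first-order destruction, integrated with an explicit Euler scheme until it settles at the analytic steady state P/k. Nahoon, of course, integrates thousands of coupled reactions with a proper stiff solver; all names and numbers below are arbitrary and purely illustrative.

```python
# Toy 0D kinetics: dn/dt = production - k_destruction * n.
# The analytic steady state is production / k_destruction.

def evolve(production, k_destruction, n0=0.0, dt=1.0, t_end=1.0e4):
    """Explicit-Euler integration of dn/dt = production - k_destruction * n."""
    n = n0
    for _ in range(int(t_end / dt)):
        n += (production - k_destruction * n) * dt
    return n

n_final = evolve(production=2.0e-9, k_destruction=1.0e-3)
n_steady = 2.0e-9 / 1.0e-3  # analytic steady state
```

Running to `t_end` much longer than 1/k reproduces the steady state; stopping earlier leaves the species underabundant, which is the time-dependence the abundance-ratio curves of Fig. \[fig10\] trace for the full network.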
We used Nahoon for a single spatial point (0D). We considered "typical" dense cloud conditions: a temperature of 10 K, an H density of $2\times 10^4$ cm$^{-3}$, a visual extinction (A$_{\rm V}$) of 10, and a cosmic-ray ionization rate of $1.3\times 10^{-17}$ s$^{-1}$. For the initial conditions, we started with all the elements except H in atomic form, with the low-metal elemental abundances from @1982ApJS...48..321G. The standard network used for this analysis is OSU-01-2007 (http://www.physics.ohio-state.edu/$\sim$eric/research.html), which contains 452 species and 4430 reactions. We updated the branching ratios listed in Tables \[tab3\], \[tab4\], and \[tab5\] for 1) cosmic-ray dissociation of C$_n$ (n = 4 to 9), C$_n$H (n = 2 to 4), and C$_3$H$_2$; 2) dissociative recombination of C$_n^+$ (n = 5 to 10), C$_n$H$^+$ (n = 3 to 4), and C$_3$H$_2^+$; and 3) photodissociation of C$_n$ (n = 4 to 10), C$_n$H (n = 2 to 4), and C$_3$H$_2$. We then let the system evolve over $10^8$ yr, by which time it reaches steady state.

Results
-------

Figure \[fig10\] shows the ratio between the carbon-bearing species abundances computed with the updated database and those computed with the standard network OSU-01-2007. The new branching ratios modify the species abundances at two different epochs. The first is a very early stage, before $10^4$ yr, which is irrelevant for dense cloud chemistry. The second is much later, between $10^6$ and $10^7$ yr, but there the effect of the new branching ratios is less important. At the typical dense cloud age of $10^5$ yr, the new branching ratios are unimportant. Higher ages have, however, been found for TMC-1 with other models [@1998ApJ...501..207T], so the period around $10^6$ yr is not without interest. After $10^6$ yr, the maximum effect is obtained for the largest molecules. The C$_n$H molecules are an exception, because C$_7$H and C$_8$H are more affected than C$_9$H.
All the C-bearing species abundances are decreased by the new BRs, by at most a factor of two. The weak sensitivity of the TMC-1 chemistry to BRs has already been pointed out.\
![Evolution with time of the ratios of the abundances computed with the updated database to those computed with the standard network (OSU-01-2007) for a dense cloud, for a) C$_n$ and b) C$_n$H$_m$ species (see text).[]{data-label="fig10"}](15010fg10.ps){width="0.9\linewidth"}

Effects of the new branching ratios on chemical model predictions for photon$-$dominated regions
================================================================================================

PDR model
---------

We tested the influence of these new rates on a PDR model, using the Meudon PDR code (http://pdr.obspm.fr) described in @2006ApJS..164..506L. The Meudon PDR code computes the structure of a 1D plane-parallel, stationary slab of dust and gas, solving self-consistently the radiative transfer from the far UV to the sub-millimeter, the chemistry, and the thermal balance. To test the influence of the new rates, we reproduced the published Horse Head model with this PDR code, first with an old chemical network based on the rates provided by OSU-01-2007, and second with the new branching ratios presented in this paper. The Horse Head is a good candidate because of the large number of hydrocarbons observed there. Pety et al. (2005) suggest that the fragmentation of PAHs can contribute to the synthesis of small hydrocarbons, and conclude by mentioning the need for precise chemical rates in chemical models of these regions. The proton density in the Horse Head is estimated to be n$_{\textrm{H}}$ = 10$^5$ cm$^{-3}$ and the intensity of the incident UV flux to be 100 times the ISRF [in Draine's units, @1978ApJS...36..595D]. We adopted a cosmic-ray flux of $5\times10^{-17}$ per H and per second. The model assumes a semi-infinite cloud.
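That photo-processes matter only near the cloud edge reflects the depth dependence of the photorates. In networks such as UMIST these are parametrized as k(A$_{\rm V}$) = $\alpha\,\exp(-\gamma\,$A$_{\rm V})$; the sketch below uses illustrative values of $\alpha$ and $\gamma$ (not the actual Horse Head model inputs) to show how strongly a photo-process is already suppressed at A$_{\rm V}$ = 4.

```python
import math

def photorate(alpha, gamma, av):
    """Dust-attenuated photodissociation rate, k(Av) = alpha * exp(-gamma * Av)."""
    return alpha * math.exp(-gamma * av)

alpha = 1.0e-9  # illustrative unshielded rate (s^-1)
gamma = 2.0     # illustrative dust-attenuation exponent

# Fraction of the cloud-edge rate remaining at Av = 4, beyond which
# the new BRs no longer change the model abundances:
depth_factor = photorate(alpha, gamma, 4.0) / photorate(alpha, gamma, 0.0)
```

For a typical $\gamma$ of order 2 this fraction is exp(-8), a few parts in 10$^4$, which is why the abundance ratios converge back to unity deeper than A$_{\rm V}$ of about 4.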
Our objective is to compare the abundance profiles of some relevant species computed with our new branching ratios and with the older ones; refining models of the Horse Head is beyond the scope of this paper.

Results
-------

Figure \[fig11\] presents the ratios of the abundances given by the two chemistries for C$_{n}$ and C$_{n}$H with $n$ from 2 to 9, as a function of position in the cloud expressed in visual extinction. This ratio is defined as the abundance obtained with the new chemistry divided by the abundance obtained with the older rates. First, we note that the effects of the new rates are only visible for A$_{\textrm{V}}$ lower than 4; it is indeed in this region that dissociative recombination and the photo-processes dominate the other chemical processes.\
It is often difficult to reproduce the abundance of C$_3$ in PDR models (for example, in the diffuse interstellar gas toward Zeta Persei, models require a high-density component to reproduce this molecule). Figure \[fig11\] shows that the new branching ratios enhance the abundance of this molecule, which is explained by two factors. First, the new branching ratios of the C$_3$H$^+$ recombination reaction enhance the route leading to C$_3$ by 30%. Second, we showed that the dissociative recombination of C$_6^+$ can efficiently produce two C$_3$ molecules; this route was not considered in previous chemistries.\
For the C$_n$ molecules with $n>$4, the new branching ratios systematically decrease the abundances. The model also shows that the abundance of C$_3$H is significantly enhanced in the photodissociation region (Fig. \[fig11\]b), which is explained by the new photodissociation route of C$_4$H; this route was not considered in the OSU database.\
Finally, the abundances of the C$_n$H hydrocarbons ($n>$3) are reduced compared with the old chemistry. This is directly linked to the decrease in the abundances of the C$_n$ molecules.
Indeed, the chain of reactions leading to the hydrocarbons with $n>$3 is $$\rm C_n\stackrel{C^+}{\rightarrow}C_{n+1}^{+}\stackrel{H_2}{\rightarrow}C_{n+1}H^+\stackrel{e^-}{\rightarrow}C_nH.$$ Because the abundances of the C$_n$ molecules are reduced by the new branching ratios, the abundances of the C$_n$H are reduced as well.\
![Evolution, with the visual extinction A$_{\rm V}$, of the ratios of the abundances computed with the updated database to those computed with the standard network (OSU-01-2007) for a PDR, for a) C$_n$ and b) C$_n$H$_m$ species (see text).[]{data-label="fig11"}](15010fg11.ps){width="0.9\linewidth"}

Conclusions
===========

High-velocity collisions in an inverse-kinematics scheme were used to measure the complete fragmentation patterns of electronically excited C$_{n}$ ($n$=2 to 10), C$_{n}$H ($n$=2 to 4), and C$_3$H$_2$ molecules, and dissociation branching ratios were deduced from these experiments. The branching ratios obtained in this work agree well with those from other types of experiments, which we interpret as the signature of statistical fragmentation behavior. We thus propose new branching ratios for 1) the dissociation of C$_n$ ($n$=2-10), C$_n$H ($n$=2-4), and C$_3$H$_2$ molecules by interstellar UV photons or cosmic-ray processes, and 2) the dissociative recombination of C$_n^+$ ($n$=5-10), C$_n$H$^+$ ($n$=3-4), and C$_3$H$_2^+$. The new values have been tested in dense-cloud and PDR models; we showed that only the chemistry occurring at A$_{\rm V}$ smaller than 4 is really affected. We nevertheless recommend that astrochemists use these branching ratios even for dense-cloud chemistry, because it is well known that the importance of a reaction depends on the network used. The data published in this paper have been added to the online database KIDA (KInetic Database for Astrochemistry, http://kida.obs.u-bordeaux1.fr).
The national CNRS program PCMI (Physique et Chimie du Milieu Interstellaire), the CNRS IN2P3 (Institut National de Physique Nucléaire et de Physique des Particules) and the PPF Matière Carbonée of the University Paris Sud 11 are acknowledged for their financial support. [64]{} natexlab\#1[\#1]{} Angelova, G., Novotny, O., Mitchell, J. B. A., [et al.]{} 2004, International Journal of Mass Spectrometry, 232, 195 Angelova, G., Novotny, O., Mitchell, J. B. A., [et al.]{} 2004, International Journal of Mass Spectrometry, 235, 7 , M. B., [Feldman]{}, P. A., [Watson]{}, J. K. G., [et al.]{} 1999, , 518, 740 , R. P. A. & [Herbst]{}, E. 1995, International Journal of Mass Spectrometry and Ion Processes, 149, 321 , J. H. & [Dalgarno]{}, A. 1977, ApJS, 34, 405 , J., [Goicoechea]{}, J. R., & [Benilan]{}, Y. 2002, , 580, L157 , J., [Heras]{}, A. M., [Tielens]{}, A. G. G. M., [et al.]{} 2001, , 546, L123 , M., [della Negra]{}, S., [Lavergne]{}, L., [et al.]{} 2002, Nuclear Instruments and Methods in Physics Research B, 197, 155 , M., [Martinet]{}, G., [Mezdari]{}, F., [et al.]{} 2006, Journal of Physics B Atomic Molecular Physics, 39, 2593 Chabot, M., Mezdari, F., Béroff, K., Martinet, G., & Hervieux, P.-A. 2010, Phys. Rev. Lett., 104, 043401 , H., [Bise]{}, R., [Hoops]{}, A., [Mordaunt]{}, D., & [Neumark]{}, D. 2000, The Journal of Physical Chemistry A, 104, 2025 , S., [S[á]{}nchez]{}, G., [Alcam[í]{}]{}, M., [et al.]{} 2006, International Journal of Mass Spectrometry, 252, 126 , J. E., [Irvine]{}, W. M., [Snell]{}, R. L., [et al.]{} 2000, ApJ, 542, 870 , B. T. 1978, , 36, 595 , W. W. & [Williams]{}, D. A. 1984, [Interstellar chemistry]{}, ed. [Duley, W. W. & Williams, D. A.]{} , A., [Hellberg]{}, F., [Thomas]{}, R., [et al.]{} 2004, Physical Chemistry Chemical Physics (Incorporating Faraday Transactions), 6, 949 , D., [Cernicharo]{}, J., [Gerin]{}, M., & [Cox]{}, P. 2001, , 552, 168 , G., [P[ě]{}tlewski]{}, A., [Musaev]{}, F., [et al.]{} 2002, , 395, 969 , M. 
E., [Jarrold]{}, M. F., [McIlrath]{}, T. J., [et al.]{} 1986, Zeitschrift fur Physik D Atoms Molecules Clusters, 3, 309 , T. E., [Langer]{}, W. D., & [Frerking]{}, M. A. 1982, ApJS, 48, 321 , R., [Lepp]{}, S., & [Dalgarno]{}, A. 1987, ApJ Lett., 323, L137 , J. M. 1976, , 39, 9 , D. H. E. & [Hervieux]{}, P. A. 1995, Zeitschrift fur Physik D Atoms Molecules Clusters, 35, 27 Heber, O., Seiersen, K., Bluhme, H., [et al.]{} 2006, Physical Review A (Atomic, Molecular, and Optical Physics), 73, 022712 , J., [Rauer]{}, H., [Boice]{}, D. C., & [Huebner]{}, W. F. 2005, , 442, 1107 , E. 1978, , 222, 508 , E. 2003, [Dissociative Recombination in Interstellar Clouds]{}, ed. [Guberman, S. L. Kluwer Academic - Plenum Publishers]{} , E. & [Lee]{}, H. 1997, , 485, 689 , E. & [Leung]{}, C. M. 1989, ApJS, 69, 271 , K. W., [Keady]{}, J. J., & [Bernath]{}, P. F. 1988, Science, 241, 1319 , L. M. & [Campbell]{}, B. 1982, , 254, 108 , M. & [Kroto]{}, H. 1990, , 351, 222 Larsson, M. 1995, International Journal of Mass Spectrometry and Ion Processes, 149-150, 403 , honour Biography David Smith , P. P., [Coustenis]{}, A., & [Vardavas]{}, I. M. 2008, , 56, 67 , F., [Nehm[é]{}]{}, C., [Le Bourlot]{}, J., & [Roueff]{}, E. 2006, , 164, 506 , F., [Roueff]{}, E., & [Herbst]{}, E. 2004, , 417, 993 , A., [D’Hendecourt]{}, L., [Boissel]{}, P., & [Desert]{}, F. X. 1989, , 213, 351 , F., [Petrucci]{}, R., [Hickson]{}, K. M., [et al.]{} 2008, , 56, 1658 , C. M., [Herbst]{}, E., & [Huebner]{}, W. F. 1984, ApJS, 56, 231 , J. P., [Lakin]{}, N. M., [Walker]{}, G. A. H., & [Bohlender]{}, D. A. 2001, , 553, 267 , G., [D[í]{}az-Tendero]{}, S., [Chabot]{}, M., [et al.]{} 2004, Physical Review Letters, 93, 063401 , G. 1999, Private communication , T. J., [Defrees]{}, D. J., [McLean]{}, A. D., & [Herbst]{}, E. 1988, , 194, 250 , T. J., [Herbst]{}, E., & [Bettens]{}, R. P. A. 2000, , 316, 195 Morgan, T. J., Berkner, K. H., & Pyle, R. V. 1971, Phys. Rev. 
Lett., 26, 602 , J., [Teyssier]{}, D., [Foss[é]{}]{}, D., [et al.]{} 2005, , 435, 885 , S. S. & [Tarafdar]{}, S. P. 1983, ApJ, 267, 603 , M., [Abel]{}, N. P., [Bell]{}, T., [et al.]{} 2007, , 467, 187 , E., [Felenbok]{}, P., [Black]{}, J. H., & [Gry]{}, C. 2002, , 384, 629 , N., [Sakai]{}, T., [Hirota]{}, T., & [Yamamoto]{}, S. 2008, ApJ, 672, 371 , M. B., [Hintz]{}, P. A., & [Anderson]{}, S. L. 1991, , 95, 4719 , L. J. & [Tomasko]{}, M. G. 1968, ApJ, 152, 971 , R. & [Herbst]{}, E. 1998, ApJ, 501, 207 , D., [Foss[é]{}]{}, D., [Gerin]{}, M., [et al.]{} 2004, , 417, 135 , T., [Chabot]{}, M., [Pino]{}, T., [et al.]{} 2008, , 128, 124312 , B. E., [Herbst]{}, E., & [Terzieva]{}, R. 2000, ApJS, 126, 427 , E. F. 1988, in Rate Coefficients in Astrochemistry. Proceedings of a Conference held in UMIST, Manchester, United Kingdom, September 21-24, 1987. Editors, T.J. Millar, D.A. Williams; Publisher, Kluwer Academic Publishers, Dordrecht, Boston, 1988. ISBN \# 90-277-2752-X. LC \# QB450 .R38 1988. P. 49, 1988, ed. [T. J. Millar & D. A. Williams]{}, 49–+ , M. C. & [van Dishoeck]{}, E. F. 2008, Chemical Physics, 343, 292 , A. & [Saykally]{}, R. J. 1998, Chemical Reviews, 98, 2313 , B., [della-Negra]{}, S., & [Lafoux]{}, A. 1996, Nuclear Instruments and Methods in Physics Research A, 382, 348 , V., [Caselli]{}, P., [Ceccarelli]{}, C., [Herbst]{}, E., & [Castets]{}, A. 2004, A&A, 422, 159 , V., [Herbst]{}, E., & [Selsis]{}, F. 2006, A&A, 451, 551 , K., [Chabot]{}, M., [Foss[é]{}]{}, R., [et al.]{} 1998, Nuclear Instruments and Methods in Physics Research B, 146, 29 , K. & [Watson]{}, R. L. 1993, , 48, 4784 [^1]: http://www.physics.ohio-state.edu/ eric/research.html [^2]: http://www.udfa.net/ [^3]: C$_3$/H$_2$ formation energy is only 3 eV, but there is a barrier close to 6 eV .
--- abstract: 'We investigate the properties of Type IIP supernovae that are dominantly powered by the rotational kinetic energy of the newly born neutron star. While the spin-down of a magnetar has previously been proposed as a viable energy source in the context of super-luminous supernovae, we show that a similar mechanism could produce both normal and peculiar Type IIP supernova light curves from red supergiant progenitors for a range of initial spin periods and equivalent dipole magnetic field strengths. Although the formation channel for such magnetars in a typical red supergiant progenitor is unknown, it is tantalizing that this proof-of-concept model is capable of producing ordinary Type IIP lightcurve properties, perhaps implying that rotation rate and magnetic field strength may play important roles in some ordinary-looking Type IIP supernova explosions.' author: - | Tuguldur Sukhbold & Todd A. Thompson\ Department of Astronomy and Center for Cosmology & Astro-Particle Physics, The Ohio State University, Columbus, Ohio 43210 title: Magnetar Powered Ordinary Type IIP Supernovae --- \[firstpage\] magnetars, supernovae Introduction ============ \[sec:intro\] The most common massive star supernovae in the universe, by number, are the explosions of red-supergiant (RSG) progenitors with initial masses of $\sim$ 9-18 [$\mathrm{M}_\odot$]{} [@Sma15], and with kinetic energies on the order of $\sim10^{51}$ ergs [@Pej15]. These explosions create light curves of Type IIP, which gradually release the shock-deposited energy to produce a roughly constant luminosity for about 3 months [e.g., @Fil97; @Arc12]. The most intensely studied mechanism for these explosions is the delayed neutrino-driven mechanism [@Col66; @Arn66; @Bet85].
While the neutrino mechanism is rather successful for the low energy explosions of lighter progenitors, there is no consensus yet in the supernova community on more massive progenitors that would provide the observed energies of typical supernovae [@Jan16 and references therein]. Inspired by @Bod74 and the application of neutron star models to luminous supernovae by @Woo10 and @Kas10, in this work we explore the scenario where the rotational energy source plays a dominant role throughout the explosion of a RSG progenitor. Soon after the discovery of pulsars, it was suggested that an embedded pulsar might power the light curves and explosions of ordinary supernovae [@Ost71]. More recently, the idea has gained new traction as a possible explanation for gamma-ray bursts [e.g., @Uso92; @Tho04; @Uze06; @Buc09; @Met11] and other unusual supernovae [@Aki03; @Mae07]. The pulsars required for these transients have atypical properties in that their field strengths and rotation periods are extreme. Today, such models are thus commonly referred to as “magnetar” models rather than “pulsar” models. Following [@Mae07], @Woo10 and @Kas10 independently promoted the idea that magnetars might underlie the production of a broad class of hydrogen poor super-luminous supernovae (SLSN-I), which are brighter than 10$^{43}$ ergs s$^{-1}$ for longer than a typical Type Ia supernova lasts (a couple of weeks); see @Qui11 [@Gal12] and @Nic15 for recent reviews of SLSN-I. Since 2010, many studies have interpreted the light curves of SLSN-I as the product of magnetar spin down [e.g., @Ben14; @Nic13; @Met15; @Don16]. Consider a scenario where a weak explosion is initiated in a RSG progenitor, and soon afterwards a magnetar, which formed from the collapse of a spinning progenitor core, starts to deposit energy. The exact nature of the magnetar's formation is not well understood, an issue we revisit in § \[sec:conclude\].
In general, such a scenario would be representative of a case where the original neutrino wind becomes increasingly magnetically dominated as the proto-neutron star cools [@Tho04; @Met07; @Met11], eventually transitioning to a highly relativistic “pulsar”-type wind, depositing the pulsar rotational kinetic energy on the spin-down timescale. Since the brightness and duration of the light curve plateau phase scale with the promptly deposited explosion energy as $\rm\propto E^{5/6}$ and $\rm\propto E^{-1/6}$, respectively [@Pop93], the weak explosion alone, without any magnetar input, will create a long and dim transient compared to a typical Type IIP case. For the magnetar to transform this weak explosion into a typical one, it will need to inject a significant amount of energy on a short timescale, so that the plateau phase is brighter and transitions to the nebular phase sooner. Approximating the initial rotational kinetic energy of the magnetar as $\rm E_m\approx2\times10^{52}P^{-2}_{ms,i}$ ergs, where $\rm P_{ms,i}$ is the initial spin period in milliseconds, we can see that if it is to yield a final kinetic energy of an ordinary Type IIP explosion roughly between 0.5$-$2$\times10^{51}$ ergs, the initial spin needs to be approximately between 3 and 6 ms. At late times (after the plateau phase) the tail of the light curve must be dominantly powered by radioactivity, $\rm L_m(t_{late}) < L_{Co}(t_{late})$, where $\rm L_m$ is the spin-down luminosity and $\rm L_{Co}$ is the luminosity from the decay of $^{56}$Co, resulting from a typical Type IIP yield of $^{56}$Ni. For vacuum dipole spin-down the magnetar luminosity is $\rm L_m\approx10^{49}B_{15}^2P_{ms}^{-4}\ ergs\ s^{-1}$, where $\rm B_{15}$ is the magnetic field strength in $10^{15}$ G and an angle of $\pi/6$ was assumed between the rotational and magnetic axes.
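These order-of-magnitude expressions can be checked with a few lines of arithmetic. The sketch below evaluates $\rm E_m$, $\rm L_m$, and the spin-down timescale $\rm t_m=E_m/L_m$ for the quoted formulas; the $^{56}$Co luminosity coefficient ($1.45\times10^{43}$ erg s$^{-1}$ per solar mass of $^{56}$Ni, with a 111.3 day e-folding time) is a standard analytic approximation that we assume here, not a value from this paper.

```python
# Back-of-envelope magnetar constraints (cgs units), using the
# dipole formulas quoted in the text.  The 56Co decay coefficient
# is a standard approximation assumed for illustration.
import math

def E_rot(P_ms):            # initial rotational energy, erg
    return 2e52 * P_ms**-2

def L_dipole(B15, P_ms):    # vacuum dipole spin-down luminosity, erg/s
    return 1e49 * B15**2 * P_ms**-4

def t_spindown(B15, P_ms):  # spin-down timescale t_m = E_m / L_m, s
    return E_rot(P_ms) / L_dipole(B15, P_ms)

def L_m(t, B15, P_ms):      # spin-down luminosity at time t (s)
    tm = t_spindown(B15, P_ms)
    return (E_rot(P_ms) / tm) * (1.0 + t / tm)**-2

def L_Co(t_days, M_Ni=0.1): # approximate 56Co decay luminosity, erg/s
    return 1.45e43 * M_Ni * math.exp(-t_days / 111.3)

day = 86400.0
for P in (3.0, 6.0):        # B = 1e15 G throughout
    print(f"P_i = {P} ms: E_m = {E_rot(P):.1e} erg, "
          f"t_m = {t_spindown(1.0, P)/day:.2f} d, "
          f"L_m(150 d) = {L_m(150*day, 1.0, P):.1e} vs "
          f"L_Co(150 d) = {L_Co(150):.1e} erg/s")
```

For $\rm B=10^{15}$ G and $3<\rm P_i<6$ ms the spin-down timescale indeed comes out under a day, and the late-time spin-down luminosity falls below the assumed $^{56}$Co decay luminosity, consistent with the argument in the text.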
Taking $\rm t_{late}$ as 150 days and adopting a typical $^{56}$Ni mass of 0.1 [$\mathrm{M}_\odot$]{}, we see that the spin-down timescale, $\rm t_m = E_m/L_m$, needs to be shorter than roughly a day for $3\rm<P_{i}<6$ ms, or the constant dipole magnetic field needs to be larger than roughly $10^{15}$ G. At the other end, invoking more extreme conditions with $\rm P_{i}\sim 1$ ms and field strengths of $>10^{16}$ G may end up producing typical IIP-like light curves in some situations, but will ultimately yield much higher energies and may also overproduce $^{56}$Ni. These basic considerations, though based on an idealized situation in which the magnetar injects energy indefinitely, that energy is efficiently thermalized, and the spin-down follows vacuum dipole emission (braking index $\rm n=3$) with a constant magnetic field strength, demonstrate that the rotation rates and magnetic field strengths of such IIP-powering magnetars must be larger than those commonly inferred from pulsars. In this work, through a set of numerical experiments we explore the question of over what parameter space the classical magnetar spin-down scenario can transform a weak explosion model of a RSG into one where the light curve characteristics, $^{56}$Ni yields, and final kinetic energies are close to those of typical IIP supernovae. Numerical Calculations with [`KEPLER`]{} ======================================== \[sec:numeric\] We calculate a set of magnetar powered RSG explosion models using the 1D implicit hydrodynamic code [`KEPLER`]{} [@Wea78]. All calculations start with a RSG progenitor model from @Suk14 with an initial mass of 15 [$\mathrm{M}_\odot$]{}. At the time of core collapse this model had a radius of 841 [$\mathrm{R}_\odot$]{} and a total mass of 12.6 [$\mathrm{M}_\odot$]{}, of which the outermost 8.3 [$\mathrm{M}_\odot$]{} was in the H-rich envelope. We first launch a weak explosion by using the moving inner boundary method, i.e.
“piston-scheme” [@Woo95; @Suk16a], so that the final kinetic energy of the ejecta is only about $\sim 5 \times 10^{49}$ ergs. This explosion synthesized roughly 0.16 [$\mathrm{M}_\odot$]{} of $^{56}$Ni, but due to late-time fallback only about $\sim 0.015$ [$\mathrm{M}_\odot$]{} is ejected. The synthesized $^{56}$Ni mass is on the large side because for this demonstration model the piston was deliberately placed at the edge of the iron core (1.43 [$\mathrm{M}_\odot$]{}), which is significantly deeper than the mass cut that could represent a fully neutrino-driven explosion ($\sim$1.6 [$\mathrm{M}_\odot$]{} for the same model in @Suk16a). This choice primarily stems from our expectation that magnetar input, with our current description, would not significantly contribute to the $^{56}$Ni synthesis, which we discuss further in § \[sec:conclude\]. As in @Woo10, we do not specify the physical nature of the explosion initiation. ![The light curve from the low-energy $5\times10^{49}$ ergs explosion (dashed black), without any further energy input, is shown in comparison with an approximate luminosity band of typical Type IIP supernovae. With much lower energy, the resulting light curve is much dimmer and has a long-lasting plateau phase. \[fig:lowE\]](lowE.pdf){width=".48\textwidth"} As expected, this low-energy explosion, without any additional energy input, produces a long-lasting dim transient (Fig. \[fig:lowE\]). Taking the scaling relations based on the survey of model Type IIP light curves from @Suk16a [Eqs. 15 and 17], one gets a plateau luminosity of $\rm L_{p} = 2.3 \times10^{41}\ ergs\ s^{-1}$ and a plateau duration, including the effects of radioactivity, of $\rm \tau_p = 176$ days. These values are in good agreement with the low energy explosion model shown in Fig. \[fig:lowE\] as a dashed black curve.
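As a rough consistency check of our own (not from the paper), the Popov-type scalings quoted in the introduction, $\rm L_p\propto E^{5/6}$ and $\rm \tau_p\propto E^{-1/6}$, can be used to rescale this weak-explosion plateau to a canonical $10^{51}$ erg explosion of the same star; the quoted $\tau_p$ already folds in radioactivity, so the result is only indicative.

```python
# Rescale the weak-explosion plateau (L_p, tau_p) to a canonical
# 1e51 erg explosion using L_p ∝ E^{5/6}, tau_p ∝ E^{-1/6} (Popov-type
# scalings); a rough sanity check, not a replacement for the models.
E_weak, E_typ = 5e49, 1e51            # erg
L_p_weak, tau_p_weak = 2.3e41, 176.0  # erg/s, days (weak model)

ratio = E_typ / E_weak                # = 20
L_p = L_p_weak * ratio**(5.0 / 6.0)
tau_p = tau_p_weak * ratio**(-1.0 / 6.0)
print(f"L_p ~ {L_p:.1e} erg/s, tau_p ~ {tau_p:.0f} d")
# ~2.8e42 erg/s and ~107 d: inside the "typical" Type IIP band
```

The rescaled values land inside the $10^{42}$–$10^{43}$ erg s$^{-1}$ and 90–130 day band used below to define typical Type IIP plateaus.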
The plot also shows a rough luminosity range that represents the typical IIP light curves (light gray band): $\rm 10^{42}<L_p<10^{43}\ ergs\ s^{-1}$, $\rm 90<\tau_p<130$ days and the tail representing $^{56}$Co decay luminosity for $\rm 0.05<M_{Ni}<0.2$ [$\mathrm{M}_\odot$]{} [@Pej15]. Note that the “normal” light curve band corresponds to much brighter and briefer plateaus than in the low-energy explosion reference model (dashed curve in Fig. \[fig:lowE\]). -------------------------------------------------- -------------------------------------------------- ![image](grid_Lp_inf.pdf){width=".48\textwidth"} ![image](Lp_Emtm_inf.pdf){width=".48\textwidth"} ![image](grid_tp_inf.pdf){width=".48\textwidth"} ![image](tp_Emtm_inf.pdf){width=".48\textwidth"} -------------------------------------------------- -------------------------------------------------- The piston that drives the low-energy explosion first moves inward from its initial location at the edge of the iron core to a minimum radius of $10^7$ cm in 0.25 seconds. Once it starts to move outwards, we begin to deposit energy into the inner part of the ejecta as heat according to the vacuum dipole spin-down formulation, $\rm L_m(t)=E_m t_m^{-1} (1+t/t_m)^{-2}\ ergs\ s^{-1}$. The evolution of the ejecta is followed for $\sim350$ days including the energy from radioactivity, and the light curves are approximately calculated with flux-limited diffusion. The magnetar parameters were varied over B=$10^{14-16}$ G for the constant dipole magnetic field strength at the equator, and between 1 and 10 ms for the initial rotational period. These spins correspond to a range of initial rotational kinetic energies between $0.2-20\times10^{51}$ ergs, and spin-down timescales between 20 seconds and 230 days. Once the explosion is initiated, we expect a neutrino-driven wind to emerge from the cooling proto-neutron star [@Dun86; @Jan96; @Bur95].
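The deposition law above integrates in closed form to $\rm \Delta E(t)=E_m\,t/(t+t_m)$, so essentially the whole rotational reservoir $\rm E_m$ is injected once $\rm t\gg t_m$. A minimal numerical sketch, using illustrative values of $\rm E_m$ and $\rm t_m$ (corresponding to $\rm P_i=4$ ms and $\rm B=10^{15}$ G in the dipole formulas):

```python
# Sanity check of the heating law L_m(t) = (E_m/t_m)(1 + t/t_m)^{-2}:
# its time integral is Delta E = E_m * t / (t + t_m), so the reservoir
# is exhausted for t >> t_m.  E_m and t_m below are illustrative.
import numpy as np

E_m = 1.25e51   # erg  (P_i = 4 ms)
t_m = 3.2e4     # s    (B = 1e15 G; t_m = E_m / L_m(0))

def L_m(t):
    return (E_m / t_m) * (1.0 + t / t_m)**-2

t = np.logspace(0.0, np.log10(350.0 * 86400.0), 200000)  # 1 s .. 350 d
mid = 0.5 * (L_m(t[1:]) + L_m(t[:-1]))
deposited = np.sum(mid * np.diff(t))        # trapezoidal integral
analytic = E_m * t[-1] / (t[-1] + t_m)
print(deposited / E_m, analytic / E_m)      # both close to 1
```

For this choice $\rm t_m$ is about a third of a day, so nearly all of $\rm E_m$ is injected long before the end of the 350 day run.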
For a given magnetic field strength and spin period, the flow will become increasingly relativistic over the Kelvin-Helmholtz cooling timescale as the Alfvén point in the flow approaches the light cylinder [@Tho04; @Met07; @Met11]. The energy injection rate at these very early times is higher than implied by the vacuum dipole expression for the same $\rm B$ and $\rm P$, and may be complicated as the neutron star contracts and its convection changes character throughout the cooling epoch [@Pon99; @Rob12]. However, because the long-term spin-down behavior more directly affects the eventual light curve shape and dynamics than the detailed evolution at these very early times, we simply assume the vacuum dipole energy injection formula. For these purposes $\rm B$ and $\rm P$ should be interpreted as defining the rotational energy reservoir and the reference energy loss rate. The effect of late-time magnetar power on light curves has been extensively studied in the context of SLSN-I emerging from stripped cores [e.g., @Ins13]. In general, with a shorter spin-down timescale, most of the magnetar energy input is lost to adiabatic expansion, while with a longer timescale more energy is channeled into radiation, producing luminous light curves. The same generic behavior is also seen in our calculations; however, with the extended envelope of the RSG progenitor the light curves present a diverse structure, and with a larger ejecta mass the magnetar-powered Type II light curves are generally less luminous than in Type I cases. Figure \[fig:tdepinf\] shows the resulting light curve plateau properties from magnetar-powered explosions when the energy was injected until the end of the calculation ($\sim$350 days).
The plateau duration, $\rm \tau_p$, was conservatively measured as the time span from the beginning of the explosion until the photospheric radius dips below $10^{14}$ cm, while the plateau luminosity was measured in the middle of the plateau as $\rm L_p=L(\tau_p/2)$. These conditions apply well to all calculations, except those with the weakest field strengths and slowest initial spins. In those models, the spin-down timescale is comparable to the calculation time and in a few cases the photospheric radius does not reach its maximum within 300 days, resulting in very long $\rm \tau_p$. For a given initial spin, the most luminous light curves emerge from models with the weakest field strengths, since the spin-down timescale is longest. For a given field strength, the spin-down timescale decreases with faster initial spin, but due to the increasing energy budget, much more energy is channeled into radiation compared to a slower spin model. The most luminous model, with $10^{14}$ G and an initial spin of $\rm P_{ms}$=1, reaches a peak luminosity of a few times $\rm 10^{44}\ ergs\ s^{-1}$. In models with stronger fields the plateau luminosity never exceeds $\rm 10^{43}\ ergs\ s^{-1}$, except in the few with the fastest initial spins. Conversely, the least luminous models come from the strongest fields and lowest energy budgets. In all of these calculations, the deposited energy is much larger than the initial weak explosion energy of $5\times10^{49}$ ergs, and thus all the models are significantly more luminous than the reference light curve shown in Fig. \[fig:lowE\]. The behavior of the plateau duration is slightly more complicated. With more energy deposited promptly (i.e. a shorter spin-down timescale), the ejecta will be strongly ionized and will expand faster, resulting in brighter and briefer plateaus.
With more gradual deposition, in contrast, the magnetar energy extends the plateau phase by supporting ionization, in much the same way as is done through radioactive decay energy [see @Kas09]. This is why the plateau duration increases with slower initial spin, for a given field strength. However, note that for a given initial spin the plateau duration first shortens with increasing field strength, and then starts to lengthen again for $\rm B>10^{15}$ G. For smaller field strengths, the spin-down luminosity is always greater than or comparable to the luminosity from radioactive decay, while at higher field strengths it is always much weaker than the decay luminosity at late times. Consistent with the results found in @Suw15, none of our models produced extra $^{56}$Ni in addition to the initial explosion. However, in all of the models the fallback that occurs in the original reference model does not occur, and thus they all receive power from the decay of 0.16 [$\mathrm{M}_\odot$]{} $^{56}$Ni. This has little relevance for the plateau phase of the light curve when the magnetar is slowly depositing energy, but it can significantly extend the plateau phase when nearly all of the magnetar energy is deposited promptly. If the energy contribution from radioactivity is removed, the plateau duration monotonically decreases with increasing field strength as expected. The regions bounded by a dashed line in Fig. \[fig:tdepinf\] highlight the models that have similar plateau durations and luminosities to typical Type IIP supernovae. Note the plateau luminosities are between $10^{42}$ and $\rm 10^{43}\ ergs\ s^{-1}$ for most models with $\rm P_{i}>3$ ms, and so this region is primarily bounded by the plateau durations, except when the spin-down timescale is long with weaker field strengths and slower initial spins.
As expected (§ \[sec:intro\]), this region covers mostly models that were powered by magnetars with $\rm P_i$ roughly between 3 and 6 ms, and $\rm B$ stronger than $10^{15}$ G. For stronger field strengths there is a slight preference for a slower initial spin, since the radioactive extension of the plateau becomes less relevant with increasing rotational energy of the magnetar. ![Magnetar-powered light curves (colored curves) from the highlighted region in Fig. \[fig:tdepinf\], where the models have plateau luminosities and durations that lie within the range of observed ordinary Type IIP supernovae (gray band from Fig. \[fig:lowE\]). The light curves from magnetars with log(B/G)=15 are shown in blue, log(B/G)=15.5 in red, and log(B/G)=16 in orange. For a given field strength (curves of a given color), with increasing initial rotational energy (smaller $\rm P_i$) the light curve plateau phase is brighter and shorter. For a given initial spin, with a longer spin-down timescale (smaller B) the plateau phase brightens in time due to persistent magnetar luminosity at late times, while with a shorter spin-down timescale the plateau is roughly constant or dimming with time. Observed Type IIP light curves with rising plateaus are often associated with blue supergiant progenitors, but here we show that late-time magnetar energy deposition can result in such light curves from a normal RSG progenitor. \[fig:lcs\]](lcs.pdf){width=".48\textwidth"} Figure \[fig:lcs\] shows light curves (colored curves) from the models that lie in the region bounded by dashed lines in Fig. \[fig:tdepinf\]. As in Fig. \[fig:lowE\], it also shows the reference weak explosion model without any magnetar input (dashed curve), and the approximate luminosity band (gray band) for “typical” Type IIP light curves.
Note that while some light curves have a fairly constant luminosity during the plateau phase (red), others are sharply increasing (blue) or decreasing (orange) before they transition to the nebular phase. The increasing luminosity during the plateau phase results when the magnetar deposition dominates for a time comparable to or longer than the effective diffusion timescale of the ejecta, so that without much prompt energy deposition it is dimmer in the first few weeks and gradually brightens due to the magnetar luminosity at later times. Also note that the tail is slightly brighter than in models with shorter spin-down times, due to the magnetar input, even though all of the models receive the same amount of radioactive energy. In general, this effect is even more prominent for field strengths weaker than $10^{15}$ G, where the plateau keeps rising for hundreds of days and several orders of magnitude in luminosity, but the resulting light curves are well beyond what we consider typical of Type IIP. With shorter spin-down times, the magnetar efficiently deposits energy promptly and therefore the early light curve appears brighter, but it stays roughly constant or keeps dimming afterwards as the ejecta expands much faster without experiencing much magnetar energy input at later times. These model light curves are consistent with the diversity seen in the observations [e.g., @Arc12]. The most notable are those with increasing plateau luminosity, which are often classified as peculiar and associated with a blue supergiant progenitor [e.g., @Tad16] due to their similarity with SN1987A. With the explosion of a compact progenitor, the light curve is initially less luminous due to the energy lost in adiabatic expansion. However, our calculations demonstrate that such light curves can emerge from RSG progenitors when their explosions are powered by magnetars.
In this case, however, the progenitor is already extended to begin with, and the light curve starts from a lower luminosity because of weaker recombination of the envelope and gradually brightens when the magnetar keeps depositing a significant amount of energy at late times. Discussion ========== \[sec:conclude\] We have explored the light curves emerging from an explosion of a RSG progenitor that was initiated by a weak piston and followed by energy deposition from a magnetar. Through a set of spherically symmetric hydrodynamical calculations with approximate radiation transport, we have shown that for a narrow range of magnetar parameters, the resulting light curves resemble what we observe in ordinary Type IIP supernova explosions. For the progenitor explored, when the initial spin of the magnetar is between 3 and 5 ms, and its dipole magnetic field strength is greater than $10^{15}$ G, the energy deposition through vacuum dipole spin-down transforms the long-lasting dim transient, produced by the weak piston-driven explosion alone, into transients with plateau luminosities of $\rm 10^{42-43}\ ergs\ s^{-1}$ and durations of 90-130 days. The final kinetic energies of the ejecta are in the expected range ($\sim0.8-2\times10^{51}$ ergs) and, since the magnetar deposition does not produce any extra $^{56}$Ni, the magnetar energy input with a short spin-down time results in the radioactively powered light curve tails seen in ordinary Type IIP supernovae, as long as enough $^{56}$Ni is synthesized through the initial weak explosion. The above-mentioned preferred range for the magnetar spin period and magnetic field strength depends in part on some of the employed assumptions. For example, if the initial prompt energy assumed is higher, then lower initial spin rates would yield typical Type IIP energies at late times.
When we repeat the calculation using a RSG progenitor with an initial mass of 12 [$\mathrm{M}_\odot$]{}, and with a prompt explosion energy of $8\times10^{49}$ ergs, the corresponding range of $\rm P_i$ that produces typical Type IIP light curves shifts to $\sim4-7$ ms, because the prompt non-magnetar component of the energy budget for the explosion is larger than in the 15 [$\mathrm{M}_\odot$]{} progenitor shown in Fig. \[fig:lowE\]. But as long as this prompt explosion energy is much smaller than the rotational kinetic energy of the magnetar, the same generic results will hold: an initial period of a few ms ($\rm P_i\sim (20/(1-E_{prompt}/10^{51}))^{1/2}$ ms) and a strong field strength of $\rm B>10^{15}$ G for a relatively fast initial spin-down time. The dependence on the progenitor structure is rather small since for solar metallicity the ejecta masses and the envelope masses are fairly similar for progenitors with initial masses between $10-25$ [$\mathrm{M}_\odot$]{} [e.g., @Suk14], which are responsible for the majority of Type IIP supernovae [@Sma15]. Another possibility is that many of the neutron stars born in Type IIP supernovae start with high, but short-lived, magnetic fields generated by a dynamo mechanism as the neutron star cools and convects (e.g., @Dun92). To probe this scenario, we have re-calculated all of the models, depositing energy only during the initial 50 seconds (Fig. \[fig:tdep50\]). The amount of energy deposited during the initial time $\rm t$ is approximately $\rm \Delta E=E_{m,i}t(t+t_m)^{-1}$. When the spin-down timescale is much larger than the deposition time, $\rm \Delta E$ approaches $\rm L_mt$. Therefore for $\rm B < 10^{15}$ G and $\rm P_i > 4$ ms the energy deposition for only 50 seconds does not result in any noticeable effect on the light curve. Accordingly, the range of B and $\rm P_i$ that results in ordinary Type IIP plateau characteristics shifts to the $\sim1$ ms and $\sim10^{16}$ G range.
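The truncated-deposition estimate $\rm \Delta E=E_{m,i}t(t+t_m)^{-1}$ can be evaluated directly; the sketch below, using the dipole expressions from the introduction, illustrates why the 50 second scenario needs roughly millisecond spins and $\sim10^{16}$ G fields.

```python
# Energy injected during only the first t_dep = 50 s of spin-down,
# Delta E = E_m * t_dep / (t_dep + t_m), using the dipole formulas
# quoted in the introduction (cgs units).
t_dep = 50.0  # s

def deposited(B15, P_ms):
    E_m = 2e52 * P_ms**-2                    # erg
    t_m = E_m / (1e49 * B15**2 * P_ms**-4)   # s
    return E_m * t_dep / (t_dep + t_m)

# B = 1e15 G, P_i = 4 ms: t_m >> 50 s, so almost nothing is injected
print(f"{deposited(1.0, 4.0):.1e} erg")   # ~2e48 erg, negligible
# B = 1e16 G, P_i = 1 ms: t_m ~ 20 s, most of E_m comes out early
print(f"{deposited(10.0, 1.0):.1e} erg")  # ~1.4e52 erg
```

The weak-field, slow-spin case injects orders of magnitude less than the $\sim10^{51}$ erg needed to transform the light curve, while the millisecond, $10^{16}$ G case releases most of its reservoir within the deposition window.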
In general, the magnetar spin-down luminosity during the initial 50 seconds is highly uncertain, since the neutron star is convective and its wind rapidly evolves [@Tho04; @Met11]. Therefore the magnetic field strengths in Fig. \[fig:tdep50\] should be thought of as a proxy for the time-averaged magnetar luminosity during the initial 50 seconds. Future models should explore how pulsar-driven shells might synthesize $^{56}$Ni and power the shockwave as it moves through the dense inner core of the ejecta. ![Same as Fig. \[fig:tdepinf\], but here the magnetar deposits energy only during the initial 50 seconds. The range of magnetar parameters that transforms the weak explosion into a typical Type IIP light curve is now $\rm B>10^{15}$ G and $\rm P_i <3$ ms. \[fig:tdep50\]](grid_Lp_50.pdf "fig:"){width=".48\textwidth"} ![Same as Fig. \[fig:tdepinf\], but here the magnetar deposits energy only during the initial 50 seconds. The range of magnetar parameters that transforms the weak explosion into a typical Type IIP light curve is now $\rm B>10^{15}$ G and $\rm P_i <3$ ms. \[fig:tdep50\]](grid_tp_50.pdf "fig:"){width=".48\textwidth"} Although the evidence is growing to connect the nature of the most energetic explosions (e.g., long-duration gamma-ray bursts and SLSN-I) to a rotational energy source, imagining a magnetar being responsible for some ordinary Type IIP explosions is not too far-fetched. So far we have detected only a few dozen magnetars in the Galaxy, and some of them are known to reside in Type II supernova remnants (based on their large ejecta mass) that seem to indicate a canonical explosion energy of $\sim10^{51}$ ergs [@Vin06]. One can also roughly approximate the vacuum spin-down periods for the 5 magnetars listed in the McGill Online Magnetar Catalogue[^1] [@Ola14] that have a clear association with a supernova remnant.
Using the measured surface dipole field strengths, assuming a braking index of $\rm n=3$, and $\rm P_i=4$ ms, would bring the vacuum spin-down periods at the estimated ages (computed as $\rm P_i(1+t_{age}/t_m)^{1/2}$) to values of order the measured periods. This of course ignores known complications of the evolving magnetic field strength, both potentially on short time scales due to cooling, convection, and dynamo effects, and on long time scales due to non-MHD dissipation as in @Vig13 [@Gul15]. Nevertheless, this is suggestive that magnetars might be somehow connected to some ordinary Type IIP supernovae. Given the presupernova conditions for a typical solar metallicity RSG progenitor from stellar evolution calculations with dynamo processes, it is not straightforward to obtain proto-neutron star magnetic fields greater than $10^{15}$ G and initial spins of just a few ms [@Heg05]. Magnetic torques during the evolution result in a slower spinning iron core, and without invoking magneto-rotational instabilities, or some other magnetic amplification process, flux compression alone is not sufficient for a strong enough field. However, we note that the general problem of understanding the effective angular momentum transport in stellar interiors remains very much an open question, at least until the existing theories are tested against asteroseismological measurements, which are the only way of probing the internal rotation profiles of evolved stars. The existing seismological data, though only available for much lower mass stars at the moment, already challenge our current understanding of angular momentum transport in red giants [e.g., @Deh15; @Tay13].
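The vacuum spin-down evolution used above can be sketched as follows. This is a rough illustration assuming fiducial values $I = 10^{45}$ g cm$^2$ and $R = 10^6$ cm (the text does not specify them), with the $n=3$ vacuum dipole spin-down time $t_m = 3c^3 I / (B^2 R^6 \Omega_i^2)$:

```python
import math

# Assumed fiducial neutron-star values (not specified in the text).
C, I_NS, R_NS = 2.998e10, 1e45, 1e6   # cgs: cm/s, g cm^2, cm

def spin_period(t, P_i, B):
    """Spin period [s] at age t [s], for initial period P_i [s] and
    dipole field B [G], following P(t) = P_i (1 + t/t_m)^{1/2}."""
    omega_i = 2.0 * math.pi / P_i
    t_m = 3.0 * C**3 * I_NS / (B**2 * R_NS**6 * omega_i**2)  # spin-down time
    return P_i * math.sqrt(1.0 + t / t_m)

# e.g. a hypothetical B = 5e14 G magnetar born at P_i = 4 ms, age ~10 kyr,
# spins down to a period of a few seconds, in the observed magnetar range:
P_now = spin_period(1e4 * 3.156e7, 4e-3, 5e14)
```

The $(1+t_{age}/t_m)^{1/2}$ dependence means the present-day period is nearly independent of $P_i$ once $t_{age} \gg t_m$, which is why only rough consistency with the measured periods can be claimed.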
In some ways it is not surprising that magnetar models can fit nearly all kinds of explosion light curves, including some regular Type IIP, when a large fraction of the explosion energy reservoir is replaced with a simple model that allows us to conveniently control its budget (through the initial spin) and injection rate (through the constant dipole magnetic field strength). This could of course be fine-tuning, or interesting evidence that the supernova mechanism is connected to rotation and magnetic fields. Acknowledgments {#acknowledgments .unnumbered} =============== We thank Stan Woosley, Chris Kochanek, Laura Lopez, Katie Auchetl, Ondrej Pejcha and John Beacom for helpful discussions and comments. We also thank Alex Heger for his contributions in developing the `KEPLER` code. TS is partly supported by NSF grant PHY-1404311 to John Beacom. [99]{} Akiyama, S., Wheeler, J. C., Meier, D. L., & Lichtenstadt, I. 2003, , 584, 954 Arcavi, I., Gal-Yam, A., Cenko, S. B., et al. 2012, , 756, L30 Arnett, W. D. 1966, Canadian Journal of Physics, 44, 2553 Benetti, S., Nicholl, M., Cappellaro, E., et al. 2014, , 441, 289 Bethe, H. A., & Wilson, J. R. 1985, , 295, 14 Bodenheimer, P., & Ostriker, J. P. 1974, , 191, 465 Bucciantini, N., Quataert, E., Metzger, B. D., et al. 2009, , 396, 2038 Burrows, A., Hayes, J., & Fryxell, B. A. 1995, , 450, 830 Colgate, S. A., & White, R. H. 1966, , 143, 626 Deheuvels, S., Ballot, J., Beck, P. G., et al. 2015, , 580, A96 Dong, S., Shappee, B. J., Prieto, J. L., et al. 2016, Science, 351, 257 Duncan, R. C., Shapiro, S. L., & Wasserman, I. 1986, , 309, 141 Duncan, R. C., & Thompson, C. 1992, , 392, L9 Filippenko, A. V. 1997, , 35, 309 Gal-Yam, A. 2012, Science, 337, 927 Gull[ó]{}n, M., Pons, J. A., Miralles, J. A., et al. 2015, , 454, 615 Heger, A., Woosley, S. E., & Spruit, H. C. 2005, , 626, 350 Inserra, C., Smartt, S. J., Jerkstrand, A., et al. 2013, , 770, 128 Janka, H.-T., & Mueller, E. 1996, , 306, 167 Janka, H.-T., Melson, T., & Summa, A.
2016, arXiv:1602.05576 Kasen, D., & Woosley, S. E. 2009, , 703, 2205 Kasen, D., & Bildsten, L. 2010, , 717, 245 Maeda, K., Tanaka, M., Nomoto, K., et al. 2007, , 666, 1069 Metzger, B. D., Thompson, T. A., & Quataert, E. 2007, , 659, 561 Metzger, B. D., Giannios, D., Thompson, T. A., Bucciantini, N., & Quataert, E. 2011, , 413, 2031 Metzger, B. D., Margalit, B., Kasen, D., & Quataert, E. 2015, , 454, 3311 Nicholl, M., Smartt, S. J., Jerkstrand, A., et al. 2013, , 502, 346 Nicholl, M., Smartt, S. J., Jerkstrand, A., et al. 2015, , 452, 3869 Olausen, S. A., & Kaspi, V. M. 2014, , 212, 6 Ostriker, J. P., & Gunn, J. E. 1971, , 164, L95 Pejcha, O., & Prieto, J. L. 2015, , 806, 225 Pons, J. A., Reddy, S., Prakash, M., Lattimer, J. M., & Miralles, J. A. 1999, , 513, 780 Popov, D. V. 1993, , 414, 712 Quimby, R. M., Kulkarni, S. R., Kasliwal, M. M., et al. 2011, , 474, 487 Roberts, L. F., Shen, G., Cirigliano, V., et al. 2012, Physical Review Letters, 108, 061103 Smartt, S. J. 2015, PASA, 32, e016 Sukhbold, T., & Woosley, S. E. 2014, , 783, 10 Sukhbold, T., Ertl, T., Woosley, S. E., Brown, J. M., & Janka, H.-T. 2016, , 821, 38 Suwa, Y., & Tominaga, N. 2015, , 451, 282 Taddia, F., Sollerman, J., Fremling, C., et al. 2016, , 588, A5 Tayar, J., & Pinsonneault, M. H. 2013, , 775, L1 Thompson, T. A., Chang, P., & Quataert, E. 2004, , 611, 380 Usov, V. V. 1992, , 357, 472 Uzdensky, D. A., & MacFadyen, A. I. 2006, , 647, 1192 Vigan[ò]{}, D., Rea, N., Pons, J. A., et al. 2013, , 434, 123 Vink, J., & Kuiper, L. 2006, , 370, L14 Weaver, T. A., Zimmerman, G. B., & Woosley, S. E. 1978, , 225,1021 Woosley, S. E. 2010, , 719, L204 Woosley, S. E., & Weaver, T. A. 1995, , 101, 181 [^1]: http://www.physics.mcgill.ca/$\sim$pulsar/magnetar/main.html
--- abstract: 'Gamma-ray burst afterglow flares and rebrightenings of the optical and X-ray light curve have been attributed to both late time inner engine activity and density changes in the medium surrounding the burster. To test the latter, we study the encounter between the relativistic blast wave from a gamma-ray burster and a stellar wind termination shock. The blast wave is simulated using a high performance adaptive mesh relativistic hydrodynamics code, <span style="font-variant:small-caps;">amrvac</span>, and the synchrotron emission is analyzed in detail with a separate radiation code. We find no bump in the resulting light curve, not even for very high density jumps. Furthermore, by analyzing the contributions from the different shock wave regions we are able to establish that it is essential to resolve the blast wave structure in order to make qualitatively correct predictions on the observed output and that the contribution from the reverse shock region will not stand out, even when the magnetic field is increased in this region by repeated shocks. This study resolves a controversy in recent literature.' author: - | H.J. van Eerten$^{1}$[^1], Z. Meliani$^2$, R.A.M.J. Wijers$^1$, R. Keppens$^{2,3,4}$\ $^{1}$Astronomical Institute ’Anton Pannekoek’, PO box 94248, 1090 SJ Amsterdam, the Netherlands\ $^{2}$Centre for Plasma Astrophysics, K.U. Leuven, Celestijnenlaan 200B, 3001 Leuven, Belgium\ $^{3}$FOM-Institute for Plasma Physics Rijnhuizen, Nieuwegein, The Netherlands\ $^{4}$Astronomical Institute, Utrecht University, The Netherlands date: 'Accepted ... Received ...; in original form ...' title: 'No visible optical variability from a relativistic blast wave encountering a wind-termination shock' --- Introduction ============ Gamma-ray Burst (GRB) afterglows are produced when a relativistic blast wave interacts with the circumstellar medium around the burster and emits nonthermal radiation. 
(For reviews, see @Piran2005 [@Meszaros2006].) The general shape of the resulting spectra and light curves can be described by combining the self-similar Blandford-McKee (BM) model [@Blandford1976] for a relativistic explosion with synchrotron radiation emission from a relativistic electron population accelerated into a power law distribution at the shock front. This model describes a smooth synchrotron light curve, with the slope of the curve a function of the power law slope of the accelerated electrons and of the density structure of the surrounding medium [@Meszaros1997; @Wijers1997]. This picture, however, is far from complete, and with the increasing quality of the available data (e.g. from *Swift*) more deviations from the standard smoothly decaying (optical and X-ray) light curve are being found, for example in the shape of flares [@Burrows2005; @Nousek2006; @OBrien2006] in the X-ray afterglows and early optical variability [@Stanek2006]. Along with prolonged inner engine activity, changes in the surrounding density structure have often been suggested as a cause of this variability [@Wang2000; @Lazzati2002; @Nakar2003]. The details of the shape of the surrounding medium have therefore been the subject of various studies (e.g. @vanMarle2006), as well as the hydrodynamics of a relativistic blast wave interacting with a complex density environment [@Meliani2007b]. Two recent studies combine a description for the structure of the blast wave after encountering a sudden change in density, like the wind termination shock of a Wolf-Rayet star, with an analysis of the synchrotron radiation emitted as a result of this encounter (@Nakar2007, hereafter NG, and @Peer2006, hereafter PW), but arrive at different conclusions. A short transitory feature in the observed light curves (at various wavelengths) is predicted by PW, whereas NG conclude that any sudden density change of arbitrary size will result in a smooth transition.
The purpose of this paper is to resolve this discrepancy in the literature by performing, for the first time, a detailed analysis of the radiation produced by a blast wave simulated with a high performance adaptive-mesh refinement code. For this analysis, we use the radiation code described in @vanEerten2009 and the <span style="font-variant:small-caps;">amrvac</span> relativistic hydrodynamics (RHD) code (@Keppens2003 [@Meliani2007]). We take special care to perform our simulation at a sufficiently high spatial and temporal resolution, such that a transitory feature, if any, is properly resolved. In section \[initial\_section\] we first describe the setup and technical details of our simulation run. In section \[results\_section\] we discuss the resulting optical light curve and the fluid profile during the encounter. Our numerical results confirm those of NG. However, by following the same approximations for the shock wave dynamics as PW, who approximate the different shocked regions by homogeneous slabs, we find that we are able to reproduce their result of a rebrightening of the afterglow curve. In section \[slabs\_section\] we argue how this illustrates the importance of resolving the downstream density structure. After that we separately discuss in section \[RS\_section\] the contribution of the reverse shock that is triggered when the blast wave hits a density discontinuity, as it is the main transitory phenomenon during the encounter. This contribution is overestimated by PW and assumed similar in behavior to that of the forward shock in NG. Since neither NG nor PW invokes electron cooling in their arguments, and optical flashes, if any, occur at observer frequencies that are orders of magnitude below the cooling break, we do not enable electron cooling in our radiation code. We summarize our results in section \[summary\_section\].
Initial conditions {#initial_section} ================== We will study the case of a massive ($M \gtrsim 25 M_\odot$), low metallicity ($Z \backsim 0.01 Z_\odot$) progenitor star. During its Wolf-Rayet phase (lasting $\backsim 10^6$ years) a stellar wind is produced, which determines the shape of the circumstellar medium. The typical mass-loss rate is approximately $\dot{M} \backsim 10^{-6} M_\odot \textrm{ yr}^{-1}$ and the typical wind velocity $v_w \backsim 1000 \textrm{ km s}^{-1}$. Because the stellar wind flow is supersonic, a shock is produced. A simple schematic description of the circumstellar medium (where we ignore complications such as the influence of photo-ionization) therefore consists of (starting near the star and moving outwards): a free-flowing stellar wind region, a density jump separating the stellar wind region from a homogenized region influenced by the reverse shock, and a contact discontinuity followed by a region shocked by the forward shock. The forward shock front then separates the shocked medium from the unshocked interstellar medium (ISM). Following the GRB explosion, a relativistic blast wave is sent into this environment. For the typical progenitor values above, an ISM number density $n_{ISM} \backsim 10^3 \textrm{ cm}^{-3}$ and a GRB explosion energy of $E = 10^{53}$ erg, this blast wave will only encounter the first discontinuity during its relativistic stage. The discontinuity will be positioned at $R_0 = 1.6 \cdot 10^{18}$ cm and corresponds to a jump in density of a factor 4. Before the jump the radial density profile is given by $n(r) = 3 \cdot ( r / 1\cdot10^{17} )^{-2} \textrm{ cm}^{-3}$, and after the jump by the constant $n(r) = 4 \times 3 \cdot (R_0 / 1 \cdot 10^{17} )^{-2} \textrm{ cm}^{-3}$. These exact values are chosen to conform to PW. We have run a number of simulations of relativistic blast waves hitting the wind termination shock at $R_0$.
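As a concrete check, the circumburst density profile described above can be sketched as follows (values taken directly from the text; the function name is ours):

```python
R0 = 1.6e18    # wind termination shock radius [cm]

def n_circumburst(r, jump=4.0):
    """Circumburst number density [cm^-3] at radius r [cm]."""
    if r < R0:
        return 3.0 * (r / 1e17) ** -2          # free-flowing wind, n ~ r^-2
    return jump * 3.0 * (R0 / 1e17) ** -2      # constant medium beyond the jump

# The density just outside R0 is a factor `jump` higher than just inside:
print(round(n_circumburst(R0) / n_circumburst(R0 * 0.999999), 3))   # -> 4.0
```

Setting `jump=100.0` reproduces the stronger discontinuity of the second simulation run.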
The initial fluid profile is generated from the impulsive energy injection BM solution with the parameters described above for the explosion energy and circumburst density, keeping the adiabatic index fixed at 4/3. The starting time is taken when the shock Lorentz factor is 23. The blast wave will hit the discontinuity when its Lorentz factor is $\backsim 22.27$, at an explosion lab frame time $t_{enc} = 5.34 \cdot 10^7$ seconds (with $t = 0$ set to the start of the explosion). This time corresponds to $\backsim 0.3 $ days for radiation coming from the shock front in observer time (which is taken to be zero at the start of the explosion). To completely simulate the encounter, we will follow the evolution of the blast wave from $5 \cdot 10^6$ seconds to $6.4 \cdot 10^7$ seconds and will store enough output to obtain a temporal resolution (in lab frame simulation time) $d t$ of $1.56 \cdot 10^3$ seconds. For the outer boundary of the computational grid we take $6 \cdot 10^{18}$ cm, enough to completely capture the shock profile during the encounter even if it were to continue at the speed of light. In order to resolve the shock wave, even at its smallest width at Lorentz factor 23, we take 10 base level cells and allow the adaptive mesh refinement routine to locally double the resolution (where needed) up to 17 times. This implies an effective resolution $dr \backsim 6.3 \cdot 10^{11}$ cm and effectively 1,310,720 grid cells. Three simulations were performed using the initial conditions from PW (along with some at lower resolutions, to check for convergence): a test run with stellar wind profile only (and no discontinuity), one with a density jump of 4 and one with a far stronger density jump of 100. 
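The quoted $\backsim 0.3$ day observer time can be recovered from the standard relation between lab-frame time and observer time for emission from a BM shock front. A hedged sketch (for a wind medium, $\rho \propto r^{-2}$, the BM shock Lorentz factor falls as $\Gamma_{sh}^2 \propto t^{-m}$ with $m = 1$, giving $t_{obs} \approx t/(2(m+1)\Gamma_{sh}^2)$; this relation is our reconstruction and is not spelled out in the text):

```python
def t_obs_shock_front(t_lab, gamma_sh, m=1):
    """Observer time [s] of emission from the BM shock front at lab-frame
    time t_lab [s], for shock Lorentz factor gamma_sh; m = 1 for a wind."""
    return t_lab / (2.0 * (m + 1) * gamma_sh ** 2)

# Encounter at t = 5.34e7 s with shock Lorentz factor ~22.27:
days = t_obs_shock_front(5.34e7, 22.27) / 86400.0
print(f"{days:.2f} days")   # -> 0.31 days, consistent with the quoted ~0.3 days
```

The same relation explains why a lab-frame window of $5 \cdot 10^6$ to $6.4 \cdot 10^7$ seconds suffices to cover the light curve around the encounter.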
Although density jumps much larger than 4 may be feasible (see @vanMarle2006, for an example scenario where the progenitor star has a strong proper motion; the relativistic blast wave will then be emitted into a stellar environment that takes the shape of a bow shock), this is not the main motivation for the factor 100 simulation run. The primary focus is on establishing whether the absence of an observable feature in the light curve persists for general values of the density jump. To study relativistic as well as ultra-relativistic blast waves, in addition to the Lorentz factor 23 scenario we have also performed two simulations (one with jump and a test run without) where we moved the density jump outward to $3\cdot10^{19}$ cm, while keeping the other parameters equal. In this scenario the blast wave encounters the jump when it has a shock Lorentz factor $\backsim 5$. The simulation output is then analyzed using the radiation code for an observer at a distance of $1 \cdot 10^{28}$ cm. The microphysics of the shock acceleration is captured by a number of ignorance parameters. The fraction of thermal energy residing in the small scale downstream magnetic field is $\epsilon_B = 0.01$, the fraction of thermal energy in the accelerated particle distribution $\epsilon_E = 0.1$, the number of power law accelerated electrons as a fraction of the electron number density $\xi_N = 1$ and the slope of this power law $p = 2.5$. Again these values are chosen to match PW. Light curve and shock profile {#results_section} ============================= The discussion below refers to the shock Lorentz factor 23 scenario. The Lorentz factor 5 simulations lead to qualitatively similar light curves and will therefore not be discussed in further detail; the transition there takes extremely long due to the longer dominance of earlier emission. These simulations confirm that the results presented hold for relativistic blast waves as well, not just for ultra-relativistic blast waves.
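The $\epsilon_B$ prescription introduced with the ignorance parameters above ties the small-scale magnetic field to the local thermal energy density. A minimal sketch of that conversion:

```python
import math

def downstream_B(e_thermal, eps_B=0.01):
    """Small-scale magnetic field strength [G] for a thermal energy density
    e_thermal [erg cm^-3], using B^2 / (8 pi) = eps_B * e_thermal."""
    return math.sqrt(8.0 * math.pi * eps_B * e_thermal)
```

Because the field follows the *local* thermal energy, regions shocked twice receive no extra amplification in this prescription; the slightly stronger advected field in the reverse shock region used by PW is exactly what this simplification omits.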
Directly after hitting the discontinuity, the blast wave splits into three regions. The innermost region, up to the reverse shock (RS) front remains unaware of the collision. Beyond the RS the plasma gets homogenized up to the contact discontinuity (CD). The region following the contact discontinuity, up to the forward shock (FS) is not homogeneous but will gradually evolve into a BM profile again for a modified value of the circumburst density structure. A snapshot of the shock structure during the encounter is shown in figure \[snapshot\_figure\]. We show comoving density (as opposed to the lab frame density) because the differences between the different regions then stand out more clearly. ![A snapshot of the comoving density profile at 17 refinement levels of the fluid at emission time $t_e = 5.48578 \cdot 10^7$ s, for the factor 4 increase in density. The different regions are clearly visible. From left to right we have: up to the steep rise the region not yet influenced by the encounter, the plateau resulting from the passage of the reverse shock, and starting at the gradual rise the region of the forward shock. The front part of the forward shock region is again homogeneous in density, showing the difference between the idealized BM solution and actual simulation results. The flat part of the forward shock region (smallest, rightmost region) is resolved by $\backsim 100$ cells.[]{data-label="snapshot_figure"}](profilesfinal.eps){width="49.00000%"} The optical light curves calculated from the simulations are observed at $\nu = 5 \cdot 10^{14}$ Hz, which lies between the synchrotron peak frequency $\nu_m$ and the cooling break frequency $\nu_c$ (it may be helpful to emphasize that here, contrary to shock interaction during the prompt emission phase, $\nu_m$ is found at a similar frequency for both the forward and reverse shock contributions). Because the observer frequency lies well below the cooling break, we ignore the effect of electron cooling. 
The light curves for the factor 4 and factor 100 density jumps are found in figure \[termination\_shock\_figure\]. For complete coverage at the observed times and clarity of presentation, analytically calculated emission from a BM profile with Lorentz factors $>23$ (or $>5$) has been added to that calculated from the simulations. From the light curves we draw the following conclusion: *an encounter between the relativistic blast wave and a wind termination shock does not lead to a bump in the light curve, but instead to a smooth change in slope.* The new slope eventually matches that of a BM solution for the density structure found beyond the discontinuity. ![The figure shows the resulting optical light curves at $5 \cdot 10^{14}$ Hz, for the cases of a continuous stellar wind environment, a jump of a factor 4 followed by a homogeneous environment and a jump of a factor 100. 50 data points have been devoted to 0.3 - 1 day and 50 data points to the following 19 days. A smooth transition towards the power law behaviour corresponding to a BM shock wave expanding into a homogeneous environment is visible, even for the extreme change in density.[]{data-label="termination_shock_figure"}](bumps_complete.eps){width="49.00000%"} Resolved blast wave versus homogeneous slab {#slabs_section} =========================================== ![Resulting light curves at $5 \cdot 10^{14}$ Hz when our radiation code is applied to the homogeneous slabs approximation of PW, instead of a hydrodynamical simulation. The bottom curve shows the result when the additional increase of the magnetic field in the reverse shock region is omitted.
Contrary to the light curves shown in figure \[termination\_shock\_figure\], in *both* cases a clear rise in intensity with respect to the previous level is seen over the course of a few hours, as predicted by PW for homogeneous slabs.[]{data-label="Peer_lightcurves_figure"}](Peer_lightcurves.eps){width="49.00000%"} The optical light curves presented in the above section differ distinctly from those presented by PW in that they show no bumps. This difference in results has to be caused by one or more differences in our assumptions, which are: - PW include both electron cooling and synchrotron self-absorption, while in this paper we have included neither. - We take the magnetic field to be a fixed fraction $\epsilon_B$ of the local thermal energy in all parts of the fluid, even those shocked twice, whereas PW have a magnetic field in the reverse shock region that is slightly higher. This is because they take into account that the dominant magnetic field in the reverse shock region is actually the field advected with the flow from the region shocked once. The newly created field is approximately a factor 1.2 smaller. - We resolve the downstream fluid profile, while PW approximate the different regions behind the shock front by homogeneous slabs of varying density, thermal energy and Lorentz factor. Also, they freeze the fluid Lorentz factors during the encounter. Since the optical light curve corresponds to an observer frequency sufficiently above the self-absorption critical frequency and sufficiently below the cooling break frequency, neither cooling nor absorption should have any visible effect on the shape of the curve. The fact that cooling is not required for the bump found by PW is also immediately obvious from figure \[Peer\_lightcurves\_figure\], where we have applied our radiation code directly to the homogeneous slabs approximation of PW, with electron cooling disabled. 
The light curves thus generated *do* show a bump feature after the onset of the encounter (this also provides a check on the internal consistency of both models). To explicitly check the effect of the stronger magnetic field in the reverse shock region we have generated two light curves: one where all the fluid quantities are identical to those of PW and one where we ignored the stronger field in the reverse shock region but kept the field at a fixed fraction of the thermal energy (which is the same as that in the forward region in the homogeneous slab approximations, due to pressure balance across the contact discontinuity). As can be seen from the figure, the temporary rise occurs in both cases, with only a marginal difference between the two curves. This brings us to the third difference listed. We conclude that *to determine the visible response of a blast wave to density perturbations, it is crucial to take the radial structure of the blast wave into account*. This (along with establishing the lack of a transitory feature itself) is the main conclusion from this paper and forms an important justification for the kind of detailed approach that we have employed, where the dynamics of the blast wave are simulated using a high performance RHD code, together with a radiation code that accurately probes all local contributions to the synchrotron spectrum. It is also important to emphasize that the bump found by PW is *not* the result of inaccurately modeling the different arrival times for photons arriving from different angles relative to the line of sight, as has been stated in NG. This can also be seen from figure \[Peer\_lightcurves\_figure\] which confirms that, for homogeneous slabs, the light curves published by PW are calculated correctly. The importance of the downstream shock structure can be understood as follows.
By taking a homogeneous slab one not only locally overestimates the downstream density, but also the Lorentz factor and thermal energy (and hence the magnetic field). Also, the width of the homogeneous slab is determined by comparison with the downstream density structure *or* the energy density structure *or* the velocity structure, and matching the width to one of these comes at the expense of a lack of similarity to the others. (And finally, keeping the Lorentz factors fixed during the encounter also contributes to the overestimation of the flux emitted during the encounter.) Essentially, all this indicates a lack of resolution. The homogeneous slab implies a spatial resolution[^2] $\Delta r \backsim R / \Gamma^2$ (with $R$ the blast wave radius and $\Gamma$ the blast wave Lorentz factor), and is therefore in principle only applicable to describe behavior on time scales $\Delta t > \Delta r / c$. This is true in general, not just for simulations, and in our case yields $\Delta t \backsim 1.5$ days at the time of the onset of the encounter. The reason that the homogeneous slab *does* work to describe the general shape of the light curve from the BM blast wave, as was done by @Waxman1997 among others, is that in these cases the slab is used to describe behavior on time scales $\Delta t \gg \Delta r / c$ (actually $\Delta t$ arbitrarily large, for understanding of asymptotic behavior). But one should for example not expect the homogeneous slab approximation to get the absolute scale right, and indeed it is off by a factor of a few (justifying more detailed calculations like @Granot2002 [@vanEerten2009]). ![Received flux at observer frequency $5 \cdot 10^{14}$ Hz, calculated for a single emission time $t_e = 5.48578 \cdot 10^7$ s (the same time as in fig. \[snapshot\_figure\]). Curves are shown both for the homogeneous slab approximation and for the numerical simulation fluid profile.
In each case the contribution from the different regions has been marked: the top curve shows the total flux, the curve below the flux when the contribution from the forward shock region is omitted and for the lowest curve the reverse shock region has been omitted as well. The flux level for the homogeneous slab approximation is much higher than that from the simulation, with (for this particular emission time) the contribution from the reverse shock dominating the total output. At the same emission time, the reverse shock region contribution for the simulation is still significant, but no longer dominant. For the simulation snapshot we have estimated the position of the contact discontinuity, and therefore the edge of the reverse shock region, at the right edge of the plateau, before the onset of the rise in density (see fig. \[snapshot\_figure\]).[]{data-label="single_flash_figure"}](singleflash_both.eps){width="49.00000%"} The reverse shock contribution {#RS_section} ============================== In the previous sections we have established that the reverse shock caused by the encounter with the density perturbation does not cause a rise in the observed light curve. Since this reverse shock has been invoked to explain rebrightening (e.g. by @Ramirez2005), it is of interest to look at its contribution in some more detail. In fig. \[single\_flash\_figure\] this contribution (in the optical) is compared directly to the total flux emitted from the shock profile, both for the simulation and for the PW approximation. The important difference is the relative overestimation of the reverse shock region in the PW approximation. The relative contributions for the different regions within either the homogeneous slab or the resolved blast wave simulation of course depend on their relative sizes and therefore on the emission time.
Another feature of note is that the homogeneous slabs approximation results in an emission profile that is sharply peaked, whereas the more accurate profile displays a flatter tail and a smoother transition between rise and decay. The shock structure is also calculated and implemented in NG, starting from the shock jump conditions and assuming homogeneous slabs for the forward and reverse shock region, yet they do not find a temporary rebrightening. This is a consequence of the fact that they set the reverse shock contribution at a fixed fraction of the forward shock contribution, while allowing this forward shock contribution to evolve according to the appropriate BM profile following the density change, as opposed to freezing the shock Lorentz factors during the encounter. That the forward shock determines the shape of the light curve is then imposed as a feature of their model (i.e. in their equation 20) and yields an adequate heuristic description of the light curve found as a result of their simulations. The difference between the simulations by NG and ours is merely a technical one: instead of an Eulerian code (that can also be used for simulations in more than one dimension, which we will perform in future work), they use a Lagrangian code for the dynamics. The reconstruction of the light curves from the code is equivalent. They also, like us, do not take a slight increase in the magnetic field in the reverse shock region into account. NG provide no information on the spatial and temporal resolution of their simulations. Summary and conclusions {#summary_section} ======================= We have performed high resolution hydrodynamical simulations of a relativistic blast wave encountering a wind termination shock and have calculated the resulting light curve using the radiation code described in @vanEerten2009. As a result we have found *no* variability in the optical, not even for very large density changes, for blast waves in the self-similar phase.
This renders it very unlikely that observed optical variability in GRB afterglow light curves can be explained by density perturbations in the external medium surrounding the burster, as suggested by e.g. @Wang2000 [@Lazzati2002; @Nakar2003], PW. This research, however, has been limited to spherically symmetric density perturbations. A second caveat is the assumption of self-similarity for the blast wave approaching the wind termination shock. As demonstrated by @Meliani2007b, for a termination shock close to the star ($R \backsim 10^{16}$ cm in their simulation, for a short Wolf-Rayet phase), the blast wave structure may still somewhat retain the initial structure of the ejecta (in their simulation, a uniform static and hot shell, i.e. fireball), which may have observable consequences. The latter is, however, not likely, given the already reasonably strong resemblance between their simulation output during the encounter and ours, where the same shock regions can be identified in the fluid profile with similar values for the physical quantities of interest. Also, if the pre-encounter shock wave is sufficiently different from the self-similar solution this will also have consequences for the global shape and temporal evolution of the observable light curve, and the slope will become markedly different from the one predicted from the BM solution. Of the two main explanations for (sometimes quite strong) late optical variability, refreshed or multiple shocks appear to be a far more realistic option than circumburst medium interactions. We are currently performing simulations on multiple interacting shocks to test this alternative hypothesis. We have compared the results of our simulation to the literature and, from a comparison to the approximations and assumptions used by PW and NG especially, we conclude that the fact that we resolve the radial blast wave structure explains the discrepancy between our results and those of PW.
This, in turn, forms an important justification for the kind of detailed approach that we have employed, where the dynamics of the blast wave are simulated using a high performance RHD code, together with a radiation code that accurately probes all local contributions to the synchrotron spectrum. We note that, contrary to what is stated by NG, the calculation of angular smearing of the signal in PW (which in turn was based on @Waxman1997) is correct. Acknowledgements {#acknowledgements .unnumbered} ================ This research was supported by NWO Vici grant 639.043.302 (RAMJW) and NOVA project 10.3.2.02 (HJvE). ZM performed computations on the K.U.Leuven High Performance computing cluster VIC, and acknowledges financial support from the FWO, grant G.0277.08, and from the GOA/2009/009. We would like to thank Asaf Pe’er for feedback and discussion. [99]{} Blandford, R.D., McKee, C.F. 1976, Phys. Fluids 19, 8 O’Brien, P.T. et al. 2006, ApJ 647, 1213 Burrows, D.N. et al. 2005, Science, 309, 1833 Downes, T.P., Duffy, P., Komissarov, S.S. 2002, MNRAS, 332, 144 van Eerten, H.J., Wijers, R.A.M.J. 2009, MNRAS, 394, 2164 Granot, J., Sari, R. 2002, ApJ, 568, 820 Keppens, R., Nool, M., Tóth, G., Goedbloed, J.P. 2003, Comp Phys Commun 153, 317 Lazzati, D., Rossi, E., Covino, S., Ghisellini, G. & Malesani, D. 2002, A & A, 396, L5 van Marle, A.J., Langer, N., Achterberg, A., Garcia-Segura, G. 2006, A & A, 460, 105 Meliani, Z., Keppens, R., Casse, F., Giannios, D. 2007, MNRAS, 376(3), 1189 Meliani, Z., Keppens, R. 2007, A & A 467, L41 Mészáros, P. and Rees, M. 1997, ApJ, 476, 232 Nakar, E., Piran, T. & Granot, J. 2003, New Astron., 8, 495 Nakar, E., Granot, J. 2007, MNRAS, 380, 1744 Nousek, J.A., Kouveliotou, C., Grupe, D., Page, K., Granot, J., Ramirez-Ruiz, E. et al. 2006, ApJ, 642, 389 Pe’er, A. and Wijers, R.A.M.J. 2006, ApJ, 643, 1036 Piran, T. 2005, Rev. Mod. Phys. 76, 1143 Ramirez-Ruiz, E., García-Segura, G., Salmonson, J.D., Pérez-Rendón, B.
2005, ApJ, 631, 435 Stanek, K.Z. et al. 2006, ApJ, 654, L21 Mészáros, P. 2006, Rep. Prog. Phys., 69, 2259 Wang, X. & Loeb, A. 2000, ApJ, 535, 788 Waxman, E. 1997, ApJ, 491, L19 Wijers, R.A.M.J., Rees, M., Mészáros, P. 1997, MNRAS, 288, L51 [^1]: E-mail: H.J.vanEerten@uva.nl [^2]: Even though PW identify three different regions during the encounter, this in itself does not imply an improved spatial resolution, since the fluid conditions in each region are connected to each other (and the upstream medium) via shock-jump conditions that strictly speaking require all regions to be directly adjacent at the same position. The simulation snapshot in fig. \[snapshot\_figure\] shows that the assumption of the reverse shock region being thermalized and isotropic is not unreasonable, but also shows a clear density gradient within the forward shock region.
--- abstract: | We investigate the coupling between the magnetic and superconducting order parameters in an 8 m long meander line (“wire”) made of a $\mathrm{La_{1.94}Sr_{0.06}CuO_{4}}$ film with a cross section of $0.5\times 100~\mathrm{\mu m^{2}}$. The magnetic order parameter is determined using the low-energy muon spin relaxation technique. The superconducting order parameter is characterized by transport measurements and modified by high current density. We find that when the superconducting order parameter is suppressed by the current, the magnetic transition temperature, $T_{m}$, increases. The extracted sign and magnitude of the Ginzburg-Landau coupling constant indicate that the two orders are repulsive, and that our system is located close to the border between first and second order phase transition. author: - Meni Shay - Amit Keren - Gad Koren - Amit Kanigel - Oren Shafir - Lital Marcipar - Gerard Nieuwenhuys - Elvezio Morenzoni - Andreas Suter - Thomas Prokscha - Moshe Dubman - Daniel Podolsky title: 'The interaction between the magnetic and superconducting order parameters in a $\mathrm{La_{1.94}Sr_{0.06}CuO_{4}}$ wire ' --- When cuprates are doped, their low temperature ordered phase changes from an antiferromagnetic (AFM) to a superconducting (SC) one. The transition takes place over a range of doping levels where, at low enough temperatures, the samples are both superconducting and magnetic [@Niedermayer98; @JulienPRL99; @Tranquada]. It is natural to expect phase separation due to the inhomogeneous doping. However, a local probe such as muon spin relaxation indicates that the magnetic volume fraction is 100%, namely, the magnetic field exists everywhere, even in the SC regions [@Niedermayer98]. Therefore, the nature of the coexistence of SC and magnetism is unclear. Are the two orders coupled, and if yes, what are the sign and strength of the coupling? What is the order of the transition between the AFM and SC phases as a function of doping?
Is it first order with phase separation or second order with coexistence? Here we answer these questions by looking at the effect of current $I$ on the magnetic phase transition temperature, $T_{m}$. A current, on the scale of the second critical current $I_{c2}$, diminishes the superconducting order parameter. If the two orders interact, the magnetic order parameter is expected to react to the current and either increase or decrease depending on the type of coupling between the two orders. This, in turn, will increase or decrease $T_{m}$, respectively. Therefore, we map the magnetic phase transition with and without current. We find that, with current, the magnetic phase transition temperature increases. This result implies that the orders are coupled, and that they are repulsive. Analysis based on the Ginzburg-Landau (GL) model shows that the phase transition is close to the border between first and second order. The experiment is done with an 8 m long wire made of a $\mathrm{La_{1.94}Sr_{0.06}CuO_{4}}$ film. The film is prepared using laser ablation deposition on a (100) $\mathrm{LaAlO_{3}}$ substrate, standard photolithographic patterning and wet acid etching (0.05% HCl). The 6% Sr doping was chosen since the corresponding bulk material has $T_{c}\approx 10$ K and $T_{m}\approx 6$ K [@Niedermayer98; @Panagopoulos2002; @Uemura90], which makes both critical temperatures reachable in a standard cryostat. The cross section of the wire is $0.5~\mathrm{\mu m}\times 100~\mathrm{\mu m}$ so that a typical applied current of a few mA is comparable to $I_{c2}$. Probing the magnetic properties of such a thin wire is achieved by using the new low energy muon spin relaxation (LE-$\mu $SR) technique [@Prokscha2008; @em1994prl]. In this technique, the muons are first slowed down in an Ar moderator where their kinetic energy drops from 4 MeV to 15 eV, while their initial full polarization is conserved.
They are then electrostatically accelerated to 15 keV and transported in ultra high vacuum (UHV) to the sample. Four counters collect positrons from the asymmetric muon decay. One pair of counters is parallel to the initial muon spin direction and the other pair is perpendicular to it. The muon asymmetry in these directions is calculated by taking the difference over the sum of the counts for each pair. This asymmetry is proportional to the component of the muon polarization in each direction. The field the muons experience is either internal, below $T_{m}$, or external (designated by H), or both. For more details on $\mu $SR in the presence of superconductivity and magnetism see Ref. . The muon beam spot has a 15 mm diameter (FWHM). In order to avoid muons missing the sample, the wire is folded in the form of a long meandering line covering a disc 3 cm in diameter. The inset of Fig. \[fig2\] shows a magnified image of one corner of the sample. First, we discuss the sample characterization. In order to verify that the wire is indeed a bulk superconductor and that the current flows in the bulk of the wire, we performed transverse field LE-$\mu $SR measurements in a field of $H=1$ kG. Figure \[fig2\](a) depicts the results from the magnetic phase ($T=2.9$ K) in a rotating reference frame, using zero field cooling (ZFC). The muons depolarize very quickly and after $3~\mathrm{\mu s}$ the remaining decay asymmetry is due to muons that have stopped in the substrate. For comparison, data from a blank substrate, normalized by its effective area, are also shown. We also present the decay asymmetry in the pure superconducting phase ($T=6$ K) using field cooling (FC) conditions. In this case, the muon polarization is lost exponentially versus time at a rate $r_{sc}$ due to the magnetic field distribution of the vortices in the superconducting phase.
After $6~\mathrm{\mu s}$ the polarization reaches the level of the substrate and the ZFC run, and thus most of the muons are affected by vortices. We fit the function: $$\begin{aligned} Asy(t) &=&A_{sc}e^{-(r_{n}t)^{2}/2-r_{sc}t}\cos (\omega _{sc}t)+A_{sb}e^{-r_{sb}t}\cos (\omega _{n-sb}t) \nonumber \\ &&+A_{n}e^{-(r_{n}t)^{2}/2}\cos (\omega _{n-sb}t) \label{eq:Asy}\end{aligned}$$to the muon decay asymmetry at all temperatures. Here $A_{sc}$, $A_{sb}$, and $A_{n}$ represent the respective contributions from the part of the meander that turns superconducting upon cooling, the substrate, and the part of the meander that remains normal upon cooling. $r_{sc}$, $r_{sb}$, and $r_{n}$ are the relaxation rates of muons that land in superconducting material, the substrate, and normal material, respectively. $\omega _{n-sb}$ is the rotation frequency in the normal material and the substrate (taken to be equal). $\omega _{sc}$ is the rotation frequency in the superconducting part. The only parameters that are allowed to vary with $T$ are $r_{sc}$ and $\omega _{sc}$. The superconducting volume fraction is estimated from $A_{sc}/(A_{sc}+A_{n})$ and was found to be $90\pm 5\%$. Figure \[fig2\](b) shows $r_{sc}$ and the resistivity versus temperature. The midpoint of the resistivity transition to the superconducting state, and the onset of $r_{sc}(T)$, occur at $T_{c}=16$ K. The London penetration depth $\lambda _{ab}$ at $T=7$ K is $500$ nm as estimated from the relation $r_{sc}=0.04\gamma _{\mu }\phi _{0}/\lambda _{ab}^{2}$ where $\gamma _{\mu }/2\pi =13.5$ MHz/kG is the muon gyromagnetic ratio, and $\phi _{0}$ is the magnetic flux quantum [@Brandt88]. This penetration depth value is similar to the meander thickness and therefore the current will flow uniformly in the bulk of the meandering wire.
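As an illustration, the asymmetry model of Eq. \[eq:Asy\] can be evaluated numerically; the sketch below uses arbitrary placeholder parameter values, not the fitted ones:

```python
import math

def asy_model(t, A_sc, A_sb, A_n, r_n, r_sc, r_sb, w_sc, w_nsb):
    """Muon decay asymmetry model of Eq. [eq:Asy]: a Gaussian-times-
    exponential superconducting term, an exponential substrate term,
    and a Gaussian normal-material term."""
    sc = A_sc * math.exp(-(r_n * t)**2 / 2 - r_sc * t) * math.cos(w_sc * t)
    sb = A_sb * math.exp(-r_sb * t) * math.cos(w_nsb * t)
    n = A_n * math.exp(-(r_n * t)**2 / 2) * math.cos(w_nsb * t)
    return sc + sb + n

# At t = 0 every envelope and cosine equals 1, so the model returns the
# total initial asymmetry A_sc + A_sb + A_n; at late times all three
# relaxation envelopes have decayed away.
```

This makes explicit why the fit separates the three muon populations: each contributes its own amplitude, relaxation rate and precession frequency.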
![(color online) Determination of the superconducting volume fraction and penetration depth. (a) $\protect\mu $SR asymmetry under an applied field of 1 kG in a rotating reference frame at $T=2.9$ K with zero field cooling, $T=6.0$ K with field cooling, and at $T=5$ K from the substrate. (b) The resistivity and muon depolarization rate $r_{sc}$ as a function of temperature showing $T_{c}$. Below $T_{m}\approx 6$ K the muon relaxation increases rapidly.[]{data-label="fig2"}](fig2.ps){width="3.2093in"} It is challenging to flow a current in the meander line during a LE-$\mathrm{\mu }$SR experiment while keeping its temperature well determined. This results from the fact that the sample is cooled by a cold finger in a UHV environment. Above the first critical current, $I_{c1}$, the superconducting wire acts as a heater and is not in thermal equilibrium with either the cold finger or any attached thermometer. Therefore, the wire’s temperature can be measured only by an *a priori* calibration procedure. For this, we chose to take the V-I curve of the wire at each temperature in a flow cryostat. In such a cryostat the thermal contact between the wire and a thermometer, even at high currents, is good. Using this calibration, the wire acts as its own thermometer. To account for possible drifts in the calibration we repeated the calibration in the flow cryostat also after the LE-$\mu $SR experiment. This proved the temperature uncertainty to be smaller than 0.01 K, namely, when we say that we are comparing two runs with equal temperatures we mean that the two runs were kept within 0.01 K of each other. Fig. \[fig3\](a) shows several V-I curves recorded at different temperatures on a short segment (1 cm long) of the wire. These V-I curves are used for the determination of $I_{c1}$ and $I_{c2}$ which are needed for the analysis. The curves are fitted to the function $\Theta (I-I_{c1})e^{k(I-I_{c1})}$, where $\Theta $ is the Heaviside step function. It is seen in Fig.
\[fig3\](a) that, at $T=12$ K, $I_{c1}$ drops to zero and the 1 cm segment of the wire shows Ohmic behavior with a normal resistance of $R_{n}=60~\Omega $. We estimate $I_{c2}$ using a variation of the offset criterion [@concise]. The exponential dependence of $V$ on $I$ is extrapolated to the value of $I$ that gives a differential resistance equal to $R_{n}$. The obtained values of both critical currents as a function of temperature are plotted in Fig. \[fig3\](b). ![(color online) Calibration curves used for temperature determination and for the estimation of $I_{c1}$ and $I_{c2}$. (a) V-I curves of a short segment of the wire. Similar measurements on the full wire are used for the temperature calibration. (b) $I_{c1}$ and $I_{c2}$ as a function of temperature, extracted from the data shown in the top panel.[]{data-label="fig3"}](fig3.ps){width="3.2093in"} Next, we study the effect of the current on the magnetic order. Figure \[fig4\] shows raw muon decay asymmetry data from the meander wire at several temperatures with no external field and in the laboratory frame. The open symbols represent measurements at low currents (used only for temperature determination) and the solid symbols are measurements at high currents. At $T>T_{m}$, the asymmetry resembles a Gaussian with relatively slow relaxation, typical of magnetic fields generated by copper nuclear magnetic moments. As the temperature decreases, there is a clear increase in the muon spin depolarization rate indicating that the magnetic order has set in. For comparison, we show in the inset of Fig. \[fig4\] standard $\mu $SR measurements taken with a He flow cryostat on the bulk powder used for making the film. In this case the measurements could be extended to $T=1.65$ K. We find that the magnetic transition in the wire is very similar to that of our own and other bulk samples [@Niedermayer98; @Uemura90], having a similar $T_{m}$.
In addition, the data in the bulk at low enough temperatures are typical of the case where muons in the full sample volume experience frozen magnetism, with spontaneous precession below about $2$ K at a frequency $f\simeq 3$ MHz, again in agreement with other reports. The effect of the current is demonstrated by the $T=5$ K measurement (red symbols in Fig. \[fig4\]). The depolarization of the muon spin is faster when a higher current is applied. The difference between the two measurements is emphasized by the shaded area. The change in the asymmetry line shape caused by the application of current is equivalent to cooling by about $0.3$ K, although, as mentioned before, the sample temperature is stable to within $0.01$ K. This effect was observed at several temperatures along the magnetic transition. ![(color online) Muon decay asymmetry measurements versus time at low current (open symbols) and high current (solid symbols). Different colours represent different temperatures. The area shaded in yellow marks the effect of the current on the muon decay asymmetry at 5 K. The horizontal line shows the expected base line from the substrate. The inset shows standard $\protect\mu$SR measurements on the bulk powder used for making the film.[]{data-label="fig4"}](fig4.ps){width="3.2093in"} ![(color online) The magnetic phase transition, with and without current. Solid lines are guides to the eye.[]{data-label="fig5"}](fig5.ps){width="3.2093in"} Above $T_{m}$ and below $4$ K the application of current has no effect on the asymmetry. This finding is particularly important since, *a priori*, the current might affect the muon asymmetry directly by means of the magnetic field it produces, or by colliding with the muons. However, we found that once the electronic spins are fully frozen the current does not change the muon asymmetry, indicating that there is no direct coupling between the current and the muons.
This is in agreement with calculations showing that the magnetic field the current produces is very small compared to the internal field. Similarly, the lack of a current effect above $T_{m}$ rules out collisions between the muons and the electron charge. In order to determine the magnetic phase transition temperature, without assuming a specific spatial field distribution or temporal fluctuation model, we define the order parameter in a model-free way. At each temperature the asymmetry as a function of time is averaged to produce $\langle Asy\rangle =\frac{1}{t_{m}}\int_{0}^{t_{m}}Asy(t)dt$ where the measurement time $t_{m}=8~\mathrm{\mu s}$. We expect $\langle Asy\rangle $ to decrease with increasing magnetic moment size $M(T)$, and therefore define $$\frac{M(T)}{M(0)}\equiv \frac{\langle Asy\rangle ^{-1}\left( T\right) -\langle Asy\rangle ^{-1}\left( \infty \right) }{\langle Asy\rangle ^{-1}\left( 0\right) -\langle Asy\rangle ^{-1}\left( \infty \right) }. \label{eq:mPol}$$For $\langle Asy\rangle \left( \infty \right) $ we take the averaged $Asy$ at $T=7.35$ K, which is above the transition. The magnetic phase transition temperature $T_{m}$ is taken as the onset of the sudden change in $M(T)$. The magnetic transition is sharp enough that other, model-based, analysis methods gave indistinguishable $M(T)$. The temperature dependence of $M$ with and without current is presented in Fig. \[fig5\]. We find that the application of a current of about $0.2\cdot I_{c2}(T)$ increases the magnetic phase transition temperature by $0.4\pm 0.1$ K. This effect means that the two orders interact repulsively. It is complementary to the effect of a strong magnetic field on doped samples, where the magnetic order is enhanced while the superconducting order is suppressed [@Katano2000; @Lake2002]. However, since current, in contrast to magnetic field, does not couple directly to spins, the effect presented here is more simply analyzed.
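The model-free order parameter of Eq. \[eq:mPol\] can be computed directly from measured asymmetry traces. The sketch below uses synthetic, purely illustrative traces in which lower temperature corresponds to faster relaxation (and hence smaller $\langle Asy\rangle$):

```python
import numpy as np

def mean_asy(asy):
    """Approximate <Asy> = (1/t_m) * integral_0^tm Asy(t) dt for
    uniformly sampled data by a simple mean over the samples."""
    return float(np.mean(asy))

def order_parameter(avg_T, avg_inf, avg_0):
    """M(T)/M(0) per Eq. [eq:mPol], built from the inverse
    time-averaged asymmetries at T, above the transition, and at T->0."""
    return (1.0 / avg_T - 1.0 / avg_inf) / (1.0 / avg_0 - 1.0 / avg_inf)

# Synthetic traces: stronger relaxation (smaller <Asy>) at lower T.
t = np.linspace(0.0, 8.0, 801)                    # time in microseconds
avg_inf = mean_asy(0.25 * np.exp(-0.05 * t))      # above the transition
avg_0   = mean_asy(0.25 * np.exp(-2.0 * t))       # T -> 0, fully frozen
avg_T   = mean_asy(0.25 * np.exp(-0.5 * t))       # intermediate temperature
m = order_parameter(avg_T, avg_inf, avg_0)        # lies between 0 and 1
```

By construction the estimator is 0 above the transition and 1 at the lowest temperature, without assuming any particular field distribution.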
For example, it shows that the enhanced magnetism in the applied field could be a result of supercurrent in the bulk [@DemlerPRL01], and not necessarily due to magnetism in the vortex core [@HuJCP]. A simple interpretation of the result can be given in the framework of the GL model. In this model the free energy density near the critical temperature $T_{m}$ can be written as $F=-a(T)\left( 1-I^{2}/I_{c2}^{2}\right) |\psi |^{2}+U_{s}|\psi |^{4}-b\left( T_{m}^{0}-T\right) |\phi |^{2}+U_{m}|\phi |^{4}+2U_{sm}|\phi |^{2}|\psi |^{2}$ (plus gradient terms) where $\psi $ and $\phi =M/\sqrt{v}\mu _{B}$ are the superconducting and magnetic order parameters respectively, $U_{sm}$ is their coupling constant, $v$ is the unit cell volume, $b$ is a dimensionless parameter, $T_{m}^{0}$ is the magnetic phase transition temperature for $|\psi |^{2}=0$, and $a(T)$, $U_{s}$ and $U_{m}$ are the standard GL parameters. All the parameters can be experimentally determined [@Huang; @DeGennes]: $a(T)=\hbar ^{2}/2m^{\ast }\xi ^{2}$ where $\xi =2~$nm is the superconducting coherence length [@EPL64]; $\psi _{0}^{2}=m^{\ast }/4\mu _{0}e^{2}\lambda ^{2}$ where $\lambda =500$ nm is the London penetration depth; $U_{s}=a/2\psi _{0}^{2}$ according to the minimum condition; $bT_{m}=\hbar ^{2}/2m\kappa ^{2}$ where $\kappa =4$ nm is the magnetic coherence length [@MagneticCoherencelength1; @MagneticCoherencelength2]; the electron mass can be approximated by the stiffness of the xy model where $\hbar ^{2}/mA=J$, $A$ is the cell area and $J\simeq 10^{3}$ K is the superexchange; from the ratio of muon oscillation frequency between our sample and pure $\mathrm{La_{2}CuO_{4}}$ [@Magneticmoment] we find a local magnetic moment $M=0.33\mu _{B}$ giving $\phi ^{2}=0.33^{2}/v$; $U_{m}=bT_{m}/2\phi _{0}^{2}$ again by the minimum condition. $U_{sm}$ is obtained from our current dependent measurement (neglecting gradient terms at this stage).
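As a numerical illustration of this GL analysis, the dimensionless coupling ratio $R\equiv U_{sm}/\sqrt{U_{s}U_{m}}$ can be evaluated from the quoted parameters. In this sketch the unit cell height $h\approx 13.2$ Å and the choice $T_{m}=5$ K are assumed values, not stated explicitly in the text:

```python
import math

# Physical constants (SI)
e    = 1.602176634e-19   # elementary charge [C]
hbar = 1.054571817e-34   # reduced Planck constant [J s]
kB   = 1.380649e-23      # Boltzmann constant [J/K]
mu0  = 4e-7 * math.pi    # vacuum permeability [T m/A]

# Parameters quoted in the text
lam   = 500e-9           # London penetration depth lambda [m]
xi    = 2e-9             # superconducting coherence length [m]
kappa = 4e-9             # magnetic coherence length [m]
M_over_muB = 0.33        # local moment M in units of mu_B (mu_B cancels)
J     = 1e3 * kB         # superexchange energy, ~10^3 K, in joules
Ic2, I = 17e-3, 4e-3     # critical and applied currents [A]
dTm, Tm = 0.4, 5.0       # shift and value of the magnetic transition [K]
h_cell = 13.2e-10        # unit cell height [m] -- assumed, not from the text

# R = (2 e lambda xi M Ic2^2 dTm) / (mu_B hbar kappa I^2 Tm) * sqrt(J mu0 / h)
R = (2 * e * lam * xi * M_over_muB * Ic2**2 * dTm
     / (hbar * kappa * I**2 * Tm)) * math.sqrt(J * mu0 / h_cell)
# R comes out of order unity, consistent with the quoted R = 1.4.
```

The point of the exercise is that $R$ lands close to 1, i.e. near the border between first and second order behavior, and that order-unity choices for the assumed quantities do not move it far from there.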
Since $T_{c}$ is higher than $T_{m}$ we do not expect $|\phi |^{2}$ to affect $|\psi |^{2}$. Therefore $|\psi (I,T)|^{2}=|\psi (0,T)|^{2}(1-I^{2}/I_{c2}^{2})$. The minimization of $F$ with respect to $|\phi |^{2}$ yields $|\phi |^{2}={b(T_{m}^{0}-2U_{sm}|\psi (I,T)|^{2}/b-T)}/{2U_{m}}$. Thus, the measured magnetic transition temperature is given by $T_{m}=T_{m}^{0}-2U_{sm}|\psi (I,T_{m})|^{2}/b$. We assume that near $T_{m}$, $\psi ^{2}(0,T)=\psi _{0}^{2}$ where $\psi _{0}^{2}$ is the ground state value of $\psi ^{2}$. Therefore, the change in the transition temperature, $\delta T_{m}\equiv T_{m}(I)-T_{m}(0)$, caused by the current is $\delta T_{m}(I)={2U_{sm}\psi _{0}^{2}I^{2}/bI_{c2}^{2}}.$ The interesting parameter is $$R\equiv \frac{U_{sm}}{\sqrt{U_{s}U_{m}}}=\frac{2e\lambda \xi MI_{c2}^{2}\delta T_{m}}{\mu _{_{B}}\hbar \kappa I^{2}T_{m}}\sqrt{\frac{J\mu _{0}}{h}}$$where $h$ is the unit cell height. For $R>1$ the GL model predicts phase separation and a first order phase transition. For $R<1$ the model predicts coexistence and a second order phase transition. The $R=1$ condition is essential for SO(5) symmetry [@Demler04]. At $T=5$ K we found that $I_{c2}=17$ mA (see Fig. \[fig3\]b) and used $I=4$ mA in the LE-$\mu $SR. This yields a positive $R=1.4$. Although numerical factors can change $R$, they cannot change its proximity to unity. In summary, we demonstrated the presence of an interaction between the magnetic and superconducting order parameters and measured its sign and strength. We find that the zero-temperature phase transition from magnetic to superconducting order, as a consequence of doping, must be very close to the border between first and second order. We acknowledge very helpful discussions with Assa Auerbach and Yariv Kafri. We also thank the PSI team for supporting the $\mu $SR experiments, and for providing the continuous high quality beam.
This work was also funded in part by the Israeli Science Foundation and the joint German-Israeli DIP project. [99]{} Ch. Niedermayer, C. Bernhard, T. Blasius, A. Golnik, A. Moodenbaugh, and J. I. Budnick, Phys. Rev. Lett. **80**, 3843 (1998). M.-H. Julien, F. Borsa, P. Carretta, M. Horvatić, C. Berthier, and C. T. Lin, Phys. Rev. Lett. **83**, 604 (1999). J. M. Tranquada et al., Nature **375**, 561 (1995). C. Panagopoulos, J. L. Tallon, B. D. Rainford, T. Xiang, J. R. Cooper, and C. A. Scott, Phys. Rev. B **66**, 064501 (2002). B. J. Sternlieb, G. M. Luke, Y. J. Uemura, T. M. Riseman, J. H. Brewer, P. M. Gehring, K. Yamada, Y. Hidaka, T. Murakami, T. R. Thurston, R. J. Birgeneau, Phys. Rev. B **41**, 8866 (1990). T. Prokscha, E. Morenzoni, K. Deiters, F. Foroughi, D. George, R. Kobler, A. Suter, V. Vrankovic, Nucl. Instr. Meth. A **595**, 317 (2008). E. Morenzoni, F. Kottmann, D. Maden, B. Matthias, M. Meyberg, T. Prokscha, T. Wutzke, U. Zimmermann, Phys. Rev. Lett. **72**, 2793 (1994). J. E. Sonier, Reports on Progress in Physics **70**, 1717 (2007). E. H. Brandt, Phys. Rev. B **37**, 2349 (1988). J. W. Ekin in *Concise Encyclopedia of Magnetic and Superconducting Materials*, edited by J. E. Evetts (Pergamon, New York, 1991). S. Katano, M. Sato, K. Yamada, T. Suzuki, and T. Fukase, Phys. Rev. B **62**, R14677 (2000). B. Lake *et al.*, Nature **415**, 299 (2002). E. Demler, S. Sachdev, and Y. Zhang, Phys. Rev. Lett. **87**, 067202 (2001). Jiang-Ping Hu, Shou-Cheng Zhang, Journal of Physics and Chemistry of Solids **63**, 2277 (2002). K. Huang, *Statistical Mechanics*, $\mathrm{2^{nd}}$ edition, p. 425 (John Wiley & Sons, New York, 1987). P. G. de Gennes, *Superconductivity of Metals and Alloys*, p. 185 (Westview Press, 1999). H. H. Wen, H. P. Yang, S. L. Li, X. H. Zeng, A. A. Soukiassian, W. D. Si and X. X. Xi, Europhys. Lett. **64**, 790 (2003). R. J. Birgeneau *et al.*, Phys. Rev. B **38**, 6614 (1988). B. Keimer *et al.*, Phys. Rev. B **46**, 14034 (1992).
Y. S. Lee *et al.*, Phys. Rev. B **60**, 3643 (1999). E. Demler, W. Hanke and S. C. Zhang, Rev. Mod. Phys. **76**, 909 (2004).
--- author: - 'Santosh Aditham,  and Nagarajan Ranganathan,  ' title: | A System Architecture for the Detection of\ Insider Attacks in Big Data Systems --- Big data solutions are widely adopted across various government and enterprise domains such as software, finance, retail and healthcare. Big data applications are pioneering the field of advanced data analytics and have a projected market of approximately 50 billion dollars by 2018. The most frequent use-cases of big data are information retrieval from complex, unstructured data, and real time data analysis [@IDC]. Along with its rapid market growth, the big data trend also has its share of challenges and risks. In an era where extracting information from data is open to all, users are understandably skeptical about letting providers host their data away from them. This, along with the recent increase in the number of cyber attacks, has boosted the importance of security. Yet, the losses due to the many security holes in existing systems seem to overshadow the investments towards increasing their security. Hence, there is an immediate need to address architectural loopholes in order to provide better security. For instance, current big data security platforms focus on providing fine-grained security through extensive analysis of stored data. But such models indirectly facilitate the abuse of user data in the hands of the provider. Insider attacks are becoming more common and are considered the toughest attacks to detect [@Vormetric]. There is not much in the literature on solutions for insider attacks in general [@Salem]. Though privacy and security are touted to be important problems in the big data world, existing solutions concentrate only on leveraging big data systems for efficient security in other domains. To the best of our knowledge, there is no robust solution for detecting or preventing insider threats within big data infrastructures.
For example, security mechanisms of popular big data systems such as Hadoop [@Hadoop] and Spark [@Spark] include third-party applications such as Kerberos [@Kerberos], access control lists (ACL), log monitoring and data encryption (to some extent). But for an insider, especially a traitor, circumventing these mechanisms is not difficult [@Aditham]. It is crucial to address the problem of insider attacks in big data systems for three main reasons: (a) a traitor within the provider’s organization will be able to circumvent the security system in place; (b) the sensitivity of customer information stored in the system is increasing by the day; and (c) there is no consensus or widespread agreement on well-defined security standards in the big data community. Recently, two unauthorized backdoors were discovered in Juniper Networks firewalls that might have given attackers access to highly classified information. Some important facts about this particular hack are: (a) it comes at the cost of compromising national security; (b) it shows that even a major network security company is vulnerable to attacks; (c) in spite of the high stakes and vast resources, it is believed that these backdoors were left undiscovered for almost 3 years; and (d) it was reported that the attackers could have deleted the security logs [@Swati]. This is one of many examples showing that common attack prevention techniques, such as identity management, ACLs and data encryption, are necessary but not sufficient to prevent attacks. As per OpenSOC, in 60% of breaches data gets stolen within hours of the breach, and 54% of breaches are not discovered for months [@Sirota]. This indicates that infrastructures need to have efficient *attack detection* techniques along with strong *attack prevention* techniques for robust security. In the big data world, it is considered that moving computation to where the data resides is better than the traditional approach of moving data for computation.
The main features of big data infrastructures are fast data processing, high scalability, high availability and fault-tolerance. Availability and fault-tolerance of big data systems come from intelligent replication of data. This implies SIMD style, parallel execution of the same program at multiple locations. When a program is scheduled for execution on the big data cluster, it runs as an individual process on every data node that hosts a copy of the program data. The replication of data on various nodes in the big data system can be utilized in providing security. Security for a computing system can be implemented at the hardware and software levels. Given the advantage of isolation that can be achieved with hardware-level security, we propose delegating security to special purpose hardware, such as TPM [@TPM] and TXT [@TXT] chips, that reside on the nodes of the big data cluster. Such an infrastructure will have the advantages of (a) performing security analysis remotely; (b) reducing the overhead on the main processor by delegating security; and (c) significantly decreasing the cost of data transfer while providing efficient security techniques such as isolated vulnerability scanning through program profiling. In this paper, we propose a system architecture for attack detection in big data systems that can efficiently detect insider attacks. Our proposed system uses a two step algorithm for attack detection. First, *program profiling* is performed by individual nodes of the big data cluster on the processes they execute. In this step, process binaries of scheduled processes are disassembled and analyzed to generate control instruction sequences (CIS). These sequences are then hashed, encrypted and shared among data nodes that host the same data, i.e. primary and replica nodes. Next, *consensus* among data nodes is achieved regarding the possibility of a process being attacked. This step involves two phases: hash matching and information sharing.
Upon receiving encrypted messages from primary nodes, the replica nodes apply sequential, on-demand string matching between the locally generated hash and the received hash. Next, the result of this comparison is shared with the primary node. Depending on the results received, the primary data node notifies the master node to take necessary recovery measures. All communications among data nodes are performed using a *secure communication protocol* that is based on public-private key encryption. Our main contributions are as follows: - We propose a novel extrinsic workflow for security in big data systems using control instruction sequences (CIS), hash matching and encrypted communication. - We suggest using a one-shot program profiling technique that builds instruction-level CIS from the native code of scheduled processes. - We endorse the idea of having security as an independent module in big data systems by designing a system architecture for detecting insider attacks in big data systems. The paper is organized as follows: Section \[sec:background\] gives a primer on insider attacks in general purpose computing. This section also discusses the current security methods in big data and some related works. Section \[sec:model\] describes our attack model. The proposed system which includes the two step attack detection algorithm and the secure communication protocol are explained in Section \[sec:proposed\]. A model of the proposed system with a list of required components is also given in this section. Section \[sec:experiments\] shows the impact and usefulness of the proposed security system architecture by conducting real-world experiments on Hadoop and Spark clusters. Finally, section \[sec:conclusion\] draws the conclusion and outlines future work. 
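The profiling-and-consensus workflow described above can be sketched in a few lines: control instructions are filtered out of a disassembled instruction stream, the resulting CIS is hashed, and the hashes are compared between the primary and replica nodes. This is a simplified illustration; the mnemonic subset is arbitrary, and real disassembly, encryption and network messaging are omitted:

```python
import hashlib

# Mnemonics treated as control instructions (illustrative x86 subset).
CONTROL_MNEMONICS = {"jmp", "je", "jne", "jg", "jl", "call", "ret"}

def cis_hash(disassembly):
    """Build the control instruction sequence (CIS) from a list of
    disassembled instructions and return its SHA-256 hash."""
    cis = [ins for ins in disassembly if ins.split()[0] in CONTROL_MNEMONICS]
    return hashlib.sha256("\n".join(cis).encode()).hexdigest()

def consensus(primary_hash, replica_hashes):
    """Replica-side hash matching: consensus holds only if every
    replica's locally generated hash agrees with the primary's."""
    return all(h == primary_hash for h in replica_hashes)

# Identical binaries on primary and replicas reach consensus; a binary
# with a tampered control transfer does not.
prog = ["mov eax, 1", "cmp eax, 0", "je done", "call helper", "ret"]
tampered = prog[:3] + ["call backdoor", "ret"]
ok = consensus(cis_hash(prog), [cis_hash(prog), cis_hash(prog)])
bad = consensus(cis_hash(prog), [cis_hash(tampered)])
```

Note that data-movement instructions (`mov`, `cmp`) never enter the hash, so only changes to the control flow of a replica's process trigger a mismatch.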
Background & Related Work {#sec:background} ========================= This section gives a primer on insider attacks and their solutions in general purpose computing and discusses the current security mechanisms available in the big data world. Also, the various related works are briefly described here. ![Entities and Relationships in Insider Attacks[]{data-label="fig_ia"}](insiderattacks.pdf){width="50.00000%"} Insider Attacks --------------- Though security in general computing has been extensively studied and implemented over the years, computers are still vulnerable to attacks. Software based attacks that typically target a computer network or system, called cyberattacks, are growing in frequency and impact. The plot for any type of software attack involves exploitation of a piece of code that runs on a computer. From this perspective on cyberattacks, security can be provided at two levels: (a) by the software that is used to compile and execute the program; and (b) by the hardware that runs the program. Providing security at the software level gives more context and information about the target programs that are being protected. But this comes with the risk of the security software itself being compromised. On the other hand, having security at the hardware level gives more isolation to the process of analyzing and securing programs, though it becomes difficult to give detailed context about the programs and the infrastructures running them. In any case, the toughest software attacks to counter are the ones whose genesis is intentional and that are performed by those who have a good understanding of the underlying system. Based on our literature review, we have identified four major questions that can guide towards better handling of insider attacks: (a) who can perform these attacks? (b) what gets affected? (c) how to detect these attacks? and (d) how to prevent them from happening?
Figure \[fig\_ia\] gives a list of entities to consider when dealing with insider attacks. The figure also shows the four questions, from above, as relationships among the entities. Insider attacks can be performed by (a) *traitors* who are legally a part of the system but want to misuse the access privileges given to them; (b) *masqueraders* who get access to the system by stealing the identities of those who have legitimate access. Insider attacks can affect the proper functionality of a program or corrupt the data used by the programs. Profiling and trapping are the two most common ways to detect insider attacks [@Salem; @Schultz]. Profiling can be performed (a) at the program level [@Anup] and (b) at the user level [@Lunt]. Traps can be set in the programs or in the network to force the attacker into performing certain actions that help towards exposing the attack [@Spitzner]. The biggest concern with these insider attack detection methods is the possibility of losing valuable data. Hence, insider attack prevention mechanisms such as identity management [@Froomkin; @Khalil2], access control lists [@Ravi; @Kerberos], data encryption [@Don; @Goyal] etc. must be employed at the same time. In this work, we are more interested in Control Flow Integrity (CFI) [@Ligatti; @Ligatti2], a popular and effective attack prevention technique that enforces the execution of a program to follow a path that belongs to the program’s control flow graph. The set of possible paths is determined ahead of time using the static CFG [@Ligatti; @Ligatti2]. A coarse-grained or fine-grained version of CFI can be used for program profiling. But the problem with any such profiling technique is the overhead incurred in conducting it, even more so if performed remotely. Though such limitations of this approach have been identified [@Jujutsu], it is accepted as a strong and stable security enforcing mechanism. There are a plethora of CFG-based code similarity algorithms [@Chan].
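The CFI enforcement idea can be illustrated with a minimal edge-set check: an execution trace is valid only if every observed control transfer is an edge of the statically computed CFG. This is a toy sketch with made-up basic-block names, not the cited implementations:

```python
# Static CFG as a set of allowed (source, target) basic-block transfers.
cfg_edges = {("entry", "check"), ("check", "ok"), ("check", "fail"),
             ("ok", "exit"), ("fail", "exit")}

def cfi_check(trace, edges):
    """Return True iff every consecutive pair of basic blocks in the
    execution trace corresponds to an edge of the static CFG."""
    return all((a, b) in edges for a, b in zip(trace, trace[1:]))

valid = cfi_check(["entry", "check", "ok", "exit"], cfg_edges)
# A hijacked transfer (here check -> exit) is not a CFG edge and fails.
hijacked = cfi_check(["entry", "check", "exit"], cfg_edges)
```

Even this toy version makes the cost argument visible: enforcement requires intercepting every control transfer at run time, which is the overhead that motivates the simpler hash-based CIS comparison proposed in this paper.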
But such CFG similarity check methods are complex, expensive, and have no defined standards. Most CFG similarity algorithms rely on simplification techniques such as fingerprints, edit distance, or comparison only with known graphs in a database. Also, the impact of CFG similarity analysis differs greatly depending on when and how the CFG is generated for a program. These complexities and uncertainties led to a new set of control flow analysis techniques that avoid translating the program code to a formal model. For example, insider attack detection based on symbolic execution and model-checking of assembly code was proposed in [@Karthik]. In this work, we propose a novel approach to control flow similarity checking for attack detection that discards the idea of building CFGs entirely. Instead, our idea is based on simple string matching of control instruction sequences obtained from the assembly code of scheduled processes. Insider attacks are a dangerous security problem in any domain because they are difficult to predict and detect [@Schultz]. Hence, organizations must try to safeguard their systems and data from insider attacks [@Oltsik]. Building predictive models of user/program/network behavior with the help of continuous monitoring is a widely adopted solution for insider attack detection. But such prediction is not completely reliable, and the difficulty of detecting attacks grows with the complexity of the underlying system. Recent advancements in computing have led to wide adoption of services such as cloud computing and big data, which are extremely complex in their design and development. In cloud computing, many insider attacks can be performed by misleading the client-side services; once these are compromised, the data obtained can provide social engineering opportunities for cascade attacks [@Duncan].
Having a security-as-a-service model for cloud environments [@Vijay] and having sealed clouds [@Jager] are some of the ideas proposed for protecting cloud infrastructures from insider attacks. While cloud computing is more about computing on the fly, big data deals with organizing and managing large sets of data. Insider attack detection and prevention for big data frameworks is an area that is not yet well explored.

Security in Big Data
--------------------

Security in big data is gaining tremendous momentum in both research and industry. But big data security is overwhelmingly inclined towards leveraging big data’s potential to provide security for other systems [@Liang]. Security within big data systems themselves is still a budding phenomenon. Ideally, security would be a major component in the holistic view of big data systems. But the requirements of big data applications, such as real-time data processing, fault tolerance, and continuous availability, leave little scope to employ complex and robust security mechanisms. All existing security techniques implemented within big data frameworks are software based and try to prevent external entities from attacking the system. For example, the main requirements in the Hadoop security design focus only on access control [@Malley]. Big data systems encourage software-based fine-grained security mechanisms such as Kerberos, access control lists (ACLs), and log monitoring. Big data security is inclined towards specifying multi-level access rights: user level, application level, and data level. The advantages of such simple software-oriented security mechanisms, such as Kerberos, are better performance and simple management. But there are various problems with such policy-enforcing security software, as identified in [@Hu] and [@Gaddam]. Also, none of these approaches can strongly counter insider attacks. According to the Hadoop Security Design [@Malley], the permissible performance overhead for a change in architecture is only 3%.
This is precisely the reason why coarse-grained security mechanisms such as data encryption are an optional and restricted feature in big data systems. Data encryption in Hadoop is only available for data exchanged between the user and the system, not for data that travels within the system. Randomized data encryption for data security was proposed in [@Adluru], but this work acknowledges that faster results are yet to be achieved. Also, big data properties such as large-scale distributed infrastructures and replication make it difficult to detect insider attacks precisely using traditional methods. In this work, we demonstrate the inefficiency of existing big data security mechanisms by implementing two insider attacks on a big data cluster. Figure \[fig\_bdsf\] shows the two workflows we used to successfully implement an insider attack in a Hadoop big data cluster. This paper only discusses the workflow of the attacks; a detailed report on the results of these attacks can be found in our previous work [@Aditham].

### Manipulating Activity Logs

The first attack, as shown in Figure \[fig\_first\_case\], manipulates log data in order to produce erroneous results during log analysis. Flume and Kafka are two popular big data products for real-time event processing. Most big data analysis and security solutions tend to use these services within their framework. A Hortonworks tutorial [@Hortonworks] shows how a **system admin** can detect distributed DoS attacks on a Hadoop cluster by analyzing the server log data. Interestingly, this tutorial can be used as a counterexample to show that the admin can act as a traitor, manipulate the server log data, and create results that present a false picture to the higher administration. As per the workflow in this example, users request the client service to access data stored in HDFS. These user requests will all be logged by the log4j service. Hence, any attacker's requests will also be logged.
The system admin can easily build a framework with the help of services such as Flume, Hive, and HCatalog to monitor and track the user requests. A small script that filters the streaming data going from Flume to Hive can be inserted by an insider to skew the results according to the insider's choice.

### Deleting Edit Log

The second attack, as shown in Figure \[fig\_second\_case\], deletes the contents of the *editlog* such that user data eventually gets deleted. A **system admin** who has access to the *secondary namenode* in a Hadoop cluster can implement this attack. The *namenode* is the focal point (and a single point of failure) of an HDFS file system. It stores the metadata of all files in HDFS, along with their storage locations, in a data blob called the *fsImage*. Editlogs, along with the fsImage, are updated periodically so that the namenode has access to up-to-date information about data stored in the Hadoop cluster. To save time and computation energy on the namenode, this process is performed off-site on the secondary namenode, sometimes called the *checkpoint node*, and the output fsImage is dumped directly onto the namenode. Hence, manipulating the editlog content will be reflected, by the next checkpoint, in the fsImage that the namenode uses for job allocation and scheduling. This is a weak point in the Hadoop architecture that can easily be misused by insiders. Figure \[fig\_second\_case\] shows the workflow for checkpointing in a Hadoop cluster and how an insider can introduce a script to delete user data forever. In the most extreme case, if an insider inserts a small script that completely wipes out the editlog, the fsImage will be empty at the next checkpoint. Finally, hardware-oriented security methods for Hadoop have been on the rise in recent times. A TPM-based authentication protocol for Hadoop, claimed to be much faster than Kerberos, was proposed in [@Khalil], though it has not been fully implemented.
A hardware-oriented security method to create a trusted Apache Hadoop Distributed File System (HDFS) was proposed in [@Cohen]; it is a theoretically novel concept but was proven to work only on one node. The overhead of data encryption by the TPM acts as a hindrance to adopting this method, especially when the size of the data maintained in big data systems is ever growing. In this work, we propose delegating security in big data systems to an independently designed system with the necessary components.

Attack Model {#sec:model}
============

Our attack model focuses on the misuse of log data by system admins of a big data platform. Security features such as data confidentiality and operational guarantees such as correctness of results can be compromised by such misuse. The goals of an insider conducting such attacks can vary from personal vendetta to financial gain. The proposed system targets these specific insider attacks because they are easy to carry out under the existing security solutions on platforms such as Hadoop and Spark. Attacks that misuse log data can be performed by creating malicious programs or by modifying existing program binaries with malicious intent. Given the existing security features for user-level activity monitoring, we exclude the possibility of system admins writing new malicious programs from the scope of our attack model. Instead, our attack model focuses on system admins modifying the binaries of existing programs. Our goal is to spot vulnerabilities in code that can be exploited by insiders. We acknowledge that insider attacks are too broad a category for all of them to be mitigated by the proposed solution. There can be other insider attacks in big data that are not visible at compile time and that the proposed system may or may not be able to detect.

Proposed System Architecture {#sec:proposed}
============================

In this section we explain the proposed system in detail.
Figure \[fig\_framework\] shows the proposed system, which includes a secure communication protocol and a two-step attack detection algorithm. The first step of the attack detection algorithm is process profiling, which is conducted locally and independently at each node to identify possible attacks. The next step is hash matching and consensus, which is conducted by the replica data nodes to decide on the authenticity of a possible attack.

Secure Communication Protocol
-----------------------------

![image](framework.pdf){width="90.00000%" height="3.5in"}

A big data system is technically a distributed data storage system that relies on secure and efficient communication protocols for data transfer. The proposed system aims to provide robust security for big data systems by having a modular design and being independent of the core big data services. For this reason, a separate secure communication protocol is included in the proposed system design that can be isolated from the set of default communication protocols used by the big data system. The proposed system is a mix of independent security modules that work together and reside on the individual nodes of the system. These modules use the secure communication protocol to share packets of data with their counterparts on other nodes of the cluster. The data shared among the security modules in our system architecture contain vital information about the analysis of a process. Hence, we propose using a public-key cryptosystem in our secure communication protocol. All data transferred by any node using this secure communication channel is encrypted up front using private-key encryption and hardcoded keys that are not accessible to anyone. The associated public key is shared with all other replica nodes that a data node needs to communicate with. Hardware security chips such as the TPM [@TPM] or Intel’s TXT [@TXT] have public-private key encryption modules.
Such hardware security chips come with a hardcoded, on-chip master key. A simple random number generator module is used to generate public-private key pairs periodically using the hardwired master key. For this work, we relied on the SSH protocol for secure communication, using RSA for key exchange, but any such cryptosystem will work. Given the off chance of leakage of private keys, a key pair is held active for only a certain time period $T$. This increases the robustness of the communication protocol. In this work, we did not focus on finding the ideal value for $T$ but assumed it to be a predefined value of 1 second. The public key of a node is shared with all other nodes it has to communicate with, i.e., its replica nodes and the master node. All incoming data packets to a node will be encrypted with its *current* public key and can only be decrypted using the corresponding private key that is stored locally. Decrypted information is sent to the *process matching* module to identify attacks. Given the short lifespan of the public keys used in our secure communication protocol, each node should be able to store the public keys of all other nodes it has to communicate with. Also, storing older keys of other nodes helps in verifying the authenticity of nodes during attack recovery. Hence, we propose using queue data structures on every node to store the periodically generated public keys of other nodes. The back of $queue_{n}$ holds the latest public key, to be used for encrypting packets sent to node $n$, while the front of $queue_{n}$ is deleted when $queue_{n}$ is full (to accommodate a new key). Limiting the maximum queue size to some $k$ ensures that a node has enough information to support attack recovery measures while not consuming too much memory. Again, we did not focus on finding the ideal value for $k$ but used a predefined value of 3 while conducting our experiments.
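The bounded key queues described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `KeyStore` class name, string-valued keys, and the default $k = 3$ are assumptions for demonstration (the actual system derives key pairs from the TPM/TXT chip).

```cpp
#include <cassert>
#include <deque>
#include <map>
#include <string>

// Sketch of the per-node public-key store: each peer node gets a bounded
// queue of its most recent public keys. The newest key (back of the queue)
// is used for encryption; older keys are retained only to support attack
// recovery. KeyStore, k = 3, and string-valued keys are illustrative.
class KeyStore {
public:
    explicit KeyStore(std::size_t k = 3) : k_(k) {}

    // Called whenever a peer node shares a freshly generated public key.
    void storeKey(const std::string& node, const std::string& pubKey) {
        std::deque<std::string>& q = queues_[node];
        if (q.size() == k_)      // queue full: drop the oldest key (front)
            q.pop_front();
        q.push_back(pubKey);     // newest key goes to the back
    }

    // Current key used to encrypt messages destined for the given node.
    const std::string& currentKey(const std::string& node) const {
        return queues_.at(node).back();
    }

    std::size_t keyCount(const std::string& node) const {
        return queues_.at(node).size();
    }

private:
    std::size_t k_;
    std::map<std::string, std::deque<std::string>> queues_;
};
```

A `std::deque` gives O(1) insertion at the back and removal at the front, which matches the enqueue/dequeue behavior of $queue_{n}$ described above.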
Algorithm \[alg\_sec\_comm\] shows the steps involved in the proposed secure communication protocol. Once a model of the proposed system is installed, all nodes will periodically generate public-private key pairs for as long as the system is in use. This is accomplished with the help of the hardwired key on the special-purpose security chip and the random number generator module. At the end of every $T$ time units, a new public-private key pair ($newkp_n$) is generated on a node for communicating with replica node $n$. The private key $priv_n$ of $newkp_n$ will be used for decrypting incoming data from node $n$, and the public key $pub_n$ of $newkp_n$ will be shared with node $n$. For ease of access to keys during decryption, the current private keys of all nodes are stored in an array $arr_{priv}[]$. Once a public key $pub_{n}$ is shared with node $n$, all incoming messages from node $n$ will only be decrypted using the associated $priv_{n}$ for the next $T$ time units. An array of queues, $arr_{pub}[]$, is used to store the public keys received from all other nodes. When a node has to send a message $msg$ to its replica nodes, the public key of each replica is used to create an encrypted message $msg_e$.

$newkp_{n} \gets$ get new public private key pair ($TPM$)
$pub_{n} \gets$ get public key from $newkp_{n}$
$priv_{n} \gets$ get private key from $newkp_{n}$
$node_{n} \gets send (pub_{n})$
$arr_{priv}[n] \gets priv_{n}$
$dequeue(queue_{n})$
$queue_{n} \gets enqueue (pub_{n})$
$arr_{pub}[n] \gets queue_{n}$
$msg$ to be sent to all replicas
$pub_{r} \gets back(arr_{pub}[n])$
$msg_{e} \gets encrypt(msg, pub_{r})$
$send(msg_{e})$

Detection Algorithm
-------------------

The main part of the proposed system is the attack detection algorithm, which is explained in this subsection. Our attack detection algorithm is a two-step process: process profiling (step 1) and consensus through hash matching (step 2).
$proc_{new} \gets$ get newly scheduled process
$code \gets$ get assembly code from $HotSpotVM(proc_{new})$
$seq_{jump} \gets$ add $instr$ to sequence of jumps
$seq_{call} \gets$ add $instr$ to sequence of calls
$seq_{return} \gets$ add $instr$ to sequence of returns
$seq_{array} \gets$ add $seq_{jump}$
$seq_{array} \gets$ add $seq_{call}$
$seq_{array} \gets$ add $seq_{return}$
$hash_{seq} \gets$ get hash from $sha(seq)$
$hash_{hashes} \gets$ add $hash_{seq}$
$msg \gets$ get hash from $sha(hash_{hashes})$
send $msg$ using Secure Communication Protocol

### Step 1: Process Profiling

Traditionally, vulnerability scanning is performed away from the source program's execution domain to guarantee isolation. Hence, the results of such a scan must be communicated back to the program. But this leads to a cost-versus-isolation trade-off, depending on the remoteness of the location used to perform the vulnerability scan. In big data applications, the source program's execution is distributed across multiple nodes of the cluster. This makes it difficult to implement techniques such as vulnerability scans on big data systems. But big data infrastructures use replication of data for high availability. This forces the same program to be run on the multiple nodes that host the data required by the program. We exploit this unique property of big data systems and introduce a variation of CFI to create a novel process profiling technique that can help detect insider attacks in big data systems. Evans et al. [@Jujutsu] show that CFI, whether with a limited or an unlimited number of tags, is not completely effective in attack prevention. Also, CFI is usually based on a CFG created from static analysis of program code. Most big data applications are packaged as *jars* that run on Java Virtual Machines (JVMs). These jars are not completely compiled and do not convey much about the program they represent. Hence, we do not use CFI on CFGs created using static code analysis.
We propose to build the control structure of a program from its corresponding JVM output, i.e., the assembly code of the HotSpot VM that hosts the JVM. Since this is considered the final run-time code that gets executed on the hardware, the control structure generated from the output of the HotSpot VM is expected to be less susceptible to software attacks than a CFG generated from static analysis of program code. In the context of big data platforms, this mitigates the possibility of launching an attack on the entire cluster. Another major variation from CFI in our process profiling technique is the use of individual control flow instruction sequences instead of CFG paths. Control instructions dictate the control flow in a program. Generating instruction sequences of such control flow instructions from the assembly code output of the HotSpot VM should technically give us all the information a CFG can provide in this context, while avoiding the complexity involved in generating a CFG.

![image](algo.pdf){width="90.00000%" height="3.25in"}

### Step 2: Hash Matching and Consensus

$msg_{p} \gets$ get message about process p from main copy
$hash_{hashes}(received_{p}) \gets decrypt(msg_{new}, priv_{k})$
$hash_{hashes}(local_{p}) \gets process-profile(p)$
$confirmation \gets$ safe
$confirmation \gets$ unsafe
$send(confirmation, main)$

The analyzer module in the proposed system creates instruction sequences for jumps, calls, and returns from the JVM output of a given program (based on Intel's instruction set architecture). Then, the SHA cryptographic hash function module is used to generate a fixed-length output for each of the three instruction sequences. All three hashes are combined and again given to the SHA cryptographic hash function module to generate a final hash for the program. This hash of hashes strengthens the uniqueness in identifying a program. All programs that run on every node in the cluster follow the same routine.
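The profiling routine described above can be sketched as follows, under several simplifying assumptions: `profileProcess` is a hypothetical name, `std::hash` stands in for the SHA module, and control instructions are recognized by mnemonic prefix rather than by matching against the full x86 instruction set.

```cpp
#include <cassert>
#include <functional>
#include <sstream>
#include <string>

// Sketch of the process-profiling step: scan the assembly listing emitted
// by the HotSpot VM, collect jump/call/return mnemonics into three ordered
// sequence strings, hash each sequence, then hash the concatenated hashes
// to obtain the "hash of hashes" that identifies the program.
// std::hash is a stand-in for SHA; mnemonic matching is simplified.
inline std::string profileProcess(const std::string& asmCode) {
    std::string seqJump, seqCall, seqRet;
    std::istringstream lines(asmCode);
    std::string line;
    while (std::getline(lines, line)) {
        std::istringstream words(line);
        std::string mnem;
        if (!(words >> mnem)) continue;            // skip blank lines
        if (mnem == "call")                        seqCall += mnem + ";";
        else if (mnem == "ret" || mnem == "retq")  seqRet  += mnem + ";";
        else if (mnem[0] == 'j')                   seqJump += mnem + ";"; // jmp, je, jne, ...
    }
    std::hash<std::string> h;                      // SHA stand-in
    std::string hashes = std::to_string(h(seqJump)) +
                         std::to_string(h(seqCall)) +
                         std::to_string(h(seqRet));
    return std::to_string(h(hashes));              // the hash of hashes
}
```

Note that two listings differing only in data-movement instructions produce the same profile, since only control flow instructions enter the sequences; any change to the control structure changes the final hash.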
The encryption module of the node with the primary copy of the data uses the currently active public keys of the replica nodes to encrypt the hash of hashes and send it to the associated replica node. Hence, this node acts as the $coordinator$ for performing step 2 of the attack detection algorithm. Algorithm \[alg\_profile\] shows the steps involved in the proposed process profiling step. This algorithm runs independently in the analyzer module of every machine in the big data cluster. Every process output, $proc_{new}$, from the HotSpot VM is grabbed by the analyzer module of the proposed system and profiled based on the control flow instructions present in its assembly code. A line-by-line analysis of $proc_{new}$ is conducted, and each instruction $instr$ is matched against the set of control flow instructions available in the instruction set of the processor architecture. For this work, we used only the most prominent control flow instructions of Intel's x86 architecture, i.e., jumps, calls, and returns. When an $instr$ in the $code$ of $proc_{new}$ is a control flow instruction, it gets added to the corresponding sequence string. The $seq_{array}$ represents the array of individual control flow instruction sequences in the process $proc_{new}$. This array is later used as input when generating the hashes for each control sequence string. All fixed-length hash outputs are combined as $hash_{hashes}$ and rehashed to generate a final hash, called $msg$, that represents the program. This $msg$ is then shared with all replicas running the same program using the secure communication protocol described above. The second step in our attack detection algorithm is a consensus algorithm similar to the extended 2-phase commit protocol [@Roger].
In this step, the node with the primary copy of the data acts as the coordinator and requests all replica nodes, which act as workers, to confirm whether their local hash of hashes ($msg$) of a particular process matches the coordinator's version exactly. The coordinator then decides on the safety of the process depending on the acknowledgments received from the participating replica nodes. A process is considered safe by the coordinator if and only if it receives safe acknowledgments from all of the workers. At the end of the process profiling step, the encrypted message $msg_e$ is shared by the coordinator node with all worker nodes. The nodes that receive such messages will decrypt the message with their currently active private key. The decrypted message is essentially the hash of hashes of the three control instruction sequence strings. This decrypted hash of hashes can be directly compared to the local version for the same process to detect the possibility of an attack. If the result of this string comparison is a perfect match, then the same process (with the same code) was run on both nodes. This indicates a safe process, unless both nodes of the cluster were attacked in the same way, in which case the process is wrongly reported as safe. A confirmation message with the result of the hash comparison is sent to the coordinator node as a response to the original incoming message. The coordinator node waits to receive responses from all replicas in order to arrive at a conclusion about the possibility of an attack on a process. The given big data system is safe as long as all the replicas respond with a *safe* confirmation. A single *unsafe* response means that the system is under attack. Algorithms \[alg\_match\] and \[alg\_consensus\] give more details about the hash matching and consensus steps that take place in step 2 of the attack detection algorithm.
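The hash matching and consensus logic can be sketched as below; `workerConfirms` and `coordinatorDetectsAttack` are hypothetical helper names, and the encrypted message exchange over the secure channel is omitted for brevity.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of step 2: each worker compares the coordinator's hash of hashes
// against its locally computed one and answers safe/unsafe; the coordinator
// declares the process safe only if every replica answers safe. In the
// actual system the hashes travel over the encrypted channel described
// earlier; here they are compared directly for illustration.
inline bool workerConfirms(const std::string& receivedHash,
                           const std::string& localHash) {
    return receivedHash == localHash;   // perfect match => "safe"
}

inline bool coordinatorDetectsAttack(const std::vector<bool>& confirmations) {
    for (bool safe : confirmations)
        if (!safe) return true;         // a single "unsafe" reply => attack
    return false;                       // all replicas agreed: no attack
}
```

As in the extended 2-phase commit analogy, the coordinator blocks until every worker has replied, so a unanimous *safe* vote is required before the process is cleared.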
A pictorial representation of the steps involved in our two-step attack detection algorithm is given in Figure \[fig\_algo\]. This figure represents a big data system with a replication factor of 3; hence there is one coordinator (represented with a dark black shadow below the node) and two workers. Active communication channels are represented using dotted lines, while the regular lines between nodes represent passive communication channels. The blue dotted loop around each node in steps 1 and 3 of the figure represents local computations. Algorithm \[alg\_match\] is used in the hash matching step of the attack detection algorithm. When a worker node $node_k$ receives $msg_{p}$ from the coordinator node about a process $p$, it decrypts that message using its current private key $priv_{k}$ and stores the result as $hash_{hashes}(received_{p})$. The local version of the same string, i.e., $hash_{hashes}(local_{p})$, is compared against $hash_{hashes}(received_{p})$ to identify the similarity between the local and received hashes of a process. The result of this hash matching is sent back as $confirmation$ to the coordinator node, $main$. The value of $confirmation$ is *safe* in case of a perfect match of hashes and *unsafe* otherwise.

$confirmation_{node} \gets$ get confirmation about process $p$ from replica
$count_{safe} \gets count_{safe} + 1 $
$attack \gets$ no
$attack \gets$ yes
$master_{node} \gets recovery(p)$

Algorithm \[alg\_consensus\] is used by the coordinator node to identify an attack with the help of the worker nodes. After step 1, the coordinator node waits for responses from all the recipients. The worker nodes respond with a confirmation message that says whether the process is $safe$ or $unsafe$. If the count of $safe$ responses, i.e., $count_{safe}$, from the worker nodes matches the count of $nodes$ in the replica set, i.e.,
$count_{replicas}$, the coordinator node assumes that there is no attack in the current process $p$ and resets the $attack$ variable. Otherwise, if a mismatch in the process analysis is observed, the $attack$ variable is set and the $master_{node}$ is notified about the possibility of an attack in process $p$.

Model of the Proposed System Architecture
-----------------------------------------

The proposed security system is a combination of three parts: the secure communication protocol, process profiling, and hash matching. As shown in Figure \[fig\_framework\], these three parts are made up of multiple modules that need to be installed on all nodes of the big data system. The locality of these modules also greatly impacts the performance of the system. The closer they are to the main processor of a node, the faster and less expensive communication will be. But from a security standpoint, these modules need to be isolated from the main workflow of the big data system. Hence, we designed a model of the proposed system that can fit on isolated special-purpose security hardware chips. Such chips can be built on top of existing security hardware such as TPM or Intel's TXT chips [@TPM; @TXT]. Hardware solutions are commonly thought to hurt the scalability and flexibility of a big data infrastructure, compared to a software solution, which can be very adaptive. But in this case, we avoid such problems by decoupling our solution from the workflow of the big data platform. There will be a one-time extra cost due to the hardware security modules. An overview of the elements in such a model of the proposed system is given in Figure \[fig\_hardarch\]. The functionality of each of these elements is as follows:

![Elements in a Model of the Proposed System Architecture[]{data-label="fig_hardarch"}](hardarch.pdf){width="45.00000%"}

- **Analyzer**, this module gets the data from the HotSpot VM and performs the initial steps of cleaning the data. The result from the analyzer is stored in *Memory*.
- **CFI filter**, this module takes as input a set of assembly-language instructions from the *Analyzer* module (technically, the *Memory* module) and filters out the control flow instructions while maintaining their order.
- **Sequencers**, there are three sequencers in our model, one each for jumps, calls, and returns. Each sequencer goes through the output of the *CFI filter* module and forms a delimited sequence string of the instruction it is associated with. Then, the sequencer uses the *SHA hasher* module to generate and store a fixed-length hash output from the variable-length instruction sequence string.
- **Register Array**, there are 4 registers in this array, to store the message, the jump instruction hash, the call instruction hash, and the return instruction hash.
- **Message Register**, this is a special register in the *Register Array* used to store the message in a thread-safe manner.
- **Message Generator**, this module combines all the individual hash outputs stored in the registers and uses the *SHA hasher* module to generate a fixed-length hash output. This hash of hashes is combined with the process metadata to generate and store a message that represents the process.
- **Encryptor / Decryptor**, this module uses the *Key Store* to access the current set of public/private keys and the *Message Register* to access the current process message. The encryptor uses the public key of a replica node from the *Key Store* to encrypt the message in the *Message Register*. The decryptor uses the private key of the node from the *Key Store* to decrypt an incoming message.
- **Comparator**, this module performs string comparison between the local message (hash of hashes) and a received message.
- **Key Generator**, this module uses the underlying TPM/TXT chip's [@TPM; @TXT] built-in functionality. The hardwired key and the random number generator of the security chip are used to generate a new public/private key pair, and the timer of the chip triggers this action periodically.
- **Key Store**, this module uses an array of memory locations to store the public-key queues of all replica nodes and the current public/private key pair of this node. The three most recent public keys of each replica node are stored in its queue.
- **Exchanger**, this module uses the TCP/IP protocol to exchange messages with other nodes.

Experiments and Results {#sec:experiments}
=======================

In this section we describe the experimental setup, explain our choice of experiments in detail, and analyze the results. The Hadoop security design specifies that a 3% slowdown in performance is permissible for any newly proposed security solution [@Malley]. Hence, it is important for the proposed system to offer both theoretical correctness and feasibility in practical implementation and usage. Security in big data systems is a new area that does not have established standards or specifically designed open-source benchmarks to evaluate the overhead. Hence, we handpicked a set of general big data benchmark programs, relevant to and provided by the big data community, to test the efficiency of our proposed security system.

Setup
-----

The three big data services used for our experiments are:

- **Hadoop [@Hadoop]**, the most popular implementation of a big data framework, maintained by the Apache open-source community. It allows storing and processing of large data using programming models such as MapReduce.
- **Spark [@Spark]**, a fast and general engine for large-scale data processing that is reportedly much faster than Hadoop; it is also maintained by the Apache open-source community.
- **Amazon Web Services (AWS) [@AWS; @EC2; @EBS]**, a representative real-world big data system. AWS provides the Elastic Compute Cloud (EC2) service, which allows users to use the Amazon cloud's compute capacity depending on their needs. EC2 presents a true virtual computing environment.
Storage for the EC2 nodes is provided by Amazon Elastic Block Store (EBS), which offers persistent storage. EBS volumes are automatically replicated to protect the user from component failure, offering high availability and durability.

  **Attribute**            **Hadoop Cluster**      **Spark Cluster**
  ------------------------ ----------------------- ---------------------
  Instance Model           t2.micro                m1.large
  Processor                Intel Xeon with Turbo   Intel Xeon E5-2650
  Compute Units            1 (Burstable)           4
  vCPU                     1                       2
  Memory (GB)              1                       7.5
  Storage (SSD)            Elastic Block Store     Elastic Block Store
  Networking Performance   low                     moderate
  Operating System         Linux/UNIX              Linux/UNIX
  Hadoop distribution      2.7.1                   2.7.1
  Spark distribution       N/A                     1.6

We used AWS-supported Hadoop and Spark clusters for conducting our experiments. The *Hadoop Cluster* that we used is a 5-node cluster built using basic t2.micro nodes of Amazon EC2 and EBS. Each node is equipped with only 1 vCPU and 1 GB of memory. The network performance is minimal for this cluster. The *Spark Cluster* that we used is a 4-node cluster built using general-purpose m1.large nodes of Amazon EC2 and EBS. Each node is equipped with 2 vCPUs and 7.5 GB of memory. Network performance is moderate for this cluster. Both cluster configurations satisfy the minimum requirement to support a replication factor of 3. The hardware and software configurations of the EC2 nodes can be found in Table \[table\_ec2\]. We built a 64-bit Ubuntu AMI (Amazon Machine Instance) for each node type before setting up the clusters. These AMIs were equipped with the latest distributions of Hadoop, Spark, and GCC, along with our code base. The Hadoop cluster had 5 nodes: 1 node acted as the namenode, 1 node acted as the secondary namenode, and 3 nodes acted as data nodes. The Spark cluster had a master and 3 slave nodes. Since our proposed system works independently, all modules of the model had to be installed on every node of the EC2 clusters.
A library of all modules in the model was implemented in the C++ programming language using the STL and multi-threading libraries, and packaged together. Our code used the TCP/IP protocol and SSH keys for communication between the nodes of the clusters. [0.45]{}[| C | C | C |]{} **Exp.no**&**Name**&**Description**\ 1& aggregatewordcount &An Aggregate-based map/reduce program that counts the words in the input files.\ 2& aggregatewordhist &An Aggregate-based map/reduce program that computes the histogram of the words in the input files.\ 3& bbp &A map/reduce program that uses Bailey-Borwein-Plouffe to compute exact digits of Pi.\ 4& distbbp &A map/reduce program that uses a BBP-type formula to compute exact bits of Pi.\ 5& grep &A map/reduce program that counts the matches of a regex in the input.\ 6& pi &A map/reduce program that estimates Pi using a quasi-Monte Carlo method.\ 7& randomtextwriter &A map/reduce program that writes 10GB of random textual data per node.\ 8& randomwriter &A map/reduce program that writes 10GB of random data per node.\ 9& sort &A map/reduce program that sorts the data written by the random writer.\ 10& teragen &Generate data for the terasort.\ 11& terasort &Run the terasort.\ 12& teravalidate &Check the results of the terasort.\ 13& wordcount &A map/reduce program that counts the words in the input files.\ 14& wordmean &A map/reduce program that counts the average length of the words in the input files.\ 15& wordmedian &A map/reduce program that counts the median length of the words in the input files.\ 16& wordstandarddeviation &A map/reduce program that counts the standard deviation of the length of the words in the input files.\ [0.45]{}[| C | C | C |]{} **Exp.no**& **Name** &**Description**\ 1&fp-growth& Frequent Pattern Matching Tests to find frequent item sets\ 2&word2vec& Feature Transformation Tests for distributed representation of words\ 3&chi-sq-feature& Statistic Toolkit Tests using Chi-square for correlation\ 4&spearman& Statistic Toolkit
Tests using Spearman’s Correlation\ 5&pearson& Statistic Toolkit Tests using Pearson’s Correlation\ 6&block-matrix-mult& Matrix Multiplication on distributed matrix\ 7&summary-statistics& Linear Algebra Tests using Summary Statistics (min, max, ...)\ 8&pca& Linear Algebra Tests using Principal Component Analysis\ 9&svd& Linear Algebra Tests using Singular Value Decomposition\ 10&gmm& Clustering Tests using Gaussian Mixture Model\ 11&kmeans& Clustering Tests using K-Means clustering\ 12&als& Recommendation Tests using Alternating Least Squares\ 13&decision-tree& Random Forest Decision Tree\ 14&naive-bayes& Classification Tests using Naive Bayes\ 15&glm-classification& Generalized Linear Classification Model\ 16&glm-regression& Generalized Linear Regression Model\ While the main requirement for any attack detection service is to detect an attack successfully, detecting the attack before the attacked program completes execution is also a necessity. We show the efficiency and the overhead of the proposed system by conducting the experiments in real time using popular examples and tests. We used two sets of open-source big data benchmark programs: (a) 16 *Hadoop MapReduce examples* provided in the Apache Hadoop installation kit; and (b) 16 *Spark-perf MLlib tests* for machine learning algorithms given in the Spark performance test suite by Databricks [@Databricks]. More details about these examples and tests are given in Tables \[table\_examples\_hadoop\] and \[table\_examples\_spark\]. The input to our model (built from the proposed system) is the run-time assembly code of a program. The Hadoop MapReduce examples are coded in Java and the Spark-perf MLlib tests in Scala, so the code that runs from their jars is produced by just-in-time compilation; the bytecodes alone are therefore insufficient to create the assembly codes of the individual programs.
We used a tool called jit-watch [@Jitwatch] to generate the assembly code (Intel x86 syntax) of the programs from the jars. Since our algorithm only needs the control-flow instructions from the generated assembly output of each program, we used a custom parser that filters out control flow instructions from the native files. All 32 example programs are infected with a code snippet that calls a function `foo` to print a line to the console; the snippet involves a total of 3 `call` instructions and 1 `return` instruction. The command used to generate the assembly code output of the JVM (HotSpot VM) when running a program is: `java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly -XX:PrintAssemblyOptions=intel -XX:+TraceClassLoading -XX:+LogCompilation -XX:LogFile=filename -cp [path to additional classes] [main method] [args]` First, we measured the execution times of the Hadoop MapReduce examples on the Hadoop cluster. Then we studied the run times of the implemented model while it analyzed the assembly codes of the driver programs of the same examples. These experiments are ad hoc because the input arguments for some of the experiments were intentionally low, to simulate worst-case scenarios where the process takes very little time to execute. To meet the input data requirements of the MapReduce examples, we put the configuration file data from the `etc` folder of Hadoop into HDFS.
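Since the custom parser is not listed in the paper, the filtering step can be sketched as follows. This is a minimal illustration only, assuming Intel-syntax lines whose first token is the mnemonic; it classifies `call`, `ret`, and all `j*` mnemonics (`jmp` plus conditional jumps) as control flow instructions.

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// True for x86 control-flow mnemonics: calls, returns, and jumps
// (jmp as well as conditional jumps such as je, jne, jg, ...).
bool isControlFlow(const std::string& mnemonic) {
    if (mnemonic == "call" || mnemonic == "ret") return true;
    return !mnemonic.empty() && mnemonic[0] == 'j';
}

// Reduce a dump of assembly lines to its control instruction sequence
// (CIS), keeping only the mnemonics of control flow instructions.
// Assumes the mnemonic is the first token of each line.
std::vector<std::string> filterCFI(const std::vector<std::string>& lines) {
    std::vector<std::string> cis;
    for (const std::string& line : lines) {
        std::istringstream iss(line);
        std::string mnemonic;
        if (iss >> mnemonic && isControlFlow(mnemonic)) cis.push_back(mnemonic);
    }
    return cis;
}
```

A single pass like this is all that is needed, which is why the parse can run independently on each node.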
The generic command used to run these mapreduce examples is: `time hadoop jar hadoop-mapreduce-examples.jar [main method] [args]` **Exp.no** **Example** **Instruction Count** **CFI** **Jumps** **Calls** **Returns** **%CFI** **% Jumps** **% Calls** **% Returns** ------------ ----------------------- ----------------------- --------- ----------- ----------- ------------- ------------ ------------- ------------- --------------- 1 aggregatewordcount 81713 17195 12722 4009 464 21.04% 15.57% 4.91% 0.57% 2 aggregatewordhist 48428 9812 7133 2366 313 20.26% 14.73% 4.89% 0.65% 3 bbp 85514 17880 13182 4211 487 20.91% 15.42% 4.92% 0.57% 4 distbbp 68283 13880 10234 3238 408 20.33% 14.99% 4.74% 0.60% 5 grep 81404 16911 12501 3937 473 20.77% 15.36% 4.84% 0.58% 6 pi 65397 13607 10170 3070 367 20.81% 15.55% 4.69% 0.56% 7 randomtextwriter 70909 14896 11186 3332 378 21.01% 15.78% 4.70% 0.53% 8 randomwriter 91414 19462 14508 4475 479 21.29% 15.87% 4.90% 0.52% 9 sort 101298 21420 16003 4885 532 21.15% 15.80% 4.82% 0.53% 10 teragen 134747 28228 21013 6516 699 20.95% 15.59% 4.84% 0.52% 11 terasort 121541 25420 18925 5827 668 20.91% 15.57% 4.79% 0.55% 12 teravalidate 139583 29244 21838 6630 776 20.95% 15.65% 4.75% 0.56% 13 wordcount 77393 16341 12100 3791 450 21.11% 15.63% 4.90% 0.58% 14 wordmean 62412 13093 9726 2994 373 20.98% 15.58% 4.80% 0.60% 15 wordmedian 66401 13435 9869 3161 405 20.23% 14.86% 4.76% 0.61% 16 wordstandarddeviation 82079 16917 12492 3932 493 20.61% 15.22% 4.79% 0.60% 86157 17984 13350 4148 485 **20.83%** **15.45%** **4.81%** **0.57%** **Exp.no** **Algorithm** **Instruction Count** **CFI** **Jumps** **Calls** **Returns** **% CFI** **% Jumps** **% Calls** **% Returns** ------------ -------------------- ----------------------- --------- ----------- ----------- ------------- ------------ ------------- ------------- --------------- 1 fp-growth 216009 46544 35200 10251 1093 21.55% 16.30% 4.75% 0.51% 2 word2vec 147737 30235 22638 6772 825 20.47% 15.32% 4.58% 0.56% 3 
chi-sq-feature 172014 35783 26736 8119 928 20.80% 15.54% 4.72% 0.54% 4 spearman 194615 41043 30857 9155 1031 21.09% 15.86% 4.70% 0.53% 5 pearson 184628 38694 28996 8691 1007 20.96% 15.71% 4.71% 0.55% 6 block-matrix-mult 195714 41245 31030 9174 1041 21.07% 15.85% 4.69% 0.53% 7 summary-statistics 196555 41034 30736 9235 1063 20.88% 15.64% 4.70% 0.54% 8 pca 192280 40427 30377 9020 1030 21.03% 15.80% 4.69% 0.54% 9 svd 143996 29684 22334 6550 800 20.61% 15.51% 4.55% 0.56% 10 gmm 170722 35655 26848 7898 909 20.88% 15.73% 4.63% 0.53% 11 kmeans 170694 35842 26957 7962 923 21.00% 15.79% 4.66% 0.54% 12 als 181836 38032 28603 8428 1001 20.92% 15.73% 4.63% 0.55% 13 decision-tree 175889 36655 27546 8140 969 20.84% 15.66% 4.63% 0.55% 14 naive-bayes 171945 36053 27036 8082 935 20.97% 15.72% 4.70% 0.54% 15 glm-classification 186454 39088 29362 8715 1011 20.96% 15.75% 4.67% 0.54% 16 glm-regression 200255 42439 32020 9346 1073 21.19% 15.99% 4.67% 0.54% 181334 38028 28580 8471 977 **20.95%** **15.74%** **4.67%** **0.54%** The spark-perf MLlib tests on the Spark cluster were conducted the same way the MapReduce examples were tested, but here the inputs for the tests were predetermined by the benchmark provider in the `config.py` script. The generic command used to run these MLlib tests is: `spark-submit --class mllib.perf.TestRunner --master [ip of node] --driver-memory [limit] mllib-perf-tests-assembly.jar [algorithm name] [args]` Results and Analysis -------------------- The experiments we used for evaluating our proposed security system comprise stress tests and performance benchmarks of Hadoop and Spark. Hence, knowing which threads of investigation to follow and which to ignore was challenging. We chose to focus on the execution time and code size of the experiments. The overhead in our experiments is calculated from time measurements.
We divide the time taken to detect an attack in a process $p$ by the execution time of the same process and multiply the result by 100 to find the percentage of time overhead, as given in Equation \[eq:overhead\]. Here $time_{detect}(p)$ is calculated using system clock measurements for encrypting process analysis information, decrypting received messages and hash matching. The communication cost of sending data packets from one node to another is not included. The overhead calculations show the worst-case scenario, since the input arguments are intentionally low for some of the experiments. Real-world big data programs will be much more complex jobs, and hence their overhead will be much lower than what is shown here. Tables \[table\_hadoop\_time\] and \[table\_spark\_time\] and Figures \[subfig\_time\_hadoop\] and \[subfig\_time\_spark\] show the analysis of run times for executing the experiments and the model built from the proposed system. On average, the overhead of running the model is 3.28%. We used linear regression and best-fit plots, given in Figures \[subfig\_forecast\_hadoop\] and \[subfig\_forecast\_spark\], to show the relation between program size (given in number of control flow instructions of the assembly representation) and the time to detect an attack. The time taken to execute example number 4, the *distbbp* program of the Hadoop MapReduce example set, was too high (288 seconds) to plot on the graph shown in Figure \[subfig\_time\_hadoop\].
$$\label{eq:overhead} \% overhead(p) = \frac{time_{detect}(p)}{time_{execute}(p)} \times 100$$ \[table\_hadoop\_time\] [0.45]{}[|C|C|C|C|]{} **Exp.no** & **Time to Execute** & **Time to Detect** & **% Overhead**\ 1& 17.56& 0.69& 3.93%\ 2& 20.14& 0.42& 2.10%\ 3& 6.39& 0.76& 11.84%\ 4& 287.62& 0.67& 0.23%\ 5& 7.96& 0.79& 9.89%\ 6& 6.48& 0.72& 11.12%\ 7& 37.63& 0.77& 2.05%\ 8& 31.51& 0.97& 3.07%\ 9& 41.71& 1.57& 3.75%\ 10& 4.45& 1.46& 32.82%\ 11& 4.99& 1.37& 27.37%\ 12& 4.61& 1.47& 31.96%\ 13& 6.68& 0.99& 14.86%\ 14& 6.63& 0.90& 13.63%\ 15& 6.64& 0.92& 13.82%\ 16& 7.76& 1.08& 13.88%\ Average Values& 31.17& 0.97& **3.12%**\ \[table\_spark\_time\] [0.45]{}[|C|C|C|C|]{} **Exp.no** & **Time to Execute** & **Time to Detect** & **% Overhead**\ 1& 2.92& 0.34& 11.67%\ 2& 12.942& 0.24& 1.87%\ 3& 3.899& 0.28& 7.19%\ 4& 15.708& 0.33& 2.08%\ 5& 3.314& 0.31& 9.23%\ 6& 3.011& 0.34& 11.31%\ 7& 5.312& 0.35& 6.63%\ 8& 8.124& 0.34& 4.23%\ 9& 24.647& 0.30& 1.21%\ 10& 4.584& 0.33& 7.24%\ 11& 7.529& 0.35& 4.69%\ 12& 16.884& 0.36& 2.12%\ 13& 31.963& 0.37& 1.17%\ 14& 1.664& 0.37& 22.34%\ 15& 8.151& 0.41& 5.05%\ 16& 8.542& 0.45& 5.26%\ Average Values& 9.950& 0.34& **3.44%**\ The proposed system performs a similarity check of control flow within duplicate processes running on different nodes of a big data cluster. This control flow similarity check is performed by matching control instruction sequences. Since the infected node is predetermined in our experiments, our test cases do not have a false positive or false negative. But a false positive will occur when all data nodes are attacked in the same way. A false negative will occur in case of runtime attacks or attacks that originate outside the big data platform. But given our attack model, such cases are outside the scope of this work. Instead, we try to understand the control flow in the programs used in the experiments section, i.e. hadoop mapreduce examples and the spark performance tests for machine learning algorithms. 
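The hash string matching described above can be made concrete with a short sketch. The digest function and the majority rule below are illustrative assumptions (`std::hash` stands in for whatever hash the implementation uses); the point is that each replica reduces its control instruction sequence to a fixed-size digest, and a replica whose digest disagrees with the majority of its peers is flagged.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Reduce a control instruction sequence (CIS) to a fixed-size digest.
// std::hash is a stand-in; any strong hash function would do.
std::size_t hashCIS(const std::vector<std::string>& cis) {
    std::string joined;
    for (const std::string& m : cis) { joined += m; joined += ';'; }
    return std::hash<std::string>{}(joined);
}

// Given the digests reported by the replicas of one process, flag the
// replicas whose digest deviates from the majority as suspect.
std::vector<int> flagSuspects(const std::vector<std::size_t>& digests) {
    std::map<std::size_t, int> votes;
    for (std::size_t d : digests) ++votes[d];
    std::size_t majority = digests.at(0);
    for (const auto& kv : votes)
        if (kv.second > votes[majority]) majority = kv.first;
    std::vector<int> suspects;
    for (int i = 0; i < static_cast<int>(digests.size()); ++i)
        if (digests[i] != majority) suspects.push_back(i);
    return suspects;
}
```

This also illustrates the failure mode noted above: if every replica is attacked identically, all digests still agree and nothing is flagged.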
Results from Tables \[table\_hadoop\_values\] and \[table\_spark\_values\] and Figures \[subfig\_hadoop\] and \[subfig\_spark\] show instruction-level properties of the examples and tests used in our experiments. It can be observed that only 20.8% of the total instruction count in the Hadoop MapReduce examples corresponds to control flow instructions. In the case of the Spark performance tests for machine learning algorithms, 20.9% of the instructions in the assembly code are control flow instructions. Of all control flow instructions, `jumps` are the most heavily used CFI, taking the lion's share: 15.45% of the total instruction count in the Hadoop MapReduce examples and 15.74% in the Spark performance tests. `Calls` and `returns` cover only 4.8% and 0.5% respectively in the Hadoop MapReduce example set, and 4.6% and 0.5% respectively in the Spark performance test set. It can be inferred from these results that control flow instructions account for only one-fifth of the total instruction count of a program (assembly code). This agreement between the two sets of programs is remarkable because (a) they belong to different domains (MapReduce on Hadoop, machine learning in Spark); (b) their source programming languages differ (Java for the Hadoop MapReduce examples, Scala for the Spark-perf machine learning tests); and (c) they differ in program size (86,000 instructions on average per program for the MapReduce example set versus 180,000 for the Spark-perf machine learning tests). This observation strengthens our initial argument that generating a dynamic CFG for large and complex big data programs is cumbersome, because the size of a CFG grows with the number of lines of code, which in turn is tied to the number of instructions. Hence, the proposed idea of generating CISs and hashing them is a good alternative that sidesteps the CFG memory complexity problem.
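As a quick arithmetic check, the percentages quoted above follow directly from the raw counts in Table \[table\_hadoop\_values\]; e.g. for `aggregatewordcount`, 17195 CFI out of 81713 instructions gives 21.04%.

```cpp
#include <cassert>
#include <cmath>

// Share of the total instruction count taken by one class of
// control flow instructions, in percent.
double cfiShare(long count, long total) {
    return 100.0 * static_cast<double>(count) / static_cast<double>(total);
}
```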
The overhead incurred in using the model built from the proposed system architecture is less than 3.28% when it is hosted on the same hardware that hosts the big data system. This is within the acceptable range of overhead for big data platforms like Hadoop. The time our system takes to analyze the programs and compare the results depends linearly on the number of control flow instructions in the program, not on the number of lines of assembly code. This greatly reduces the complexity of the similarity analysis compared with the conventional, and more complex, approach of generating a CFG. Also, generating a CIS needs only a one-time parse through the program code (assembly code) and can be performed independently and in parallel on each node of the cluster. The experimental results show the feasibility of implementing a model of the proposed system. Building and implementing a detailed version of this system should demonstrate even lower overhead and help convince vendors to adopt it. Conclusion {#sec:conclusion} ========== In this paper, we proposed a security system for big data systems that detects insider attacks quickly and with low overhead. The system consists of a two-step attack detection algorithm and a secure communication protocol. A simple hash string matching technique is proposed to perform the distributed process similarity check and identify attacks. A secure communication protocol for data nodes that uses periodically generated random keys is proposed to support the detection algorithm. A model of the proposed system was tested in real time on Amazon's EC2 clusters using different sets of Hadoop and Spark programs. The time overhead was 3.28%, and the results show that the proposed security system needs only 20% of the program code to detect attacks. In this work, we also propose the idea of delegating security to an independent module, and the components needed for such models are discussed.
For future work, we would like to evaluate our system on security-related big data benchmarks (when available). Also, we would like to realize the hardware architecture of security chips that can independently support our system. [1]{} IDC. New IDC Forecast Sees Worldwide Big Data Technology and Services Market Growing to \$ 48.6 Billion in 2019, Driven by Wide Adoption Across Industries. IDC, 09 Nov. 2015. Web. 01 Jan. 2016. Vormetric. “2015 Insider Threat Report.” Vormetric, Inc, 01 Sept. 2015. Web. 01 Jan. 2016. Salem, Malek Ben, Shlomo Hershkop, and Salvatore J. Stolfo. “A survey of insider attack detection research.” Insider Attack and Cyber Security. Springer US, 2008. 69-90. White, Tom. Hadoop: The Definitive Guide. O’Reilly Media, Inc., 2012. Zaharia, Matei, et al. “Spark: cluster computing with working sets.” Proceedings of the 2nd USENIX conference on Hot topics in cloud computing. Vol. 10. 2010. Neuman, B. Clifford, and Theodore Ts’o. “Kerberos: An authentication service for computer networks.” Communications Magazine, IEEE 32.9 (1994): 33-38. Aditham, Santosh, and Nagarajan Ranganathan. “A novel framework for mitigating insider attacks in big data systems.” Big Data (Big Data), 2015 IEEE International Conference on. IEEE, 2015. Khandelwal, Swati. “Juniper Firewalls with ScreenOS Backdoored Since 2012.” The Hacker News, 18 Dec. 2015. Web. 01 Jan. 2016. Sirota, James, and Sheetal Dolas. “OpenSOC.” Open Security Operations Center. Cisco, 05 June 2014. Web. 01 Jan. 2016. Bajikar, Sundeep. “Trusted Platform Module (TPM) Based Security on Notebook PCs (white paper).” Mobile Platforms Group, Intel Corporation (2002): 1-20. Greene, James. “Intel Trusted Execution Technology.” Intel Technology White Paper (2012). Schultz, E. Eugene. “A framework for understanding and predicting insider attacks.” Computers & Security 21.6 (2002): 526-531. Ghosh, Anup K., Aaron Schwartzbard, and Michael Schatz.
“Learning Program Behavior Profiles for Intrusion Detection.” Workshop on Intrusion Detection and Network Monitoring. Vol. 51462. 1999. Lunt, Teresa. “Detecting intruders in computer systems.” Proceedings of the 1993 conference on auditing and computer technology. Vol. 61. 1993. Spitzner, Lance. “Honeypots: Catching the insider threat.” Computer Security Applications Conference, 2003. Proceedings. 19th Annual. IEEE, 2003. Froomkin, A. Michael. “Anonymity and its enmities.” J. Online L. art. 1995 (1995): 4-5. Khalil, Issa, Abdallah Khreishah, and Muhammad Azeem. “Consolidated Identity Management System for secure mobile cloud computing.” Computer Networks 65 (2014): 99-110. Sandhu, Ravi S., et al. “Role-based access control models.” Computer 2 (1996): 38-47. Coppersmith, Don. “The Data Encryption Standard (DES) and its strength against attacks.” IBM journal of research and development 38.3 (1994): 243-250. Goyal, Vipul, et al. “Attribute-based encryption for fine-grained access control of encrypted data.” Proceedings of the 13th ACM conference on Computer and communications security. Acm, 2006. Abadi, Martín, et al. “Control-flow integrity.” Proceedings of the 12th ACM conference on Computer and communications security. ACM, 2005. Abadi, Martín, et al. “A theory of secure control flow.” Formal Methods and Software Engineering. Springer Berlin Heidelberg, 2005. 111-124. Evans, Isaac, et al. “Control jujutsu: On the weaknesses of fine-grained control flow integrity.” Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security. ACM, 2015. Chan, Patrick PF, and Christian Collberg. “A Method to Evaluate CFG Comparison Algorithms.” Quality Software (QSIC), 2014 14th International Conference on. IEEE, 2014. Pattabiraman, Karthik, et al. “Discovering application-level insider attacks using symbolic execution.” Emerging Challenges for Security, Privacy and Trust. Springer Berlin Heidelberg, 2009. 63-75. Oltsik, Jon. 
“2013 Vormetric/ESG Insider Threats Survey: The Ominous State of Insider Threats.” Enterprise Strategy Group and Vormetric (2013). Duncan, Adrian, Sadie Creese, and Michael Goldsmith. “An overview of insider attacks in cloud computing.” Concurrency and Computation: Practice and Experience (2014). Varadharajan, Vijay, and Udaya Tupakula. “Security as a service model for cloud environment.” Network and Service Management, IEEE Transactions on 11.1 (2014): 60-75. Jager, Hubert A., et al. “Sealed Cloud—A Novel Approach to Safe Guard Against Insider Attacks.” Trusted Cloud Computing. Springer International Publishing, 2014. 15-34. Liang, Qilian, et al. “Security in big data.” Security and Communication Networks 8.14 (2015): 2383-2385. O’Malley, Owen, et al. “Hadoop security design.” Yahoo, Inc., Tech. Rep (2009). Hu, Daming, et al. “Research on Hadoop Identity Authentication Based on Improved Kerberos Protocol.” (2015). Gaddam, Ajit. “Data Security in Hadoop.” Data Security in Hadoop (2015): n. pag. 20 Feb. 2015. Web. 10 Jan. 2016. Adluru, Pradeep, Srikari Sindhoori Datla, and Xiaowen Zhang. “Hadoop eco system for big data security and privacy.” Systems, Applications and Technology Conference (LISAT), 2015 IEEE Long Island. IEEE, 2015. “Hadoop Tutorial: How to Refine and Visualize Server Log Data.” Hortonworks. N.p., 2014. Web. 17 Jan. 2016. Khalil, Issa, Zuochao Dou, and Abdallah Khreishah. “TPM-based Authentication Mechanism for Apache Hadoop.” (2015). Cohen, Jason C., and Subrata Acharya. “Towards a trusted HDFS storage platform: Mitigating threats to Hadoop infrastructures using hardware-accelerated encryption with TPM-rooted key protection.” Journal of Information Security and Applications 19.3 (2014): 224-244. Brockmeyer, Roger L., et al. “Extension of two phase commit protocol to distributed participants.” U.S. Patent No. 5,546,582. 13 Aug. 1996. Varia, Jinesh. “Architecting for the cloud: Best practices.” Amazon Web Services (2010). Amazon, E. C. 
“Amazon elastic compute cloud (Amazon EC2).” Amazon Elastic Compute Cloud (Amazon EC2) (2010). Amazon, E. B. S. “Amazon Elastic Block Store.” (2010). “Databricks/spark-perf.” GitHub. Apache, 2015. Web. 17 Jan. 2016. Newland, Chris. “AdoptOpenJDK/jitwatch.” GitHub. FreeBSD, 2013. Web. 17 Jan. 2016. [Santosh Aditham]{} received the B.Tech. degree in computer science from Andhra University, Visakhapatnam, India, in 2007 and the M.S. degree in computer science from the Texas Tech University, Lubbock, TX in 2010. Currently he is a PhD candidate in computer science and engineering department from the University of South Florida, Tampa, FL. He likes to work at computer architecture and operating systems level. His research interests include security of big data systems and low-power scheduling in distributed systems. Mr. Aditham is a student member of the IEEE Computer Society. [Nagarajan “Ranga” Ranganathan]{} received the B.E. (Honors) degree in Electrical and Electronics Engineering from Regional Engineering College (National Institute of Technology) Tiruchi, University of Madras, India, 1983, and the Ph.D. degree in Computer Science from the University of Central Florida, Orlando in 1988. He is a Distinguished University Professor Emeritus of Computer Science and Engineering at the University of South Florida, Tampa. During 1998-99, he was a Professor of Electrical and Computer Engineering at the University of Texas at El Paso. His research interests include VLSI circuit and system design, VLSI design automation, multi-metric optimization in hardware and software systems, computer architecture, reversible logic and parallel computing. He has developed many special purpose VLSI circuits and systems for computer vision, image and video processing, pattern recognition, data compression and signal processing applications. He and his students have developed several VLSI CAD algorithms based on decision theory, game theory, auction theory and Fuzzy modeling. 
He has co-authored about 300 papers in refereed journals and conferences, five book chapters and co-owns eight U.S. patents and two pending. Dr. Ranganathan was elected as a Fellow of IEEE in 2002 for his contributions to algorithms and architectures for VLSI systems and as Fellow of AAAS in 2012.
--- author: - 'T. Laffargue' - 'P. Sollich' - 'J. Tailleur' - 'F. van Wijland' bibliography: - './Chaos\_letter.bib' title: 'Large-scale fluctuations of the largest Lyapunov exponent in diffusive systems' --- The long history of cross-fertilisation between statistical mechanics and chaos theory has led to the emergence of many subfields where they are deeply intertwined [@Dorfman1999], from the foundation of statistical physics [@Penrose1979; @Gaspard1998a; @Dettmann1999; @Grassberger1999], to the discovery of the fluctuation theorems [@Evans1993; @Gallavotti1995] and the study of random dynamical systems [@Benzi1984; @Arnold1986; @Graham1988; @Paladin1995]. Paramount here is the idea that studying fluctuations of dynamical observables could allow one to extend statistical mechanics from configuration space into trajectory space. Indeed, while the usual statistical mechanics tells us about the phases of a given system, it remains silent about their dynamical nature. Working in trajectory space allows one to answer this question in an elegant way. Over the past ten years, a number of methods have been found for studying fluctuations directly in trajectory space [@Derrida2004; @Bodineau2004; @Bertini2005; @Bertini2005a; @Imparato2009; @Brunet2010; @Lecomte2010; @Hurtado2011; @Meerson2014; @Garrahan2007; @Turci2011; @Lecomte2012; @Speck2012], thereby revealing novel dynamical phase transitions. Perhaps the most salient example is that of glassy systems, whose statics do not differ from their liquid counterpart but whose dynamics display a drastic slowing down. These have recently been studied by classifying trajectories according to their level of dynamic activity [@Merolle2005; @Garrahan2007; @Hedges2009; @Pitard2011].
While the activity is easy to define for lattice-based kinetically constrained models [@Garrahan2007], quantifying it in realistic physical systems such as molecular glasses has always involved a great deal of arbitrariness: the need to distinguish cooperatively rearranging regions from local rattling leads to *ad hoc* constructions based on *a posteriori* knowledge of the dynamic evolution [@Pitard2011; @Speck2012; @Fullerton2013]. A natural path to circumvent this problem is to rely on more fundamental quantities, such as the Lyapunov exponents (LEs) that form the basis of the thermodynamic formalism of Bowen, Ruelle and Sinai [@Ruelle1978]. Connections between the Lyapunov spectrum and transport coefficients have been investigated in the recent past [@Dorfman1999; @Gaspard1998] and suggest that dynamical phase transitions involving the current of some conserved quantity could also be understood in terms of fluctuations of the Lyapunov spectrum. In fact, Lyapunov exponents may well prove to be the unifying concept behind the variety of known dynamical phase transitions. Unfortunately, studying their fluctuations is a notoriously difficult task that, in spite of a large effort from the community, has been carried out mostly in low dimensions [@Bohr1987; @Grassberger1988; @Beck1993] (with some notable exceptions [@Appert1997; @Kuptsov2011; @Pazo2013; @Tailleur2007; @Laffargue2013]). For deterministic systems, computations in high dimensions appear out of reach, beginning with the difficult task to find their SRB measures, which are crucial to properly define averages and fluctuations. Fortunately, many systems of interest effectively have, to an excellent level of approximation, stochastic dynamics. Then ergodic issues are bypassed and fluctuations are easier to access as they correspond to different noise realisations. 
There are several ways to define LEs for stochastic dynamics depending on context and goals [@Benzi1984; @Arnold1986; @Graham1988; @Paladin1995] but studying their fluctuations in high dimensions remains very challenging, with few results available [@Tailleur2007; @Laffargue2013]. Of course, real condensed matter systems are spatially extended and endowed with interactions; the study of chaotic properties of high-dimensional systems is thus of great interest [@Takeuchi2013; @Yang2009]. When studying collective phenomena, like the glass transition, our interest is not in the individual behavior of single particles but rather in the emergent behavior of the system. In other words, we are interested in collective modes, rather than in microscopic degrees of freedom. Characterising the fluctuations of LEs of collective modes is thus both an important goal and a difficult task. In this Letter, we show how this program can actually be carried out analytically for a class of many-body interacting systems, whose dynamics is described by diffusive fluctuating hydrodynamics. Such a description applies to systems devoid of long-range interactions (for which special precautions must be taken [@Touchette2006]) and, despite being intuitively appealing, can be mathematically challenging to establish [@Caglioti1996]. For the sake of concreteness, we first introduce a paradigmatic example of such models: the Kipnis-Marchioro-Presutti (KMP) model of heat conduction [@Kipnis1982]. Then, we show how the Macroscopic Fluctuation Theory (MFT) [@Spohn1983; @Bertini2001; @Bertini2002; @Bertini2005; @Bertini2005a; @Tailleur2008; @Imparato2009; @Derrida2009a; @Lecomte2010; @Meerson2014; @Bouchet2014], which has proven successful in the study of current or activity fluctuations, can be extended to calculate the large-deviation function of the largest LE. We validate our MFT-based analytical results using simulations of a lattice model. 
Finally, we present how our results on the LEs connect, somewhat unexpectedly, to damage spreading and to a two-species pair annihilation reaction-diffusion process. The KMP model is a chain of $L$ oscillators[^1], in which the energy ${\varepsilon}_i \geqslant 0$ is redistributed stochastically between nearest neighbours at fixed rate $\gamma$ according to: $$\left( {\varepsilon}_j, \, {\varepsilon}_{j+1} \right) \xrightarrow[]{\text{rate }\gamma} \left( p \left({\varepsilon}_j + {\varepsilon}_{j+1}\right), \, (1-p) \left({\varepsilon}_j + {\varepsilon}_{j+1}\right) \right)$$ where for each event $p$ is sampled from a uniform distribution on $[0,1]$. The total energy is conserved in each update, which accounts for this model being one of the simplest for which Fourier’s law can be proven analytically [@Kipnis1982]. To define the LEs, let us consider two copies of the system, $\{{\varepsilon}_{i}\}$ and $\{{\varepsilon}'_{i}\}$, which evolve with the same noise realisations. In practice this means taking the same redistribution time (given by an exponential law of parameter $\gamma$) for each bond and the same redistribution parameter $p$ at each activation in the two copies. We can then follow the time evolution of the difference between the two copies, $u_{i} = {\varepsilon}'_{i} - {\varepsilon}_{i}$, and define from this the largest (finite-time) LE $\tilde{\lambda}(t)$ as $$\tilde{\lambda}(t) \equiv \frac{1}{t} \ln \frac{{\left|\mathbf{u}(t)\right|}}{{\left|\mathbf{u}(0)\right|}}\,, \quad \text{with} \quad {\left|\mathbf{u}(t)\right|}^2 \equiv {\sum_{i=1}^{L} u_{i}^{2}(t)}\,.$$ The LE $\tilde{\lambda}(t)$ – we omit the adjective “largest” below – tells us how small perturbations are amplified or eliminated by the dynamics. If $\tilde{\lambda}(t) < 0$, the copies of the systems converge towards identical energy profiles. 
Conversely, if $\tilde{\lambda}(t) > 0$, the difference between the two copies diverges, and a small perturbation on the initial configuration completely changes its subsequent evolution. Since this system is stochastic, generic initial conditions are quickly forgotten and the LE should not depend on them in the large-time limit. It would be a formidable task to keep track of the $L$ individual stochastic variables. Since we are interested in the macroscopic properties of our model, we adopt a fluctuating hydrodynamics description, which accounts for the stochastic evolution of collective modes in the large $L$ limit. In this approach, space and time are rescaled by the system length and the diffusive relaxation time of a macroscopic fluctuation: $x=i/L$ and $\tau=t/L^2$. The local energy ${\varepsilon}_{i}(t)$ then turns into a smoothly varying field $\rho(x, \tau)$, which evolves according to a continuity equation $$\partial_{\tau} \rho(x, \tau) + \partial_{x} \,j(x, \tau) = 0\,. \label{FH}$$ The current $j(x,\tau)$ comprises a deterministic contribution arising from Fick’s law and a stochastic one accounting for the fluctuations around this typical behaviour: $$j(x, \tau) = - D(\rho)\,\partial_{x} \rho - \sqrt{\frac{\sigma(\rho)}{L}}\,\xi(x, \tau) \label{current}$$ where $\xi(x, \tau)$ is a Gaussian white noise with correlations ${\langle\xi(x, \tau) ~\xi(x', \tau')\rangle} = \delta(x-x') ~\delta(\tau - \tau')$. Equation  shows the benefit of replacing microscopic variables by a continuous stochastic field: the noise vanishes in the large $L$ limit. This description is generic for conserved quantities in diffusive systems and we can recover different microscopic models by appropriate choice of the $\rho$-dependence of $D$ and $\sigma$. The KMP model with $\gamma=2$ has $D(\rho) = 1$ and $\sigma(\rho) = 2 \rho^{2}$. 
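The two-copy construction can be tested directly on the microscopic model. The sketch below (Python; system size, rate, duration and perturbation size are arbitrary illustrative choices, not values from the paper) evolves two KMP chains with the same bond-update times and the same redistribution fractions $p$, starting from a zero-sum perturbation (the difference of two copies with equal total energy stays in this sector), and returns the finite-time LE $\tilde{\lambda}(t)$.

```python
import numpy as np

def kmp_lyapunov(L=32, gamma=2.0, t_max=200.0, delta=1e-6, seed=0):
    """Finite-time largest LE of the KMP chain: two copies evolve with
    the same noise (same bond-update times and same fractions p)."""
    rng = np.random.default_rng(seed)
    eps = rng.exponential(1.0, L)              # copy 1, mean energy 1
    pert = rng.standard_normal(L)
    pert -= pert.mean()                        # zero-sum perturbation
    eps2 = eps + delta * pert                  # copy 2, same total energy
    u0 = np.linalg.norm(eps2 - eps)
    t = 0.0
    while t < t_max:
        t += rng.exponential(1.0 / (gamma * L))  # next event among the L bonds
        j = rng.integers(L)
        k = (j + 1) % L                          # periodic bond (j, j+1)
        p = rng.random()                         # shared redistribution fraction
        for e in (eps, eps2):
            tot = e[j] + e[k]
            e[j], e[k] = p * tot, (1.0 - p) * tot
    return np.log(np.linalg.norm(eps2 - eps) / u0) / t

lam = kmp_lyapunov()   # typically close to the diffusive value -4*pi^2/L^2
```

For $L=32$ the diffusive prediction for the mean is $-4\pi^2/L^2 \approx -0.039$; a single run with these parameters should return a clearly negative value of this order.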
Equations  and  can also describe the local number of particles in lattice gas models: $D(\rho) = 1$ and $\sigma(\rho) = 2 \rho$ corresponds to free particles performing a symmetric random walk with a unit hopping rate, whereas $D(\rho) = 1$ and $\sigma(\rho) = 2 \rho (1-\rho)$ corresponds to the Symmetric Simple Exclusion Process (SSEP). For the sake of generality, we will keep $D$ and $\sigma$ arbitrary for now. A small perturbation $u(x,\,\tau)$ of the field $\rho(x,\,\tau)$ evolves according to the linearisation of the continuity equation  $$\partial_{\tau} u(x, \, \tau) = \mathbf{A}\, u(x, \, \tau);\: \mathbf{A} = {\frac{\partial ^{2}}{\partial x^{2}}} D(\rho) + {\frac{\partial }{\partial x}} \frac{\sigma'(\rho)}{2 \sqrt{L \sigma(\rho)}} \xi$$ where $\xi$ is the same noise as in (\[current\]) and the differential operator ${\frac{\partial }{\partial x}}$ applies to everything on its right. Linearising the dynamics amounts to considering two close-by copies of the system $\rho$ and $\rho'$, and to examining the evolution of the difference $u = \rho' - \rho$. In this formalism, the definition of the “same noise” is straightforward: we simply take the exact same realisation of $\xi(x, \, \tau)$ for the two copies of our system. 
The LE $\lambda$ is then defined as $$\lambda(\tau) \equiv \frac{1}{\tau} \ln \frac{{\left|u(\tau)\right|}}{{\left|u(0)\right|}}\,, \; \text{with} \; {\left|u(\tau)\right|}^2 \equiv {\int_{0}^{1} {\mathrm{d}}x \, u^2(x, \tau)}\,.$$ We can now introduce the normalised tangent vector $v(x, \, \tau) = \frac{u(x, \, \tau)}{{\left|u(\tau)\right|}}$, which evolves according to $$\partial_{\tau} v(x, \, \tau) = \mathbf{A}\, v(x, \, \tau) - v(x, \, \tau) \int_{0}^{1} {\mathrm{d}}y \, v(y, \, \tau) \, \mathbf{A} v(y, \, \tau)\,,$$ to obtain an explicit expression for $\lambda(\tau)$ as [@Laffargue2013] $$\lambda(\tau) = \frac{1}{\tau} \int_{0}^{\tau} {\mathrm{d}}\tau' \, \int_{0}^{1} {\mathrm{d}}x \, v(x, \, \tau') \, \mathbf{A} v(x, \, \tau')\,.$$ The LE is a fluctuating quantity that depends on the noise realisation. To characterise its fluctuations, it is convenient to introduce the moment-generating function $$Z(\alpha, L, \tau) \equiv {\langle e^{\alpha L \tau \lambda(\tau)}\rangle} = \int {\mathrm{d}}\lambda ~P(\lambda, L, \tau) ~e^{\alpha L \tau \lambda}$$ instead of trying to directly calculate the probability distribution $P(\lambda, L, \tau)$. In analogy with the canonical ensemble in equilibrium statistical physics, the parameter $\alpha$, which is conjugate to the LE, plays the role of an inverse temperature for chaoticity. Taking $\alpha > 0$ favours trajectories with large LE, i.e. abnormally chaotic trajectories, whereas $\alpha < 0$ favours trajectories with small LE that are abnormally stable. Our next step is technical: we carry out the evaluation of the partition function $Z(\alpha)$.
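Numerically, the scaled cumulant generating function can be estimated from sampled finite-time LEs as $\varphi(\alpha) \approx (L\tau)^{-1}\ln\langle e^{\alpha L \tau \lambda}\rangle$. The sketch below illustrates this estimator on synthetic Gaussian samples (the mean and variance are illustrative stand-ins, not the actual KMP statistics), for which the exact cumulant generating function is known in closed form.

```python
import numpy as np

# Estimate phi(alpha) ~ (1/(L*tau)) * ln < exp(alpha*L*tau*lambda) >
# from samples of the finite-time LE; here synthetic Gaussian samples
# (mean and variance chosen for illustration only).
rng = np.random.default_rng(1)
L, tau = 64, 1.0
mean_le = -4 * np.pi**2                   # typical LE in macroscopic units
var_le = 4 * np.pi**2 / (L * tau)         # illustrative sample-to-sample variance
samples = rng.normal(mean_le, np.sqrt(var_le), size=200_000)

def phi_estimate(alpha):
    # log-sum-exp trick for numerical stability of the exponential average
    w = alpha * L * tau * samples
    return (np.max(w) + np.log(np.mean(np.exp(w - np.max(w))))) / (L * tau)

alpha = 0.05
est = phi_estimate(alpha)
# For Gaussian samples the CGF is exact: phi = m*alpha + (L*tau*s^2/2)*alpha^2
exact = mean_le * alpha + 0.5 * var_le * L * tau * alpha**2
```

Because the exponential average is dominated by rare samples, the number of samples needed grows quickly with $\alpha L \tau$; this is the usual limitation of direct sampling, which cloning or importance-sampling algorithms are designed to overcome.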
Using standard path-integral methods [@Janssen1976; @DeDominicis1976; @Tailleur2008; @Imparato2009; @Lecomte2010], $Z$ can be expressed as $$Z(\alpha, L, \tau) = \int {{\cal D}}\left[ \rho, {\bar{\rho}}, v, {\bar{v}}\right] e^{- L S[\rho, {\bar{\rho}}, v, {\bar{v}}]}$$ where the explicit dependence on the noise is replaced by response fields ${\bar{\rho}}$ and ${\bar{v}}$ and the path integral has to be performed over fields that respect the constraints ${\int_{0}^{1} {\mathrm{d}}x \, \rho(x,\tau)}=\rho_0$, with $\rho_0$ the overall density, ${\int_{0}^{1} {\mathrm{d}}x \, v(x,\tau)}=0$, ${\int_{0}^{1} {\mathrm{d}}x \, v^{2}(x,\tau)}=1$ and the periodic boundary conditions (for the fields and their first derivatives). The action $S$ reads $$\begin{gathered} S = \int_{0}^{\tau} {\mathrm{d}}t \int_{0}^{1} {\mathrm{d}}x \,\Big[ {\bar{\rho}}\, \partial_{t} \rho + {\bar{v}}\, \partial_{t} v + D \, \partial_{x} \rho \, \partial_{x} {\bar{\rho}}\\ + \left( \left( I - \alpha \right) v- {\bar{v}}\right) \partial_{x}^{2} \left( D \, v \right) - \frac{\left( \mathcal{J} + D \, \partial_{x} \rho \right)^{2}}{2 \sigma} \Big] \label{eqn:action}\end{gathered}$$ $$\begin{aligned} \text{where} \qquad I &= \int_{0}^{1} {\mathrm{d}}y \, v(y, \,t) \, {\bar{v}}(y, \, t)\\ \text{and}\qquad \mathcal{J} &= - D \, \partial_{x} \rho + \frac{\sigma'}{2} v \, \partial_{x} \left[ ( I - \alpha) v - {\bar{v}}\right] - \sigma \, \partial_{x} {\bar{\rho}}\end{aligned}$$ have been introduced to make $S$ as compact as possible. The specific form of the path integral, with system size $L$ factored out in front of the action, gives the gist of the MFT: we may use a saddle-point approximation [@Tailleur2008; @Imparato2009; @Lecomte2010] to compute $Z$ in the large $L$ limit $$Z(\alpha, \, L, \, \tau) \approx e^{L \tau \varphi(\alpha)}\,,$$ where $\varphi$ is the dynamical counterpart to a free energy. 
It allows one to extend the language of phase transitions to dynamical systems [@Gaspard1998] and also yields the cumulants of $\lambda$ in the large $L$ limit since $${\langle\lambda^{n}\rangle}_{c} = \frac{1}{(L \tau)^{n-1}} \left. {\frac{\mathrm{d} ^{n} \varphi}{\mathrm{d} \alpha^n}} \right|_{\alpha=0}.$$ Enforcing the constraints with Lagrange multipliers, performing a perturbation expansion in $\alpha$ of the saddle-point equations and looking for stationary solutions, we get $$\varphi(\alpha) = -4 \pi^2 D(\rho_0)\, \alpha \left[1 - \frac{\alpha}{8} \frac{\kappa'(\rho_0)^2}{\kappa(\rho_0)^3} + \mathcal{O}(\alpha^2) \right] \label{eqn:LDF}$$ where $\kappa = \frac{2 D}{\sigma}$ is basically the compressibility. Note that the saddle-point equations yield $\dot \rho=-\partial_x \mathcal{J}$, showing that $\mathcal{J}$ can be seen as a particle current at the saddle-point level. The analytical result  is the first important result of our work. At this point, we notice that the mean value of the largest LE is negative and equal to $\varphi'(0) = - 4 \pi^{2} D(\rho_0)$, which corresponds to the largest LE of a diffusion equation with diffusivity $D(\rho_0)$. All other LEs are thus also negative. This reflects the fact that diffusive dynamics tends to smooth out density profiles, hence eliminating perturbations rather than amplifying them. Taking $\alpha > 0$ in this case first detects less stable trajectories rather than chaotic ones. Equation  can be extended, upon painful but systematic algebra, to higher order.
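The value $\varphi'(0) = -4\pi^{2} D(\rho_0)$ is simply the slowest decay rate of the heat equation on the unit ring: zero-mean modes $e^{2\pi i n x}$ decay at rate $D(2\pi n)^{2}$, and $n=\pm 1$ survives longest. A minimal numerical check for $D=1$ (the noiseless, large-$L$ limit of the linearised dynamics; grid size and initial profile are arbitrary choices):

```python
import numpy as np

# Noiseless linearised dynamics for D = 1: each Fourier mode e^{2*pi*i*n*x}
# decays at rate (2*pi*n)^2, so any zero-mean perturbation asymptotically
# decays at rate 4*pi^2, which fixes the typical LE.
N = 256
x = np.arange(N) / N
u = np.exp(-((x - 0.5) / 0.05) ** 2)   # localised initial perturbation
u -= u.mean()                           # project out the conserved n = 0 mode

k = 2 * np.pi * np.fft.fftfreq(N, d=1.0 / N)   # wavenumbers 2*pi*n on the ring

def evolve(u0, tau):
    """Exact solution of du/dtau = d^2u/dx^2 via Fourier modes."""
    return np.fft.ifft(np.fft.fft(u0) * np.exp(-(k ** 2) * tau)).real

tau = 0.5
rate = -np.log(np.linalg.norm(evolve(u, tau)) / np.linalg.norm(u)) / tau
# rate approaches 4*pi^2 ~ 39.5 once only the slowest mode survives
```

By $\tau=0.5$ the $n=\pm2$ modes are suppressed by $e^{-12\pi^{2}}$ relative to $n=\pm1$, so the measured rate agrees with $4\pi^{2}$ up to a small finite-time offset from the initial mode content.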
For instance, we show here the series up to the 5th order for the case $D=1$: $$\begin{gathered} \varphi(\alpha) = - 4 \pi^2 \alpha + \frac{\pi^{2} \, \sigma'(\rho_{0})^{2}}{2 \, \sigma(\rho_{0})} \frac{\alpha^2}{2} - \frac{9 \pi^{2} \, \sigma'(\rho_0)^{4}}{2^{6} \, \sigma(\rho_0)^2} \frac{\alpha^3}{3!}\\ + \frac{3 \pi^{2} (55 \, \sigma'(\rho_0)^6 - 72 \, \sigma(\rho_0) \, \sigma'(\rho_0)^4 \, \sigma''(\rho_0) + 8 \, \sigma(\rho_0)^2 \, \sigma'(\rho_0)^2 \, \sigma''(\rho_0)^2 + 32 \, \sigma(\rho_0)^2 \, \sigma'(\rho_0)^3 \, \sigma'''(\rho_0))}{2^{10} \, \sigma(\rho_0)^3} \frac{\alpha^4}{4!}\\ - \frac{15 \pi^2 (309 \, \sigma'(\rho_0)^8 - 512 \, \sigma(\rho_0) \, \sigma'(\rho_0)^6 \, \sigma''(\rho_0) + 256 \, \sigma(\rho_0)^2 \sigma'(\rho_0)^5 \, \sigma'''(\rho_0))} {2^{16} \, \sigma(\rho_0)^4} \frac{\alpha^5}{5!} + {\cal O}(\alpha^{6}) \label{eqn:LDFD1} \end{gathered}$$ We have not been able to infer a generic form of the coefficients from the first few contributions. Our approach can also be used to visualise how the system develops nontrivial structures to produce a Lyapunov exponent that deviates from its typical value, by calculating the density profiles $\rho(x)$ and tangent vector $v(x)$ – both assumed stationary in time – that extremise $S$ for a given value of $\alpha$. Figure \[fig:profiles\] shows such realisations for $\alpha=0.4$, leading to a $25\%$ increase of the Lyapunov exponent [^2]. For this value, $\rho(x)$ is well approximated by a simple harmonic modulation but this would develop into more complex nonlinear shapes at larger $\alpha$. Since we have relied on a fluctuating hydrodynamic description, we would like to check whether the LE $\lambda$ calculated within this formalism is identical to the microscopic LE $\tilde{\lambda}$ of the original lattice model. We expect the “discrete” LE $\tilde{\lambda}(t)$ to be related to the “fluctuating hydrodynamics” LE $\lambda(\tau)$ by the relation $\tau\lambda(\tau)=\left.
t\tilde{\lambda}(t)\right|_{t=L^{2} \tau}$, i.e. $\lambda=L^2\tilde\lambda$. If this relation is correct, the cumulants of $\tilde\lambda$ should be given by ${\langle\tilde \lambda^{n}\rangle}_{c} = L^{1-3n}\tau^{1-n} \varphi^{(n)}(0)$ in the large-size and large-time limit. We have checked this numerically for the original KMP model. As can be seen in fig. \[fig:KMP\_simu\], the first two cumulants (mean and variance) are in good agreement with this prediction. The mean reaches its long-time limit for $\tau \sim {\cal O}(10^{-2})$ but the variance requires $\tau \sim {\cal O}(1)$. Our calculation of the LE in the hydrodynamic regime is thus fully consistent with the LE measured in the microscopic model. This shows that the MFT can indeed be extended to compute LEs of spatially extended diffusive systems. In the second part of this Letter, we turn to models with discrete degrees of freedom and show an unexpected connection to damage spreading and annihilation processes. For the sake of concreteness, take a Symmetric Simple Exclusion Process (SSEP), in which particles perform a symmetric random walk with mutual exclusion. We consider a chain of size $L$, with unit hopping rate. In order to define the LE, we consider two copies $A$ and $B$ of this system, and we apply the same noise to both copies. Specifically, we assume that hops are triggered by the environment: when a site tries to expel a particle in one system, it will also expel a particle in the other (if there is one at this site). If $n_{i}^{A}$ and $n_{i}^{B}$ are respectively the occupation numbers at site $i$ in copy $A$ and in copy $B$, the local difference between the two copies is $u_{i} = n_{i}^{A} - n_{i}^{B}$, and we are interested in the evolution of its norm $\left| \mathbf{u} \right| = \sum_{i=1}^{L} \left| u_{i} \right|$.
Here we chose the 1-norm because $p$-norms with $p>1$ are singular when studying macroscopic effects: since $u_i=0,\pm1$, one has $\left| u_{i} \right|^{p} = \left| u_{i} \right|$. Readers will now realise that the calculation of the LE is closely linked to the issue of damage spreading [@Derrida1987; @Glotzer1991; @Vojta1997]. There, one studies the propagation over time of a spatial defect, i.e. a small difference between two nearly identical copies of the same system, and asks whether this defect spreads or recedes. With the LE, one further looks at the rate at which the defect vanishes or completely changes the subsequent evolution of the system.

![Effective dynamics for $\mathbf{u}$ in the two coupled SSEPs. We look at all the cases when particles at a given site try to hop to the right. In the left column, the top chain is copy $A$ and the bottom one copy $B$. In the right column, a red particle represents $u_{i} = +1$ and a blue one $u_{i} = -1$. This is summarised in fig. \[fig:mapping\][]{data-label="fig:detail_mapping"}](mapping_tabular-figure0.pdf "fig:"){width="0.3\linewidth"}

The dynamics of $u_i = n_{i}^{A} - n_{i}^{B}$ in the two coupled SSEPs can be mapped onto the same quantity in the two-species pair annihilation reaction-diffusion process ${A + B \rightarrow \varnothing}$ [@Zeldovich1978; @Toussaint1983]. In this two-species model, the $A$ particles perform a symmetric random walk with mutual exclusion, the $B$ particles do the same, and when $A$ and $B$ particles meet at the same site, they immediately annihilate. The mapping arises from the fact that when a site is occupied in both SSEPs, removing the particles in both systems will not affect the subsequent dynamics of $u_i$ (see fig. \[fig:detail\_mapping\] and fig. \[fig:mapping\]).
The asymptotics of the ${A + B \rightarrow \varnothing}$ process are fully understood for infinite system size [@Bramson1988], but there are no exact results in finite size for averages, let alone fluctuations. Since the SSEP can be described by fluctuating hydrodynamics with $D(\rho) = 1$ and $\sigma(\rho) = 2 \rho (1 - \rho)$, we know from eq.  that $\left| \mathbf{u} (t)\right| \approx e^{\tilde \lambda t}$ with $\langle\tilde \lambda\rangle=-\frac{4\pi^2}{L^2}$. Hence, thanks to our mapping, we can predict that in the large-size and large-time limit, the total number $N(t)$ of particles in the ${A+B\rightarrow\varnothing}$ process scales as $${\langle N(t)\rangle} \approx e^{- 4 \pi^{2} t/L^{2}}.$$ This regime, which was out of reach of previous numerical studies [@Simon1995; @Lee2000], is in perfect agreement with our simulations (see fig. \[fig:AB0\]). Note that we also have predictions for the fluctuations of $N(t)$, from eq. , but confirming these numerically is difficult since the absorbing (empty) state is reached too quickly. In this Letter, we have shown how a generalisation of the MFT can be used to compute analytically the fluctuations of the largest LE of spatially extended systems described by diffusive fluctuating hydrodynamics. The relevance of our approach has been confirmed by direct comparison to a microscopic model. Interestingly, the mapping of the SSEP to ${A + B \rightarrow \varnothing}$ suggests a generic correspondence between damage spreading/LE determination for systems with discrete degrees of freedom and reaction-diffusion processes with absorbing states. This would be an interesting direction to pursue. Perhaps equally challenging would be to exploit similar techniques to study suspensions of interacting colloids, which could open the way to identifying slow modes in glass formers.
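The mapping can be checked in a few lines: evolving two SSEP copies under the shared noise described above, the damage field $u_i$ can only diffuse and annihilate in $+/-$ pairs, so its 1-norm can never increase. A sketch (Python; system size, density and duration are arbitrary choices; this illustrates the mapping itself, not the decay-rate prediction, which requires ensemble averaging):

```python
import numpy as np

def coupled_ssep(L=64, density=0.3, steps=20000, seed=2):
    """Two SSEP copies driven by the same noise (same site and direction
    drawn at every event). Returns the 1-norm |u| = sum_i |n^A_i - n^B_i|
    recorded along the trajectory."""
    rng = np.random.default_rng(seed)
    n_a = (rng.random(L) < density).astype(int)
    n_b = n_a.copy()
    # initial "damage": swap one occupied and one empty site in copy B
    occ = np.flatnonzero(n_b == 1)[0]
    emp = np.flatnonzero(n_b == 0)[0]
    n_b[occ], n_b[emp] = 0, 1
    norms = []
    for _ in range(steps):
        i = rng.integers(L)
        j = (i + rng.choice((-1, 1))) % L   # same move attempted in both copies
        for n in (n_a, n_b):
            if n[i] == 1 and n[j] == 0:     # hop only if target site is empty
                n[i], n[j] = 0, 1
        norms.append(np.abs(n_a - n_b).sum())
    return np.array(norms)

norms = coupled_ssep()
# |u| starts at 2, never increases, and drops to 0 when the +/- pair meets
```

Enumerating the hop cases shows that a $u_i=+1$ and a $u_j=-1$ defect meeting on adjacent sites leave both sites identical in the two copies, exactly the $A+B\rightarrow\varnothing$ annihilation, while no event can create new damage.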
A concrete starting point would be the stochastic evolution equation established by Dean [@Dean1996], for which the strategy deployed in this work would apply, though alternative approximation schemes would have to be adopted. Leaving the realm of glasses for that of dynamical systems, it would be interesting to apply our approach to the recently derived fluctuating hydrodynamics of the FPU chain [@Mendl2013; @Das2014], for which large deviations of the largest LE are associated with the emergence of breathers and solitons [@Tailleur2007; @Laffargue2013]. We warmly acknowledge early discussions with Peter Grassberger and Henk van Beijeren. JT thanks the Galileo Galilei Institute for Theoretical Physics for its hospitality and the INFN for partial support during the completion of this work. FvW acknowledges the support of the Pitzer Center of UC Berkeley’s Department of Chemistry. [^1]: Periodic boundary conditions are assumed throughout the entire article. [^2]: For $\alpha=0.4$, the $\alpha^5$ term shown in eq. amounts to $1\%$ of $\varphi(\alpha)$.
--- abstract: 'It is known that infinitely many Medvedev degrees exist inside the Muchnik degree of any nontrivial $\Pi^0_1$ subset of Cantor space. We shed light on the fine structures inside these Muchnik degrees related to learnability and piecewise computability. As for nonempty $\Pi^0_1$ subsets of Cantor space, we show the existence of a finite-$\Delta^0_2$-piecewise degree containing infinitely many finite-$(\Pi^0_1)_2$-piecewise degrees, and a finite-$(\Pi^0_2)_2$-piecewise degree containing infinitely many finite-$\Delta^0_2$-piecewise degrees (where $(\Pi^0_n)_2$ denotes the difference of two $\Pi^0_n$ sets), whereas the greatest degrees in these three “finite-$\Gamma$-piecewise” degree structures coincide. Moreover, as for nonempty $\Pi^0_1$ subsets of Cantor space, we also show that every nonzero finite-$(\Pi^0_1)_2$-piecewise degree includes infinitely many Medvedev (i.e., one-piecewise) degrees, every nonzero countable-$\Delta^0_2$-piecewise degree includes infinitely many finite-piecewise degrees, every nonzero finite-$(\Pi^0_2)_2$-countable-$\Delta^0_2$-piecewise degree includes infinitely many countable-$\Delta^0_2$-piecewise degrees, and every nonzero Muchnik (i.e., countable-$\Pi^0_2$-piecewise) degree includes infinitely many finite-$(\Pi^0_2)_2$-countable-$\Delta^0_2$-piecewise degrees. Indeed, we show that any nonzero Medvedev degree and nonzero countable-$\Delta^0_2$-piecewise degree of a nonempty $\Pi^0_1$ subset of Cantor space have the strong anticupping properties. Finally, we obtain an elementary difference between the Medvedev (Muchnik) degree structure and the finite-$\Gamma$-piecewise degree structure of all subsets of Baire space by showing that none of the finite-$\Gamma$-piecewise structures are Brouwerian, where $\Gamma$ is any of the Wadge classes mentioned above.' 
address: - 'Department of Mathematics and Informatics, Chiba University, 1-33 Yayoi-cho, Inage, Chiba, Japan' - 'School of Information Science, Japan Advanced Institute of Science and Technology, Nomi 923-1292, Japan' author: - 'K. Higuchi' - 'T. Kihara' bibliography: - 'IMref.bib' title: 'Inside the Muchnik Degrees II: The Degree Structures induced by the Arithmetical Hierarchy of Countably Continuous Functions' --- The authors were partially supported by Grant-in-Aid for JSPS fellows. The second author (Kihara) would like to thank Douglas Cenzer, Hajime Ishihara, Dick de Jongh, Arno Pauly, and Albert Visser, for valuable comments and helpful discussion, and the second author also would like to thank Makoto Tatsuta and Yoriyuki Yamagata for introducing him to the syntactical study on Limit Computable Mathematics. The second author is also grateful to Sam Sanders who helped his English writing. Finally, the authors would like to thank the anonymous referees for their valuable comments and suggestions.
**New Penrose Limits and AdS/CFT**

*$^1$ Dipartimento di Fisica, Università di Perugia, I.N.F.N. Sezione di Perugia, Via Pascoli, I-06123 Perugia, Italy*\
*$^2$ NORDITA, Roslagstullsbacken 23, SE-106 91 Stockholm, Sweden*\
*$^3$ The Niels Bohr Institute, Blegdamsvej 17, DK-2100 Copenhagen Ø, Denmark*

grignani@pg.infn.it, harmark@nordita.org, andrea.marini@fisica.unipg.it, orselli@nbi.dk

**Abstract**

We find a new Penrose limit of $\mbox{AdS}_5 \times S^5$ giving the maximally supersymmetric pp-wave background with two explicit space-like isometries. This is an important missing piece in studying the AdS/CFT correspondence in certain subsectors. In particular, whereas the Penrose limit giving one space-like isometry is useful for the $SU(2)$ sector of ${\mathcal{N}}=4$ SYM, this new Penrose limit is instead useful for studying the $SU(2|3)$ and $SU(1,2|3)$ sectors. In addition to the new Penrose limit of $\mbox{AdS}_5 \times S^5$ we also find a new Penrose limit of $\mbox{AdS}_4 \times {\mathbb{C}}P^3$.

Introduction {#sec:intro} ============ AdS/CFT duality identifies ${\mathcal{N}}=4$ superconformal Yang-Mills (SYM) theory with gauge group $SU(N)$ with type IIB superstring theory on the $\mbox{AdS}_5\times S^5$ background [@Maldacena:1997re; @Gubser:1998bc; @Witten:1998qj]. The AdS/CFT correspondence relates gauge theory and string theory in different regimes. On the one hand, this makes it powerful, since the strong coupling regime of either theory can be computed using the weak coupling limit of the other; on the other hand, it makes the correspondence hard to test directly, since it is not easy to find situations where approximate computations in both theories have an overlapping domain of validity. In [@Berenstein:2002jq] a way out of this difficulty was presented by introducing a Penrose limit of the $\mbox{AdS}_5\times S^5$ background.
Taking the Penrose limit one gets the maximally supersymmetric pp-wave background [@Blau:2001ne; @Blau:2002dy] where type IIB string theory can be quantized [@Metsaev:2001bj; @Metsaev:2002re]. On the gauge theory side the Penrose limit corresponds to considering a certain sector of the operators. This enables one to compare directly the spectrum of operators in the planar limit of ${\mathcal{N}}=4$ SYM to the energy spectrum of quantum strings on the pp-wave. In [@Bertolini:2002nr] an alternative Penrose limit of $\mbox{AdS}_5\times S^5$ was found also giving the maximally supersymmetric background but in a coordinate system with an explicit space-like isometry [@Michelson:2002wa; @Bertolini:2002nr]. As explained in [@Harmark:2006ta] having this explicit isometry makes it particularly well-suited to study the $SU(2)$ sector of ${\mathcal{N}}=4$ SYM. Building on the Penrose limit of [@Berenstein:2002jq] many very interesting results in matching gauge theory and string theory were found in the case of the planar limit using the idea of integrability and the connection to spin chains [@Minahan:2002ve; @Beisert:2003tq; @Beisert:2003yb][^1], particularly by considering a near plane wave limit with curvature corrections to the pp-wave background [@Callan:2003xr; @Callan:2004uv]. A high point of this is the development of the Asymptotic Bethe Ansatz describing the dimension of infinitely long operators for any ’t Hooft coupling in the planar limit [@Staudacher:2004tk; @Beisert:2005tm; @Beisert:2006ez]. Going beyond the planar limit seems instead to be very difficult [@Kristjansen:2002bb]. New ideas are needed in order to further explore the AdS/CFT correspondence in the non-planar limit and its potential applications. Recently another example of an exact duality between ${\mathcal{N}}= 6$ superconformal Chern-Simons theory (ABJM theory) and type IIA string theory on $\mbox{AdS}_4 \times {\mathbb{C}}P^3$ has been found [@Aharony:2008ug].
Also here certain Penrose limits and near plane wave limits have been explored [@Nishioka:2008gz; @Gaiotto:2008cg; @Grignani:2008is; @Astolfi:2008ji; @Astolfi:2009qh]. The difficulty of going beyond the planar limit, where integrability most likely is absent, makes it desirable to consider alternative approaches to match the spectrum of operators and string states. One of the cornerstones in comparing the operator spectrum to the string spectrum in a Penrose limit or near-plane wave limit is that in comparing the spectrum of operators one assumes that most of the operators of the gauge theory receive an infinitely large correction to the bare dimension in the large ’t Hooft coupling limit $\lambda \rightarrow \infty$. This is of course a built-in feature of the Asymptotic Bethe Ansatz for ${\mathcal{N}}=4$ SYM. However, an alternative approach to this problem of taking the strong coupling limit of ${\mathcal{N}}=4$ SYM has been proposed in [@Harmark:2006di; @Harmark:2006ta; @Harmark:2006ie; @Harmark:2007et; @Harmark:2007px; @Harmark:2008gm] where a regime of AdS/CFT was found in which both gauge theory and string theory are reliable and the correspondence can be tested in a precise way. Applying the approach of [@Harmark:2006di; @Harmark:2006ta; @Harmark:2006ie; @Harmark:2007et; @Harmark:2007px; @Harmark:2008gm][^2] to match the spectrum of operators and string states in the $SU(2)$ sector uses in an essential way the alternative Penrose limit of [@Bertolini:2002nr] where the maximally supersymmetric pp-wave has an explicit isometry. This is because for this pp-wave background the string states having an energy just above the vacuum energy are the states dual to the operators in the $SU(2)$ sector of ${\mathcal{N}}=4$ SYM.
However, as shown in [@Harmark:2007px] there are several other sectors of ${\mathcal{N}}=4$ SYM that one can explore as well, and these sectors are crucial for approaching non-perturbative physics of type IIB string theory in $\mbox{AdS}_5\times S^5$, such as D-branes and black holes. This means that there should be additional Penrose limits of $\mbox{AdS}_5\times S^5$ in addition to the ones of [@Blau:2002dy; @Berenstein:2002jq; @Bertolini:2002nr]. In this paper we address these issues by deriving a new Penrose limit of $\mbox{AdS}_5 \times S^5$ which leads to a new pp-wave background with two explicit space-like isometries. As for the two previously found Penrose limits [@Blau:2002dy; @Berenstein:2002jq; @Bertolini:2002nr] this leads to a pp-wave background where type IIB string theory can be quantized and the spectrum can be matched to the spectrum of operators of ${\mathcal{N}}=4$ SYM. Our analysis completes the study of all possible pp-wave backgrounds which can be obtained as Penrose limits of the $\mbox{AdS}_5 \times S^5$ geometry. It also represents a further step in the investigation of the matching of strongly coupled gauge theory and string theory in certain sectors which are relevant for describing non-perturbative physics of type IIB string theory on $\mbox{AdS}_5\times S^5$. In particular, the new Penrose limit is relevant for studying the $SU(1,2|3)$ sector, which is the largest possible subsector of ${\mathcal{N}}=4$ SYM [@Harmark:2007px]. In addition to the new Penrose limit of $\mbox{AdS}_5\times S^5$ we also explore Penrose limits of $\mbox{AdS}_4 \times {\mathbb{C}}P^3$. Here two different classes of Penrose limits have been found, one in which there are no explicit space-like isometries [@Nishioka:2008gz; @Gaiotto:2008cg] and another in which there are two explicit space-like isometries [@Grignani:2008is; @Astolfi:2009qh] which makes it suitable for studying the $SU(2)\times SU(2)$ sector of ABJM theory.
We find in this paper a new Penrose limit of the $\mbox{AdS}_4 \times {\mathbb{C}}P^3$ background giving a pp-wave background with one explicit space-like isometry. The new Penrose limit of $\mbox{AdS}_5\times S^5$ found in this paper is also relevant for studying the finite temperature behavior of AdS/CFT. It is conjectured that the confinement/deconfinement transition temperature of planar $\mathcal{N}=4$ SYM on $R\times S^3$ is dual to the Hagedorn temperature of type IIB string theory on $\mbox{AdS}_5 \times S^5$ [@Witten:1998zw; @Sundborg:1999ue; @Polyakov:2001af; @Aharony:2003sx]. Using the Penrose limit [@Bertolini:2002nr] this was shown quantitatively to be true [@Harmark:2006ta] by matching the confinement/deconfinement temperature of planar $\mathcal{N}=4$ SYM on $R\times S^3$ in a limit with R-charge chemical potentials to the Hagedorn temperature of type IIB string theory on the pp-wave background of [@Bertolini:2002nr][^3]. We furthermore expect that our results could help in understanding more generally the behavior of string theory above the Hagedorn temperature and to study the connection between gauge theory and black holes in $\mbox{AdS}_5 \times S^5$ [@Grignani:2009ua][^4]. Interesting related work in other less supersymmetric gauge theories can be found in Refs. [@Grignani:2007xz; @Larsen:2007bm; @Hamilton:2007he]. The paper is organized as follows. In Section \[sec:stringtheory\] we first review the Penrose limits of string theory that lead to pp-wave backgrounds with zero and one spatial isometry. Then, we find a new Penrose limit giving rise to a pp-wave background with two space-like isometries in which string theory can be quantized. In Section \[sec:stringrotspectra\] we obtain a general form for a pp-wave metric that reproduces all the pp-wave backgrounds analyzed in the previous section. We moreover show that string theory can be directly quantized on this background, which we dub the “[*rotated pp-wave background*]{}”, and we compute the spectrum.
In Section \[sec:decsectors\] we show that, after taking an appropriate limit, the spectrum of type IIB string theory on the rotated pp-wave background can be exactly matched to the spectrum of the dual gauge theory operators in certain decoupled sectors of ${\mathcal{N}}=4$ SYM. Finally, in Section \[sec:ads4\] we find a new Penrose limit of the ${\mbox{AdS}}_4\times {\mathbb{C}}P^3$ background of type IIA supergravity with one explicit space-like isometry. Penrose limits and pp-waves with explicit isometries {#sec:stringtheory} ==================================================== In this section we derive a Penrose limit of $\mbox{AdS}_5 \times S^5$ which results in a new pp-wave background with two space-like isometries. We then show how to obtain a general pp-wave background which, for appropriate choices of the parameters of the background, reproduces all the known pp-wave backgrounds which are obtained through a Penrose limit procedure on $\mbox{AdS}_5 \times S^5$. We begin the section by writing down a slightly generalized version of the previously found Penrose limits of $\mbox{AdS}_5 \times S^5$ with zero and one explicit space-like isometries [@Blau:2002dy; @Berenstein:2002jq; @Bertolini:2002nr]. In $\mbox{AdS}_5 \times S^5$, the Penrose limit consists in considering a particle in the center of $\mbox{AdS}_5 $ that is moving very rapidly on a geodesic of $S^5$. This means that the angular momentum along the direction in which the particle is moving is very large ($J \to \infty$). Then by taking the limit $R \to \infty$, where $R$ is the radius of $\mbox{AdS}_5$ and $S^5$, but such that the ratio $J/R^2$ remains fixed, the geometry of $\mbox{AdS}_5 \times S^5$ reduces to a plane-wave geometry. An important point to emphasize is that one can choose any light-like geodesic of $\mbox{AdS}_5 \times S^5$ for implementing the procedure. 
While the pp-wave background always corresponds to the maximally supersymmetric pp-wave background of type IIB supergravity [@Blau:2001ne], different choices of light-like geodesics can give this background in different coordinate systems [@Bertolini:2002nr]. Naively this should not matter; however, the different coordinate systems can correspond to different choices of light-cone time on the pp-wave background, and hence to different dictionaries between the physical quantities of the $\mbox{AdS}_5\times S^5$ background and of the maximally supersymmetric pp-wave background. Therefore, the different coordinate systems for the pp-wave background are connected to the fact that the different Penrose limits that we consider correspond to zooming in on different regimes of type IIB string theory on $\mbox{AdS}_5\times S^5$. This in turn corresponds to zooming in on different regimes of ${\mathcal{N}}=4$ SYM. Furthermore, as we discuss in Section \[sec:decsectors\], the different Penrose limits correspond to different decoupling limits of ${\mathcal{N}}=4$ SYM on ${\mathbb{R}}\times S^3$. In the literature the “canonical” coordinate system used for the maximally supersymmetric pp-wave background is that of [@Blau:2001ne; @Blau:2002dy; @Berenstein:2002jq], which we here dub the [*BMN pp-wave background*]{}. This coordinate system is such that the quadratic potential terms are massive for all eight transverse directions. Another coordinate system was introduced in [@Michelson:2002wa; @Bertolini:2002nr] and we will refer to it as the [*one flat direction pp-wave background*]{}, due to the presence of a space-like isometry in the pp-wave metric: in this case the quadratic terms for the transverse directions have one massless direction. Here we find a new pp-wave background corresponding to a new coordinate system for the maximally supersymmetric pp-wave of type IIB supergravity.
This new background is again obtained as a Penrose limit of $\mbox{AdS}_5 \times S^5$ with an appropriate choice of light-cone coordinates. The new pp-wave background differs from the other two because of the presence of two spatial isometries in the metric, namely two flat directions, corresponding to two massless directions in the potential terms for the transverse directions. Hence we call it the [*two flat directions pp-wave background*]{}. This new pp-wave background is important in the context of the AdS/CFT correspondence. In fact, as shown explicitly in Section \[sec:stringrotspectra\], string theory can be quantized on this background. Moreover, as discussed in Section \[sec:decsectors\], after taking a certain limit on the spectrum of type IIB string theory in this new background, we can complete the matching between the spectrum of anomalous dimensions of gauge theory operators in certain sectors of ${\mathcal{N}}=4$ SYM theory and the spectrum of the dual string theory states. We show below in Section \[sec:stringrotspectra\] that all the pp-wave backgrounds achievable through the Penrose limit are connected by a time-dependent coordinate transformation. This proves that mathematically they are all equivalent. The same is not true from the physical point of view, since the transformation involves time. Thus what changes from one pp-wave background to another is what we call time, and consequently what we call the Hamiltonian. Therefore the physics is different when we consider the theory on different pp-wave backgrounds. It is also interesting to notice which regimes of ${\mathcal{N}}=4$ SYM the different Penrose limits correspond to. We give these regimes for each of the three different limits below. To this end, we record the following dictionary between strings on $\mbox{AdS}_5\times S^5$ and ${\mathcal{N}}=4$ SYM on ${\mathbb{R}}\times S^3$.
We have $$\frac{R^4}{l_s^4} = 4 \pi^2 \lambda {\,, \ \ }g_s = \frac{\pi \lambda}{N}$$ where $R$ is the radius of $\mbox{AdS}_5$ and $S^5$, $g_s$ and $l_s$ are the string coupling and string length, respectively, and $\lambda = {g_{\rm YM}}^2 N/(4\pi^2)$ is the ’t Hooft coupling of $SU(N)$ ${\mathcal{N}}=4$ SYM.[^5] The energy $E$ of type IIB string states on $\mbox{AdS}_5\times S^5$ is identified with the energy $E$ of the dual ${\mathcal{N}}=4$ SYM states on ${\mathbb{R}}\times S^3$, or equivalently, with the scaling dimension of the dual operators of ${\mathcal{N}}=4$ SYM on ${\mathbb{R}}^4$. Similarly the angular momenta $J_{1,2,3}$ on $S^5$ for string states are identified with the three R-charges $J_{1,2,3}$ for states/operators of ${\mathcal{N}}=4$ SYM. Moreover the angular momenta $S_{1,2}$ for strings on $\mbox{AdS}_5$ are identified with the Cartan generators for the $SO(4)$ symmetry of the $S^3$ for the dual ${\mathcal{N}}=4$ SYM states on ${\mathbb{R}}\times S^3$, or equivalently, the $SO(4)$ symmetry of the ${\mathbb{R}}^4$ for the dual operators of ${\mathcal{N}}=4$ SYM on ${\mathbb{R}}^4$. The string theory that we are interested in is type IIB string theory on $\mbox{AdS}_5 \times S^5$. 
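The two dictionary relations are mutually consistent once one assumes the standard AdS/CFT identification $g_{\rm YM}^2 = 4\pi g_s$ (an assumption of this sketch, not stated explicitly in the text). A short sympy check, with illustrative symbol names:

```python
import sympy as sp

# Assumed identification (standard AdS/CFT, not quoted in the text):
# g_YM^2 = 4*pi*g_s, together with lambda = g_YM^2 * N / (4*pi^2).
g_YM, N, ls = sp.symbols('g_YM N l_s', positive=True)

lam = g_YM**2 * N / (4*sp.pi**2)   # 't Hooft coupling
g_s = g_YM**2 / (4*sp.pi)          # from g_YM^2 = 4*pi*g_s

# The dictionary relation g_s = pi*lambda/N follows:
print(sp.simplify(g_s - sp.pi*lam/N))            # -> 0

# And R^4/l_s^4 = 4*pi^2*lambda is then R^4 = 4*pi*g_s*N*l_s^4:
R4 = 4*sp.pi**2*lam*ls**4
print(sp.simplify(R4 - 4*sp.pi*g_s*N*ls**4))     # -> 0
```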
The metric for this background is given by $$\label{adsmet} ds^2 = R^2 \left[ - \cosh^2 \rho dt^2 + d\rho^2 + \sinh^2 \rho d{\Omega'_3}^2 + d\theta^2 + \sin^2 \theta d\alpha^2 + \cos^2 \theta d\Omega_3^2 \right]\, ,$$ with the five-form Ramond-Ramond field strength $$\label{adsF5} F_{(5)} = 2 R^4 ( \cosh \rho \sinh^3 \rho dt d\rho d\Omega_3' + \sin \theta \cos^3 \theta d\theta d\alpha d\Omega_3 )\, .$$ We parameterize the two three-spheres as $$\begin{aligned} \label{3sph} d\Omega_3^2 &= d\psi^2 + \sin^2 \psi d\phi^2 + \cos^2 \psi d\chi^2\, , \\ \label{3sphAdS} d\Omega_3'^2 &= d\beta^2 + \sin^2 \beta d\gamma^2 + \cos^2 \beta d\xi^2\, .\end{aligned}$$ The three angular momenta on the five sphere $S^5$ are defined as $$\begin{aligned} \label{eq:JJJ} J_1= -i\partial_\chi\, , \quad J_2= -i\partial_\phi\, , \quad J_3= -i\partial_\alpha\, ,\end{aligned}$$ and the two angular momenta on the $S^3$ inside $\mbox{AdS}_5$ are defined as $$\begin{aligned} \label{eq:SS} S_1 = -i \partial_\gamma\, , \qquad S_2=-i\partial_\xi \, .\end{aligned}$$ We moreover define the quantity $J\equiv J_1 + \eta_1 J_2 + \eta_2 J_3 + \eta_3 S_1 + \eta_4 S_2$, where $\eta_1$, $\eta_2$, $\eta_3$, $\eta_4$ are some parameters that characterize the background. We will show that they play an important role in Section \[sec:decsectors\] where we compare the results we obtain on the string theory side with previous computations done in the dual gauge theory. The “no flat direction” Penrose limit ------------------------------------- In order to derive the new Penrose limit, we first review the Penrose limit giving rise to the [*BMN pp-wave* ]{}. 
We introduce new coordinates $\varphi_0,...,\varphi_4$ defined by $$\begin{aligned} \label{eq:noflatphi} \chi &= \varphi_0, \quad \phi = \eta_1 \varphi_0 + \varphi_1\, , \quad \alpha = \eta_2 \varphi_0 + \varphi_2\, , \quad \gamma = \eta_3 \varphi_0 + \varphi_3\, , \quad \xi = \eta_4 \varphi_0 + \varphi_4\,,\end{aligned}$$ and we define the light-cone coordinates as $$\begin{aligned} z^- = \frac{1}{2} \mu R^2 (t-\varphi_0)\, , \quad z^+ = \frac{1}{2\mu} (t+\varphi_0)\, . \label{lcc}\end{aligned}$$ By defining $r_1,...,r_4$ such that $$\begin{aligned} r_1= R \psi\, , \quad r_2 = R \theta\, ,\quad r_3 = R \rho \sin\beta\, ,\quad r_4= R \rho \cos\beta\, .\end{aligned}$$ we can parametrize the eight $z^i$ coordinates in the following way $$\begin{aligned} \label{coordinates} z^1+iz^2 = r_1e^{i\varphi_1}\, , \quad z^3+iz^4 = r_2e^{i\varphi_2}\, , \cr z^5+iz^6 = r_3e^{i\varphi_3}\, , \quad z^7+iz^8 = r_4e^{i\varphi_4}\, .\end{aligned}$$ Writing the background – in terms of the coordinate $z^\pm$ and $z^i$ and taking the Penrose limit by sending $R\to\infty$ while keeping $z^\pm$ and $z^i$ fixed, we obtain the following metric $$\label{eq:dsnoflat} \begin{split} ds^2=&-4dz^+dz^- + dz^i dz^i - \mu^2 \sum_{k=1}^{4} \left(1-\eta_{k}^{2}\right)\left[\left(z^{2k-1}\right)^2+\left(z^{2k}\right)^2\right]\left(dz^+\right)^2 \\ &+ 2\mu \sum_{k=1}^{4}\eta_k \left[z^{2k-1}dz^{2k}- z^{2k}dz^{2k-1}\right]dz^+. \end{split}$$ and five-form field strength $$\begin{aligned} \label{eq:F5z} F_{(5)} = 2 \mu \,dz^+ \left(dz^1 dz^2 dz^3 dz^4 + dz^5 dz^6 dz^7 dz^8 \right)\, .\end{aligned}$$ We see that by setting the parameters $\eta_k$’s all to zero, we precisely recover the pp-wave background derived in  [@Blau:2002mw; @Berenstein:2002jq]. In this sense, the background – is a generalization of it. 
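The limit just performed can be verified with a computer algebra system. The sympy sketch below (symbol names are illustrative) restricts to the $k=1$ transverse pair by setting $\rho=\theta=0$, substitutes the light-cone coordinates, and checks that the $R\to\infty$ expansion of the metric reproduces the corresponding piece of the pp-wave metric above, written in polar form.

```python
import sympy as sp

# Restrict to the k=1 transverse pair: rho = theta = 0. mu is the pp-wave
# mass parameter, eta1 the k=1 parameter; u = 1/R is the expansion
# parameter for the R -> infinity (Penrose) limit.
mu, eta1, u = sp.symbols('mu eta_1 u', positive=True)
zp, zm, r1, p1 = sp.symbols('z_plus z_minus r_1 varphi_1', real=True)
dzp, dzm, dr1, dp1 = sp.symbols('dz_plus dz_minus dr_1 dvarphi_1', real=True)

R = 1/u
# Light-cone coordinates inverted: t, varphi_0 in terms of z^+, z^-
t    = mu*zp + zm/(mu*R**2)
phi0 = mu*zp - zm/(mu*R**2)
psi, chi, phi = r1/R, phi0, eta1*phi0 + p1   # chi = varphi_0, etc.

def d(expr):  # total differential in the coordinates kept fixed
    return (sp.diff(expr, zp)*dzp + sp.diff(expr, zm)*dzm
            + sp.diff(expr, r1)*dr1 + sp.diff(expr, p1)*dp1)

# Relevant part of the AdS5 x S^5 metric at rho = theta = 0:
ds2 = R**2*(-d(t)**2 + d(psi)**2
            + sp.sin(psi)**2*d(phi)**2 + sp.cos(psi)**2*d(chi)**2)

# Penrose limit: the u -> 0 (i.e. R -> infinity) term of the expansion
limit_ds2 = sp.expand(ds2).series(u, 0, 1).removeO().subs(u, 0)

# Expected k=1 piece of the pp-wave metric, in polar form:
# dz1^2 + dz2^2 = dr1^2 + r1^2 dvarphi_1^2,  z1 dz2 - z2 dz1 = r1^2 dvarphi_1
expected = (-4*dzp*dzm + dr1**2 + r1**2*dp1**2
            - mu**2*(1 - eta1**2)*r1**2*dzp**2 + 2*mu*eta1*r1**2*dp1*dzp)

print(sp.simplify(sp.expand(limit_ds2 - expected)))  # expect 0
```

The divergent $R^2\mu^2(dz^+)^2$ pieces coming from $-R^2\cosh^2\rho\,dt^2$ and $R^2\cos^2\theta\cos^2\psi\,d\chi^2$ cancel against each other, leaving the finite mass term.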
Type IIB string theory can be quantized on this background and the light-cone Hamiltonian that one obtains is $$\begin{aligned} H_\textrm{lc} \sim E-J_1, \qquad p^+ \sim \frac{E+J_1}{R^2}\, .\end{aligned}$$ From the condition that $H_\textrm{lc}$ and $p^+$ should stay finite in the limit, we get that $J_1=-i\partial_{\varphi_0}$ must be large. On the other hand since $\varphi_1, ..., \varphi_4$ are all fixed in the limit $R\to \infty$, we deduce from , and that $J_2$, $J_3$, $S_1$ and $S_2$ are also fixed. We see from the above that the “no flat direction” Penrose limit corresponds to the following regime of type IIB string theory on $\mbox{AdS}_5\times S^5$ $$R \rightarrow \infty \ \mbox{with}\ E-J_1\ \mbox{fixed} , \quad \frac{E+J_1}{R^2}\ \mbox{fixed}, \quad \frac{J_1}{R^2} \ \mbox{fixed}, \quad g_s,l_s \ \mbox{fixed}$$ Translating this into ${\mathcal{N}}=4$ SYM language, it corresponds to the regime $$N \rightarrow \infty \ \mbox{with}\ E-J_1\ \mbox{fixed} , \quad \frac{E+J_1}{\sqrt{N}}\ \mbox{fixed}, \quad \frac{J_1}{\sqrt{N}} \ \mbox{fixed}, \quad {g_{\rm YM}}^2 \ \mbox{fixed}$$ The “one flat direction” Penrose limit -------------------------------------- Now we repeat an analogous procedure and show that, by a different choice of light-cone coordinates, we obtain a generalization of the pp-wave background derived in [@Bertolini:2002nr]. We define the coordinates $\varphi_0,...,\varphi_4$ in the following way $$\begin{aligned} \label{eq:oneflatphi} \chi = \varphi_0 -\varphi_1\, , \quad \phi = \varphi_0 + \varphi_1\, , \quad \alpha = \eta_2 \varphi_0 + \varphi_2\, , \quad \gamma = \eta_3\varphi_0 + \varphi_3\, , \quad \xi = \eta_4\varphi_0 + \varphi_4\, ,\end{aligned}$$ with the light-cone variables still given by Eq. . We moreover define $z^1$ and $z^2$ as $$\begin{aligned} z^1 = R\varphi_1\, , \quad z^2=R\left(\frac{\pi}{4}-\psi\right)\, ,\end{aligned}$$ while $z^3,...,z^8$ are defined as before (see Eq.)
and $$\begin{aligned} r_2 = R \theta\, , \quad r_3 = R \rho \sin\beta\, ,\quad r_4= R \rho \cos\beta\, ,\end{aligned}$$ $$\begin{aligned} z^3+iz^4 =r_2 e^{i\varphi_2}\, , \quad z_5+iz_6 = r_3e^{i\varphi_3}\, , \quad z_7+iz_8 = r_4e^{i\varphi_4}\, .\end{aligned}$$ The Penrose limit is then the limit $R\to\infty$ keeping $z^\pm,z^i$ fixed. Plugging the coordinates $z^\pm, z^i$ into the background – and taking the limit described above the metric becomes $$\label{eq:dsoneflat} \begin{split} ds^2=&-4dz^+dz^- + dz^i dz^i - \mu^2 \sum_{k=2}^{4} \left(1-\eta_{k}^{2}\right)\left[\left(z^{2k-1}\right)^2+\left(z^{2k}\right)^2\right]\left(dz^+\right)^2 \\ &+ 2\mu \sum_{k=2}^{4}\eta_k \left[z^{2k-1}dz^{2k}- z^{2k}dz^{2k-1}\right]dz^+ -4 \mu z^2 dz^+ dz^1. \end{split}$$ with the five-form given by . From we see that $z^1$ is an explicit isometry of the above pp-wave background and therefore we call this background [*one flat direction pp-wave background*]{}. As before we have that $\varphi_2,\varphi_3,\varphi_4$ are fixed in the Penrose limit which, using , means that $J_3$, $S_1$ and $S_2$ are fixed. But now the condition that $H_\textrm{lc}$, $p^+$ and $p^1$ have to remain finite in the limit tells us that the quantities $$E-J_1-J_2 , \quad \frac{E+J_1+J_2}{R^2}, \quad \frac{J_1+J_2}{R^2} , \quad \frac{J_1-J_2}{R} , \quad g_s,l_s$$ are all fixed when $R \to \infty$. This is the regime corresponding to the “one flat direction” Penrose limit of type IIB string theory on $\mbox{AdS}_5\times S^5$, as found in [@Bertolini:2002nr]. Translating this into ${\mathcal{N}}=4$ SYM language, it corresponds to the regime where [@Bertolini:2002nr] $$E-J_1-J_2 , \quad \frac{E+J_1+J_2}{\sqrt{N}}, \quad \frac{J_1+J_2}{\sqrt{N}} , \quad \frac{J_1-J_2}{N^{1/4}} , \quad {g_{\rm YM}}^2$$ are fixed for $N \to \infty$. The “two flat directions” Penrose limit --------------------------------------- We finally consider the Penrose limit that leads to a new pp-wave  with two flat directions. 
The variables $\varphi_0,$ $\varphi_1,$ $\varphi_2,$ $\varphi_3,$ $\varphi_4$ are now defined as $$\begin{gathered} \label{phi2fd} \chi = \varphi_0 - \sqrt{2}\varphi_1 - \varphi_2 \, , \qquad \phi = \varphi_0 + \sqrt{2}\varphi_1 - \varphi_2\, , \qquad \alpha = \varphi_0 + \varphi_2 \, ,{\nonumber}\\[2mm] \gamma = \eta_3 \varphi_0 + \varphi_3 \, , \qquad \xi = \eta_4 \varphi_0 + \varphi_4 \, ,\end{gathered}$$ whereas the light-cone coordinates are as usual given by . The coordinates $z^1$, $z^2$, $z^3$ and $z^4$ are defined as $$\begin{array}{lcl} z^1 = R \varphi_1 \, , & \phantom{qquad} & z^2 = \displaystyle{ \frac{R}{\sqrt{2}}} \left(\displaystyle{\frac{ \pi}{4}-\psi}\right) \, , \\[4mm] z^3 = R \varphi_2 \, , & & z^4 = R \left(\displaystyle{\frac{ \pi}{4}}-\theta \right) \, . \end{array}$$ while $z^5$, $z^6$, $z^7$, $z^8$ are again given by Eq.. More explicitly we have $$\begin{aligned} r_3 = R \rho \sin \beta \, , \qquad r_4 = R \rho \cos \beta\, ,\end{aligned}$$ $$\begin{aligned} z^5 + i z^6 = r_3 \displaystyle{ e^{i \varphi_3}} \, , \qquad z^7 + i z^8 = r_4 \displaystyle{ e^{i \varphi_4}}\, .\end{aligned}$$ Substituting the new coordinates in the background – and taking the Penrose limit we get the following pp-wave metric $$\label{eq:dstwoflat} \begin{split} ds^2&=-4dz^+dz^- + dz^i dz^i - \mu^2 \sum_{k=3,4} \left(1-\eta_{k}^{2}\right)\left[\left(z^{2k-1}\right)^2+\left(z^{2k}\right)^2\right]\left(dz^+\right)^2 \\ &+ 2\mu \sum_{k=3,4}\eta_k\left[ z^{2k-1}dz^{2k}- z^{2k}dz^{2k-1}\right]dz^+ - 4\mu\left(z^2 dz^1 + z^4 dz^3\right)dz^+. \end{split}$$ and the five-form is defined in . This is a new pp-wave background and it has two explicit isometries, $z^1$ and $z^3$. We will therefore refer to it as the [*two flat directions pp-wave background*]{}. In this case $\varphi_3,\varphi_4$ are fixed; thus, keeping in mind , the angular momenta $S_1$ and $S_2$ are also fixed.
In a similar fashion as before, requiring that $H_\textrm{lc}$, $p^+$, $p^1$ and $p^3$ stay finite in the Penrose limit, we get that the quantities $$E-J_1-J_2-J_3 , \quad \frac{E+J_1+J_2+J_3}{R^2}, \quad \frac{J_1+J_2+J_3}{R^2}, \quad \frac{J_1 - J_2}{R} ,\quad \frac{J_3 -J_1 - J_2}{R}, \quad g_s,l_s$$ are fixed as $R$ goes to infinity. This is the regime corresponding to the “two flat directions” Penrose limit of type IIB string theory on $\mbox{AdS}_5\times S^5$. Translating this into ${\mathcal{N}}=4$ SYM it corresponds to the regime where $$E-J_1-J_2-J_3 , \quad \frac{E+J_1+J_2+J_3}{\sqrt{N}}, \quad \frac{J_1+J_2+J_3}{\sqrt{N}}, \quad \frac{J_1 - J_2}{N^{1/4}} ,\quad \frac{J_3 -J_1 - J_2}{N^{1/4}} , \quad {g_{\rm YM}}^2$$ are fixed for $N \rightarrow \infty$. Here $J_1-J_2$ and $J_3-J_1-J_2$ correspond to the two momenta for the two space-like isometries of the [*two flat directions pp-wave background*]{} . Type IIB string theory on the pp-wave backgrounds , (with five-form field strength given by ) can be easily quantized. The spectra in all these three cases are worked out in the next section. String theory spectrum on a rotated pp-wave background {#sec:stringrotspectra} ====================================================== In this section we obtain a pp-wave metric, which depends on parameters introduced through a coordinate transformation of the maximally supersymmetric pp-wave background of [@Blau:2001ne]. For this reason, in practice, this metric describes an infinite set of pp-wave backgrounds (one for each point of the parameter space). We refer to them as *rotated pp-wave backgrounds*. Note that the backgrounds obtained in this way do not necessarily have any specific meaning in an AdS/CFT context. They have a meaning in the AdS/CFT context only if we derive them from a Penrose limit of $\mbox{AdS}_5 \times S^5$.
Despite this, the procedure that we are going to describe turns out to be very useful because it allows us to obtain a general formula that contains all the physically interesting pp-wave backgrounds. In fact we will show that by appropriately choosing the values of the parameters of the background, this general formula describes exactly the pp-wave backgrounds studied in the previous section, which are indeed obtained by taking Penrose limits of the $\mbox{AdS}_5 \times S^5$ geometry. We can then proceed to find the spectra on these generic rotated pp-wave backgrounds. An important result is that, by taking an appropriate limit on these spectra, we will show that one can reproduce the spectra found in [@Harmark:2007px] for the nine decoupled sectors of ${\mathcal{N}}=4$ SYM which contain scalars. Coordinate transformation ------------------------- We start from the simplest pp-wave background metric without flat directions $$\label{BMNmetric} ds^2=-4dx^+dx^- - \mu^2 x^ix^i\left(dx^+\right)^2+dx^idx^i\, ,$$ where $i=1,2,\dots,8$ and five-form field strength $$\label{fff} F_{(5)}=2\mu dx^{+}\left(dx^{1}dx^{2}dx^{3}dx^{4}+dx^{5}dx^{6}dx^{7}dx^{8}\right)\, .$$ We consider the following coordinate transformation $$\label{transfrot} \begin{split} x^- =z^- &+\frac{\mu}{2}\left(C_1 z^1z^2 + C_2z^3z^4 + C_3z^5z^6 + C_4z^7z^8\right)\, , \\[2mm] \left( \begin{array}{c} x^{2k-1} \\[2mm] x^{2k} \end{array} \right) &= \left( \begin{array}{cc} \cos(\eta_k \mu z^+) & -\sin(\eta_k \mu z^+) \\[2mm] \sin(\eta_k \mu z^+) & \cos(\eta_k \mu z^+) \end{array} \right) \left( \begin{array}{c} z^{2k-1} \\[2mm] z^{2k} \end{array} \right)\, , \end{split}$$ where $C_{k}$ and $\eta_{k}$, $k=1, 2, 3, 4$, are parameters. Note that the transformations for the transverse coordinates are rotations whose angles depend on the $\eta_k$ parameters, hence the name “[*rotated pp-wave* ]{}”.
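Working out the effect of this transformation on the metric is mechanical but error-prone; the following sympy sketch (restricted to the first transverse pair, with illustrative symbol names) performs the substitution and reproduces the rotated metric given next.

```python
import sympy as sp

# First transverse pair (k=1) of the rotation transformation; mu, eta, C
# are the parameters, zp = z^+. The remaining pairs work identically.
mu, eta, C = sp.symbols('mu eta C', real=True)
zp, zm, z1, z2 = sp.symbols('z_plus z_minus z_1 z_2', real=True)
dzp, dzm, dz1, dz2 = sp.symbols('dz_plus dz_minus dz_1 dz_2', real=True)

c, s = sp.cos(eta*mu*zp), sp.sin(eta*mu*zp)
xm = zm + sp.Rational(1, 2)*mu*C*z1*z2    # shift of x^-
x1 = c*z1 - s*z2                          # rotation of the pair
x2 = s*z1 + c*z2

def d(expr):  # total differential (dx^+ = dz^+)
    return (sp.diff(expr, zp)*dzp + sp.diff(expr, zm)*dzm
            + sp.diff(expr, z1)*dz1 + sp.diff(expr, z2)*dz2)

# BMN metric restricted to this pair:
ds2 = sp.expand(-4*dzp*d(xm) - mu**2*(x1**2 + x2**2)*dzp**2
                + d(x1)**2 + d(x2)**2)

# Rotated pp-wave metric, same pair:
expected = sp.expand(
    -4*dzp*dzm + dz1**2 + dz2**2
    - mu**2*(1 - eta**2)*(z1**2 + z2**2)*dzp**2
    - 2*mu*((C - eta)*z1*dz2 + (C + eta)*z2*dz1)*dzp)

print(sp.simplify(ds2 - expected))  # expect 0
```

The check uses only $\cos^2+\sin^2=1$; the $C_k$-dependent cross terms come entirely from the shift of $x^-$, while the $\eta_k$-dependent ones come from the time-dependent rotation.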
The metric then becomes $$\label{rotmetric} \begin{split} ds^2=&-4dz^+dz^- + dz^i dz^i - \mu^2 \sum_{k=1}^{4} \left(1-\eta_{k}^{2}\right)\left[\left(z^{2k-1}\right)^2+\left(z^{2k}\right)^2\right]\left(dz^+\right)^2 \\ &- 2\mu \sum_{k=1}^{4}\left[(C_k-\eta_k)z^{2k-1}dz^{2k}+(C_k+\eta_k)z^{2k}dz^{2k-1}\right]dz^+\, , \end{split}$$ while the five-form field strength is invariant under the coordinate transformation . It is straightforward to check that the metric contains all the pp-wave backgrounds obtained in Section \[sec:stringtheory\]. In fact, for various values of the $C_k$ and $\eta_k$ parameters, we have the following possibilities

  -------------------------------------------  ---------------  ----------------------
  $C_1=C_2=C_3=C_4=0$                          $\Rightarrow$    no flat direction;
  $C_1=\eta_1=1$ and $C_2=C_3=C_4=0$           $\Rightarrow$    one flat direction;
  $C_1=\eta_1=C_2=\eta_2=1$ and $C_3=C_4=0$    $\Rightarrow$    two flat directions.
  -------------------------------------------  ---------------  ----------------------

String theory can be quantized on the general background  and we now proceed to find the superstring spectrum. Bosonic sector -------------- We work in the light-cone gauge $z^+ = p^+ \tau$ with $l_s=1$. The light-cone Lagrangian density of the bosonic $\sigma$-model is given by $$\label{boslagr} \begin{split} \mathscr{L}_{lc}^{B}= &- \frac{1}{4\pi p^+}\left(\partial^{\alpha}z^i\partial_{\alpha}z^i+ f^2 \sum_{k=1}^{4}\left(1-\eta_{k}^{2}\right)\left[\left(z^{2k-1}\right)^2+\left(z^{2k}\right)^2\right] \right. \\ &+\left. 2f \sum_{k=1}^{4}\left[(C_k-\eta_k)z^{2k-1}\dot{z}^{2k}+(C_k+\eta_k)z^{2k}\dot{z}^{2k-1}\right]\right)\, , \end{split}$$ where we have defined $f = \mu p^+$.
The conjugate momenta are computed to be $$\Pi_{2k-1} = \frac{\dot{z}^{2k-1}-f\left(C_k + \eta_k \right) z^{2k}}{2\pi }\, ,~~~~~ \Pi_{2k} = \frac{\dot{z}^{2k}-f\left(C_k - \eta_k \right) z^{2k-1}}{2\pi }\, ,$$ and the bosonic light-cone Hamiltonian is given by $$H_{lc}^{B}= \frac{1}{4\pi p^+}\int_{0}^{2\pi}d\sigma \Bigg[ \dot{z}^i \dot{z}^i+ (z^i)'(z^i)' +f^2 \sum_{k=1}^{4}\left(1-\eta_{k}^{2}\right)\left[\left(z^{2k-1}\right)^2+\left(z^{2k}\right)^2\right]\Bigg]\, .$$ In order to solve the equations of motion $$\begin{aligned} &\partial^{\alpha}\partial_{\alpha}z^{2k-1}+2f\eta_k \dot{z}^{2k} - f^2 \left(1-\eta_{k}^{2}\right) z^{2k-1}=0\label{moteq1}\, ,\\ &\partial^{\alpha}\partial_{\alpha}z^{2k}-2f\eta_k \dot{z}^{2k-1} - f^2 \left(1-\eta_{k}^{2}\right) z^{2k}=0\label{moteq2}\, ,\end{aligned}$$ it is useful to introduce four complex fields $$X^k = z^{2k-1}+ iz^{2k}\, ,$$ in terms of which the above equations read $$\begin{aligned} &\partial^{\alpha}\partial_{\alpha}X^{k}-2 i f\eta_k \dot{X}^{k} - f^2 \left(1-\eta_{k}^{2}\right) X^{k}=0\, ,\label{moteqd1}\\ &\partial^{\alpha}\partial_{\alpha}\bar{X}^{k}+2 i f\eta_k \dot{\bar{X}}^{k} - f^2 \left(1-\eta_{k}^{2}\right) \bar{X}^{k}=0\label{moteqd2}\, .\end{aligned}$$ One can see that a solution of the form $$X^k=e^{-i f \eta_k \tau} Y^k$$ solves if $Y^k$ satisfy the equation $$\partial^{\alpha}\partial_{\alpha}Y^{k} -f^2 Y^k=0\, .$$ Therefore for $Y^k$ and its conjugate $\bar{Y}^k$ we have the following mode expansions \[bosmodeex\] $$\begin{aligned} Y^k&=i \sum_{n=-\infty}^{+\infty} \frac{1}{\sqrt{\omega_n}}\left(a_{n}^{k}e^{-i (\omega_n \tau -n\sigma)}- \left(\tilde{a}_{n}^{k}\right)^\dagger e^{i (\omega_n \tau -n\sigma)}\right)\, , \\ \bar{Y}^k&=i \sum_{n=-\infty}^{+\infty} \frac{1}{\sqrt{\omega_n}}\left(\tilde{a}_{n}^{k}e^{-i (\omega_n \tau -n\sigma)}- \left(a_{n}^{k}\right)^\dagger e^{i (\omega_n \tau -n\sigma)}\right)\, .\end{aligned}$$ The bosonic Hamiltonian now reads $$\label{Hcomplexfield} H_{lc}^{B}= 
\frac{1}{4\pi p^+}\int_{0}^{2\pi}d\sigma \sum_{k=1}^{4}\left(\dot{\bar{X}}^k \dot{X}^k+ (\bar{X}^k)'(X^k)' +f^2 \left(1-\eta_{k}^{2}\right)\bar{X}^{k}X^{k}\right)\, .$$ Then we quantize the theory imposing the canonical equal-time commutation relations $$\label{etcr} \left[a_{n}^{k},a_{m}^{k'}\right]=0\, , \qquad \left[a_{n}^{k},(a_{m}^{k'})^{\dagger}\right]=\left[\tilde{a}_{n}^{k},(\tilde{a}_{m}^{k'})^{\dagger}\right]=\delta^{kk'}\delta_{nm}\, .$$ We obtain the following bosonic spectrum in this background $$\label{rotbosH} \begin{split} H_{lc}^{B}=& \frac{1}{ p^+}\sum_{n=-\infty}^{+\infty} \sum_{k=1}^2 \left[\left(\omega_n + \eta_k f\right) M_{n}^{(k)}+\left(\omega_n - \eta_k f\right) \tilde{M}_{n}^{(k)}\right. \\ +&\left.\left(\omega_n + \eta_{(k+2)} f\right) N_{n}^{(k)}+\left(\omega_n - \eta_{(k+2)} f\right) \tilde{N}_{n}^{(k)}\right]\, , \end{split}$$ where $\omega_n = \sqrt{n^2 + f^2}$ for all $n\in \mathbb{Z}$ and the number operators are defined as $$M_{n}^{(k)}=a_{n}^{k\dagger}a_{n}^{k}\, , ~~ \tilde{M}_{n}^{(k)} =\tilde{a}_{n}^{k\dagger}\tilde{a}_{n}^{k} \, ,~~N_{n}^{(k)}=a_{n}^{(k+2)\dagger}a_{n}^{(k+2)}\, , ~~\tilde{N}_{n}^{(k)} =\tilde{a}_{n}^{(k+2)\dagger}\tilde{a}_{n}^{(k+2)}$$ for $k=1,2$. Fermionic sector ---------------- We now work out the fermionic part of the spectrum. The light-cone gauge and $\kappa$-symmetry gauge-fixing conditions are $$z^+ = p^+ \tau, \qquad \Gamma^{+}\theta^A=0\,$$ where $\theta^A$, with $A=1,2$, is a Majorana-Weyl spinor with $32$ components.
The Green-Schwarz fermionic light-cone action is then given by [@Metsaev:2002re] $$\label{GSaction} S_{lc}^{F}= \frac{i}{4\pi p^+}\int d\tau d\sigma \left[ \left(\eta^{\alpha\beta}\delta_{AB}-\epsilon^{\alpha\beta}\left(\sigma_{3}\right)_{AB}\right)\partial_{\alpha}z^+ \bar{\theta}^A \Gamma_+ \left(\mathcal{D}_{\beta}\theta\right)^B\right]\, ,$$ with covariant derivative $$\mathcal{D}_{\alpha}=\partial_{\alpha}+\frac{1}{4}\partial_{\alpha}z^+ \left(\omega_{+\rho\sigma}\Gamma^{\rho \sigma}-\frac{1}{2\cdot 5!}F_{\lambda\nu\rho\sigma\kappa}\Gamma^{\lambda\nu\rho\sigma\kappa}i\sigma_2 \Gamma_+ \right)\, ,$$ where the $\sigma_{k}$ are the Pauli matrices and $\omega_{abc}$ are the spin connection coefficients. The non-vanishing components of the five-form field strength are $F_{+1234}=F_{+5678}=2\mu$. We can write the action as $$\label{feract} \begin{split} S_{lc}^{F}=& \frac{i}{2\pi p^+ }\int d\tau d\sigma \Bigg\{{\left(S^1\right)^T} \left[\partial_{+}-\frac{f}{2}\sum_{k=1}^{4}\eta_{k}\gamma^{2k-1,2k}\right]S^1\\ +& {\left(S^2\right)^T} \left[\partial_{-}-\frac{f}{2}\sum_{k=1}^{4}\eta_{k}\gamma^{2k-1,2k}\right]S^2 -2f {\left(S^1\right)^T} \Pi S^2\Bigg\}\, . \end{split}$$ where $S^A$, $A=1,2$, is an eight-component real spinor and we introduced the matrix $\Pi=\gamma^{1234}$, where $\gamma_i$ are $8\times 8$ Dirac matrices [^6]. Moreover, $\partial_{\pm}=\partial_{\tau}\pm\partial_{\sigma}$.
The equations of motion are \[eqmotferm\] $$\begin{aligned} &\left(\partial_{+}-\frac{f}{2}\sum_{k=1}^{4}\eta_{k}\gamma^{2k-1,2k}\right)S^{1}-f\Pi S^{2}=0\, ,\\ &\left(\partial_{-}-\frac{f}{2}\sum_{k=1}^{4}\eta_{k}\gamma^{2k-1,2k}\right)S^{2}+f\Pi S^{1}=0\, .\end{aligned}$$ It is useful to observe that a field of the form $$S^{A}=e^{\displaystyle \frac{f}{2}\sum_{k=1}^{4}\eta_{k}\gamma^{2k-1,2k}\tau}\Sigma^{A}$$ satisfies the above equations if the fields $\Sigma^{A}$ obey the equations of motion of the fermionic fields in the usual pp-wave background [@Metsaev:2001bj; @Metsaev:2002re]: $$\partial_{+}\Sigma^{1}-f\Pi \Sigma^{2}=0\, ,~~~~~~\partial_{-}\Sigma^{2}+f\Pi \Sigma^{1}=0\, ,$$ whose solutions are $$\begin{aligned} &\Sigma^{1}=c_0\, e^{-i f \tau}S_0 - \sum_{n>0}c_n e^{-i \omega_{n}\tau} \left(S_n e^{i n \sigma}+\frac{\omega_{n}-n}{f} S_{-n}e^{-i n \sigma} \right) +\textrm{h.c. },\\ &\Sigma^{2}=-c_0\, e^{-i f \tau}i\Pi S_0 - i \Pi\sum_{n>0}c_n e^{-i \omega_{n}\tau} \left(S_{-n} e^{-i n \sigma}-\frac{\omega_{n}-n}{f} S_{n}e^{i n \sigma} \right)+\textrm{h.c. },\end{aligned}$$ where, for all values of $n$, $\omega_{n}=\sqrt{n^2+f^2}$, while $c_n = \frac{1}{\sqrt{2}}[1+(\frac{\omega_{n}-n}{f})^{2}]^{-1/2}$. The fermionic conjugate momenta can be computed from the action $$\lambda^{A}=\frac{i}{2\pi}S^{A}\, ,$$ and the fermionic part of the Hamiltonian can be written in the form $$H_{lc}^{F}= \frac{i}{2\pi p^+ }\int^{2\pi}_{0}d\sigma \left({\left(S^1\right)^T}\dot{S^1}+{\left(S^2\right)^T}\dot{S^2}\right)\,$$ where we used the equations of motion . 
Now we quantize the theory imposing the canonical equal time anticommutation relations $$\left\{S_{n}^{a},\left(S_{m}^{b}\right)^{\dagger}\right\}=\delta^{ab}\delta_{nm}\,$$ and the fermionic Hamiltonian reads $$H_{lc}^{F}=\frac{1}{ p^+ }{\sum_{n=-\infty}^{+\infty}}S_{n}^{\dagger} \left(\omega_{n}+i\frac{f}{2}{\sum_{k=1}^{4}\eta_{k}\gamma^{2k-1,2k}}\right)S_{n}\, .$$ The matrices $i\,\gamma^{2k-1,2k}$ are commuting matrices and have eigenvalues $\pm 1$, each with multiplicity four. Since they commute we can find a set of common eigenvectors. Choosing this set as basis we can write the fermionic spectrum as $$\label{rotferH} H_{lc}^{F}= \frac{1}{ p^+}{\sum_{n=-\infty}^{+\infty}} \sum_{b=1}^{8} \left(\omega_n + \frac{f}{2} d_b \right)F_{n}^{(b)}\, ,$$ where $F_{n}^{(b)}$ are the fermionic number operators defined by the relation $$F_{n}^{(b)}=\left(S_{n}^{b}\right)^{\dagger}S_{n}^{b}\,$$ and where we have defined the coefficients $d_b$ as the following combinations of the $\eta_k$ parameters $$\begin{array}{lll} d_1 = -\eta_{1}-\eta_{2}+\eta_{3}+\eta_{4} \, , \phantom{qquad} & d_5 = -\eta_{1}+\eta_{2}+\eta_{3}-\eta_{4} \, ,\\[1mm] d_2 = -\eta_{1}-\eta_{2}-\eta_{3}-\eta_{4} \, , & d_6 = \eta_{1}-\eta_{2}+\eta_{3}-\eta_{4} \, ,\\[1mm] d_3 = \eta_{1}+\eta_{2}+\eta_{3}+\eta_{4} \, , & d_7 = \eta_{1}-\eta_{2}-\eta_{3}+\eta_{4} \, ,\\[1mm] d_4 = \eta_{1}+\eta_{2}-\eta_{3}-\eta_{4} \, , & d_8 = -\eta_{1}+\eta_{2}-\eta_{3}+\eta_{4} \, . \end{array}$$ At this point we can write the total light-cone Hamiltonian, $H_{lc}$, of type IIB string theory on the [*rotated pp-wave s*]{} $$\label{eq:rotH} \begin{split} H_{lc}=&H_{lc}^{B} +H_{lc}^{F}= \frac{1}{ p^+}\sum_{n=-\infty}^{+\infty} \left\{\sum_{k=1}^2 \left[\left(\omega_n + \eta_k f\right) M_{n}^{(k)}+\left(\omega_n - \eta_k f\right) \tilde{M}_{n}^{(k)}\right]\right. 
\\ +&\left.\sum_{k=1}^2\left[\left(\omega_n + \eta_{(k+2)} f\right) N_{n}^{(k)}+\left(\omega_n - \eta_{(k+2)} f\right) \tilde{N}_{n}^{(k)}\right] + \sum_{b=1}^{8}\left(\omega_n + \frac{f}{2} d_b \right)F_{n}^{(b)}\right\}\, , \end{split}$$ and the level matching condition is $$\sum_{n=-\infty}^{+\infty} n\left[\sum_{k=1}^2\left(M_{n}^{(k)}+\tilde{M}_{n}^{(k)} +N_{n}^{(k)}+\tilde{N}_{n}^{(k)}\right)+ \sum_{b=1}^{8}F_{n}^{(b)}\right]=0 \, .$$ Note that the spectrum does not depend on the $C_k$ parameters, which just represent a gauge choice, but only on the $\eta_k$ parameters. The decoupled sectors {#sec:decsectors} ===================== In this section we show that by taking a certain limit of the spectra , we can reproduce the spectrum of anomalous dimensions of gauge theory operators in the dual sectors of $\mathcal{N}=4$ SYM theory found in [@Harmark:2007px]. The procedure follows that of [@Harmark:2006ta] where the spectrum in the $SU(2)$ sector is matched. Here we generalize this to all sectors that include scalar fields on the gauge theory side. According to the AdS/CFT correspondence, the string light-cone Hamiltonian $H_{\rm lc}$ should be dual to $D-J$ on the gauge theory side, $$\frac{H_{\rm lc}}{\mu}\, \longleftrightarrow \, D-J \, ,$$ where $D$ is the dilatation operator and $J$ is the total charge defined by $J = n_1 S_1 + n_2 S_2 + n_3 J_1 + n_4 J_2 + n_5 J_3$ with the $n_i$ characterizing the decoupling limit giving a particular sector of ${\mathcal{N}}=4$ SYM [@Harmark:2007px]. As explained in more detail below, the decoupling limit on the gauge theory consists of taking the limit $D-J \rightarrow 0$ and $\lambda\rightarrow 0$ keeping $(D-J)/\lambda$ fixed. On the string theory side, this decoupling limit corresponds to the limit $\mu \to \infty$, or equivalently $f \to \infty$. We now apply this limit to the string spectra .
Remembering the definition of $\omega_n$, its expansion for $f \to \infty$ takes the form $$\omega_n=\sqrt{f^2 + n^2}\simeq f+\frac{n^2}{2f} +\mathcal{O}(f^{-3})\, .$$ In order for the spectra to be finite, the divergent term contained in the expansion of $\omega_n$ should cancel. In the bosonic part of the Hamiltonian we deal with terms of the kind $$\begin{aligned} \left(\omega_n + \eta_k f\right)M_{n}^{(k)} & \simeq \left[f\left(1+\eta_k \right) + \frac{n^2}{2f} +\mathcal{O}(f^{-3})\right] M_{n}^{(k)}\, ,\\ \left(\omega_n - \eta_k f\right)\tilde{M}_{n}^{(k)} & \simeq \left[f\left(1-\eta_k \right) + \frac{n^2}{2f} +\mathcal{O}(f^{-3})\right] \tilde{M}_{n}^{(k)}\, , \end{aligned}$$ and the analogous ones for $N_{n}^{(k)}$ and $\tilde{N}_{n}^{(k)}$. In the fermionic part of the Hamiltonian we have instead $$\left( \omega_n + \frac{f}{2} d_b\right)F_{n}^{(b)} \simeq \left[f\left(1+\frac{d_b}{2}\right) + \frac{n^2}{2f} +\mathcal{O}(f^{-3})\right]F_{n}^{(b)}\, .$$ The only terms that survive the limit $f \to \infty$ are those for which the coefficient of the linear part in $f$ vanishes. All the other terms are divergent and thus decouple in the large $f$ limit. The bosonic number operators will survive only if the corresponding $\eta_k$ turns out to be $\pm 1$ and the fermionic number operators only if the corresponding $d_b$ turns out to be $-2$. In the following we want to show that by appropriately fixing the values of the parameters $\eta_k$, the string theory spectra that survive the limit $\mu \to \infty$ precisely reproduce the spectra of the dual gauge theory sectors. As an important consequence of the matching of the spectra, it follows that the Hagedorn temperature of the gauge theory also matches that of string theory in these sectors.
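The mode-counting argument above can be checked symbolically; the following sympy sketch confirms both the subleading $n^2/(2f)$ correction and the survival criterion for the mode frequencies.

```python
import sympy as sp

n, f = sp.symbols('n f', positive=True)
omega = sp.sqrt(n**2 + f**2)

# Leading behaviour as f -> infinity: omega_n - f -> 0, and
# 2f*(omega_n - f) -> n^2 confirms the n^2/(2f) correction term.
print(sp.limit(omega - f, f, sp.oo))          # 0
print(sp.limit((omega - f)*2*f, f, sp.oo))    # n**2

# A bosonic mode with frequency omega_n + eta*f stays finite only when
# the linear-in-f coefficient (1 + eta) vanishes, i.e. eta = -1;
# likewise a fermionic mode omega_n + (f/2)*d_b requires d_b = -2.
for eta in (-1, 0, 1):
    print(eta, sp.limit(omega + eta*f, f, sp.oo))   # finite (0) only for eta = -1
```

The operators $\tilde{M}_n^{(k)}$ carry $\omega_n-\eta_k f$ instead, which is why $\eta_k=+1$ keeps the tilded tower rather than the untilded one.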
This can also be used to verify the conjectured relation between the Hagedorn/deconfinement temperature of planar ${\mathcal{N}}=4$ SYM on ${\mathbb{R}}\times S^3$ and the Hagedorn temperature of string theory on $\mbox{AdS}_5\times S^5$. Moreover, these results show that the decoupling limits [@Harmark:2006di; @Harmark:2006ta; @Harmark:2006ie; @Harmark:2007et; @Harmark:2007px] of thermal $SU(N)$ $\mathcal{N}=4$ SYM on ${\mathbb{R}}\times S^3$ provide a very useful and powerful tool to match gauge theory and string theory. On the gauge theory side the idea [@Harmark:2006di; @Harmark:2006ta; @Harmark:2006ie; @Harmark:2007et; @Harmark:2007px; @Harmark:2008gm] is to consider decoupling limits of weakly coupled ${\mathcal{N}}=4$ SYM on ${\mathbb{R}}\times S^3$ with gauge group $SU(N)$. The decoupling limit is defined by $$\label{limit2} \lambda \rightarrow 0 {\,, \ \ }J_i,\, N \ \mbox{fixed} {\,, \ \ }H_{\rm g.t.} \equiv \frac{E-J}{\lambda} \ \mbox{fixed}$$ where $\lambda={g_{\rm YM}}^2 N/4\pi^2$ is the ’t Hooft coupling of $\mathcal{N}=4$ SYM theory, $E$ is the energy of a state measured in units of the three sphere radius and $J\equiv n_1 S_1 + n_2 S_2 + n_3 J_1 + n_4 J_2 + n_5 J_3$ is the total charge with $n_i$, $i=1,\ldots,5$ being fixed numbers. $S_1$ and $S_2$ denote the two charges of the $SO(4)$ group of $S^3$ and $J_1$, $J_2$ and $J_3$ are the three R-charges. Here we only consider the gauge theory in the planar limit $N=\infty$. In terms of operators we have that the Hamiltonian is given by $H_{\rm g.t.} = (D-J)/\lambda$. $D$ is the dilatation operator of $\mathcal{N}=4$ SYM which, at weak ’t Hooft coupling, can be expanded as $$D = D_0 + \lambda D_2 + \lambda^{\frac{3}{2}}D_3 + \lambda^2D_4 + \ldots$$ where $D_0$ is the bare scaling dimension, $D_2$ is the one-loop part of the dilatation operator and so on. One can see that in the limit , the operators with $D_0>J$ decouple and only the ones with $D_0=J$ survive the limit. 
One thus gets the effective Hamiltonian $H_{\rm g.t.}=D_2$, namely only the one-loop part of the dilatation operator survives the limit  [@Harmark:2006di; @Harmark:2006ta; @Harmark:2006ie; @Harmark:2007et; @Harmark:2007px; @Harmark:2008gm]. Among the possible decoupling limits of $\mathcal{N}=4$ SYM theory found in [@Harmark:2007px], here we are interested only in the decoupled sectors that contain scalars. The presence of the scalars is in fact crucial in order to analyze the regime of the gauge theory which is related to the dual string theory. These sectors are the $SU(2)$, $SU(1|1)$, $SU(1|2)$, $SU(2|3)$, bosonic $SU(1,1)$, $SU(1,1|1)$, $SU(1,1|2)$, $SU(1,2|2)$ and $SU(1,2|3)$ sectors. **Sector** $(n_1,n_2,n_3,n_4,n_5)$ --------------- -------------------------------------------------------- $SU(2)$ (0,0,1,1,0) $SU(1,1)_{b}$ (1,0,1,0,0) $SU(1|1)$ $\left(\frac{2}{3},0,1,\frac{2}{3},\frac{2}{3}\right)$ $SU(1|2)$ $\left(\frac{1}{2},0,1,1,\frac{1}{2}\right)$ $SU(2|3)$ (0,0,1,1,1) $SU(1,1|1)$ $\left(1,0,1,\frac{1}{2},\frac{1}{2}\right)$ $SU(1,1|2)$ (1,0,1,1,0) $SU(1,2|2)$ (1,1,1,0,0) $SU(1,2|3)$ (1,1,1,1,1) : The table shows the nine decoupled sectors that contain at least one scalar: in the left column are listed the sectors that survive the decoupling limit for the corresponding choice of $n=(n_1,n_2,n_3,n_4,n_5)$ reported in the right column. $SU(1,1)_b$ is the bosonic $SU(1,1)$ sector.[]{data-label="tab:sectors"} For more details see Ref. [@Harmark:2007px]. 
The spectra for these nine different sectors all take the form [@Harmark:2007px] $$\label{eq:ABCspectrum2} H_{\rm g.t.} = \frac{2\pi^2}{J^2} \sum_{n\in \mathbb{Z}} n^2 \left( \sum_{i=1}^a M_n^{(i)} +\sum_{j=1}^b N_n^{(j)} + \sum_{\alpha=1}^c F_n^{(\alpha)} \right)$$ The cyclicity (zero momentum) constraint is $$\begin{aligned} \label{eq:ABCconstraint} P \equiv \sum_{n\in \mathbb{Z}} n \left( \sum_{i=1}^a M_n^{(i)} +\sum_{j=1}^b N_n^{(j)} + \sum_{\alpha=1}^c F_n^{(\alpha)} \right) = 0.\end{aligned}$$ Note that $F_n^{(\alpha)} \in \{0,1\}$ while $M_n^{(i)}, N_n^{(j)} \in \{0,1,2,...\}$. The numbers $a,b$ and $c$ are given in Tab. \[tab:abc\]. $SU(\cdot)$ $(2)$ $(1,1)_b$ $(1|1)$ $(1|2)$ $(2|3)$ $(1,1|1)$ $(1,1|2)$ $(1,2|2)$ $(1,2|3)$ ------------- ------- ----------- --------- --------- --------- ----------- ----------- ----------- ----------- $a$ 1 0 0 1 2 0 1 0 2 $b$ 0 1 0 0 0 1 1 2 2 $c$ 0 0 1 1 2 1 2 2 4 : The table shows how many number operators we have of each type ($a$ for scalars $M_n$, $b$ for derivatives $N_n$, and $c$ for fermions $F_n$) in each of the nine theories that contain at least one scalar. $SU(1,1)_b$ is the bosonic $SU(1,1)$ sector. \[tab:abc\] We want to show that there is a direct relation between the critical values of the numbers $(n_1,...,n_5)$ that characterize the various sectors on the gauge theory side and the parameters $\eta_1,...,\eta_4,$ that give the corresponding decoupled sectors on the string theory side. From table \[tab:sectors\], we see that all the nine sectors containing scalars have $n_3 = 1$. 
It is not hard to see that a suitable choice of the $\eta_k$ parameters to match the string theory spectrum with the spectrum on the gauge theory side is the following $$\label{eq:etaasn} \eta_1 =n_4 \, ,\qquad \eta_2 =n_5 \, , \qquad \eta_3 =-n_1 \, ,\qquad \eta_4 =n_2 \, .$$ Using the previous relations in the spectrum and taking the limit $f\to \infty$ we see that the string theory spectrum precisely matches the spectrum of the nine decoupled sectors on the gauge theory side. As an example we can consider the $SU(1,1|1)$ sector: in this case $n=\left(1,0,1,\frac{1}{2},\frac{1}{2}\right)$ (see Table \[tab:sectors\]) so using the relations we have that $\eta=\left(\frac{1}{2},\frac{1}{2},-1,0\right)$. Since the only $\eta_k$ equal to $-1$ is $\eta_3$ and the only $d_b$ equal to $-2$ is $d_1$, only one bosonic and one fermionic number operator survive the limit $f \to \infty$. The string theory spectrum thus becomes $$\label{strsect2} \frac{H_{lc}}{\mu}\sim \frac{1}{2 \mu p^+ f} \sum_{n\in \mathbb{Z}} n^2 \left( N_n^{(1)} + F_n^{(1)} \right)\, ,$$ which, using the dictionary between gauge theory and string theory, can be written as $$\label{strsect} \frac{H_{lc}}{\mu}= \lambda D_2=\frac{2\pi^2\lambda}{J^2} \sum_{n\in \mathbb{Z}} n^2 \left( N_n^{(1)} + F_n^{(1)} \right)\, ,$$ where we used that $f=J/(2\pi\sqrt{\lambda})$. It is easy to check that is in accordance with the corresponding result on the gauge theory side, which can be deduced from . We can repeat an analogous check for all the other decoupled sectors and show that the field content of the surviving spectrum is exactly the same as the one obtained on the gauge theory side. Using again Table \[tab:abc\], we can thus write the reduced spectrum for all the nine sectors on the string theory side at once. 
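Since a bosonic tower survives the limit only when the corresponding $\eta_k$ equals $\pm 1$, the identification of the $\eta_k$ with the $n_i$ can be cross-checked against the two tables: for every sector the number of $\eta_k$ with $|\eta_k|=1$ must equal $a+b$, the total number of surviving bosonic operators. A minimal Python sketch of this bookkeeping (the dictionaries transcribe Tables \[tab:sectors\] and \[tab:abc\]; the function name is ours):

```python
from fractions import Fraction as F

# (n1, ..., n5) per sector, transcribed from Table [tab:sectors].
sectors = {
    "SU(2)":     (0, 0, 1, 1, 0),
    "SU(1,1)_b": (1, 0, 1, 0, 0),
    "SU(1|1)":   (F(2, 3), 0, 1, F(2, 3), F(2, 3)),
    "SU(1|2)":   (F(1, 2), 0, 1, 1, F(1, 2)),
    "SU(2|3)":   (0, 0, 1, 1, 1),
    "SU(1,1|1)": (1, 0, 1, F(1, 2), F(1, 2)),
    "SU(1,1|2)": (1, 0, 1, 1, 0),
    "SU(1,2|2)": (1, 1, 1, 0, 0),
    "SU(1,2|3)": (1, 1, 1, 1, 1),
}

# (a, b) bosonic number-operator counts, transcribed from Table [tab:abc].
bosonic_counts = {
    "SU(2)": (1, 0), "SU(1,1)_b": (0, 1), "SU(1|1)": (0, 0),
    "SU(1|2)": (1, 0), "SU(2|3)": (2, 0), "SU(1,1|1)": (0, 1),
    "SU(1,1|2)": (1, 1), "SU(1,2|2)": (0, 2), "SU(1,2|3)": (2, 2),
}

def surviving_bosonic_towers(n):
    # eta = (n4, n5, -n1, n2); a bosonic tower survives iff |eta_k| = 1
    n1, n2, _, n4, n5 = n
    return sum(1 for e in (n4, n5, -n1, n2) if abs(e) == 1)
```

Looping over all nine sectors confirms that `surviving_bosonic_towers` agrees with $a+b$ in every case; for instance $SU(1,1|1)$ gives $\eta=(\tfrac12,\tfrac12,-1,0)$ and a single surviving tower.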
It is given by $$\frac{H_{lc}}{\mu}=\frac{1}{2 \mu p^+ f} \sum_{n\in \mathbb{Z}} n^2 \left( \sum_{i=1}^a M_n^{(i)} +\sum_{j=1}^b N_n^{(j)} + \sum_{\alpha=1}^c F_n^{(\alpha)} \right)$$ which indeed coincides with Eq. once we use the dictionary between gauge theory and string theory. New Penrose limit of ${\mbox{AdS}}_4 \times {\mathbb{C}}P^3$ {#sec:ads4} ============================================================ In the above we have found a new Penrose limit of ${\mbox{AdS}}_5 \times S^5$ with two explicit space-like isometries in addition to the existing Penrose limits with zero and one space-like isometries [@Blau:2002dy; @Berenstein:2002jq; @Bertolini:2002nr]. A natural question is whether one can similarly find new Penrose limits of the ${\mbox{AdS}}_4\times {\mathbb{C}}P^3$ background of type IIA supergravity. The known Penrose limits for this background are with either zero explicit space-like isometries [@Nishioka:2008gz; @Gaiotto:2008cg] or with two space-like isometries [@Grignani:2008is; @Astolfi:2009qh]. In particular the one with two space-like isometries of [@Grignani:2008is; @Astolfi:2009qh] is connected to studying the $SU(2) \times SU(2)$ sector of string theory on ${\mbox{AdS}}_4\times {\mathbb{C}}P^3$. We find in this section a new Penrose limit of the ${\mbox{AdS}}_4\times {\mathbb{C}}P^3$ background of type IIA supergravity with one explicit space-like isometry, $i.e.$ with one flat direction. We find furthermore the spectrum of type IIA string theory on this background by finding the spectrum for a general rotated pp-wave background that for certain choices of parameters corresponds to both the new pp-wave background with one explicit space-like isometry, as well as the two known backgrounds with zero and two explicit space-like isometries. 
The “one flat direction” Penrose limit -------------------------------------- In this section we present a new Penrose limit of ${\mbox{AdS}}_4 \times {\mathbb{C}}P^3$, here called the “one flat direction” Penrose limit. The ${\mbox{AdS}}_4 \times {\mathbb{C}}P^3$ metric is given by $$ds^2=R^2\left(\frac{1}{4}ds^2_{AdS_4}+ ds^2_{\CP^3}\right) \, ,$$ where $$\label{metricAdS4} ds^2_{AdS_4} = -\cosh^2\rho \, dt^2 +d\rho^2 +\sinh^2 \rho \, d\Omega_2^2 \, ,$$ and $$\label{metricCP3} \begin{split} ds^2_{\CP^3} & = d\theta^2+4\cos^2 \theta \sin^2 \theta \left(d\delta+\frac{\cos\theta_1}{4}d\vp_1- \frac{\cos\theta_2}{4}d\vp_2\right)^2 \\ &+\frac{1}{4}\cos^2 \theta\left(d\theta_1^2+\sin^2\theta_1 d\vp_1^2\right)+\frac{1}{4}\sin^2 \theta (d\theta_2^2+\sin^2\theta_2 d\vp_2^2)\, . \end{split}$$ We introduce the new variables $\chi$, $\xi$ and $\psi$ by $$2\delta = \chi + \frac{\vp_2}{2}\, ,\qquad \vp_2=\xi+b\chi\,, \qquad 2\theta = \psi+ \frac{\pi}{2}\, ,$$ where $b$ is a parameter. The coordinate transformation that defines the Penrose limit is $$\begin{split} &x^+ = \frac{t+\chi}{2} \,, \qquad x^- = R^2\frac{t-\chi}{8} \,, \qquad \rho= \frac{2r}{R} \, ,\qquad \psi=\frac{2 u_4}{R} \, ,\\ &\vp_1 = \frac{2\sqrt{2}\,x_1}{R} \, ,\qquad \theta_1=\frac{2\sqrt{2}\,y_1}{R}+\frac{\pi}{2} \, ,\qquad \theta_2=\frac{2\sqrt{2}\,z}{R}\, . \end{split}$$ Taking the limit $R \to \infty$ while keeping $x^\pm$, $r$, $u_4$, $x_1$, $y_1$, $z$ finite, the metric becomes $$\label{metric1fd} \begin{split} ds^2 = &-4dx^+ dx^- + {\sum_{i=1}^{4}}\left(du_i^2-u_i^2 {dx^+}^2\right) + \sum_{a=1}^{2}\left(dx_a^2+dy_a^2 \right) \\ &+b(1+b)\left(x_2^2+y_2^2\right){dx^+}^2- 2 y_1 dx_1 dx^+ +(1+2b)\left[x_2 dy_2 - y_2 dx_2\right]dx^+ \, , \end{split}$$ where $x_2+iy_2=z\,e^{i\xi}$. The metric describes exactly a pp-wave with a flat direction, namely $x_1$. 
Rotated pp-waves and the string spectrum ---------------------------------------- Let us start from the pp-wave metric found in [@Nishioka:2008gz] $$ds^2=-4d\tilde{x}^+d\tilde{x}^- -\left(\sum_{i=1}^4 \tilde{x}_i^2+\frac{1}{4}\sum_{i=5}^8\tilde{x}_i^2\right){d\tilde{x}^+}^2+\sum_{i=1}^8 d\tilde{x}_i^2\, .$$ We consider the following coordinate transformation $$\label{transfrot2} \begin{split} \tilde x^+ &=x^+ \, \\[2mm] \tilde x^- &=x^- + \sum_{a=1}^{2} C_a x_a y_a \, , \\[2mm] \tilde x_i &=u_i \, , \quad i=1,\dots,4\, ,\\[2mm] \left( \begin{array}{c} \tilde x_{3+2a} \\[2mm] \tilde x_{4+2a} \end{array} \right) &= \left( \begin{array}{cc} \cos(\eta_a x^+ ) & -\sin(\eta_a x^+ ) \\[2mm] \sin(\eta_a x^+ ) & \cos(\eta_a x^+ ) \end{array} \right) \left( \begin{array}{c} x_a \\[2mm] y_a \end{array} \right)\, , \quad a=1,2\, , \end{split}$$ where $C_{1}, C_{2}$ and $\eta_{1}, \eta_{2}$ are parameters. Under this transformation the metric becomes $$\begin{aligned} \label{rotmet} ds^2 =& -4dx^+ dx^- + {\sum_{i=1}^{4}}\left(du_i^2-u_i^2 {dx^+ }^2\right) + \sum_{a=1}^{2}\Bigg[dx_a^2+dy_a^2+\left(\eta_a^2-\frac{1}{4}\right)\left(x_a^2+y_a^2\right){dx^+ }^2 {\nonumber}\\ & +2\left(\eta_a-2C_a\right)x_a dy_a dx^+ - 2\left(\eta_a+2C_a\right)y_a dx_a dx^+ \Bigg]\, .\end{aligned}$$ It is easy to see that if one chooses the $C_a$ and $\eta_a$ parameters so that the terms $dx^+ {}^2$ and $dx_a dx^+ $ in the metric vanish, i.e. $$\eta_a=-\frac{1}{2}\, ,\qquad \quad C_a=\frac{1}{4} \, ,$$ then one gets the pp-wave with two flat directions found in [@Grignani:2008is] $$ ds^2 = -4dx^+ dx^- + {\sum_{i=1}^{4}}\left(du_i^2-u_i^2 {dx^+ }^2\right) + \sum_{a=1}^{2}\left[dx_a^2+dy_a^2 - 2 y_a dx_a dx^+ \right]$$ Eq.  
also contains the pp-wave with one flat direction that we just obtained through a Penrose limit of the ${\mbox{AdS}}_4 \times {\mathbb{C}}P^3$ geometry for the following choice of parameters $$\begin{array}{lcl} \eta_1=- \displaystyle \frac{1}{2} \, , & \phantom{aaa}& \eta_2=b+ \displaystyle \frac{1}{2} \, , \\[2mm] C_1= \displaystyle \frac{1}{4} \, ,& & C_2=0 \, . \end{array}$$ ### Spectrum {#spectrum .unnumbered} Now we derive the string spectrum on the rotated pp-wave  . In the light-cone gauge $x^+ = c \tau$ the bosonic Lagrangian density is $$\label{penboslagr} \begin{split} &\mathscr{L}_{\rm lc}^{B}= - \frac{1}{4\pi c}\bigg\{\sum_{i=1}^4\left[\dot{u}_i^2 -u_i'^2-c^2u_i^2\right]+\sum_{a=1}^2\Big[\dot{x}_a^2+\dot{y}_a^2 -x_a'^2-y_a'^2 \\ &+c^2\left(\eta_a^2-\frac{1}{4}\right)\left(x_a^2+y_a^2\right)+2c\left(\eta_a-2 C_a\right)x_a \dot{y}_a -2c\left(\eta_a+2C_a\right)y_a \dot{x}_a\Big]\bigg\}\, , \end{split}$$ where $c$ is fixed by requiring that the conjugate momentum to $x^-$ is constant. The bosonic light-cone Hamiltonian is then given by $$\label{penbosham} \begin{split} c H^B_{\rm lc}=& \frac{1}{4\pi } \int_0^{2\pi} d\sigma \bigg\{\sum_{i=1}^4\left[\dot{u}_i^2 +u_i'^2+c^2u_i^2\right] \\ &+\sum_{a=1}^2\left[\dot{x}_a^2+\dot{y}_a^2 +x_a'^2+y_a'^2+c^2\left(\frac{1}{4}-\eta_a^2\right)\left(x_a^2+y_a^2\right)\right] \bigg\}\, . \end{split}$$ The mode expansion for the bosonic fields can be written as $$u_i (\tau,\sigma ) = \frac{i}{\sqrt{2}} \sum_{n\in {\mathbb{Z}}} \frac{1}{\sqrt{\Omega_n}} \Big[ \hat{a}^i_n e^{-i ( \Omega_n \tau - n \sigma ) } - (\hat{a}^i_n)^\dagger e^{i ( \Omega_n \tau - n \sigma ) } \Big] \, ,$$ $$\label{zmode} z_a(\tau,\sigma) = \, e^{-i c \eta_a \tau} \sum_{n \in {\mathbb{Z}}} \frac{1}{\sqrt{\omega_n}} \Big[ a_n^a e^{-i ( \omega_n \tau - n \sigma ) } - (\tilde{a}^a)^\dagger_n e^{i ( \omega_n \tau - n \sigma ) } \Big]\, ,$$ where $\Omega_n=\sqrt{c^2+n^2}$, $\omega_n=\sqrt{\frac{c^2}{4}+n^2}$ and we defined $z_a(\tau,\sigma)=x_a(\tau,\sigma)+iy_a(\tau,\sigma)$. 
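As a sanity check on the mode frequencies, one can solve the classical equations of motion following from the light-cone Lagrangian for a single mode $n$ of $z_a = x_a + i y_a$. The velocity couplings combine so that the $C_a$ parameters drop out, and the ansatz $z \propto e^{-i\Omega\tau}$ gives the characteristic equation $(\Omega - c\,\eta_a)^2 = c^2/4 + n^2$, whose two branches are quanta of energy $\omega_n \pm \eta_a c$. A small Python sketch of this check (function name ours):

```python
import math

def mode_frequencies(n, c, eta):
    # Characteristic equation for z = exp(-i Omega tau) from the equation
    # of motion of the light-cone Lagrangian (the C_a terms cancel between
    # the two velocity couplings):
    #   Omega^2 - 2*c*eta*Omega + (c^2*eta^2 - c^2/4 - n^2) = 0
    b = -2.0 * c * eta
    const = c * c * eta * eta - c * c / 4.0 - n * n
    disc = math.sqrt(b * b - 4.0 * const)  # = 2*sqrt(c^2/4 + n^2), always real
    return (-b + disc) / 2.0, (-b - disc) / 2.0

# The two roots are c*eta + omega_n and c*eta - omega_n, i.e. quanta of
# energies omega_n + eta*c and omega_n - eta*c, matching the two bosonic
# towers M_n^a and N_n^a.
```

The cancellation of $C_a$ in the equations of motion is the classical counterpart of the statement that the $C_a$ only implement a shift of $\tilde x^-$, i.e. a gauge choice.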
The canonical commutation relations $[x_a(\tau,\sigma),p_{x_b}(\tau,\sigma')] = i\delta_{ab} \delta (\sigma-\sigma')$, $[y_a(\tau,\sigma),p_{y_b}(\tau,\sigma')] = i\delta_{ab}\delta (\sigma-\sigma')$ and $[u_i(\tau,\sigma),p_j(\tau,\sigma')] = i\delta_{ij} \delta (\sigma-\sigma')$ follow from $$\label{comrel} [a_m^a,(a_n^b)^\dagger] = \delta_{mn} \delta_{ab}{\,, \ \ }[\tilde{a}_m^a,(\tilde{a}_n^b)^\dagger] = \delta_{mn} \delta_{ab}{\,, \ \ }[\hat{a}^i_m,(\hat{a}^j_n)^\dagger] = \delta_{mn} \delta_{ij} \, .$$ Employing we obtain the bosonic spectrum $$\label{penspectrum} c H^B_{\rm lc} = \sum_{i=1}^4 \sum_{n\in {\mathbb{Z}}} \sqrt{n^2+c^2}\, \hat{N}^i_n+\sum_{a=1}^2\sum_{n\in {\mathbb{Z}}} \left\{ \left(\sqrt{\frac{c^2}{4}+n^2} + \eta_a c \right) M_n^a + \left(\sqrt{\frac{c^2}{4}+n^2}- \eta_a c\right) N_n^a \right\} \, ,$$ with the number operators $\hat{N}^i_n = (\hat{a}^i_n)^\dagger \hat{a}^i_n$, $M_n^a = (a^a)^\dagger_n a^a_n$ and $N_n^a = (\tilde{a}^a)^\dagger_n \tilde{a}_n^a$. Now we compute the fermionic part of the spectrum. 
We start from the type IIA superstring Lagrangian density on the   $$\label{penferlagr} \mathscr{L}^{F}= \frac{i \,c }{2} \, \bar{\theta} \Gamma_+ \left[\partial_\tau -\Gamma_{11} \partial_\sigma +\frac{c}{4}\left(-2\eta_1\Gamma_{56}-2\eta_2\Gamma_{78}+\Gamma_{11}\Gamma_4-3\Gamma_{123}\right)\right]\theta \, ,$$ where $\theta$ is a 32 component real spinor and we used the zehnbeins $$\begin{split} &e^+_{\phantom{+}+}=\frac{1}{2} \, , \qquad e^-_{\phantom{+}+}=\frac{1}{2}\left[\left({\sum_{i=1}^{4}}u_i^2\right) - \sum_{a=1}^{2} \left(\eta_a^2-\frac{1}{4}\right) \left(x_a^2+y_a^2\right)\right] \, ,\\ &e^-_{\phantom{+}-}=2\, , \qquad e^-_{\phantom{+}x_a}=\left(\eta_a+2C_a\right)y_a \, , \qquad e^-_{\phantom{+}y_a}=-\left(\eta_a-2C_a\right)x_a \, , \\ &e^i_{\phantom{+}u_i}=1 \, , \qquad e^5_{\phantom{+}x_1}=1\, , \qquad e^6_{\phantom{+}y_1}=1\, , \qquad e^7_{\phantom{+}x_2}=1\, , \qquad e^8_{\phantom{+}y_2}=1\, , \end{split}$$ where $i=1,2,3,4$, and the relevant components of the spin connection $$\omega_+^{\phantom{+}56}=-\eta_1\, , \qquad \omega_+^{\phantom{+}78}=-\eta_2\, .$$ Let us decompose $\theta=\theta_+ +\theta_-$ by writing $$\Gamma_{5678}\theta_\pm=\pm \theta_\pm \, ,$$ In terms of $\theta_\pm$ the light-cone gauge conditions are [@Astolfi:2009qh] $$\Gamma_- \theta_- =0\, , \qquad \Gamma_{4956}\theta_+=\theta_+\, .$$ Using the spinor conventions of Appendix \[AppendixA\] we can write the Lagrangian as $$\mathscr{L}^{F} = \mathscr{L}_+ +\mathscr{L}_- \, ,$$ with $\mathscr{L}_+$ and $\mathscr{L}_-$ given by $$\mathscr{L}_+=i \psi^* \dot{\psi} - \frac{i}{2}\left(\psi \psi' + \psi^* {\psi^*}'\right) +\frac{i\, c}{2}\Delta_1 \psi \gamma_{56} \psi^* + \frac{c}{2} \psi \psi^* \, ,$$ $$\mathscr{L}_-=i \chi^* \dot{\chi} - \frac{i}{2}\left(\chi \chi' + \chi^* {\chi^*}'\right) -\frac{i\, c}{2}\Delta_2 \chi \gamma_{56} \chi^* - c \chi \chi^* \, ,$$ where $\Delta_1=\eta_2-\eta_1$ and $\Delta_2=\eta_1+\eta_2$. 
The mode expansions for the 8 component spinors $\psi$ and $\chi$ are $$\psi_{\alpha} = \left( e^{- \frac{c}{2} \Delta_1 \gamma_{56}\tau} \right)_{\alpha \beta} \sum_{n\in Z} \left[ f^+_n d_{n,\beta}e^{-i ( \omega_n \tau - n \sigma ) } - f^-_n d^\dagger_{n,\beta} e^{i ( \omega_n \tau - n \sigma ) } \right]\, ,$$ $$\chi_{\alpha} = \left( e^{ \frac{c}{2} \Delta_2 \gamma_{56}\tau} \right)_{\alpha \beta} \sum_{n\in Z} \left[ - g^-_n b_{n,\beta} e^{-i ( \Omega_n \tau - n \sigma ) } + g^+_n b^\dagger_{n,\beta} e^{i ( \Omega_n \tau - n \sigma ) } \right] \, ,$$ with the constants $f^\pm_n$ and $g^\pm_n$ defined by $$f^\pm_n = \frac{\sqrt{\omega_n+n} \pm \sqrt{\omega_n-n}}{2\sqrt{\omega_n}} {\,, \ \ }g^\pm_n = \frac{\sqrt{\Omega_n+n} \pm \sqrt{\Omega_n-n}}{2\sqrt{\Omega_n}}$$ The fermionic Hamiltonian density is therefore $$\label{CH2F} c \mathcal{H}^{F}_{\rm lc} = \frac{i}{2} \left( \psi \psi' -\rho \rho' \right) + \frac{c}{2} \Delta_1 \psi \gamma_{56}\rho - \frac{i\, c}{2} \psi \rho + \frac{i}{2}\left(\chi \chi' - \lambda \lambda'\right) - \frac{i\,c}{2} \Delta_2 \chi \gamma_{56}\lambda + i c \chi \lambda \, ,$$ where the fermionic momenta are $$\rho = - i \psi^* {\,, \ \ }\lambda = - i \chi^* \, .$$ The fermionic spectrum can then be computed and reads $$\label{fermppwave} \begin{split} c H^{F}_{\rm lc} &= \sum_{n\in {\mathbb{Z}}} \Bigg[ \sum_{b=1,2}\left(\omega_n +\frac{c}{2} \Delta_1 \right)F_n^{(b)} + \sum_{b=3,4}\left(\omega_n -\frac{c}{2} \Delta_1 \right)F_n^{(b)} \\ &+ \sum_{b=5,6} \left( \Omega_n - \frac{c}{2}\Delta_2 \right) F_n^{(b)} + \sum_{b=7,8} \left( \Omega_n + \frac{c}{2}\Delta_2 \right) F_n^{(b)} \Bigg] \end{split}$$ with the number operators $F^{(b)}_n= d_{n,\alpha}^\dagger d_{n,\alpha}$ for $b=1,\ldots ,4$, and $F_n^{(b)} = b^\dagger_{n,\alpha} b_{n,\alpha}$ for $b=5,\ldots,8$. 
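The constants $f^\pm_n$ satisfy the identities $(f^+_n)^2+(f^-_n)^2 = 1$ and $2 f^+_n f^-_n = n/\omega_n$ (and the $g^\pm_n$ satisfy the same relations with $\Omega_n$); identities of this type are what allow the anticommutators of the modes to come out canonical and the Hamiltonian diagonal. A quick numerical check (helper name ours):

```python
import math

def f_pm(n, c):
    # f^+_n and f^-_n built from omega_n = sqrt(c^2/4 + n^2);
    # omega_n >= |n|, so both square roots are real
    w = math.sqrt(c * c / 4.0 + n * n)
    fp = (math.sqrt(w + n) + math.sqrt(w - n)) / (2.0 * math.sqrt(w))
    fm = (math.sqrt(w + n) - math.sqrt(w - n)) / (2.0 * math.sqrt(w))
    return fp, fm, w

# Expanding the squares gives fp^2 + fm^2 = 1 and 2*fp*fm = n/omega_n
# identically in n and c.
```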
The level-matching condition, including also the bosonic part, is $$\label{levelmbf} \sum_{n\in {\mathbb{Z}}}n \left[\sum_{i=1}^4 \hat{N}^i_n+\sum_{a=1}^2 \left(M_n^a + N_n^a\right) +\sum_{b=1}^8 F^{(b)}_n\right] = 0$$ Acknowledgments {#acknowledgments .unnumbered} =============== GG and AM thank the Galileo Galilei Institute for Theoretical Physics for hospitality and the INFN for partial support during the completion of this work. The work of GG is supported in part by the MIUR-PRIN contract 2007-5ATT78. Gamma matrices and spinors {#AppendixA} ========================== We briefly review our conventions for the representations of Dirac matrices in ten dimensions and for Majorana-Weyl spinors. As usual, we shall use the mostly plus metric. Gamma matrices {#gamma-matrices .unnumbered} -------------- Let $I_n$ denote the $n \times n$ unit matrix, $\sigma_1,\, \sigma_2,\, \sigma_3$ the $2\times 2$ Pauli matrices $$\label{Paulimatr} \sigma_1 = {\left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right) }{\ \ ,\ \ \ \ }\sigma_2 = {\left( \begin{array}{cc} 0 & -i \\ i & 0 \end{array} \right) } {\ \ ,\ \ \ \ }\sigma_3 = {\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right) }\, ,$$ and $\epsilon$ the antisymmetric tensor of rank two $$\epsilon = i \sigma_2 ={\left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right) } \, .$$ We can define the real $8 \times 8$ matrices $\gamma_1,...,\gamma_8$ as $$\label{gammadirac} \begin{array}{ll} \gamma_1 = \epsilon \times \epsilon \times \epsilon\, , \qquad & \gamma_5 = \sigma_3 \times \epsilon \times I_2\, , \\[1mm] \gamma_2 = I_2 \times \sigma_1 \times \epsilon \, , & \gamma_6 =\epsilon \times I_2 \times \sigma_1 \, , \\[1mm] \gamma_3 = I_2 \times \sigma_3 \times \epsilon \, ,& \gamma_7 = \epsilon \times I_2 \times \sigma_3 \, , \\[1mm] \gamma_4 = \sigma_1 \times \epsilon \times I_2 \, , & \gamma_8 = I_2\times I_2 \times I_2\, . 
\end{array}$$ This should be read as $$\label{notgammamatr} \gamma_7 = \epsilon \times I_2 \times \sigma_3 = {\left( \begin{array}{cc} 0 & I_2 \times \sigma_3 \\ -I_2 \times \sigma_3 & 0 \end{array} \right) } {\,, \ \ }I_2 \times \sigma_3 = {\left( \begin{array}{cc} \sigma_3 & 0 \\ 0 & \sigma_3 \end{array} \right) }\, ,$$ and so on. It is easy to verify that the matrices $\gamma_1,...,\gamma_8$ obey the following relations $$\label{smallgamma9} \begin{split} &\gamma_i \gamma_j^T + \gamma_j \gamma_i^T = \gamma_i^T \gamma_j + \gamma_j^T \gamma_i = 2 \delta_{ij} I_8 {\ \ ,\ \ \ \ }i,j=1,...,8 \\[1mm] & \gamma_1 \gamma_2^T \gamma_3 \gamma_4^T \gamma_5 \gamma_6^T \gamma_7 \gamma_8^T = I_8 {\ \ ,\ \ \ \ }\gamma_1^T \gamma_2 \gamma_3^T \gamma_4 \gamma_5^T \gamma_6 \gamma_7^T \gamma_8 = - I_8\, . \end{split}$$ Now we introduce the $16 \times 16$ matrices $\hat{\gamma}_1,...,\hat{\gamma}_9$ defined as $$\label{gam9} \begin{split} &\hat{\gamma}_i = {\left( \begin{array}{cc} 0 & \gamma_i \\ \gamma_i^T & 0 \end{array} \right) }\, , \qquad i,j=1,...,8 \\[1mm] &\hat{\gamma}_{9} = \sigma_3 \times I_8 = {\left( \begin{array}{cc} I_8 & 0 \\ 0 & -I_8 \end{array} \right) }\, . \end{split}$$ The matrices $\hat{\gamma}_1,...,\hat{\gamma}_9$ are symmetric and real, and they obey $$\begin{split} \{ \hat{\gamma}_i, \hat{\gamma}_j \} &= 2 \delta_{ij} I_{16} \, , \qquad i,j=1,...,9 \\[1mm] &\hat{\gamma}_9 = \hat{\gamma}_1 \hat{\gamma}_2 \cdots \hat{\gamma}_8 \, . 
\end{split}$$ At this point we are ready to define the Dirac matrices in ten dimensions, which are the following $32 \times 32$ matrices: $$\begin{split} \Gamma_0 &= - \epsilon \times I_{16} = {\left( \begin{array}{cc} 0 & -I_{16} \\ I_{16} & 0 \end{array} \right) } \, , \\ \Gamma_i &= \sigma_1 \times \hat{\gamma}_i = {\left( \begin{array}{cc} 0 & \hat{\gamma}_i \\ \hat{\gamma}_i & 0 \end{array} \right) } \, , \quad i=1,...,9 \\ \Gamma_{11} &= \sigma_3 \times I_{16} = {\left( \begin{array}{cc} I_{16} & 0 \\ 0 & -I_{16} \end{array} \right) }\, . \end{split}$$ We see that these matrices are real and satisfy the relations $$\label{gam11} \begin{split} \{ \Gamma_a,\Gamma_b \} = 2 \eta_{ab} I_{32} \, ,& \quad a,b=0,1,...,9,11 \\[1mm] \Gamma_{11} = \Gamma^0 & \Gamma^1 \cdots \Gamma^9 \, . \end{split}$$ It is convenient to introduce the light-cone Dirac matrices $\Gamma_\pm$, given by $$\begin{split} \Gamma_\pm = &\Gamma_0 \pm \Gamma_9 \, , \\ \Gamma^\pm = - \frac{1}{2} \Gamma_\mp &= \frac{1}{2} ( \Gamma^0 \pm \Gamma^9 )\, . \end{split}$$ The raising and lowering of these indices are done according to a flat space metric with $\eta_{+-} = -2$.\ We then define $$\Gamma_{a_1 a_2 \cdots a_n} = \Gamma_{[a_1} \Gamma_{a_2} \cdots \Gamma_{a_n]} \, ,$$ and analogously the $16 \times 16$ matrices $$\hat{\gamma}_{i_1 \cdots i_n } = \hat{\gamma}_{[i_1} \hat{\gamma}_{i_2} \cdots \hat{\gamma}_{i_n]} \, ,$$ with $i_l = 1,...,8$. Since $\hat{\gamma}_i$ is symmetric we have that $$\hat{\gamma}_{ijkl}^T = \hat{\gamma}_{ijkl} \, ,$$ i.e. that $\hat{\gamma}_{ijkl}$ is also symmetric. Furthermore we define the $8\times 8$ matrices $$\label{gammai1ik} \gamma_{i_1 \cdots i_{2k} } = \gamma_{[i_1} \gamma^T_{i_2}\cdots \gamma^T_{i_{2k}]} {\,, \ \ }\gamma_{i_1 i_2 \cdots i_{2k+1} } = \gamma^T_{[i_1} \gamma_{i_2} \cdots \gamma^T_{i_{2k+1}]} \, .$$ with $i_l = 1,...,8$. 
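The representation above is concrete enough to verify by machine. The NumPy sketch below (our transcription of the Kronecker-product definitions; names are ours) builds the $8\times 8$ matrices $\gamma_i$ and the $16\times 16$ matrices $\hat\gamma_i$, and checks the algebra relations quoted above, including $\hat\gamma_9 = \hat\gamma_1\cdots\hat\gamma_8$.

```python
import numpy as np

I2, I8 = np.eye(2), np.eye(8)
s1 = np.array([[0., 1.], [1., 0.]])    # sigma_1
s3 = np.array([[1., 0.], [0., -1.]])   # sigma_3
eps = np.array([[0., 1.], [-1., 0.]])  # epsilon = i sigma_2

def kron3(a, b, c):
    # Triple Kronecker product, matching the "x" notation of the text
    return np.kron(np.kron(a, b), c)

# gamma_1 ... gamma_8 as defined in the appendix
gamma = [kron3(eps, eps, eps), kron3(I2, s1, eps), kron3(I2, s3, eps),
         kron3(s1, eps, I2), kron3(s3, eps, I2), kron3(eps, I2, s1),
         kron3(eps, I2, s3), kron3(I2, I2, I2)]

# 16x16 matrices gamma-hat_1 ... gamma-hat_9
Z8 = np.zeros((8, 8))
ghat = [np.block([[Z8, g], [g.T, Z8]]) for g in gamma]
ghat.append(np.block([[I8, Z8], [Z8, -I8]]))  # gamma-hat_9
```

Running the checks confirms $\gamma_i\gamma_j^T+\gamma_j\gamma_i^T = 2\delta_{ij}I_8$, the product identity $\gamma_1\gamma_2^T\cdots\gamma_7\gamma_8^T = I_8$, and the Clifford algebra $\{\hat\gamma_i,\hat\gamma_j\}=2\delta_{ij}I_{16}$.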
In particular we call $\Pi$ the matrix $$\Pi \equiv \gamma_{1234} = \gamma_1 \gamma_2^T \gamma_3 \gamma_4^T \, ,$$ which has the following properties $$\label{piids1} \Pi^2 = I_8 {\,, \ \ }\Pi^T = \Pi {\,, \ \ }\Pi = \gamma_{5678} \, .$$ The last equation follows from . Finally it is possible to show that $\Pi$ satisfies the relations $$\label{piids2} \Pi \gamma_{ij} = \gamma_{ij} \Pi = - \epsilon_{ijkl} \gamma^{kl} {\,, \ \ }\Pi \gamma_{i'j'} = \gamma_{i'j'} \Pi = - \epsilon_{i'j'k'l'} \gamma^{k'l'} \, ,$$ with $i,j=1,2,3,4$ and $i',j'=5,6,7,8$. Spinors for type IIB {#spinors-for-type-iib .unnumbered} -------------------- The spinors $\theta^A$ are 32-component Majorana-Weyl spinors. The Majorana condition imposes that the 32 components of $\theta^A$ are real. The Weyl condition is $$\label{weylcond} \Gamma_{11} \theta^A = \theta^A \, ,$$ for both $A=1,2$. Note here that we choose the two spinors to have the same chirality since we are considering type IIB string theory. Using we see that the Weyl condition means that only the first 16 components of $\theta^A$ are non-zero, whereas the last 16 components are zero. We write therefore $$\label{defpsi} \theta^A = {\left( \begin{array}{c} \psi^A \\ 0 \end{array} \right) } \, ,$$ where $\psi^A$, $A=1,2$, are two real 16 component spinors. The light-cone gauge $\Gamma_- \theta^A = 0$ turns out to be equivalent to $$\hat{\gamma}_9\psi^A = \psi^A \, ,$$ which resembles a Weyl condition for the transverse directions. Indeed, using , we see that the last 8 components of $\psi^A$ are zero. Thus, we write $$\label{defS} \psi^A = {\left( \begin{array}{c} S^A \\ 0 \end{array} \right) } \, ,$$ where $S^A$, $A=1,2$, are two real 8 component spinors. Spinors for type IIA {#spinors-for-type-iia .unnumbered} -------------------- For the type IIA GS string we have two Majorana-Weyl spinors $\theta^{1,2}$ with opposite chirality, $i.e.$ $\Gamma_{11} \theta^1 = \theta^1$ and $\Gamma_{11} \theta^2 = - \theta^2$. 
We collect these into a 32 component real spinor $\theta = \theta^1 + \theta^2$. We can then decompose $\theta$ in terms of eigenstates of $\Gamma_{5678}$, namely $\theta=\theta_+ +\theta_-$ with $\Gamma_{5678}\theta_{\pm}=\pm\theta_{\pm}$, so that, taking into account the representation we chose for $\Gamma_{11}$, (\[gam11\]), $\theta_{\pm}$ has the following decomposition in terms of 16-component spinors $$\theta_\pm={\left( \begin{array}{c} \vartheta^1_{\pm} \\ \vartheta^2_\pm \end{array} \right) } \, .$$ The gauge conditions that should be imposed to fix $\kappa$-symmetry are different on $\theta_+$ and on $\theta_-$ [@Astolfi:2009qh] and read $$\label{kappacond} \Gamma_{-} \theta_- =0~~~{\,, \ \ }~~~\Gamma_{4956}\theta_+=\theta_+$$ It is thus useful to rotate the $\theta_+$ spinor so as to impose also on the rotated spinor the same gauge condition we have on $\theta_-$. This is done by defining $\widetilde\theta_+$ according to $$\label{tildetheta} \theta_+=(I-\Gamma_{0456})\widetilde\theta_+$$ Again we have the decomposition in terms of spinors of opposite chirality $$\widetilde\theta_+={\left( \begin{array}{c} \widetilde\vartheta^1_{+} \\ \widetilde\vartheta^2_+ \end{array} \right) } \, .$$ The gauge choice on $\widetilde\theta_+$ is thus $\Gamma_{-} \widetilde\theta_+ =0$. It is then useful to define also a rotated 16-component spinor $\hat\vartheta^2_+=\hat\gamma_4\widetilde\vartheta^2_+$ so that both $\widetilde\vartheta^{1}_+$ and $\hat\vartheta^2_+$ have the same eigenvalue +1 of $\hat\gamma_9$. These rotations make the quantization on this type IIA background very similar to that of the type IIB. 
We can now define the rescaled 8-component spinors $$\label{defS2}\widetilde\vartheta^{1}_+ = \frac{1}{\sqrt{c}}{\left( \begin{array}{c} S^1_+ \\ 0 \end{array} \right) }~~{\,, \ \ }~~~\hat\vartheta^2_+ = \frac{1}{\sqrt{c}}{\left( \begin{array}{c} S^2_+ \\ 0 \end{array} \right) } \, ,$$ In the main text we then used the 8-component complex spinors $$\psi=S^1_++i S^2_+~~~{\,, \ \ }~~~\psi^*=S^1_+-i S^2_+$$ Let us now turn to $\theta_-$. Again, to have the same eigenvalue +1 of $\hat\gamma_9$ for the upper and the lower 16-component spinors, we perform a rotation of $\vartheta_-^2$ with $\hat\gamma_4$ according to $\hat\vartheta_-^2=\hat\gamma_4\vartheta_-^2$. We can now define as before the rescaled 8-component spinors $$\label{defS3}\widetilde\vartheta^{1}_- = \frac{1}{\sqrt{c}}{\left( \begin{array}{c} S^1_- \\ 0 \end{array} \right) }~~{\,, \ \ }~~~\hat\vartheta^2_- = \frac{1}{\sqrt{c}}{\left( \begin{array}{c} S^2_- \\ 0 \end{array} \right) } \, ,$$ In the main text we then used the 8-component complex spinors $$\chi=S^1_-+i S^2_-~~~{\,, \ \ }~~~\chi^*=S^1_--i S^2_-$$ [10]{} J. M. Maldacena, “[The large N limit of superconformal field theories and supergravity]{},” [*Adv. Theor. Math. Phys.*]{} [**2**]{} (1998) 231–252, [[arXiv:hep-th/9711200]{}](http://arxiv.org/abs/hep-th/9711200). S. S. Gubser, I. R. Klebanov, and A. M. Polyakov, “[Gauge theory correlators from non-critical string theory]{},” [[*Phys. Lett.*]{} [ **B428**]{} (1998) 105–114](http://dx.doi.org/10.1016/S0370-2693(98)00377-3), [[arXiv:hep-th/9802109]{}](http://arxiv.org/abs/hep-th/9802109). E. Witten, “[Anti-de Sitter space and holography]{},” [*Adv. Theor. Math. Phys.*]{} [**2**]{} (1998) 253–291, [[arXiv:hep-th/9802150]{}](http://arxiv.org/abs/hep-th/9802150). D. Berenstein, J. M. Maldacena, and H. Nastase, “Strings in flat space and pp waves from [${\mathcal{N}}= 4$]{} super [Yang Mills]{},” [*JHEP*]{} [**04**]{} (2002) 013, [[hep-th/0202021]{}](http://arxiv.org/abs/hep-th/0202021). M. Blau, J. 
Figueroa-O’Farrill, C. Hull, and G. Papadopoulos, “A new maximally supersymmetric background of [IIB]{} superstring theory,” [*JHEP*]{} [**01**]{} (2002) 047, [[hep-th/0110242]{}](http://arxiv.org/abs/hep-th/0110242). M. Blau, J. M. Figueroa-O’Farrill, C. Hull, and G. Papadopoulos, “[Penrose limits and maximal supersymmetry]{},” [*Class. Quant. Grav.*]{} [**19**]{} (2002) L87–L95, [[arXiv:hep-th/0201081]{}](http://arxiv.org/abs/hep-th/0201081). R. R. Metsaev, “[Type IIB Green-Schwarz superstring in plane wave Ramond- Ramond background]{},” [[*Nucl. Phys.*]{} [ **B625**]{} (2002) 70–96](http://dx.doi.org/10.1016/S0550-3213(02)00003-2), [[arXiv:hep-th/0112044]{}](http://arxiv.org/abs/hep-th/0112044). R. R. Metsaev and A. A. Tseytlin, “[Exactly solvable model of superstring in plane wave Ramond-Ramond background]{},” [[*Phys. Rev.*]{} [ **D65**]{} (2002) 126004](http://dx.doi.org/10.1103/PhysRevD.65.126004), [[arXiv:hep-th/0202109]{}](http://arxiv.org/abs/hep-th/0202109). M. Bertolini, J. de Boer, T. Harmark, E. Imeroni, and N. A. Obers, “Gauge theory description of compactified pp-waves,” [*JHEP*]{} [**01**]{} (2003) 016, [[hep-th/0209201]{}](http://arxiv.org/abs/hep-th/0209201). J. Michelson, “(twisted) toroidal compactification of pp-waves,” [*Phys. Rev.*]{} [**D66**]{} (2002) 066002, [[hep-th/0203140]{}](http://arxiv.org/abs/hep-th/0203140). T. Harmark and M. Orselli, “Matching the [Hagedorn]{} temperature in [AdS/CFT]{},” [*Phys. Rev.*]{} [**D74**]{} (2006) 126009, [[hep-th/0608115]{}](http://arxiv.org/abs/hep-th/0608115). J. A. Minahan and K. Zarembo, “The [Bethe-ansatz]{} for [${\mathcal{N}}= 4$]{} super [Yang-Mills]{},” [*JHEP*]{} [**03**]{} (2003) 013, [[hep-th/0212208]{}](http://arxiv.org/abs/hep-th/0212208). N. Beisert, C. Kristjansen, and M. Staudacher, “The dilatation operator of [${\mathcal{N}}= 4$]{} super [Yang-Mills]{} theory,” [*Nucl. Phys.*]{} [**B664**]{} (2003) 131–184, [[hep-th/0303060]{}](http://arxiv.org/abs/hep-th/0303060). N. Beisert and M. 
Staudacher, “[The N=4 SYM Integrable Super Spin Chain]{},” [[*Nucl. Phys.*]{} [**B670**]{} (2003) 439–463](http://dx.doi.org/10.1016/j.nuclphysb.2003.08.015), [[arXiv:hep-th/0307042]{}](http://arxiv.org/abs/hep-th/0307042). F. Berruto, G. Grignani, G. W. Semenoff, and P. Sodano, “[Chiral symmetry breaking on the lattice: A study of the strongly coupled lattice Schwinger model]{},” [[*Phys. Rev.*]{} [**D57**]{} (1998) 5070–5083](http://dx.doi.org/10.1103/PhysRevD.57.5070), [[arXiv:hep-lat/9710066]{}](http://arxiv.org/abs/hep-lat/9710066). F. Berruto, G. Grignani, G. W. Semenoff, and P. Sodano, “On the correspondence between the strongly coupled 2-flavor lattice [Schwinger]{} model and the [Heisenberg]{} antiferromagnetic chain,” [*Annals Phys.*]{} [**275**]{} (1999) 254–296, [[hep-th/9901142]{}](http://arxiv.org/abs/hep-th/9901142). J. Callan, Curtis G. [*et al.*]{}, “Quantizing string theory in [${\mbox{AdS}}_5 \times S^5$]{}: Beyond the pp- wave,” [[*Nucl. Phys.*]{} [**B673**]{} (2003) 3–40](http://dx.doi.org/10.1016/j.nuclphysb.2003.09.008), [[arXiv:hep-th/0307032]{}](http://arxiv.org/abs/hep-th/0307032). J. Callan, Curtis G., T. McLoughlin, and I. Swanson, “[Holography beyond the Penrose limit]{},” [[*Nucl. Phys.*]{} [**B694**]{} (2004) 115–169](http://dx.doi.org/10.1016/j.nuclphysb.2004.06.033), [[arXiv:hep-th/0404007]{}](http://arxiv.org/abs/hep-th/0404007). M. Staudacher, “The factorized [S]{}-matrix of [CFT/AdS]{},” [*JHEP*]{} [**05**]{} (2005) 054, [[hep-th/0412188]{}](http://arxiv.org/abs/hep-th/0412188). N. Beisert, “The [$\mathfrak{su}(2|2)$]{} dynamic [S]{}-matrix,” [[hep-th/0511082]{}](http://arxiv.org/abs/hep-th/0511082). N. Beisert, B. Eden, and M. Staudacher, “[Transcendentality and crossing]{},” [*J. Stat. Mech.*]{} [**0701**]{} (2007) P021, [[hep-th/0610251]{}](http://arxiv.org/abs/hep-th/0610251). C. Kristjansen, J. Plefka, G. W. Semenoff, and M. 
Staudacher, “[A new double-scaling limit of N = 4 super Yang-Mills theory and PP-wave strings]{},” [[*Nucl. Phys.*]{} [ **B643**]{} (2002) 3–30](http://dx.doi.org/10.1016/S0550-3213(02)00749-6), [[arXiv:hep-th/0205033]{}](http://arxiv.org/abs/hep-th/0205033). N. R. Constable [*et al.*]{}, “[PP-wave string interactions from perturbative Yang-Mills theory]{},” [*JHEP*]{} [**07**]{} (2002) 017, [[arXiv:hep-th/0205089]{}](http://arxiv.org/abs/hep-th/0205089). M. Spradlin and A. Volovich, “[Superstring interactions in a pp-wave background]{},” [[*Phys. Rev.*]{} [**D66**]{} (2002) 086004](http://dx.doi.org/10.1103/PhysRevD.66.086004), [[arXiv:hep-th/0204146]{}](http://arxiv.org/abs/hep-th/0204146). G. De Risi, G. Grignani, M. Orselli, and G. W. Semenoff, “[DLCQ]{} string spectrum from [${\mathcal{N}}= 2$]{} [SYM]{} theory,” [[*JHEP*]{} [**11**]{} (2004) 053](http://dx.doi.org/10.1088/1126-6708/2004/11/053), [[arXiv:hep-th/0409315]{}](http://arxiv.org/abs/hep-th/0409315). G. Grignani, M. Orselli, B. Ramadanovic, G. W. Semenoff, and D. Young, “[Divergence cancellation and loop corrections in string field theory on a plane wave background]{},” [*JHEP*]{} [**12**]{} (2005) 017, [[arXiv:hep-th/0508126]{}](http://arxiv.org/abs/hep-th/0508126). G. Grignani, M. Orselli, B. Ramadanovic, G. W. Semenoff, and D. Young, “[AdS/CFT vs. string loops]{},” [[*JHEP*]{} [**06**]{} (2006) 040](http://dx.doi.org/10.1088/1126-6708/2006/06/040), [[arXiv:hep-th/0605080]{}](http://arxiv.org/abs/hep-th/0605080). P. Y. Casteill, R. A. Janik, A. Jarosz, and C. Kristjansen, “[Quasilocality of joining/splitting strings from coherent states]{},” [[*JHEP*]{} [**12**]{} (2007) 069](http://dx.doi.org/10.1088/1126-6708/2007/12/069), [[arXiv:0710.4166 \[hep-th\]]{}](http://arxiv.org/abs/0710.4166). C. Kristjansen, M. Orselli, and K. 
Zoubos, “[Non-planar ABJM Theory and Integrability]{},” [[ *JHEP*]{} [**03**]{} (2009) 037](http://dx.doi.org/10.1088/1126-6708/2009/03/037), [[arXiv:0811.2150 \[hep-th\]]{}](http://arxiv.org/abs/0811.2150). O. Aharony, O. Bergman, D. L. Jafferis, and J. Maldacena, “[${\mathcal{N}}=6$]{} superconformal [Chern-Simons-matter]{} theories, [M2-branes]{} and their gravity duals,” [[arXiv:0806.1218 \[hep-th\]]{}](http://arxiv.org/abs/0806.1218). T. Nishioka and T. Takayanagi, “[On Type IIA Penrose Limit and N=6 Chern-Simons Theories]{},” [[arXiv:0806.3391 \[hep-th\]]{}](http://arxiv.org/abs/0806.3391). D. Gaiotto, S. Giombi, and X. Yin, “[Spin Chains in N=6 Superconformal Chern-Simons-Matter Theory]{},” [[*JHEP*]{} [**04**]{} (2009) 066](http://dx.doi.org/10.1088/1126-6708/2009/04/066), [[arXiv:0806.4589 \[hep-th\]]{}](http://arxiv.org/abs/0806.4589). G. Grignani, T. Harmark, and M. Orselli, “[The SU(2) x SU(2) sector in the string dual of N=6 superconformal Chern-Simons theory]{},” [[*Nucl. Phys.*]{} [**B810**]{} (2009) 115–134](http://dx.doi.org/10.1016/j.nuclphysb.2008.10.019), [[arXiv:0806.4959 \[hep-th\]]{}](http://arxiv.org/abs/0806.4959). D. Astolfi, V. G. M. Puletti, G. Grignani, T. Harmark, and M. Orselli, “[Finite-size corrections in the SU(2) $\times$ SU(2) sector of type IIA string theory on [${\mbox{AdS}}_4 \times {\mathbb{C}}P^3$]{}]{},” [[*Nucl. Phys.*]{} [**B810**]{} (2009) 150–173](http://dx.doi.org/10.1016/j.nuclphysb.2008.10.020), [[arXiv:0807.1527 \[hep-th\]]{}](http://arxiv.org/abs/0807.1527). D. Astolfi, V. G. M. Puletti, G. Grignani, T. Harmark, and M. Orselli, “[Full Lagrangian and Hamiltonian for quantum strings on [${\mbox{AdS}}_4 \times {\mathbb{C}}P^3$]{} in a near plane wave limit]{},” [[arXiv:0912.2257 \[hep-th\]]{}](http://arxiv.org/abs/0912.2257). T. Harmark and M. Orselli, “Quantum mechanical sectors in thermal [${\mathcal{N}}= 4$]{} super [Yang-Mills]{} on [${\mathbb{R}}\times S^3$]{},” [*Nucl. 
Phys.*]{} [**B757**]{} (2006) 117–145, [[hep-th/0605234]{}](http://arxiv.org/abs/hep-th/0605234). T. Harmark, K. R. Kristjansson, and M. Orselli, “Magnetic [Heisenberg-chain]{} / pp-wave correspondence,” [*JHEP*]{} [**02**]{} (2007) 085, [[hep-th/0611242]{}](http://arxiv.org/abs/hep-th/0611242). T. Harmark, K. R. Kristjansson, and M. Orselli, “[The [Hagedorn]{} temperature in a decoupled sector of [AdS/CFT]{}]{},” [*Fortsch. Phys.*]{} [**55**]{} (2007) 754–759, [[hep-th/0701088]{}](http://arxiv.org/abs/hep-th/0701088). T. Harmark, K. R. Kristjansson, and M. Orselli, “Decoupling limits of [${\mathcal{N}}=4$]{} super [Yang-Mills]{} on [${\mathbb{R}}\times S^3$]{},” [[*JHEP*]{} [**09**]{} (2007) 115](http://dx.doi.org/10.1088/1126-6708/2007/09/115), [[arXiv:0707.1621 \[hep-th\]]{}](http://arxiv.org/abs/0707.1621). T. Harmark, K. R. Kristjansson, and M. Orselli, “[Matching gauge theory and string theory in a decoupling limit of AdS/CFT]{},” [[arXiv:0806.3370 \[hep-th\]]{}](http://arxiv.org/abs/0806.3370). D. Astolfi, G. Grignani, T. Harmark, and M. Orselli, “[Finite-size corrections to the rotating string and the winding state]{},” [[*JHEP*]{} [**08**]{} (2008) 099](http://dx.doi.org/10.1088/1126-6708/2008/08/099), [[arXiv:0804.3301 \[hep-th\]]{}](http://arxiv.org/abs/0804.3301). E. Witten, “[Anti-de Sitter space, thermal phase transition, and confinement in gauge theories]{},” [*Adv. Theor. Math. Phys.*]{} [**2**]{} (1998) 505–532, [[arXiv:hep-th/9803131]{}](http://arxiv.org/abs/hep-th/9803131). B. Sundborg, “The [Hagedorn]{} transition, deconfinement and [${\mathcal{N}}= 4$ SYM]{} theory,” [*Nucl. Phys.*]{} [**B573**]{} (2000) 349–363, [[hep-th/9908001]{}](http://arxiv.org/abs/hep-th/9908001). A. M. Polyakov, “Gauge fields and space-time,” [*Int. J. Mod. Phys.*]{} [ **A17S1**]{} (2002) 119–136, [[hep-th/0110196]{}](http://arxiv.org/abs/hep-th/0110196). O. Aharony, J. Marsano, S. Minwalla, K. Papadodimas, and M. 
Van Raamsdonk, “[The Hagedorn / deconfinement phase transition in weakly coupled large N gauge theories]{},” [*Adv. Theor. Math. Phys.*]{} [**8**]{} (2004) 603–696, [[arXiv:hep-th/0310285]{}](http://arxiv.org/abs/hep-th/0310285). N. Deo, S. Jain, and C.-I. Tan, “String statistical mechanics above [Hagedorn]{} energy density,” [*Phys. Rev.*]{} [**D40**]{} (1989) 2626. R. C. Brower, J. McGreevy, and C. I. Tan, “[Stringy model for QCD at finite density and generalized Hagedorn temperature]{},” [[arXiv:hep-ph/9907258]{}](http://arxiv.org/abs/hep-ph/9907258). G. Grignani, M. Orselli, and G. W. Semenoff, “[Matrix strings in a B-field]{},” [*JHEP*]{} [**07**]{} (2001) 004, [[arXiv:hep-th/0104112]{}](http://arxiv.org/abs/hep-th/0104112). G. Grignani, M. Orselli, and G. W. Semenoff, “[The target space dependence of the Hagedorn temperature]{},” [*JHEP*]{} [**11**]{} (2001) 058, [[arXiv:hep-th/0110152]{}](http://arxiv.org/abs/hep-th/0110152). G. De Risi, G. Grignani, and M. Orselli, “[Space / time noncommutativity in string theories without background electric field]{},” [*JHEP*]{} [**12**]{} (2002) 031, [[arXiv:hep-th/0211056]{}](http://arxiv.org/abs/hep-th/0211056). G. Grignani, J. L. Karczmarek, and G. W. Semenoff, “[Hot Giant Loop Holography]{},” [[arXiv:0904.3750 \[hep-th\]]{}](http://arxiv.org/abs/0904.3750). T. Harmark and N. A. Obers, “[Thermodynamics of spinning branes and their dual field theories]{},” [*JHEP*]{} [**01**]{} (2000) 008, [[arXiv:hep-th/9910036]{}](http://arxiv.org/abs/hep-th/9910036). G. Grignani, L. Griguolo, N. Mori, and D. Seminara, “Thermodynamics of theories with sixteen supercharges in non-trivial vacua,” [[arXiv:0707.0052 \[hep-th\]]{}](http://arxiv.org/abs/arXiv:0707.0052 [hep-th]). K. J. Larsen and N. A. Obers, “[Phases of Thermal [${\mathcal{N}}=2$]{} [Quiver Gauge Theories]{}]{},” [ [ **JHEP0801**]{} (2008) 057](http://dx.doi.org/10.1088/1126-6708/2008/01/057), [[arXiv:0708.3199 \[hep-th\]]{}](http://arxiv.org/abs/0708.3199). A. 
Hamilton, J. Murugan, and A. Prinsloo, “[A note on the universality of the Hagedorn behavior of pp-wave strings]{},” [[*JHEP*]{} [**02**]{} (2008) 108](http://dx.doi.org/10.1088/1126-6708/2008/02/108), [[arXiv:0712.3059 \[hep-th\]]{}](http://arxiv.org/abs/0712.3059). M. Blau, J. M. Figueroa-O’Farrill, and G. Papadopoulos, “[Penrose limits, supergravity and brane dynamics]{},” [[*Class. Quant. Grav.*]{} [**19**]{} (2002) 4753](http://dx.doi.org/10.1088/0264-9381/19/18/310), [[arXiv:hep-th/0202111]{}](http://arxiv.org/abs/hep-th/0202111). [^1]: Early attempts at describing gauge theories in terms of spin chains can be found in [@Berruto:1997jv; @Berruto:1999ga]. [^2]: See also [@Astolfi:2008yw] for work on the winding state with the space-like isometry compactified. [^3]: For related computations of the Hagedorn temperature in the presence of background fields see for example Ref. [@Deo:1989bv]. [^4]: See also [@Harmark:1999xt] for a related study of black holes with R-charged chemical potentials. [^5]: The $4\pi^2$ factor is included in the ’t Hooft coupling for our convenience. [^6]: See Appendix \[AppendixA\] for our conventions on the spinors and the representation of the Dirac matrices.
--- abstract: 'In this paper we discuss the strong coupling limit of chiral $N=1$ supersymmetric gauge theories via their embedding into M-theory. In particular we focus on the brane box models of Hanany and Zaffaroni and show that after a T-duality transformation their M-theory embedding is described by supersymmetric 3-cycles; their geometry will encode the holomorphic non-perturbative information about the gauge theory.' bibliography: - '3-cycle1.bib' --- **$N=1$ Supersymmetric Gauge Theories and Supersymmetric 3-cycles** **Andreas Karch[^1], Dieter Lüst[^2] and André Miemiec[^3]** Introduction ============ Recently it was demonstrated that many interesting non-perturbative results in superstring theory and supersymmetric field theories can be derived from 11-dimensional M-theory. In particular Witten [@witten] has shown that $N=2$ supersymmetric gauge theories can be solved via M-theory by lifting the corresponding 10-dimensional type II brane configurations [@hanwit] to 11 dimensions. As a result the intersection of $n$ parallel NS 5-branes and $k$ suspended D 4-branes is described in 11 dimensions by a one-dimensional complex curve, which can be seen as a two-dimensional supersymmetric cycle embedded in the flat four-dimensional space ${\Bbb{R}}^3\times S^1$. This curve precisely agrees with the famous Seiberg-Witten curves [@sw] of $N=2$ supersymmetric field theories. Non-chiral $N=1$ gauge theories can be obtained by rotating one or several of the NS 5-branes such that they intersect at a certain angle. The corresponding continuous parameter can be regarded in field theory as a mass parameter which explicitly breaks $N=2$ supersymmetry down to $N=1$. The M-theory embedding of the non-chiral $N=1$ models, constructed in this way, leads again to supersymmetric 2-cycles, now embedded in the flat six-dimensional space ${\Bbb{R}}^5\times S^1$ [@hori; @witten1; @9706127].
Analyzing these curves, the form of the corresponding $N=1$ superpotentials can be derived. A generic way to construct [*chiral*]{} $N=1$ gauge theories in four dimensions is provided by the brane box models of Hanany and Zaffaroni [@hanzaf][^4]. Here one deals with a type IIB configuration of intersecting NS and D 5-branes. We will show that upon a T-duality transformation to the IIA superstring, the M-theory lifting of the chiral $N=1$ brane box configurations leads to supersymmetric 3-cycles suitably embedded into ${\Bbb{C}}^3$. These supersymmetric 3-cycles precisely correspond to the $SU(3)$ special Lagrangian calibrations (SLAG), discussed e.g. in [@gibpap; @9803216], which reduce the original supersymmetry to 1/8. This is just the right amount of supersymmetry breaking for a generic chiral $N=1$ gauge theory which cannot be smoothly deformed into an $N=2$ gauge theory. Following [@witten] one can study quantum effects in the corresponding gauge theory by analyzing how the branes bend each other. Especially it is easy to study the $\beta$ function: the gauge coupling is encoded in the distance between two NS branes or, in the case of boxes, in the area of the box. So the shape of the bent branes directly gives some information about the running gauge coupling. In order to obtain the exact quantum information (or at least the information protected by holomorphy) one can lift the brane configuration to M-theory and use 11d SUGRA to solve it. For IIA brane configurations like those studied in [@witten] this is straightforward, since 11d SUGRA on a circle is dual to IIA. The boxes of [@hanzaf] live in IIB theory, so in order to perform the M-theory lift one has to use the relation that IIB on a circle is M-theory on a torus, so we have to compactify one of the common worldvolume directions. Like in the 5d case studied in [@kol] this means that we are really solving the 4d theory on $R^3 \times S^1$.
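As a reminder (this is the standard definition of a special Lagrangian calibration, not specific to the brane construction discussed here): a 3-cycle $\Sigma^{(3)}\subset{\Bbb{C}}^3$ is special Lagrangian when the Kähler form $\omega$ and the imaginary part of the holomorphic 3-form $\Omega=dz^1\wedge dz^2\wedge dz^3$ both restrict to zero on it, $$\omega|_{\Sigma^{(3)}}=0,\qquad {\rm Im}\,\Omega|_{\Sigma^{(3)}}=0,$$ so that ${\rm Re}\,\Omega$ calibrates the cycle, i.e. restricts to its volume form. Such cycles preserve the fraction of supersymmetry quoted above.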
In the limit where the area of the M-theory torus shrinks to zero or grows to infinity we recover the $N=1$ $d=4$ and $N=2$ $d=3$ gauge theories respectively. We will show that in this way all the holomorphic information about these gauge theories is encoded in the geometry of a SUSY 3-cycle. Especially we expect the superpotential to correspond to the volume and the gauge couplings on the Coulomb branches to be related to periods of the cycle. The possible 3-cycles for given boundary values will encode the vacuum structure of the theory. Gauge theories with dynamical SUSY breaking will correspond to situations where the minimal area cycle for the given boundary problem is not a SUSY 3-cycle. In Section 2 we will review Witten’s rediscovery of the Seiberg-Witten curve in terms of the lift to M-theory. In Section 3 we will introduce the classical brane box setups and show how they relate to 3-cycles. We will develop some tools which we believe are very helpful in constructing the 3-cycles. For the special case of finite theories and theories that satisfy the uniform bending requirement of [@gimongremm] we are able to perform the lift explicitly. However the corresponding cycles turn out to be superpositions of special 3-cycles that are of the form 2-cycle times line. We end with some preliminary results about more general cases. [***N=2***]{} Gauge Theories and 2-cycles ========================================= In order to better understand the supersymmetric 3-cycles in the $N=1$ brane box models, let us first recall the M-theory embedding of the $N=2$ models which leads to supersymmetric 2-cycles. The four-dimensional $N=2$ gauge theories are based on the following brane setup in type IIA superstring theory on ${\Bbb R}^{10}$: - $n$ NS 5-branes with world volumes parametrized by $x^0$, $x^1$, $x^2$, $x^3$, $x^4$ and $x^5$; they are located at $x^7=x^8=x^9=0$ and at some fixed value of $x^6_\alpha$ ($\alpha=1,\dots n$), at least in the classical approximation.
- $k_\alpha$ ($\alpha=0,\dots n$) D 4-branes with world volumes in the $x^0$, $x^1$, $x^2$, $x^3$ and $x^6$ directions, being suspended in the $x^6$ direction between the $\alpha^{\rm th}$ and $(\alpha+1)^{\rm th}$ NS 5-branes. For $\alpha=0$ or $\alpha=n$ the D 4-branes are semi-infinite to the left of the first NS 5-brane or, respectively, to the right of the $n^{\rm th}$ NS 5-brane[^5]. The 4-branes are located in the $x^4$-$x^5$ plane at the complex coordinate $v=x^4+ix^5$. This $N=2$ brane configuration, which preserves 1/4 of the original supersymmetries, is summarized in figure \[figureBraneConfiguration\]. The four-dimensional gauge group is given by $$G=\prod_{\alpha=1}^{n-1} SU(k_\alpha).$$ In addition one finds hypermultiplets in the representations: $$(k_0,\bar k_1)\oplus (k_1,\bar k_2)\oplus\dots\oplus (k_{n-1},\bar k_n).$$ (Here, $SU(k_0)$ and $SU(k_n)$ act as global flavor symmetries.) The gauge coupling constants $g_\alpha$ of every $SU(k_\alpha)$ group factor are determined by the differences between the positions of the NS 5-branes: $$\frac{1}{g^2_\alpha}=\frac{x^6_{\alpha+1}-x^6_{\alpha}}{g_s},\label{gaugeco}$$ where $g_s$ is the string coupling constant. After having discussed the classical picture, let us now come to the solution of the model after taking into account the quantum corrections. The D 4-branes exert a force on the NS 5-branes causing them to bend. Since the tension of the NS 5-branes and the tension of the D branes behave differently in terms of the string coupling constant, $T_{NS}\sim g_s^{-2}$ compared to $T_{D}\sim g_s^{-1}$, this bending is a quantum effect. As a result, an NS 5-brane on which 4-branes end does not have a definite value of $x^6$ in contrast to the classical picture. More specifically, when there is a bending which moves the two NS branes towards each other, the coupling becomes strong at high energies, i.e. we deal with an IR free theory.
Conversely, if the bending is outwards, there is an asymptotically free theory. In four dimensions the bending is logarithmic with the distance $r$, whereas in $d$ dimensions the local bending of the NS branes goes like $r^{d-4}$, $r$ being the ‘position’ of the D brane on the world volume of the NS brane. This shows that in dimensions $d<4$ all gauge theories are asymptotically free, and all gauge theories with finite gauge coupling are IR free for $d>4$. The bending is absent if the number of D branes ending on a given NS 5-brane from the left is equal to the number ending from the right. In one-loop perturbation theory in four dimensions, the bending of the NS 5-branes leads to a logarithmic variation of $x^6_\alpha$ in terms of $v$: $$x^6_\alpha=(k_\alpha-k_{\alpha-1})\log |v|. \label{1loop}$$ Inserting this into eq.(\[gaugeco\]), the logarithmic running of the gauge coupling $g_\alpha$ is determined as $$\frac{1}{g_\alpha^2}=\frac{-2k_\alpha+k_{\alpha+1}+k_{\alpha-1}}{g_s}\log |v|.$$ Note that the prefactor $b_\alpha^{N=2}=-2k_\alpha+k_{\alpha+1}+k_{\alpha-1}$ precisely agrees with the $N=2$ $\beta$-function coefficient of the gauge group $SU(k_\alpha)$ with $N_f=k_{\alpha+1}+k_{\alpha-1}$ fundamental matter fields. In this way the shape of the branes incorporates the 1-loop effects in field theory. In $N=2$ field theory there are no higher loop effects. However there are still non-perturbative effects due to instantons. These instantons can be seen directly in the brane picture, namely the D 0-branes are instantons within the D 4-branes. The problem is now to solve the theory by including all these effects. This can be done by “lifting” the IIA configuration to M-theory [@witten]. The advantage of considering the above configuration of branes in M-theory is that the D 4-branes and NS 5-branes are in fact the same object, the M 5-brane.
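The coefficients $b_\alpha^{N=2}$ read off from the bending can be tabulated mechanically for a whole quiver chain. A minimal sketch (the function name and the example occupation numbers are illustrative, not taken from the text):

```python
# One-loop beta-function coefficients of the N=2 linear quiver
# prod_a SU(k_a), read off from the logarithmic brane bending:
# b_a = -2 k_a + k_{a+1} + k_{a-1}, as in the running derived above.

def beta_coefficients(k):
    """k = [k_0, ..., k_n] D 4-brane numbers; gauge factors sit at a = 1..n-1."""
    return [-2 * k[a] + k[a + 1] + k[a - 1] for a in range(1, len(k) - 1)]

# A 'finite' chain: equal numbers of D 4-branes end on each NS 5-brane
# from both sides, so there is no bending and every beta function vanishes.
print(beta_coefficients([2, 2, 2, 2]))   # -> [0, 0]

# SU(3) with N_f = 2 + 2 = 4 < 2 N_c: negative coefficient, i.e.
# asymptotically free, matching the outward-bending discussion above.
print(beta_coefficients([2, 3, 2]))      # -> [-2]
```

A vanishing coefficient corresponds exactly to an NS 5-brane that stays straight; a negative one to outward bending.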
The intersection of the D 4- and NS 5-branes in IIA was singular but this is smoothed out in M-theory and in fact it is possible to consider all the D 4-branes and NS 5-branes as a single M 5-brane with complicated worldvolume. However the conditions for preserving $N=2$ supersymmetry restrict the embedding of the M 5-brane worldvolume and it is possible to find the function describing this embedding explicitly. Clearly this must incorporate the classical IIA brane setup as well as the field theory 1-loop corrections through the shape of the M 5-brane. However the non-perturbative instantons are also automatically included since D 0-branes are simply Kaluza-Klein momentum modes of compactified M-theory. Let us discuss in slightly more detail how the M-theory embedding of the $N=2$ brane configurations is constructed. The lifting to M-theory is performed by adding the 11th coordinate $x^{10}$ which is periodic with period $2\pi R_{11}$. Now, the complex coordinate $s_\alpha=(x^6_\alpha+ix^{10}_\alpha)/R_{11}$ describes the asymptotic positions of the M 5-branes in the $x^6$-$x^{10}$ plane, whereas as before $v$ denotes the asymptotic positions of the M 5-branes in the $x^4$-$x^5$ plane (all M 5-branes have common world volume in the 0123-directions, and are fixed at $x^7=x^8=x^9=0$). Regarding $x^{10}$ as a $\theta$-parameter one can introduce a complex coupling constant $$\tau_\alpha=\frac{\theta_\alpha}{2\pi}+i\frac{4\pi}{g^2_\alpha}= i(s_{\alpha+1}-s_{\alpha}).$$ Concentrating on the directions $x^4$, $x^5$, $x^6$ and $x^{10}$ we see that the M 5-brane world volume spans a two-dimensional surface $\Sigma^{(2)}_{n,k_\alpha}$ in the four-manifold ${\mathbb{R}}^3\times S^1$.
A priori, $\Sigma^{(2)}_{n,k_\alpha}$ is determined by two real functions of the coordinates $x^4$, $x^5$, $x^6$ and $x^{10}$: $$\begin{aligned} f_{n,k_\alpha}(x^4,x^5,x^6,x^{10})&=&0,\nonumber \\ g_{n,k_\alpha}(x^4,x^5,x^6,x^{10})&=&0.\end{aligned}$$ However $N=2$ space-time supersymmetry requires that $s$ varies holomorphically with $v$, such that $\Sigma^{(2)}_{n,k_\alpha}$ is a Riemann surface in ${\Bbb{R}}^3\times S^1$: $$\Sigma^{(2)}_{n,k_\alpha}:\quad F_{n,k_\alpha}(s,v)=f_{n,k_\alpha}+ ig_{n,k_\alpha} =0.$$ This means that $\Sigma^{(2)}_{n,k_\alpha}$ is a supersymmetric 2-cycle, or in terms of [@gibpap], $\Sigma^{(2)}_{n,k_\alpha}$ describes an $SU(2)$ Kähler calibration. The holomorphy of the equation $F_{n,k_\alpha}(s,v)$ implies that the real functions have to obey the Cauchy-Riemann differential equations, where $f_i$ denotes $\frac{\partial f}{\partial x_i}$: $$\begin{aligned} & & f_4=g_5,\qquad\;\;\;\; f_6=g_{10},\nonumber\\ & & f_5=-g_4,\qquad f_{10}=-g_6.\end{aligned}$$ As explained in [@witten], it is very useful to perform a holomorphic change of variables, $t=\exp(-s)$, in order to describe the asymptotic positions of the M 5-branes in the correct way. Specifically consider the complex equation $$F(t,v)=0.\label{sw}$$ At a given value of $v$, the roots of $F$ in $t$ are the positions of the NS 5-branes, i.e. $F$ is a polynomial of degree $n$ in $t$. On the other hand, for fixed $t$, the roots of $F$ in $v$ are the positions of the IIA D 4-branes. Therefore $n$ parallel NS 5-branes with positions at $t_i$ ($i=1,\dots ,n$) are simply described by the function $F(t,v)=\prod_{i=1}^n(t-t_i)$, and $k$ parallel D 4-branes, positioned at values $v_j$ ($j=1,\dots k$), correspond to the choice $F(t,v)=\prod_{j=1}^k(v-v_j)$.
Then it immediately follows that $n$ NS 5-branes intersected by $k$ D 4-branes correspond to $$F(t,v)=\prod_{i=1}^n(t-t_i)\prod_{j=1}^k(v-v_j).\label{orth}$$ In this case, since the number of D 4-branes to the left and to the right of each NS 5-brane is the same, there is no bending of the NS 5-branes by the D 4-branes. Next, let us briefly discuss how the perturbative, one-loop bending can be described [@witten]. Consider the situation of one NS 5-brane with $k_0$ ($k_1$) D 4-branes ending on it from the left (right). Then the induced bending in the region where $t$ is very large corresponds to the following choice for $F$: $$F(t,v)=v^{k_0}(t-\epsilon v^{k_1-k_0}),\label{bendtv}$$ where $\epsilon$ is a constant. Introducing back the variable $s$, this equation immediately leads to eq.(\[1loop\]), namely the logarithmic bending of $s$ in terms of the Higgs field $v$: $$s=(k_1-k_0)\log v.$$ Putting all this information from eqs.(\[orth\]) and (\[bendtv\]) together, the supersymmetric 2-cycle equation for a general $N=2$ brane configuration takes the form $$\Sigma^{(2)}_{n,k_\alpha}: \quad F_{n,k_\alpha}(t,v)=p_{k_0}(v)t^n+p_{k_1}(v)t^{n-1}+\dots +p_{k_{n-1}}(v)t+p_{k_n}(v),\label{n=2pol}$$ where the $p_{k_\alpha}(v)$ are polynomials in $v$ of degree $k_\alpha$. The as yet unspecified parameters of the polynomials $p_{k_\alpha}(v)$ appear as the moduli of the gauge theory. This describes the non-perturbative solution of the model with gauge group $\prod_{\alpha=1}^{n-1}SU(k_\alpha)$ with hypermultiplets in bi-fundamental representations. For example, the pure $SU(k)$ gauge theory without matter fields, i.e. $n=2$, $k_0=k_2=0$, $k_1=k$, is described by the curve $$F(t,v)=t^2+p_k(v)t+1.$$ This is nothing other than the famous Seiberg-Witten curve of genus $k-1$ [@sw; @9411048; @klemm; @argyres]. Why does this procedure work? The solution of the gauge theory is found by using the duality between 11d SUGRA and IIA string theory.
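As the simplest illustration (the parametrization $p_2(v)=v^2-u$ is a conventional choice, not quoted from the text): for pure $SU(2)$, i.e. $k=2$, one may write $$F(t,v)=t^2+(v^2-u)\,t+1=0,$$ with $u$ the Coulomb-branch modulus. This is a genus-one curve, and at fixed $v$ its two roots $t_\pm(v)$ track the positions of the two bent NS 5-branes.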
When $R_{11}$ is large, 11d SUGRA is valid. Solving for the exact shape of the M5 brane means solving the classical minimal area problem for a tensile brane (the shape of a soap bubble) with given boundary conditions. The requirement of supersymmetry tells us that we should only consider special minimal area configurations, the SUSY 2-cycles. On the other hand we identified the gauge group and matter content at small $R_{11}$, where we have weakly coupled IIA string theory and the analysis of [@hanwit] applies. In order to decouple the bulk modes from the gauge theory we have to take the string scale $M_s \rightarrow \infty$ and $M_{pl} \rightarrow \infty$ and in order to decouple the KK modes from the finite interval, we have to also send $L \rightarrow 0$, where $L$ is the length of the $x^6$ intervals. This all has to be done holding $g^2_{YM}=\frac{g_s}{M_s L}$ fixed. In 11d units (using $R_{11}=g_s/M_s$) this reads $g^2_{YM}=\frac{R_{11}}{L}$. In order to keep the interacting gauge theory (fixed $g^2_{YM}$) while decoupling the KK modes ($L\rightarrow 0$) $R_{11}$ has to go to zero! This is the limit in which the brane setup reduces to 4d SYM. But this is the opposite limit of the one we were able to solve, where $R_{11}$ and hence $L$ have to be very large and we can use 11d SUGRA. So we should only expect quantities protected by holomorphy, like the $N=2$ prepotential, which can’t depend on the real parameter $R_{11}$, to come out correctly. Indeed it was shown in [@berkeley] that unprotected quantities like the 4-derivative terms in the effective action disagree with field theory results. [***N=1***]{} Gauge Theories and Supersymmetric 3-cycles ======================================================== $N=1$ brane boxes {#sectionN1BraneBox} ----------------- Now let us discuss the brane boxes of [@hanzaf] which can be used to construct $N=1$ supersymmetric gauge theories with chiral matter content[^6].
The starting point is now a type IIB superstring with the following branes included (see figure \[figureGeneralBraneBoxWeb\]): - $n$ parallel NS 5-branes with world volumes along the (012345)-directions. These 5-branes are fixed at $x^7=x^8=x^9=0$ and are placed at arbitrary positions in $x^6$. - $n'$ parallel NS’ 5-branes with world volumes along the (012367)-directions. The NS’ branes are fixed at $x^5=x^8=x^9=0$ and are placed at arbitrary positions in $x^4$. - D 5-branes with world volumes along the (012346)-directions. The D 5-branes can take different positions on the NS 5-branes in the $x^5$-direction and also different positions on the NS’ 5-branes in the $x^7$-direction. Depending on the specific model one likes to discuss, the directions $x^4$ and $x^6$ can be either uncompactified or periodic (elliptic models). We will concentrate on the non-elliptic models. It follows that the D 5-branes are finite in the directions $x^6$, $x^4$ in case they are placed inside the ‘inner’ boxes. However they are semi-infinite in case they end only on one NS (NS’) brane (‘outer’ boxes). This brane configuration preserves 1/8 of the original supersymmetry. We see that a generic configuration consists of a grid of $(n+1)(n'+1)$ boxes built from $n$ NS 5-branes and $n'$ NS’ 5-branes in the $x^4$-$x^6$ plane. We are labelling the boxes by the two indices $\alpha,\alpha'$ where $\alpha=0,\dots , n$ and $\alpha'=0,\dots , n'$. The $(n-1)(n'-1)$ ‘inner’ boxes with $\alpha=1,\dots, n-1$, $\alpha'=1,\dots ,n'-1$ always have finite area whereas the remaining ‘outer’ boxes have infinite size in case of uncompactified directions $x^4$ and $x^6$. Now, let $k_{\alpha,\alpha'}$ denote the number of D 5-branes which are placed in the box $\lbrack\alpha,\alpha'\rbrack$.
The suspended D 5-branes inside the inner boxes give rise to the following gauge group in four dimensions: $$G=\prod_{\alpha=1}^{n-1}\prod_{\alpha'=1}^{n'-1}SU(k_{\alpha,\alpha'}).$$ The associated classical gauge coupling constants are given by the area of the corresponding box $\lbrack\alpha,\alpha'\rbrack$: $$\frac{1}{g_{\alpha,\alpha'}^2}= \frac{(x^4_{\alpha+1}-x^4_\alpha)(x^6_{\alpha'+1}-x^6_{\alpha'})}{g_s}. \label{n=1gaugec}$$ The matter content of the model consists of three types of chiral $N=1$ representations. First, there are ‘horizontal’ chiral bi-fundamentals $H_{\alpha,\alpha'}$ in the representations $(k_{\alpha,\alpha'},\bar k_{\alpha+1,\alpha'})$ of $SU(k_{\alpha,\alpha'})\times SU(k_{\alpha+1,\alpha'})$. Second, there exist ‘vertical’ chiral bi-fundamentals $V_{\alpha,\alpha'}$ in the representations $(k_{\alpha,\alpha'},\bar k_{\alpha,\alpha'+1})$ of $SU(k_{\alpha,\alpha'})\times SU(k_{\alpha,\alpha'+1})$; finally we have ‘diagonal’ chiral fields $D_{\alpha,\alpha'}$ in the representations $(k_{\alpha,\alpha'},\bar k_{\alpha-1,\alpha'-1})$ of $SU(k_{\alpha,\alpha'})\times SU(k_{\alpha-1,\alpha'-1})$ ($\alpha,\alpha'>1$). In this context the groups $SU(k_{\alpha,\alpha'})$ with $\alpha=0,n$ or $\alpha'=0,n'$ act as global flavor symmetries if the directions $x^4$ and $x^6$ are uncompactified. Note that the choices for the $k_{\alpha,\alpha'}$ are severely constrained by the requirement of absence of anomalies. If all three types of matter multiplets are present then there exists a classical superpotential of the following form: $$W=\sum_{\alpha,\alpha'}H_{\alpha,\alpha'}V_{\alpha+1,\alpha'} D_{\alpha+1,\alpha'+1}-\sum_{\alpha,\alpha'}H_{\alpha,\alpha'+1} V_{\alpha,\alpha'}D_{\alpha+1,\alpha'+1}. \label{supo}$$ One of the simplest (non-elliptic) models is given by the choice $n=n'=2$, $k_{1,1}=N_c$ and $k_{0,1}=k_{2,1}=N_f$, whereas $k_{\alpha,\alpha'}=0$ for $\alpha'=0,2$.
This choice of brane boxes corresponds to supersymmetric QCD with $G=SU(N_c)$ and with $N_f$ fundamental plus antifundamental chiral fields. A second way to obtain SUSY QCD with $N_f$ fundamental plus antifundamental matter fields is given by the choice $k_{1,1}=N_c$ and $k_{0,1}=k_{2,1}= k_{1,0}=k_{1,2}=N_f/2$ and zero otherwise. Finally the same spectrum can be realized by the choice $k_{1,1}=N_c$ and $k_{\alpha,\alpha'}=N_f/3$ ($(\alpha,\alpha')\neq(1,1)$). However in this case a superpotential of the type eq.(\[supo\]) is present. So far we have only discussed the classical field theory. Of course it is essential to understand the quantum features as well. Chiral $N=1$ theories exhibit a huge variety of interesting quantum phenomena. Especially the generic theory will have an anomaly which should show up as an inconsistency of the brane box as a string background. In these general cases the bending of the brane boxes isn’t well understood yet, but some special cases can be analyzed. It is clear that the bending of the NS and NS’ branes depends on the number $k_{\alpha,\alpha'}$ of D 5-branes in each box. A very special class of $N=1$ gauge theories is given by the [*finite*]{} models for which all $\beta$-functions and all anomalous dimensions vanish to all orders in perturbation theory [@leighstrassler; @HaStrUr]. This condition includes the vanishing of the one-loop $\beta$-functions. In the brane picture complete finiteness means that all NS and NS’ branes do not bend at all, i.e. the number of D 5-branes in every box is the same [@HaStrUr]. Then obviously, $N_f=3N_c$ for every gauge group factor, and the one-loop $\beta$-functions are zero. The corresponding brane setup consists of several branes put on top of each other. Each of the branes preserves 1/2 of the supersymmetries; together they still preserve 1/8, so the intersection is BPS. This ensures that the static branes don’t exert any force on each other.
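The anomaly constraint on the occupation numbers $k_{\alpha,\alpha'}$ mentioned above can be checked mechanically. A minimal sketch, assuming the cancellation condition that each gauge box sees equal numbers of fundamental and antifundamental chiral multiplets from the $H$, $V$ and $D$ fields listed earlier (the function name and the numerical values are illustrative, inferred from that matter content rather than quoted from the text):

```python
# k[a][ap] = number of D 5-branes in box [a, ap], with a = 0..n and
# ap = 0..n'; gauge groups live in the 'inner' boxes only.

def anomaly_free(k):
    """Return True if every inner box sees equal numbers of fundamental
    and antifundamental chiral multiplets."""
    n, nprime = len(k) - 1, len(k[0]) - 1
    for a in range(1, n):
        for ap in range(1, nprime):
            if k[a][ap] == 0:
                continue  # empty box: no gauge group, no anomaly
            # fundamentals from H_{a,ap}, V_{a,ap}, D_{a,ap}
            fund = k[a + 1][ap] + k[a][ap + 1] + k[a - 1][ap - 1]
            # antifundamentals from H_{a-1,ap}, V_{a,ap-1}, D_{a+1,ap+1}
            anti = k[a - 1][ap] + k[a][ap - 1] + k[a + 1][ap + 1]
            if fund != anti:
                return False
    return True

# SQCD example from the text: n = n' = 2, k_{1,1} = N_c,
# k_{0,1} = k_{2,1} = N_f, all other boxes empty.
Nc, Nf = 3, 6
k = [[0, Nf, 0],
     [0, Nc, 0],
     [0, Nf, 0]]
print(anomaly_free(k))   # -> True
```

Dropping one of the two flavor boxes leaves $N_f$ uncancelled fundamentals and the check fails, reflecting the anomaly of the would-be chiral spectrum.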
We can freely move the constituent branes since they don’t feel the presence of the other branes at all. Motions of the branes in the $x^4$-$x^6$ plane correspond to changing the areas of the various boxes and hence to changing the gauge couplings. Taking NS (NS’) branes away along the $x^7$ ($x^5$) direction destroys the box structure and corresponds to turning on FI terms. Another special situation is that of uniform bending. The condition of uniform bending was first introduced by [@gimongremm]. There it was argued to be necessary for consistency. As we will see this is too stringent. However uniformly bent setups are very special and allow for a precise treatment of the quantum properties. To motivate the uniform bending requirement consider the basic cross configuration of figure \[figureUniformBasic\]. For $x^6 \rightarrow - \infty$ (to the far left) the effects of the NS brane on the bending of the NS’ brane should be negligible. The D5-branes ending on the NS’ brane just look like a 5d gauge theory with 8 supercharges and lead to the standard linear bending [@aharonyhanany]. The slope of the bending is given by the difference of the numbers of branes ending from either side, hence $$\mbox{slope}_{x^6 \rightarrow - \infty} = k_{\alpha,\alpha'} - k_{\alpha,\alpha'+1}.$$ For the same reason we will have linear bending to the far right, that is for $x^6 \rightarrow \infty$, the slope this time given by $$\mbox{slope}_{x^6 \rightarrow \infty} = k_{\alpha+1,\alpha'} - k_{\alpha+1,\alpha'+1}.$$ The observation of [@gimongremm] was that if $$\label{uniform} k_{\alpha,\alpha'} - k_{\alpha,\alpha'+1} = k_{\alpha+1,\alpha'} - k_{\alpha+1,\alpha'+1}$$ the bending on the far left is the same as on the far right and one may expect that the shape of the NS’ in fact does not change at all as a function of $x^6$.
In [@haur] it was observed that the most general setup compatible with condition (\[uniform\]) can be achieved by “sewing” together $N=2$ models, that is we take branes corresponding to 5d gauge theory built out of NS and D5 branes and move them on top of a similar setup built out of NS’ and D5 branes[^7], as illustrated in figure \[figureUniformBraneBoxWeb\]. This sewing can be taken quite literally: as in the finite case there is a no force condition between the constituent pieces, since their intersection still preserves some supersymmetry. So we are free to move them independently. These deformations should correspond to marginal couplings in the field theory. Since we are not just tuning the distance between two NS branes but are actually moving around compound systems, these marginal operators won’t just be the gauge coupling as in the finite case but will also involve the superpotential couplings. Using the methods of [@leighstrassler], [@HaStrUr] were indeed able to show that the field theory has these marginal operators if the conditions (\[uniform\]) are satisfied for all boxes. Since the subsystems don’t influence each other, the exact bending is given in terms of the linear bending of the subsystems. All uniformly bent systems are anomaly free. The question remains how to decide whether the more general setups are anomaly free. So far we were only able to discuss some very special cases, e.g. not including pure SYM. Some progress in understanding the bending in these cases has been made in [@randall], however without reaching a final answer. Several aspects of this problem can be understood more easily in a T-dual picture. Consider an elliptic model, that is we take the $x^4 - x^6$ plane to be compactified on a torus. For a finite model, T-dualizing these two compact directions turns the D5 branes into D3 branes on an ${\Bbb{C}}^3/\Gamma$ orbifold background.
Following [@LKS] it was argued in [@haur] that the generic box T-dualizes into an orbifold with fractional branes, that is where the orbifold group is embedded into the gauge group via some other representation than the regular one. On the orbifold side one immediately faces the issue of tadpole cancellations. Non-vanishing tadpoles signal the presence of a source, that is a net charge sitting in the internal space. So in orbifold compactifications the tadpoles have to vanish for consistency. Since we are dealing with non-compact backgrounds some non-vanishing tadpoles may be tolerated. The relevant space is the fixed point (FP) set of the orbifold action transverse to the D3 brane. For the $N=2$ case (a D3 brane in 0123 with an orbifold acting on 6789 space) the FP set is 2 dimensional. A net charge in 2d will give rise to a logarithmic singularity. In [@LKS] it was shown that this divergence is nothing else but the running of the gauge coupling. Tadpole cancellation is hence not required for consistency. Vanishing of the tadpoles is equivalent to finiteness of the gauge theory. In the same spirit Leigh and Rozali analyzed the orbifold dual of the brane boxes [@leighrozali]. A generic orbifold element will leave a 0d FP set. These tadpoles have to vanish; otherwise the gauge theory is anomalous. However some orbifold elements will leave a 2d FP set untouched. The corresponding tadpoles will only lead to logarithmic divergences which once more can be identified as the running gauge coupling. So vanishing of all tadpoles again implies finiteness of the gauge theory. Vanishing of the tadpoles for the 0d FP set is required for anomaly freedom. Leigh and Rozali indeed showed that for all brane boxes leading to anomaly free gauge theories these 0d tadpoles vanish.
T-duality, M-theory embedding and the emergence of the 3-cycles --------------------------------------------------------------- Now let us describe the strong coupling limit of the $N=1$ theories via embedding the brane boxes into M-theory. Since our original brane configuration is in the type IIB string, we first have to perform a T-duality transformation to the type IIA superstring before we can perform the M-theory embedding. We do not want to touch the NS and NS’ 5-branes, and we also do not want to create any D6-branes; therefore we will T-dualize over one of the spatial directions common to all branes. To be specific we now assume that $x^3$ is periodic with radius $R_3^{IIB}$ and we perform the T-duality with respect to the $x^3$-direction. This leads to the following brane configuration: - $n$ parallel NS 5-branes with world volumes along the (012345)-directions. These 5-branes are fixed at $x^7=x^8=x^9=0$ and are placed at arbitrary positions in $x^6$. - $n'$ parallel NS’ 5-branes with world volumes along the (012367)-directions. The NS’ branes are fixed at $x^5=x^8=x^9=0$ and are placed at arbitrary positions in $x^4$. - D 4-branes with world volumes along the (01246)-directions. These D 4-branes take different $x^5$ positions on the NS 5-branes and also different $x^7$ positions on the NS’ 5-branes. In addition the D 4-branes can have arbitrary positions in the compactified spatial $x^3$-direction. This configuration preserves, as before, 1/8 of the original supersymmetries and corresponds to a three-dimensional gauge theory with $N=2$ space-time supersymmetry. The three-dimensional gauge theory can be simply obtained from the four-dimensional $N=1$ models by circle compactification on $S^1$ in the $x^3$-direction. In the decompactification limit, $R_3^{IIB}\rightarrow\infty$, the four-dimensional $N=1$ gauge theories are rediscovered. On the other hand, for $R_3^{IIB}\rightarrow 0$, the theory is truly three-dimensional.
Note that in three dimensions, a new Coulomb branch can be opened, since the three-dimensional vector multiplets contain one real scalar degree of freedom. The corresponding modulus $v$ is associated in the brane picture with the positions of the D 4-branes in the $x^3$-direction. The three-dimensional gauge coupling is classically related to the 4d gauge coupling as $1/g_3^2=R_3^{IIB}/g_4^2$. So in the limit $R_3^{IIB}\rightarrow\infty$ one must send $g_3\rightarrow 0$ in order to have a finite coupling $g_4$. The scalar field in the vector multiplet lives on a dual circle with radius $R_3^{IIA}=1/R_3^{IIB}$. So in the 4d limit, $R_3^{IIA}\rightarrow 0$, one has to integrate out the fields with masses of order $v$ corresponding to the Coulomb branch. In this way we can regard $v$ as the parameter which sets the scale $\Lambda$ of the four-dimensional gauge theory. So in order to determine the logarithmic ‘running’ of the four-dimensional gauge coupling $g_4$ in terms of $\Lambda$, $1/g_4^2=b^{N=1}\log\Lambda$, we will be in particular interested in the bending of the coordinates $x^4$ and $x^6$ in terms of $x^3$. (This precisely corresponds to the logarithmic running of $x^6$ in terms of the Higgs field vev $v=x^4+ix^5$ in case of the $N=2$ brane models.) This will be further discussed in section \[sectionUniformBending\]. Note that in the 3-dimensional limit, $R_3^{IIB}\rightarrow 0$, the pure Yang-Mills gauge theory has no stable supersymmetric groundstate, unlike the 4d theory. Many more details of the dynamics and superpotentials of three-dimensional, $N=2$ supersymmetric gauge theories can be found in [@3dgauge]. After the duality in $x^3$ we are now ready for lifting the above configuration to M-theory by adding the periodic direction $x^{10}$ with radius $R_{11}$. Then the intersection of all branes is described by smooth configurations of M 5-branes.
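The classical relation $1/g_3^2=R_3^{IIB}/g_4^2$ is just dimensional reduction of the 4d Yang-Mills action on the $x^3$ circle (numerical factors such as $2\pi$ are absorbed into the definition of the radius):

```latex
-\frac{1}{4g_4^2}\int d^4x\,{\rm Tr}\,F_{\mu\nu}F^{\mu\nu}
\;\longrightarrow\;
-\frac{R_3^{IIB}}{4g_4^2}\int d^3x\,{\rm Tr}\,F_{\mu\nu}F^{\mu\nu}
\qquad\Longrightarrow\qquad
\frac{1}{g_3^2}\;=\;\frac{R_3^{IIB}}{g_4^2}\,.
```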
Like in the $N=2$ case the singular intersections of the NS, NS’ and D 4-branes are described in M-theory by a single smooth M 5-brane. Asymptotically, this M 5-brane takes the shape of the classical IIA branes: - The NS 5-branes asymptotically correspond to M 5-branes which extend in the (012345)-space and take different positions in the $x^6$, $x^7$, $x^8$, $x^9$ and $x^{10}$ directions. - The NS’ 5-branes asymptotically look like M 5-branes with world volumes along the (012367)-space and positions in $x^4$, $x^5$, $x^8$, $x^9$ and $x^{10}$. - Finally, the asymptotics of the D 4-branes is given by M 5-branes with world volumes in $x^0$, $x^1$, $x^2$, $x^4$, $x^6$ and $x^{10}$ and positions in $x^3$, $x^5$, $x^7$, $x^8$ and $x^9$. So all branes have common world volumes in the (012)-space and are all located at $x^8=x^9=0$. Therefore, to characterize the M-theory configurations we have to focus on the six-dimensional space spanned by the coordinates $x^3$, $x^4$, $x^5$, $x^6$, $x^7$ and $x^{10}$. Each asymptotic brane fills three particular directions in this space. This means that the general embedding of the M-theory 5-branes is described by a three-dimensional submanifold in ${\mathbb{R}}^4\times T^2$. Supersymmetry requires this to be a supersymmetric 3-cycle. In the language of [@gibpap] this is called a $SU(3)$ special Lagrangian calibration (SLAG) which breaks 7/8 of the supersymmetries. Let us again briefly consider the validity of our approach. In 11d units the 3d gauge coupling is given by $g^2_{YM}=\frac{R_{11}} {A}$ where $A$ is the area of the box. As in the $N=2$ case there are two distinct limits if we want to keep $g^2_{YM}$ fixed: for small $R_{11}$ and $A$ the KK modes decouple and the brane setup reproduces the gauge theory. For large $R_{11}$ we can solve using 11d SUGRA, that is by solving for the SUSY 3-cycle. So we are again only solving MQCD and not really the gauge theory. 
However, as for $N=2$, all holomorphic quantities should be encoded in the geometry of the 3-cycle. That is, of the terms in the 2-derivative approximation of the effective action, the holomorphic gauge coupling and the superpotential should be encoded in the 3-cycle, whereas the Kähler potential probably escapes our control. Brane Cubes and M-theory ------------------------ As already mentioned in the original work of [@hanzaf] the idea of brane boxes can naturally be extended to brane cubes and brane hypercubes. Each time we add one more NS brane with yet another orientation, breaking another half of the supersymmetry. The D brane ends on these new NS branes as well, so that the brane spans a 3d cube or 4d hypercube instead of the 2d box we considered so far. Let us briefly discuss how these configurations are lifted to M-theory. We will find that the situation is especially interesting in the case of brane cubes, where one can find two distinct models, one with chiral and one with non-chiral SUSY. For simplicity let us consider the brane cubes directly on the IIA side with the D4 brane suspended between the NS branes. This is the setup that lifts to M-theory in a straightforward fashion. The third NS brane that we add should have a fixed $x^2$ position, so that the D4 brane is finite in this direction as well as in $x^4$ and $x^6$. There are two distinct possibilities to do so. The first is to add an NS” brane along 014567. This is the setup considered in [@9806177]. It leads to a chiral $N=(2,0)$ supersymmetric gauge theory in $d=2$. Rotations in 89 space give rise to the $U(1)_R$ symmetry. The dual orbifold consists of D1 branes living on top of an ${\mathbb{C}}^4/ \Gamma$ orbifold, where $\Gamma$ is a subgroup of $SU(4)$. By the same reasoning as for the brane box we find that this chiral brane cube should lift to M-theory via a SUSY 4-cycle in the 7d 234567 space, that is via a SUSY 4-cycle associated with $G_2$ holonomy.
The second possibility is to have the NS” brane in 012468. This leaves us with $N=(1,1)$ in 2 dimensions. Since this time all three types of NS branes have a common direction ($x^3$) we can perform a T-duality to type IIB as for the box, leading to a 3d $N=1$ theory. This time the dual orbifold is a $G_2$ orbifold while the lift to M-theory has to be performed via an $SU(4)$ SLAG 4-cycle in ${\mathbb{C}}^4$. Therefore this non-chiral cube requires the same techniques as the SLAG 3-cycle. By simply adding another NS along 23469 we find the brane hypercube and its lift via an $SU(5)$ SLAG. The supersymmetric $d$-cycles ----------------------------- ### The $d$-cycle equations {#SectiondCycleEquation} A $d$-dimensional ‘curve’ $\Sigma^{(d)}$, embedded into $2d$-dimensional flat space ${\mathbb{R}}^{2d}$ with coordinates $x^i$ ($i=1,{\ldots\,},2d$), can be described at least locally by the zero locus of $d$ real functions $f^m(x^1,{\ldots\,},x^{2d})$: $$\Sigma^{(d)} = {\Bbb{V}}(f^1,{\ldots\,},f^d) = \{(x_1,{\ldots\,},x_{2d})\,|\, f^m(x^1,{\ldots\,},x^{2d})=0,\; m=1,{\ldots\,},d\}. \nonumber$$ If one wants to deal with a so-called supersymmetric $d$-cycle, the choice of the functions $f^m$ is highly constrained. To study these restrictions we first introduce $d$ real coordinates $\xi_i$ ($i=1,\dots ,d$) which parametrize the curve $\Sigma^{(d)}$. Furthermore we consider complex coordinates $u^i$, $u^i=x^{2i-1}+ix^{2i}$, of ${\mathbb{C}}^d$. Then the $d$-cycle can be characterized by making the complex $u^i$ functions of the real coordinates $\xi_i$, i.e. by the following embedding map $i$ from $\Sigma^{(d)}$ into ${\mathbb{C}}^d$: $$i:\Sigma^{(d)}\longrightarrow {\mathbb{C}}^d: \qquad \xi_i \longrightarrow u^i(\xi_i),\quad i=1,\ldots, d.$$ The intersection configuration (for the case $d=3$) is depicted in figure \[figureIntersection\].
Now by applying the partial derivative $\partial_{\xi_k}$ to the defining equations $f^m$ of the $d$-cycle, we get the following relations: $$\sum_{n=1}^d (f^m_{u^n} u^n_{\xi_k} + f^m_{\bar{u}^n} \bar{u}^n_{\xi_k}) =0$$ (here $f^m_{u^n}=\frac{\partial f^m}{\partial u^n}$, $u^n_{\xi_k}=\frac{\partial u^n}{ \partial\xi_k}$). These can be grouped into the following matrix expressions $$\begin{aligned} \left(\begin{array}{cccc} f^1_{u^1} & f^1_{u^2} & \dots & f^1_{u^d}\\ f^2_{u^1} & f^2_{u^2} & \dots & f^2_{u^d} \\ \dots &\dots &\dots &\dots \\ f^d_{u^1} & f^d_{u^2} & \dots &f^d_{u^d} \\ \end{array} \right)\cdot \left(\begin{array}{c} u^1_{\xi_k} \\ u^2_{\xi_k} \\ \dots \\ u^d_{\xi_k} \\ \end{array} \right) &=& (-1)^{d} \left(\begin{array}{cccc} f^1_{\bar{u}^1} & f^1_{\bar{u}^2} & \dots &f^1_{\bar{u}^d} \\ f^2_{\bar{u}^1} & f^2_{\bar{u}^2} & \dots &f^2_{\bar{u}^d} \\ \dots & \dots &\dots &\dots \\ f^d_{\bar{u}^1} & f^d_{\bar{u}^2} & \dots &f^d_{\bar{u}^d} \\ \end{array} \right)\cdot \left(\begin{array}{c} \bar{u}^1_{\xi_k} \\ \bar{u}^2_{\xi_k} \\ \dots \\ \bar{u}^d_{\xi_k} \\ \end{array} \right)\nonumber\end{aligned}$$ We will denote the left matrix by $M$ and the right matrix by $\bar{M}$, henceforth. Note that the sign in front of $\bar{M}$ depends on the dimension $d$ of the cycle. With the help of these matrices we can express the barred derivatives through the unbarred ones in the following way: $$\ \partial_k \bar{U} = (-1)^d \bar{M}^{-1}M\cdot\partial_k U = N\cdot\partial_k U .\label{lemma}$$ By definition $N$ shares the properties: 1. $N^{-1} = (-1)^d M^{-1}\bar{M} = \bar{N}$ 2. $\left|\det \, N\right| = 1$ 3. If $\;N=N^T\;\;\;\Rightarrow N\in U(d)$ 4. If further $\;\det \, N=1\;\;\Rightarrow N\in SU(d)$ Remembering that the $d$-cycle should be supersymmetric, we can ask for restrictions on the matrix $N$ following from this condition.
It is well known that the notion of supersymmetric cycles [@BBS] coincides with the notion of special Lagrangian submanifolds [@HL][^8] which can be rephrased in terms of the embedding map $i$: $d$-cycle $\longrightarrow {\mathbb{C}}^d$ and the two conditions: $$\begin{aligned} i^\ast {\Im{\mathfrak{m}}}\Omega &=& 0 \;\;\;\; {\rm volume\;\; minimizing} \nonumber\\ i^\ast \omega &=& 0 \;\;\;\; {\rm Lagrangian\;\; submanifold} \label{volmin}\end{aligned}$$ Here $\Omega$ is the complex structure of ${\mathbb{C}}^d$, which we can choose to be $\Omega=du^1\wedge du^2\wedge\dots\wedge du^d$; $\omega$ is the canonical Kähler form, $\omega = \frac{1}{2i}\sum_{i} du^i\wedge d\bar{u}^i$. As is shown in appendix \[AppendixDerivationOfTheCycleEquation\] for the case of $d$-cycles, from the first equation we derive straightforwardly that $N$ restricted to the $d$-cycle must be of unit determinant. $$\begin{aligned} \det \, N\,|_{{\Bbb{V}}(f^1,{\ldots\,},f^d)} & = & 1\end{aligned}$$ To utilize this for computations in the embedding space, we reformulate this condition slightly. If $$\begin{aligned} I({\Bbb{V}}) = \{f\in C^{\infty}({\Bbb{R}}^{2d})\,|\, f|_{\Bbb{V}} = 0 \} \nonumber\end{aligned}$$ denotes the ideal of functions vanishing on ${\Bbb{V}}(f^1,{\ldots\,},f^d)$, the above equation can be rewritten as $$\begin{aligned} \det \, N & = & 1 + \left[\,{\rm some}\; g\in I({\Bbb{V}})\,\right] \nonumber\end{aligned}$$ This kind of non-uniqueness is apparent throughout the equations, and getting a handle on it is the main obstruction to concrete computations. To keep this difference in mind, but at the same time deal with the equations as if there were no difference, we replace the equality sign by the congruence symbol ($\equiv$), i.e.
$$\begin{aligned} \det \, N & \equiv & 1.\label{detcond}\end{aligned}$$ By close inspection of the second equation in (\[volmin\]) (see again appendix \[AppendixDerivationOfTheCycleEquation\]) one is led to a further restriction on $N$, namely $$N\equiv N^T.\label{ntcond}$$ So, in this way, we have translated the conditions of having a supersymmetric cycle to restrictions on our defining equations $f^m$. In summary, all that we have done so far can be formulated in a short but important proposition which is the starting point for all further computations:\ [**Proposition:** ]{}[*A $d$-cycle, represented as an intersection of $d$ real valued functions, is supersymmetric iff $N\equiv N^{T}$ and $\det \, N\equiv 1$.*]{}\ It will turn out to be very useful to reformulate the last condition $N\equiv N^{T}$ in a different, but equivalent way. Namely, it is not difficult to show that the requirement $N\equiv N^T$ is equivalent to the condition that the matrix $MM^+$ should be real modulo $I({\Bbb{V}})$. To prepare this reformulation we remark that, through the split of the coordinates of ${\Bbb{R}}^{2d}$ into the coordinates of ${\mathbb{C}}^d$, they inherit an intrinsic meaning as the spatial and momentum variables of symplectic geometry. This is given by $$u^i=q^i+ip^i,$$ i.e. the real part of $u^i$ gets the meaning of a spatial coordinate whereas $p^i$ is a momentum variable. Then we are free to define the usual Poisson brackets of phase-space functions $\{ f,g\}$. This is done in the standard way as $$\{ f,g\} = \sum_{i=1}^d \left( \frac{\partial f}{\partial q^i}\frac{\partial g}{\partial p^i} -\frac{\partial f}{\partial p^i}\frac{\partial g}{\partial q^i} \right) = \sum_{i=1}^d (f_{2i-1}g_{2i}-f_{2i}g_{2i-1}),$$ where $f_{2i-1}=\frac{\partial f}{ \partial q^i}=\frac{\partial f}{\partial x^{2i-1}}$ and $f_{2i}=\frac{\partial f}{\partial p^i}=\frac{\partial f}{\partial x^{2i}}$.
Then the matrix $MM^+$ reads $$\begin{aligned} (MM^+)_{mn} =\,<\nabla\,f^m,\nabla\,f^n\,>\pm\, i\cdot\{ f^m,f^n\} .\end{aligned}$$ So $MM^+$ is a real matrix modulo $I({\Bbb{V}})$, i.e. $N\equiv N^T$, if all Poisson brackets among the defining functions $f^m$ and $f^n$ vanish: $$\{ f^m,f^n\}\equiv 0.\label{poisson}$$ So we get a more suitable set of equations for concrete calculations. On the other hand, the last equations can be understood very naturally (see appendix \[AppendixLiouville\]).\ [**Corollary:** ]{}[*A $d$-cycle, represented as an intersection of $d$ real valued functions, is supersymmetric iff $\{f^i,f^j\}\equiv 0$ and $\det \, N\equiv 1$\[Tragik\][^9].*]{} ### Supersymmetric 2-cycles Now we want to rederive the known result for the case of supersymmetric two-cycles to give a simple check of our formalism and to establish our point of view on the meaning of the defining equations of the $d$-cycle. If we look at the brane configuration, ------ --- --- --- --- --- --- --- -- -- -- ---- NS : 0 1 2 3 4 5 D4 : 0 1 2 3 6 10 ------ --- --- --- --- --- --- --- -- -- -- ---- : N=2 Hanany-Witten setup[]{data-label="HananyWitten"} we parametrize the 4-space by $u^1=x_4+ix_{10}$ and $u^2=x_5+ix_{6}$. Note that the D 4-brane positions $x^4$ and $x^5$ correspond to the $q^i$-variables, whereas the NS 5-brane positions $x^{10}$ and $x^6$ are the conjugated $p^i$ variables. Now we work out the two-cycle conditions on the two real functions $f^1=f(x^4,x^5,x^6,x^{10})$ and $f^2=g(x^4,x^5,x^6,x^{10})$.
With $$\begin{aligned} M=\left( \begin{array}{cc} f_{u^1} & f_{u^2} \\ g_{u^1} & g_{u^2} \\ \end{array}\right)\nonumber\end{aligned}$$ one can calculate the two-cycle equations which result in: $$\begin{aligned} \{f,g\}\equiv 0\;\;\; \Rightarrow \;\;\; 0&\equiv& g_4f_{10}-g_{10}f_4+g_5f_6-g_6f_5,\\ \det \, N\equiv 1\;\;\; \Rightarrow \;\;\; 0&\equiv& \; g_6f_4-g_{10}f_5-g_4f_6+g_5f_{10}.\end{aligned}$$ In analysing these equations it is a simple task to verify that all functions $f$ and $g$ satisfying $$\begin{aligned} f_6 &=&g_{10}\;\;\;\;\;\;f_{10} = - g_6\nonumber\\ f_4 &=&g_5 \;\;\;\;\;\;\;\;\, f_5 = - g_4\nonumber\end{aligned}$$ do solve our equations. These are the “Cauchy-Riemann” differential equations which state that $f$ and $g$ are the real and imaginary part of a holomorphic function in the variables $v=x_4+ix_5$ and $s=x_6+ix_{10}$, respectively. Then we choose as our coordinates $$\begin{aligned} v &=& x_4+i\cdot x_{5}\nonumber \\ t &=& e^{-s} = e^{-(x_6+i\cdot x_{10})}\nonumber \end{aligned}$$ to respect the compactness of the $x_{10}$ direction. Hence $f$ and $g$ fit into a holomorphic function in the two variables $v$ and $t$. Up to now we have shown that there is a subclass of solutions to our equations which coincides with the well known holomorphicity argument. But as they stand our equations are more general, and we have to think about this difference. Nevertheless, in a certain sense every geometrical two-cycle should still be described by a holomorphic function. That is to say, in the whole variety of functions specifying the same geometrical two-cycle there is a distinguished holomorphic function, i.e. there is a lot of redundancy in the description, which could perhaps be exploited for constructing solutions. Therefore we are looking for a way to mod out this redundancy. This will be done by imposing some additional differential constraints in a generic way. To do that, recall the following properties of the matrix $N$: 1.
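The Cauchy-Riemann class of solutions can be verified symbolically. The following sympy sketch takes the sample holomorphic function $F=v\cdot s$ (chosen purely for illustration) and checks that both two-cycle equations vanish identically:

```python
# Sympy check: real and imaginary parts of a holomorphic function of
# v = x4 + i*x5 and s = x6 + i*x10 solve the two-cycle equations.
import sympy as sp

x4, x5, x6, x10 = sp.symbols('x4 x5 x6 x10', real=True)
F = (x4 + sp.I * x5) * (x6 + sp.I * x10)   # sample holomorphic F = v*s
f, g = sp.re(F), sp.im(F)

d = sp.diff

# Poisson-bracket condition {f,g} = 0:
eq1 = d(g, x4)*d(f, x10) - d(g, x10)*d(f, x4) + d(g, x5)*d(f, x6) - d(g, x6)*d(f, x5)
# det N = 1 condition:
eq2 = d(g, x6)*d(f, x4) - d(g, x10)*d(f, x5) - d(g, x4)*d(f, x6) + d(g, x5)*d(f, x10)

print(sp.simplify(eq1), sp.simplify(eq2))   # 0 0
```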
By definition: $N=(-1)^d \bar{M}^{-1} M$. 2. As shown in appendix \[AppendixDerivationOfTheCycleEquation\], $N$ can be factorized through $a \in U(d)$ $$\begin{aligned} N=\lambda^{-1} = \bar{a} a^{-1}.\nonumber \end{aligned}$$ Note that this does not imply that $M$ must be unitary. But we want to choose $M$ as close as possible to being unitary. We hope that this resolves the redundancy problem. A unitary matrix $M$ does have orthonormalized rows and columns. Thus, to begin with the construction of a unitary $M$, we want to orthogonalize the rows and columns of $M$. Since we know the expression for $MM^+$ in geometrical terms, this is straightforward. We simply have to require orthogonality of the gradients of $F$ and $G$ $$\begin{aligned} <\nabla\,F,\nabla\,G >\,=\,0.\nonumber\end{aligned}$$ Of course one has to be careful in doing that. One has to ensure that by requiring these additional properties, the common zero set is unchanged. In fact, this can be done without getting into trouble. If the lengths of both gradients coincide, our equations indeed reduce to the Cauchy-Riemann equations. There are problems in generalizing this nice-looking observation to higher dimensional cycles, but we hope that this construction works there, too. ### Supersymmetric 3-cycles Recall the characteristic $N=1$ brane configuration:\ Since the NS branes together with the D 4-branes build an $N=2$ subsystem, two conjugated $(q,p)$ pairs are given by $(q_1,p_1)=(x^{10},x^{3})$ and $(q_2,p_2)=(x^6,x^5)$. Then a single NS brane as well as a single D 4-brane is automatically a supersymmetric 3-cycle, namely a supersymmetric 2-cycle in the $x^3$-$x^5$-$x^6$-$x^{10}$ space times the line $x^7={\rm const}$ in $x^4$-$x^7$ space. The last pair of coordinates is fixed by the requirement that also the NS’ brane is a supersymmetric 3-cycle: $(q_3,p_3)=(x^4,x^7)$. Note that with this choice the three coordinates $x^3$, $x^5$, $x^7$ of the D 4-branes are all momentum variables.
In summary, the complex structure of ${\mathbb{C}}^3$ takes the following form: $$\begin{aligned} u^1 &=& x^{10}+i\cdot x^3\nonumber \\ u^2 &=& x^4 +i\cdot x^7\nonumber \\ u^3 &=& x^6 +i\cdot x^5\nonumber\end{aligned}$$ Now we can work out the supersymmetric 3-cycle conditions on the three functions $f^1=f(x^3,x^4,x^5,x^6,x^7,x^{10})$, $f^2=g(x^3,x^4,x^5,x^6,x^7,x^{10})$ and $f^3=h(x^3,x^4,x^5,x^6,x^7,x^{10})$. First, the three Poisson brackets are given by the following set of equations: $$\begin{aligned} 0\equiv\{ f,g\}&=&f_{10}g_3-f_3g_{10}+f_4g_7-f_7g_4+f_6g_5-f_5g_6, \nonumber\\ 0\equiv\{ f,h\}&=&f_{10}h_3-f_3h_{10}+f_4h_7-f_7h_4+f_6h_5-f_5h_6, \nonumber\\ 0\equiv\{ g,h\}&=&g_{10}h_3-g_3h_{10}+g_4h_7-g_7h_4+g_6h_5-g_5h_6. \label{poisson3}\end{aligned}$$ The $\det \, N\equiv 1$ equation takes the following form: $$\begin{aligned} 0&\equiv&\left(f_4g_6-f_7g_5-f_6g_4+f_5g_7 \right)h_{10} +\left(g_{10}f_6-g_{3}f_5-f_{10}g_6+f_{3}g_5\right)h_4\nonumber\\ &+&\left(f_{10}g_4-f_{3}g_7-g_{10}f_4+g_{3}f_7\right)h_6 +\left(g_{3}f_4+g_{10}f_7-f_{10}g_7-f_{3}g_4\right)h_5\nonumber\\ &+&\left(f_{10}g_5+f_{3}g_6-g_{3}f_6-g_{10}f_5\right)h_7 +\left(f_6g_7-f_7g_6-f_4g_5+f_5g_4 \right)h_{3}.\label{detn3}\end{aligned}$$ For a supersymmetric 3-cycle these four equations must vanish, not necessarily identically, but in general only on the 3-cycle, i.e. modulo the ideal of vanishing functions determined by $f$, $g$ and $h$. One particular class of solutions of these equations is of course given by all 3-cycles which are a supersymmetric 2-cycle in the $x^3$-$x^5$-$x^6$-$x^{10}$ space times the line $x^7={\rm const}$ in $x^4$-$x^7$ space: $\Sigma^{(3)}=\Sigma^{(2)}\times {\Bbb{R}}$. The corresponding choice of functions is $f=f(x^3,x^5,x^6,x^{10})$, $g=g(x^3,x^5,x^6,x^{10})$, $f$ and $g$ being real and imaginary parts of a holomorphic function $F(x^3+ix^5,x^6+ix^{10})$, and $h=x^7-{\rm const}$.
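This product class of solutions can again be checked symbolically. The sympy sketch below takes the sample holomorphic function $F=(x^3+ix^5)(x^6+ix^{10})$ (an arbitrary illustrative choice) together with $h=x^7-{\rm const}$ and verifies that the three Poisson brackets and the $\det N$ equation all vanish identically:

```python
# Sympy check of the 3-cycle equations for the product solution
# Sigma^(2) x R, with f, g from a sample holomorphic F and h = x7 - c.
import sympy as sp

x3, x4, x5, x6, x7, x10, c = sp.symbols('x3 x4 x5 x6 x7 x10 c', real=True)
F = (x3 + sp.I * x5) * (x6 + sp.I * x10)   # sample holomorphic function
f, g = sp.re(F), sp.im(F)
h = x7 - c

D = sp.diff

def pb(a, b):
    # Poisson bracket with pairs (q,p) = (x10,x3), (x6,x5), (x4,x7)
    return (D(a, x10)*D(b, x3) - D(a, x3)*D(b, x10)
            + D(a, x4)*D(b, x7) - D(a, x7)*D(b, x4)
            + D(a, x6)*D(b, x5) - D(a, x5)*D(b, x6))

# The det N = 1 condition, transcribed term by term:
detN = ((D(f,x4)*D(g,x6) - D(f,x7)*D(g,x5) - D(f,x6)*D(g,x4) + D(f,x5)*D(g,x7))*D(h,x10)
      + (D(g,x10)*D(f,x6) - D(g,x3)*D(f,x5) - D(f,x10)*D(g,x6) + D(f,x3)*D(g,x5))*D(h,x4)
      + (D(f,x10)*D(g,x4) - D(f,x3)*D(g,x7) - D(g,x10)*D(f,x4) + D(g,x3)*D(f,x7))*D(h,x6)
      + (D(g,x3)*D(f,x4) + D(g,x10)*D(f,x7) - D(f,x10)*D(g,x7) - D(f,x3)*D(g,x4))*D(h,x5)
      + (D(f,x10)*D(g,x5) + D(f,x3)*D(g,x6) - D(g,x3)*D(f,x6) - D(g,x10)*D(f,x5))*D(h,x7)
      + (D(f,x6)*D(g,x7) - D(f,x7)*D(g,x6) - D(f,x4)*D(g,x5) + D(f,x5)*D(g,x4))*D(h,x3))

checks = [pb(f, g), pb(f, h), pb(g, h), detN]
print([sp.simplify(e) for e in checks])   # [0, 0, 0, 0]
```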
As a first and very simple check we can verify that flat, parallel M5-branes in their three possible asymptotic limits, namely being NS, NS’ or D4-branes, are indeed supersymmetric 3-cycles. For example consider the $n$ parallel NS 5-branes, positioned at $x^6_i$, $x^7_i$ and $x^{10}_i$ ($i=1,\dots ,n$). Hence the three functions $f$, $g$ and $h$ are given as $$\begin{aligned} f&=&\prod_{i=1}^n(x^6-x^6_i),\nonumber\\ g&=&\prod_{i=1}^n(x^7-x^7_i),\nonumber \\ h&=&\prod_{i=1}^n(x^{10}-x^{10}_i).\end{aligned}$$ It is easy to show that all eqs.(\[poisson3\]) and (\[detn3\]) are identically zero. The same is of course true for $n'$ parallel NS’ 5-branes and $k$ parallel D 4-branes. In the following sections we will discuss more complicated brane intersections and bent brane configurations. Supersymmetric 3-cycles for intersecting branes and $N=1$ brane boxes --------------------------------------------------------------------- ### Branes as quaternionic coordinates In the following sections we would like to construct the defining equations $f$, $g$ and $h$ for those supersymmetric 3-cycles which correspond to intersecting NS, NS’ and D 4-branes, and in particular for those which correspond to $N=1$ brane box configurations. For this purpose we would like to introduce three types of ‘coordinates’, called $s$, $s'$ and $v$, which denote the asymptotic positions in ${\Bbb{C}}^3$ of the NS, NS’ and D 4-branes respectively. These ‘coordinates’ should be on the same footing as the complex variables $s=x^6+ix^{10}$ and $v=x^4+ix^5$ of the $N=2$ (NS-D4) brane configurations. To achieve this aim we will now extend the dimension of the space by including also the directions $x^2$ and $x^8$. This means that we are now dealing with supersymmetric 4-cycles which are embedded into the space ${\mathbb{C}}^4$, which is spanned by the directions (2,3,4,5,6,7,8,10).
All our branes now fill 4 dimensions of this eight dimensional space: their world volumes completely fill $x^2$, and they are all positioned at $x^8=0$. That means that the 4-cycles which correspond to the brane boxes of the NS, NS’ and D 4-branes are in fact nothing else than supersymmetric 3-cycles times the line $x^8=0$. As discussed in detail above, we could add yet another type of NS-branes, called NS” branes, with world volumes along the (3,4,6,8)-directions and positions in the (2,5,7,10) space. Considering intersections of all four types of branes (NS-NS’-NS”-D4) one can construct brane cube models, where the D 4-branes are now finite in the directions $x^2$, $x^4$ and $x^6$. These brane cubes provide two-dimensional gauge theories with (1,1) supersymmetry. A generic brane cube configuration corresponds to a supersymmetric 4-cycle which is not a direct product of a supersymmetric 3-cycle times ${\Bbb R}$. The positions of the branes in ${\Bbb{C}}^4$ can now be nicely described by introducing quaternionic numbers. A general quaternion $q\in {\Bbb{H}}$ has the structure $$q=q^0\sigma_0+q^1\sigma_1+q^2\sigma_2+q^3\sigma^3,$$ where $\sigma_0={{\1I}}_2$ and the $\sigma_i$ ($i=1,2,3$) are the Pauli matrices, satisfying $\sigma_i\sigma_j=\delta_{ij}\sigma_0+i\epsilon_{ijk}\sigma_k$. Clearly, a quaternion is zero, $q=0$, if all its components $q^i$ ($i=0,\dots ,3$) are vanishing. Alternatively, we can also define the quaternions via two complex numbers $z_1=q^0+iq^1$ and $z_2=q^2-iq^3$ as $q=z_1+jz_2$, where $i=\sigma_1$, $j=\sigma_2$ and $k=i\cdot j=\sigma_3$.
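The decomposition $q=z_1+jz_2$ can be checked with sympy's built-in quaternions, which use the abstract Hamilton units $i^2=j^2=k^2=-1$, $i\cdot j=k$ (the same algebra the Pauli-matrix realization is meant to encode):

```python
# Verify q = z1 + j*z2 with z1 = q0 + i*q1, z2 = q2 - i*q3.
import sympy as sp
from sympy.algebras.quaternion import Quaternion

q0, q1, q2, q3 = sp.symbols('q0 q1 q2 q3', real=True)

i = Quaternion(0, 1, 0, 0)
j = Quaternion(0, 0, 1, 0)
k = Quaternion(0, 0, 0, 1)

z1 = Quaternion(q0, q1, 0, 0)    # z1 = q0 + i*q1
z2 = Quaternion(q2, -q3, 0, 0)   # z2 = q2 - i*q3

q = z1 + j * z2
print(q == Quaternion(q0, q1, q2, q3))   # True
print(i * j == k)                        # True
```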
Now we can associate to every brane a particular quaternion $q$, which describes its asymptotic position in ${\Bbb{C}}^4$, and hence is a function of the position variables of every brane: $$\begin{aligned} NS\,:\; \quad q_{NS}&=&q(x^6,x^7,x^8,x^{10}),\nonumber\\ NS': \quad q_{NS'}&=&q(x^4,x^5,x^8,x^{10}),\nonumber\\ D4\;\,:\, \quad q_{D4}\,\,&=&q(x^3,x^5,x^7,x^{8}).\end{aligned}$$ The four defining equations $f^m(x^2,x^3,x^4,x^5,x^6,x^7,x^8,x^{10})$ ($m=1,\dots ,4$) for the 4-cycle can now simply be written in terms of a single quaternionic function $F(q_{NS},q_{NS'},q_{D4})$: $$F(q_{NS},q_{NS'},q_{D4}) = f^1(x^i)+if^2(x^i)+jf^3(x^i)+kf^4(x^i).$$ Of course, for a general function $F(q_{NS},q_{NS'},q_{D4})$ one still has to verify whether the 4-cycle is supersymmetric. Unlike the case of supersymmetric 2-cycles, where every holomorphic function corresponds to a supersymmetric 2-cycle, this is not automatic. Specifically, as discussed in section \[SectiondCycleEquation\], the supersymmetry conditions are given by the requirement that the six Poisson brackets $\{ f^m,f^n\}$ plus $(\det N-1)$ have to vanish (modulo the ideal of vanishing functions determined by the zero locus of the $f^m$). In addition, since we want the supersymmetric 4-cycle $\Sigma^{(4)}$ to be of the form $\Sigma^{(4)}=\Sigma^{(3)}\times{\mathbb{R}}_{x^8=0}$, the function $F(s,s',v)$ has to be chosen in such a way that the common zero locus of the $f^m$ always contains the line $x^8=0$. In principle it is also possible to obtain the three 3-cycle equations $f$, $g$ and $h$ by solving one of the four equations $f^m$ with respect to $x^8$ and substituting the result into the remaining equations. To understand this procedure of constructing supersymmetric 3-cycles let us first consider the case of classical brane configurations which are not bent by quantum effects.
To describe flat branes we introduce the following quaternionic coordinates in analogy to the complex variables $s$ and $v$[^10]: $$\begin{aligned} NS\,: \quad s\,&=&x^6+ix^{10}+jx^7-kx^8,\nonumber\\ NS': \quad s'&=&x^{4}+ix^5+jx^{10}-kx^8,\nonumber\\ D4\;\;: \quad v\,&=&x^3+ix^5+jx^7-kx^8.\end{aligned}$$ A single NS brane is a supersymmetric 4-cycle simply defined by the equation $F=s=0$ and likewise for the other branes. Next consider the triple intersection of $n$ parallel NS branes with $n'$ parallel NS’ branes and $k$ parallel D 4-branes. This configuration corresponds to un-bent NS and NS’ branes. In the language of field theory it leads to finite $N=1$ gauge theories. The associated quaternionic function $F$ is given by the following polynomial: $$F(s,s',v)=\prod_{i=1}^n(s-s_i)\prod_{j=1}^{n'}(s'-s_j')\prod_{l=1}^k(v-v_l).$$ Here $s_i$, $s_j'$ and $v_l$ are constant quaternionic numbers with zero $\sigma_3$-component, which denote the positions of the three types of branes. It is a tedious but not difficult calculation to show that this function $F$ corresponds to a supersymmetric 4-cycle. However, note that the supersymmetric 4-cycle equations are not identically satisfied, but only on the branes themselves, i.e. only modulo the ideal of vanishing functions of the $f^m$. ### Uniform Bending – Sewing of $N=2$ models {#sectionUniformBending} Now we will construct the supersymmetric 3-cycles which correspond to those $N=1$ brane boxes which can be obtained via the sewing or superposition of two $N=2$ subsystems. As explained in section (3.1), this means that all the NS branes as well as all the NS’ branes are bent in a uniform way. In general, the bending of the NS and NS’ branes should be parametrized by the $x^3$ position of the D 4-branes, where $x^3$ is nothing else than the parameter which is associated to the Coulomb branch in three dimensions.
In addition, we roughly expect that the bending of the NS brane is encoded in the functions $x^6(x^3)$ and $x^{10}(x^3)$, and analogously, the bending of the NS’ branes is determined by $x^4(x^3)$ and $x^{10}(x^3)$. Since $x^3$ takes over, in four dimensions, the role of $\Lambda_{QCD}$, the quantities $x^4$, $x^6$ and $\cos{x^{10}}$ ($x^{10}$ is periodic!) should be logarithmic functions of $x^3$. For the case of uniform bending we can be much more explicit. Consider first the uniform bending of the NS brane caused by $k$ D 4-branes. From the $N=2$ models we know that the perturbative bending is described by a two-dimensional Laplace equation with the holomorphic, logarithmic solution $x^6+ix^{10}=k\log (x^3+ix^5)$. In the same way, for the other $N=2$ subsystem, NS’– k’D4, the following perturbative solution for the bending holds: $x^4+ix^{10}=k'\log ( x^7+ix^3)$. This behaviour now suggests that we define the following quaternionic coordinates, which correctly describe the asymptotic positions of the bent branes: $$\begin{aligned} NS: &{}&\quad t\;=e^{x^6}\cos x^{10}+ie^{x^6}\sin x^{10}, \nonumber\\ NS': &{}&\quad t'=e^{x^4}\cos x^{10}+je^{x^4}\sin x^{10}, \nonumber\\ D4: &{}&\quad v\,=x^3+ix^5+jx^7-kx^8.\label{ttv}\end{aligned}$$ Sewing together the perturbative bending of the two $N=2$ subsystems provides us with the following quaternionic function for the supersymmetric 3-cycle, which corresponds to the simple brane box shown in figure \[figureN2BraneBox\]. For $k=k'=1$ the quaternionic function simply takes the form $$F(t,t',v)=\lbrack t-v\rbrack\lbrack t'-v\rbrack =0.\label{n=1bend}$$ Similarly, one can write down an expression for arbitrary $k$ and $k'$. It is possible to show that this function satisfies the conditions for a supersymmetric cycle.
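The logarithmic bending profile quoted above is indeed harmonic. As a small check of ours (using sympy, with the real and imaginary parts of $k\log(x^3+ix^5)$ written out explicitly), both components solve the two-dimensional Laplace equation:

```python
import sympy as sp

x3, x5, kpar = sp.symbols('x3 x5 k', real=True, positive=True)

# real and imaginary parts of k*log(x3 + i x5), i.e. x^6 and x^10
u = kpar * sp.log(x3**2 + x5**2) / 2    # x^6 = k log|x3 + i x5|
w = kpar * sp.atan2(x5, x3)             # x^10 = k arg(x3 + i x5)

lap = lambda f: sp.diff(f, x3, 2) + sp.diff(f, x5, 2)
assert sp.simplify(lap(u)) == 0
assert sp.simplify(lap(w)) == 0
```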
The vanishing locus which is defined by $F(t,t',v)$ is a true 3-cycle; it consists of two branches, namely the superposition of the curve $t-v^k=0$, which is a 2-cycle in the $3-5-6-10$-directions times the $x^4$-axis, together with the curve $t'-v^{k'}=0$, which represents a 2-cycle, now in the directions $3-4-7-10$, times the $x^6$-axis. After having understood the simplest $N=1$ brane box with uniform bending (see figure \[figureN2BraneBox\]), we can now construct the non-perturbative, supersymmetric 3-cycle equations which describe the generic $N=1$ brane box situation with uniform bending. It is given by the superposition of two $N=2$ subsystems: the first one consists of $n$ NS 5-branes with $k_\alpha$ D 4-branes suspended between the NS branes (see figure \[figureBraneConfiguration\]). The second $N=2$ subsystem has the same structure, but now with $n'$ NS’ branes and $k_{\alpha '}'$ suspended D 4-branes. After sewing together these two subsystems, the $N=1$ brane box has the form shown in figure \[figureUniformBraneBoxWeb\]. Now recall that, non-perturbatively, every $N=2$ system of this kind is characterized by the complex 2-cycle polynomial eq.(\[n=2pol\]). Then the sewing procedure simply corresponds to the multiplication of the two $N=2$ polynomials, where we replace the complex variables by the corresponding quaternionic variables. In this way we get a supersymmetric 3-cycle which consists of two branches, namely the direct sum $$\Sigma^{(3)}_{n,n',k_\alpha,k_{\alpha '}'}= (\Sigma^{(2)}_{n,k_\alpha} \times{\mathbb{R}})\oplus (\Sigma^{(2)}_{n',k_{\alpha '}'}\times{\mathbb{R}}) .$$ Note that the two superposed 3-cycles have a common volume in the 3-10 space.
In general the quaternionic 3-cycle equations will have the following structure: $$\Sigma^{(3)}:\quad F(t,t',v)=\lbrack p_{k_0}(v)t^n+\dots +p_{k_n}(v)\rbrack\lbrack p_{k_0'}(v){t'}^{n'}+\dots +p_{k_{n'}'}(v)\rbrack .\label{n=3pol}$$ This expression can be expanded, and one obtains a polynomial of the following structure: $$F(t,t',v)=\sum_{\alpha=0}^n\sum_{\alpha'=0}^{n'}p_{k_\alpha}(v) p_{k_{\alpha '}'}(v) \, t^{n-\alpha}\, {t'}^{n'-\alpha '}.\label{unifpol}$$ Note that the degree of the polynomial in $v$ in front of each term $t^{n-\alpha}\, {t'}^{n'-\alpha '}$ precisely agrees with the number of D 4-branes in each box $\lbrack \alpha,\alpha '\rbrack$. For example, the sewing of two pure $N=2$ gauge theories with $G=SU(k)$ and $G'=SU(k')$ leads to an $N=1$ gauge theory with $N_c=N_f=k+k'$ (see figure \[figureUniformBending\]). The corresponding 3-cycle equations are then simply given in terms of the product of two Seiberg-Witten elliptic curves of genus $(k-1)$ and $(k'-1)$, respectively. This strongly suggests that the instanton numbers of the pure $N=2$ Yang-Mills theory with gauge group $SU(k)$ are intimately related to those of SUSY QCD with $G=SU(2k)$ and $N_f=2k$. At the end of this section let us compute the perturbative running of the $N=1$ gauge coupling constant. A priori we deal with two different Coulomb branches, parametrized by $x^3+ix^5$ and by $x^3+ix^7$, respectively. In the following we will consider the common direction, $x^3$, and freeze the other directions, i.e. $x^5=x^7=0$. Now consider the box $\lbrack \alpha, \alpha '\rbrack$ with the corresponding gauge group $SU(k_\alpha+k_{\alpha '}')$. From eq.(\[n=1bend\]) we derive that $$\begin{aligned} x^4_{\alpha '+1}-x^4_{\alpha '}&=& L+(k_{\alpha '+1}'+k_{\alpha '-1}'- 2k_{\alpha '}')\log x^3,\nonumber\\ x^6_{\alpha +1}-x^6_{\alpha }&=& L+({k}_{\alpha +1}+{k}_{\alpha -1}- 2{k}_{\alpha })\log x^3,\end{aligned}$$ where $L$ is the classical distance between the NS and NS’ branes.
Then, using eq.(\[n=1gaugec\]), the gauge coupling constant exhibits the following running behaviour: $$\begin{aligned} \frac{1}{g_{\alpha,\alpha'}^2}&=& ({g_s})^{-1}\biggl(L^2+L( k_{\alpha '+1}'+k_{\alpha '-1}'+ {k}_{\alpha +1}+{k}_{\alpha -1}-2k_{\alpha '}'-2{k}_{\alpha })\log x^3 \nonumber\\ &+&(k_{\alpha ' +1}'+k_{\alpha '-1}'- 2k_{\alpha '}')({k}_{\alpha +1}+{k}_{\alpha -1}- 2{k}_{\alpha })(\log x^3)^2 \biggr). \label{n=1gaugerun}\end{aligned}$$ Since $N_c=k_\alpha+k_{\alpha '}'$ and $N_f=k_{\alpha '+1}'+k_{\alpha '-1}'+ k_{\alpha '}'+{k}_{\alpha +1}+{k}_{\alpha -1}+ {k}_{\alpha }$, the coefficient in front of $\log x^3$ precisely agrees with the one-loop $N=1$ $\beta$-function coefficient $b_{N=1}=-3N_c+N_f$[^11]. ### General $N=1$ brane boxes In the last section we already discussed a quite large class of $N=1$ gauge theories, namely those $N=1$ models with $N_f\geq N_c$ which can be obtained by sewing $N=2$ brane configurations. On the other hand, $N=1$ gauge theories with $N_f<N_c$, like pure $N=1$ Yang-Mills, and also models with chiral fermions are more general and cannot be obtained by the $N=2$ sewing procedure. Of course these models are very interesting for studying dynamical supersymmetry breaking and the effect of anomalies. In general, we expect that a brane box which corresponds in field theory to a model without a vacuum at finite values of the moduli, like supersymmetric QCD with $0<N_f<N_c$, leads to a 3-cycle which does not satisfy the minimal area requirement. In this case the three Poisson brackets may still be zero, i.e. $N\equiv N^T$, but $Re(\det \, N)$ is non-vanishing. Similarly, in chiral $N=1$ gauge theories with dynamical supersymmetry breaking we expect a stable non-supersymmetric ground state. That is, the determinant requirement will be satisfied, while the cycle will no longer be a Lagrangian submanifold. If furthermore the model is anomalous, the 3-cycle should not exist at all.
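The matching of the $\log x^3$ coefficient with $b_{N=1}=-3N_c+N_f$ can be verified symbolically. The following sympy snippet is our own check; the variable names abbreviate $k_{\alpha\pm 1}$, $k'_{\alpha'\pm 1}$, and so on:

```python
import sympy as sp

ka, kap1, kam1 = sp.symbols("k_a k_ap1 k_am1")   # k_alpha, k_{alpha +- 1}
kb, kbp1, kbm1 = sp.symbols("k_b k_bp1 k_bm1")   # k'_{alpha'}, k'_{alpha' +- 1}

# coefficient of log x^3 in the running of 1/g^2 (up to 1/g_s and L)
coeff = kbp1 + kbm1 + kap1 + kam1 - 2*kb - 2*ka

Nc = ka + kb
Nf = kbp1 + kbm1 + kb + kap1 + kam1 + ka

# one-loop N=1 beta-function coefficient b = -3 Nc + Nf
assert sp.simplify(coeff - (-3*Nc + Nf)) == 0
```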
In the following we would like to propose a specific ansatz for the 3-cycle equations for a general $N=1$ brane box model. We will again use the quaternionic formalism with quaternions $t$, $t'$ and $v$ (see eq.(\[ttv\])). Motivated by the previous discussions, our ansatz will consist of a polynomial in these variables, where the degree of the polynomial in $t$ ($t'$) corresponds to the number of NS (NS’) branes in the corresponding brane box. Hence, for a general brane box as shown in figure (\[figureGeneralBraneBoxWeb\]) the quaternionic 3-cycle equations are assumed to take the following form: $$\Sigma^{(3)}:\quad F(t,t',v)=\sum_{\alpha=0}^n\sum_{\alpha'=0}^{n'}p_{k_{\alpha,\alpha '}}(v) \, t^{n-\alpha}\, {t'}^{n'-\alpha '}.\label{n=3pol1}$$ Here $p_{k_{\alpha,\alpha '}}(v)$ is a polynomial in $v$ whose degree is given by the number $k_{\alpha,\alpha '}$ of D 4-branes in each box $\lbrack \alpha,\alpha '\rbrack$. As already said, for a brane box which corresponds in field theory to an anomaly-free gauge theory with a supersymmetric vacuum, one should be able to prove [@workinpro] that this polynomial provides a supersymmetric 3-cycle. At the moment, however, it is not possible for us to show this in general; the main technical difficulty is the observation that the supersymmetric 3-cycle equations must be satisfied only modulo the ideal $I({\Bbb{V}})$ of functions vanishing on the 3-cycle ${\Bbb{V}}(f,g,h)$. Note, however, that in the case of uniform bending the polynomial eq.(\[n=3pol1\]) takes the form of eq.(\[unifpol\]), namely it factorizes as in eq.(\[n=3pol\]), and the supersymmetric 3-cycle equations are satisfied. A particularly interesting case is pure supersymmetric QCD with gauge group $SU(k)$. Here the 3-cycle polynomial should have the following structure: $$F(t,t',v)= \sum_{\alpha=0}^2\sum_{\alpha'=0}^{2} \, t^{2-\alpha}\, {t'}^{2-\alpha '}+p_k(v) \, t \, t',$$ where $p_k(v)$ is a polynomial in $v$ of degree $k$.
For finite $R_3^{IIB}$, which includes the decompactification limit to four dimensions, $R_3^{IIB}\rightarrow\infty$, there exists a supersymmetric vacuum in field theory, such that the supersymmetric 3-cycle equations should be satisfied for this ansatz. On the other hand, in the 3-dimensional limit, $R_3^{IIB}\rightarrow 0$, the supersymmetric 3-cycle equations should be violated, since there is no supersymmetric vacuum in 3-dimensional pure Yang-Mills gauge theory [@workinpro]. Conclusions =========== We have shown that SUSY 3-cycles play a similar role in $N=1$ SUSY gauge theories as the Seiberg-Witten curve in $N=2$, in the sense that their geometry encodes the holomorphic information about the gauge theory. In particular, we expect the superpotential to correspond to the volume, and the couplings on the Coulomb branch (if present) to the periods of the cycle [@workinpro]. We were able to construct these cycles for gauge theories that satisfy the uniform bending requirement of [@gimongremm]. The tools we used in establishing these cycles should be useful for the more general cases as well.\ See the footnote on page .\ [**Acknowledgements:**]{} Work partially supported by the E.C. project ERBFMRXCT960090 and the Deutsche Forschungsgemeinschaft. We would like to thank Douglas Smith for collaboration during early stages of this work. In addition we acknowledge useful discussions with A. Hanany, A. Krause, Y. Oz, R. Reinbacher, S. Theisen and A. Zaffaroni.
Appendix : Derivation of the d-cycle equations {#AppendixDerivationOfTheCycleEquation .unnumbered} ============================================== Consider the embedding map $i:d-{\rm cycle}\longrightarrow M^{2d}_{\mathbb{R}}$ and the two conditions: $$\begin{aligned} i^\ast {\Im{\mathfrak{m}}}\Omega &=& 0 \;\;\;\; {\rm volume\;\; minimizing} \nonumber\\ i^\ast \omega &=& 0 \;\;\;\; {\rm Lagrangian\;\; submanifold} \nonumber \end{aligned}$$ With $\Omega=du^1\wedge\ldots\wedge du^d$ the requirement of minimal volume reads $$\begin{aligned} 0 = i^\ast{\Im{\mathfrak{m}}}\Omega&=&{\Im{\mathfrak{m}}}(du^1(\xi_1,{\ldots},\xi_d)\wedge \ldots\wedge du^d(\xi_1,{\ldots},\xi_d))\nonumber\\ &=&{\Im{\mathfrak{m}}}( \epsilon_{i_1{\ldots}i_d}u^{i_1}_{\xi_1}u^{i_2}_{\xi_2}{\ldots}u^{i_d}_{\xi_d})\; d\xi^1\wedge{\ldots}\wedge d\xi^d\nonumber\\ \Rightarrow\;\;\;\; 0 &=&\frac{1}{2i}(\epsilon_{i_1{\ldots}i_d}u^{i_1}_{\xi_1}{\ldots}u^{i_d}_{\xi_d} - \epsilon_{i_1{\ldots}i_d}\bar{u}^{i_1}_{\xi_1}{\ldots} \bar{u}^{i_d}_{\xi_d} )\nonumber\\ &=&\frac{1}{2i}(\epsilon_{i_1{\ldots}i_d}u^{i_1}_{\xi_1}{\ldots}u^{i_d}_{\xi_d} -\epsilon_{i_1{\ldots}i_d} N^{i_1}_{j_1}{\ldots}N^{i_d}_{j_d} u^{j_1}_{\xi_1} {\ldots}u^{j_d}_{\xi_d})\nonumber\\ &=&\frac{1}{2i}(\epsilon_{i_1{\ldots}i_d} -\epsilon_{j_1{\ldots}j_d}N^{j_1}_{i_1}{\ldots}N^{j_d}_{i_d}) u^{i_1}_{\xi_1}{\ldots}u^{i_d}_{\xi_d}\nonumber\\ &=&\frac{1}{2i}\left(1-\det\,N\right) \cdot\frac{\partial (u^1,{\ldots},u^d)}{\partial (\xi_1,{\ldots},\xi_d)} \nonumber\end{aligned}$$ which yields $$\begin{aligned} \det \, N|_{{\Bbb{V}}(f^1,{\ldots},f^n)} = 1\;\;\;\; &{\rm or\; for\; short}&\;\;\;\; \det\, N \equiv 1.\nonumber\end{aligned}$$ For the calculation of the det-equation the following relation is useful. $$\begin{aligned} \det\, N &\equiv& 1\;\;\;\Leftrightarrow\;\;\; \det\, M - (-1)^d \det\, \bar{M}\equiv 0\nonumber\end{aligned}$$ Now we turn to the second equation. 
Choosing the canonical Kähler (symplectic) form $\omega = \frac{1}{2i}\sum\limits_{i} du^i\wedge d\bar{u}^i$, the pull-back operation results in $$\begin{aligned} 0 = i^\ast\omega &=&\frac{1}{2i}\sum\limits_{i}du^i\left(\xi_1\ldots\xi_d\right) \wedge d\bar{u}^i\left(\xi_1\ldots\xi_d\right)\nonumber\\ &=& \frac{1}{2i}\sum\limits_{i}\left(\sum\limits_k u^i_{\xi_k}d\xi_k\right) \wedge\left(\sum\limits_l\bar{u}^i_{\xi_l}d\xi_l\right)\nonumber\\ &=& \frac{1}{2i}\sum\limits_{k<l}\sum\limits_{i} \left[ u^i_{\xi_k}\bar{u}^i_{\xi_l} - u^i_{\xi_l}\bar{u}^i_{\xi_k} \right] d\xi_k\wedge d\xi_l\nonumber\end{aligned}$$ $$\begin{aligned} \Rightarrow\;\;\; 0 &=& \sum\limits_{i} \left[ u^i_{\xi_k}\bar{u}^i_{\xi_l}- u^i_{\xi_l}\bar{u}^i_{\xi_k} \right] \nonumber\\ &=& \sum\limits_{i} \left[ u^i_{\xi_k}\left(\sum\limits_m N^i_m u^m_{\xi_l}\right)- u^i_{\xi_l}\left(\sum\limits_m N^i_m u^m_{\xi_k}\right) \right] \nonumber\\ &=& \sum\limits_{i} u^i_{\xi_k}\left(\sum\limits_m N^i_m u^m_{\xi_l}\right)- \sum\limits_{i} u^i_{\xi_l}\left(\sum\limits_m N^i_m u^m_{\xi_k}\right) \nonumber\\ &=& \sum\limits_{i} u^i_{\xi_k}\left(\sum\limits_m N^i_m u^m_{\xi_l}\right)- \sum\limits_{m}\left(\sum\limits_i u^i_{\xi_l} N^i_m\right) u^m_{\xi_k} \nonumber\\ &=& \sum\limits_{i} u^i_{\xi_k}\left(\sum\limits_m N^i_m u^m_{\xi_l}\right)- \sum\limits_{i}\left(\sum\limits_m u^m_{\xi_l} {N^T}^i_m\right) u^i_{\xi_k} \nonumber\\ &=& \sum\limits_{i,m}\left(N^i_m-{N^T}^i_m\right)u^i_{\xi_k} u^m_{\xi_l} \nonumber\end{aligned}$$ which is satisfied if we set $N\equiv N^T$. However, as it stands, this requirement is only sufficient. We now intend to prove that the condition is also necessary. To prove $N\equiv N^T$ we recall some facts from symplectic geometry, especially various ways of characterising Lagrangian planes in symplectic vector spaces.
The utility of this investigation rests on the simple observation that our conditions on the d-cycle to be a special Lagrangian submanifold are in fact conditions on its tangent bundle, i.e. locally on Lagrangian planes.\ To begin with, we consider a complex vector space ${\mathbb{C}}^d$ furnished with a Hermitian structure $$\begin{aligned} <x,y>\, = \sum\limits_i x_i\bar{y}_i = g(x,y) + i\,\sigma(x,y) \nonumber\end{aligned}$$ which splits into a Euclidean metric $g$ and a symplectic form $\sigma$. One can check that $\sigma$ coincides with $$\begin{aligned} \omega = \frac{1}{2i}\sum\limits_i du^i\wedge d\bar{u}^i \nonumber\end{aligned}$$ given before. Therefore we identify both objects. The two-form $\omega$ is nondegenerate, antisymmetric and bilinear. With the help of $\omega$ we can define the notion of symplectic orthogonality. The orthogonal complement of a vector subspace $E\subset {\mathbb{C}}^d$ is defined by $$\begin{aligned} E^\perp = \{ x\in {\mathbb{C}}^d \mid \omega(x,E) = 0 \} \nonumber \end{aligned}$$ In the special case that $E=E^\perp$ we call $E$ a Lagrangian plane. Obviously, on a Lagrangian plane the symplectic form restricts to zero. So we recognize the content of the constraint $i^\ast\omega=0$: it simply states that all tangent spaces to the supersymmetric cycle are Lagrangian planes embedded in the tangent space of the embedding space. Here we collect some facts: 1. $Sp(E)$ operates transitively on Lagrangian planes 2. Since $U(d)$ preserves the Hermitian form, it is contained in $Sp(E)$. 3. By $\Lambda(d)$ we denote the Gra[ß]{}mannian of Lagrangian planes 4. $\lambda\in\Lambda(d)$ is characterized by choosing an orthonormal basis $(a_1,\ldots ,a_d)$ with respect to the Euclidean metric $g$. But then it is orthonormal with respect to the Hermitian form, too: $$\begin{aligned} < a_i,a_j >\, = g(a_i,a_j) + i\,\omega(a_i,a_j) \buildrel !\over = \delta_{ij}, \nonumber \end{aligned}$$ i.e. the matrix $a=(a_1,\ldots , a_d)$ is unitary.
The other direction works, too. Hence $$\begin{aligned} \lambda\in \Lambda(d) \;\; \Leftrightarrow\;\; \exists\, a\in U(d),\; \lambda = a({\mathbb{R}}^d) \nonumber \end{aligned}$$ 5. Obviously each Lagrangian plane will be stabilized by any element in $O(d)$, i.e. we can regard the Gra[ß]{}mannian of Lagrangian planes as the quotient space $$\begin{aligned} \Lambda(d) = \frac{U(d)}{O(d)} \nonumber \end{aligned}$$ How can we define a projection from $U(d)$ onto $\Lambda(d)$? We observe that two elements $a$ and $a'$ determine the same Lagrangian plane iff $$\begin{aligned} \lambda = a({\mathbb{R}}^d) ={a'}({\mathbb{R}}^d) \Leftrightarrow\;\;\; a\bar{a}^{-1} = {a'}{\bar{a}}^{\prime -1}, \nonumber\end{aligned}$$ which is constant on the $O(d)$-orbits of the fibration. Now we can identify $\Lambda(d)$ with the image of the projection map $$\begin{aligned} \pi : U(d) &\rightarrow& \Lambda(d)\nonumber\\ a &\mapsto& \lambda = a\bar{a}^{-1} \nonumber\end{aligned}$$ By abuse of language we denote the matrix representative $a\bar{a}^{-1}$ of the Lagrangian plane $\lambda=a({\Bbb{R}}^d)$ by $\lambda$ again. But how is this matrix representative related to the geometrical object? The connection between the matrix $\lambda$ on the one side and the concrete Lagrangian plane $\lambda$ on the other side is given through the central equation $$\begin{aligned} x\in\lambda\; \Leftrightarrow\; x=\lambda\bar{x} \nonumber \end{aligned}$$ In the last formula we recognize the familiar equation (\[lemma\]).
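The matrix representative $\lambda=a\bar{a}^{-1}$ and the central equation $x=\lambda\bar{x}$ can be probed numerically. The following numpy sketch (an illustration of ours; the unitary matrix is drawn at random via a QR decomposition) checks that $\lambda$ is symmetric and unitary, and that every vector of the plane $a({\mathbb{R}}^d)$ satisfies the central equation:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# random unitary matrix a via QR decomposition of a complex Gaussian matrix
z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
a, _ = np.linalg.qr(z)

lam = a @ np.linalg.inv(np.conj(a))   # matrix representative of the plane

# lambda is symmetric and unitary
assert np.allclose(lam, lam.T)
assert np.allclose(lam @ np.conj(lam.T), np.eye(d))

# vectors x = a r with r real satisfy the central equation x = lambda x-bar
r = rng.standard_normal(d)
x = a @ r
assert np.allclose(x, lam @ np.conj(x))
```

For unitary $a$ one has $\bar{a}^{-1}=a^T$, so $\lambda=aa^T$, which makes the symmetry manifest.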
But now we know that we can represent $\lambda$ as $\lambda=a\bar{a}^{-1}$, and this straightforwardly yields $$\begin{aligned} \lambda^{+} &=& {\bar{a}^{-1^{+}}}a^{+} = {a^{-1}}^T a^{-1} = \bar{a}a^{-1} = \bar{\lambda} \nonumber\\ \Rightarrow\;\; \lambda^T &=& \lambda \nonumber\end{aligned}$$ But then we can finally conclude, by identifying $\lambda = N^{-1}$ and performing some mild manipulations, that $$\begin{aligned} N \equiv N^T\nonumber\end{aligned}$$ Appendix : Some facts from Hamiltonian dynamics {#AppendixLiouville .unnumbered} =============================================== [(Liouville)]{} Suppose $(f^1,\ldots, f^d)$ is a set of smooth functions on a symplectic manifold $M^{2d}$ that are pairwise in involution, i.e. $\{f^i,f^j\}=0$. Let $M_{\xi}$ be the joint level surface determined by a system of equations $f^1(x)=\xi_1,\ldots,f^d(x)=\xi_d$. Suppose the functions are functionally independent on $M_\xi$ (that is, the gradients of the functions are linearly independent at each point of $M_\xi$). Then the following assertions are true: 1. The level surface $M_{\xi}$ is a smooth $d$-dimensional submanifold that is invariant with respect to the flows determined by the vector fields $X_{f^i}$. 2. The connected components of $M_\xi$ are diffeomorphic to $T^k\times{\mathbb{R}}^{d-k}$. 3. If $M_{\xi}$ is compact and connected, then it is diffeomorphic to the $d$-dimensional torus $T^d$. Now we want to show how we utilize this theorem for our purposes. At first we identify the functions $(f^1,\ldots, f^d)$ with the defining equations of the intersection we are looking for. We thus start with functions $f^i$ whose gradients are linearly independent everywhere.\ Then the gradients span the normal directions to our d-cycle. Can we construct in a canonical way a set of vector fields which form a basis for the orthogonal complement of these normal directions?
To answer this question we take a look at some simple properties of these vector fields and some naturally associated objects. We start with a simple but important definition: [(Hamiltonian vector field)]{}[A Hamiltonian vector field $X$ is defined by the property $$\begin{aligned} d(X{{\raisebox{0.2ex}{$\,\lrcorner\:$}}}\omega) =0,\nonumber \end{aligned}$$ i.e. ${\mathcal{L}}_X\omega = 0$, which reflects the property of the Hamiltonian flow to preserve the symplectic form. In the case of mild topology, closedness implies exactness, and we can write $$\begin{aligned} df + X_f{{\raisebox{0.2ex}{$\,\lrcorner\:$}}}\omega = 0,\nonumber \end{aligned}$$ assigning to the Hamiltonian vector field its generating function $f$. ]{} We will show that the Hamiltonian vector fields corresponding to the $f^i$ do span the orthogonal complement mentioned before. At first we observe that $X_f\perp grad\,f$ by construction. By using the “symplectic involution” $\sigma$ given by $$\begin{aligned} \sigma&=&\left(\begin{array}{cc} 0 & -{\1I}_d \\ {\1I}_d & 0 \end{array}\right) \;\;\;\;\;\; \sigma^2 = -{\1I}_{2d} \nonumber\end{aligned}$$ we can write $X_f$ as $X_f=\sigma\cdot grad\, f$. Therefore $X_f$ is sometimes called the symplectic gradient. Now we calculate $$\begin{aligned} <grad\,f,\sigma\cdot grad\,f>\,&=&\,-<\sigma^2\cdot grad\,f, \sigma\cdot grad\,f>\nonumber\\ &=&\,-<\sigma\cdot grad\,f, \sigma^+\sigma \cdot grad\,f> \nonumber\\ &=&\,-<grad\,f,\sigma\cdot grad\,f>\nonumber\end{aligned}$$ Evidently $grad\,f$ and $X_f$ are orthogonal vectors. But is $X_{f^i}$ orthogonal to all gradients $grad\,f^j$? Now we exploit the integrability condition.
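A quick numerical illustration (ours) of the symplectic involution: with $\sigma$ built from $d\times d$ blocks, $\sigma^2=-{\1I}_{2d}$, and any vector playing the role of $grad\,f$ is orthogonal to its symplectic gradient $\sigma\cdot grad\,f$, since $\sigma$ is antisymmetric.

```python
import numpy as np

d = 3
I = np.eye(d)
Z = np.zeros((d, d))
sigma = np.block([[Z, -I], [I, Z]])   # the symplectic involution

# sigma^2 = -1 on R^{2d}
assert np.allclose(sigma @ sigma, -np.eye(2*d))

rng = np.random.default_rng(1)
g = rng.standard_normal(2*d)          # stands for grad f at a point

# <grad f, sigma grad f> = 0: the symplectic gradient is orthogonal to grad f
assert abs(g @ (sigma @ g)) < 1e-12
```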
Since all functions commute with respect to the Poisson bracket, we conclude: $$\begin{aligned} 0 \buildrel !\over = \{f^i,f^j\} &\buildrel Def \over =& \omega(X_{f^i},X_{f^j}) = X_{f^j}{{\raisebox{0.2ex}{$\,\lrcorner\:$}}}X_{f^i}{{\raisebox{0.2ex}{$\,\lrcorner\:$}}}\omega = -X_{f^j}{{\raisebox{0.2ex}{$\,\lrcorner\:$}}}df^i = -df^i(X_{f^j}) \nonumber\\ &=& -< grad\,f^i,\sigma\cdot grad\,f^j > \nonumber \end{aligned}$$ So we recognize that our integrability condition guarantees the orthogonality of the span of the $X_{f^i}$ to the normal directions. Since the normal span is linearly independent and $$\begin{aligned} <X_{f^i},X_{f^j}>\,&=&\,<\nabla\,f^i,\nabla\,f^j>,\nonumber\end{aligned}$$ the Hamiltonian vector fields are independent, too. Now we have to care for the Lagrangian property. Does the symplectic form $\omega$ vanish on the subspace spanned by the $X_{f^i}$? Reading the last formula in the other direction, $$\begin{aligned} \omega(X_{f^i},X_{f^j}) &=& \{f^i,f^j\}, \nonumber\end{aligned}$$ this is indeed the case. The next question touches the sore spot of the whole business. Is the space spanned by the $X_{f^i}$ tangent to the level surface $M_\xi$? We want to investigate the Hamiltonian flow generated by $f^i$. Obviously the Hamiltonian $f^i$ is a constant of motion. Furthermore, since the other $f^j$ are in involution with $f^i$, they are constants of motion, too. Hence the level surface $M_\xi$ is preserved by all Hamiltonian flows corresponding to the associated Hamiltonian vector fields $X_{f^i}$. But for $f^i$ to be a constant of motion, $$\begin{aligned} X_{f^j}(f^i) \buildrel ! \over = 0, \nonumber\end{aligned}$$ i.e. $X_{f^j}$ is tangent to $M_\xi$ everywhere. The level surface $M_\xi$ is a Lagrangian submanifold. [^1]: karch@physik.hu-berlin.de [^2]: luest@physik.hu-berlin.de [^3]: miemiec@physik.hu-berlin.de [^4]: An alternative, but more restricted construction of chiral $N=1$ models via orientifolds was introduced in [@Landsteiner; @alter].
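The chain of statements above (involution implies both the $\omega$-orthogonality of the Hamiltonian vector fields and their tangency to the level surfaces) can be illustrated on the simplest example of two decoupled oscillators on ${\mathbb{R}}^4$. This is a sympy sketch of ours; the sign convention is the one fixed by $X_f=\sigma\cdot grad\,f$:

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
coords = [q1, q2, p1, p2]

# two Hamiltonians in involution: decoupled harmonic oscillators
f1 = (q1**2 + p1**2) / 2
f2 = (q2**2 + p2**2) / 2

# symplectic involution for the coordinate ordering (q1, q2, p1, p2)
sigma = sp.Matrix([[0, 0, -1, 0],
                   [0, 0, 0, -1],
                   [1, 0, 0, 0],
                   [0, 1, 0, 0]])

grad = lambda f: sp.Matrix([sp.diff(f, c) for c in coords])
X = lambda f: sigma * grad(f)      # symplectic gradient X_f

# df^1(X_{f^2}) = 0: the bracket vanishes, and X_{f^2} is tangent
# to the level surfaces of f^1
pb = grad(f1).dot(X(f2))
assert sp.simplify(pb) == 0

# omega(X_{f^1}, X_{f^2}) = 0: the span of the X_{f^i} is Lagrangian
om = X(f1).dot(sigma * X(f2))
assert sp.simplify(om) == 0
```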
[^5]: Alternatively, within the so-called elliptic models, the coordinate $x^6$ is compact, such that $k_0=k_n$ and the corresponding D 4-branes are also finite and suspended between the first and the $n^{\rm th}$ NS 5-brane. [^6]: As for the $N=2$ brane models, the $N=1$ brane box models can be related to fractional branes via T-duality [@haur]. [^7]: In [@haur] they also allowed sewing in a third kind of $N=2$ system connected to diagonal lines in the box setup. This doesn’t lead to uniformly bent models anymore and should hence be treated separately. [^8]: see the footnote on page [^9]: After finishing and submitting our paper we became aware of [@HL], where it was already stated, however without detailed proofs, that a special Lagrangian submanifold is determined by the eqs. (\[detcond\], \[poisson\]) [^10]: The NS”-brane corresponds to the quaternion $s''=x^{10}+ix^5+jx^7-kx^2$. [^11]: It was already observed in refs. [@haur; @AB] that the brane box models with uniform bending lead to the correct $N=1$ $\beta$-function coefficients.
--- abstract: 'Let $\gg$ be a simple, finite-dimensional complex Lie algebra, and let $V^k(\gg)$ denote the universal affine vertex algebra associated to $\gg$ at level $k$. The Cartan involution on $\gg$ lifts to an involution on $V^k(\gg)$, and we denote by $V^k(\gg)^{\mathbb{Z}_2}$ the orbifold, or fixed-point subalgebra, under this involution. Our main result is an explicit minimal strong finite generating set for $V^k(\gg)^{\mathbb{Z}_2}$ for generic values of $k$. In the case $\gg = \gs\gl_2$, we also determine the set of nongeneric values of $k$, where this set does not work.' author: - 'Masoumah Al-Ali' title: 'The $\mathbb{Z}_{2}$-orbifold of the universal affine vertex algebra' --- Introduction ============ Starting with a vertex algebra $\mathcal{V}$ and a group $G$ of automorphisms of $\mathcal{V}$, the invariant subalgebra $\mathcal{V}^G$ is called a $G$-[*orbifold*]{} of $\mathcal{V}$. Many interesting vertex algebras can be constructed either as orbifolds or as extensions of orbifolds. A remarkable example is the Moonshine vertex algebra $V^{\natural}$, which is an extension of the $\mathbb{Z}_2$-orbifold of the lattice vertex algebra associated to the Leech lattice [@B; @FLM]. A substantial literature has evolved on the structure and representation theory of orbifolds under finite group actions including [@DVVV; @DHVW; @DM; @DLMI; @DLMII; @DRX]. It is widely believed that nice properties of $\mathcal{V}$ such as $C_2$-cofiniteness and rationality will be inherited by $\mathcal{V}^G$ when $G$ is finite. In [@M], Miyamoto proved the $C_2$-cofiniteness of $\mathcal{V}^G$ when $G$ is cyclic. Also, he recently established the rationality with Carnahan in [@CM]. Many vertex algebras depend continuously on a parameter $k$ such as the universal affine vertex algebra $V^k(\lie{g})$ associated to a simple, finite-dimensional Lie algebra $\lie{g}$. 
Another example is the $\mathcal{W}$-algebra $\mathcal{W}^k(\lie{g},f)$ associated to $\lie{g}$ together with a nilpotent element $f\in \lie{g}$. Typically, if $\mathcal{V}^k$ is such a vertex algebra depending on $k$, it is simple for generic values of $k$ but has a nontrivial maximal proper graded ideal $\mathcal{I}_k$ for special values. Often, one is interested in the structure and representation theory of the simple graded quotient $ \mathcal{V}^k / \cI_k$ at these points. This is illustrated by Frenkel and Zhu in [@FZ] to prove the $C_2$-cofiniteness and rationality of simple affine vertex algebras at positive integer level, and by Arakawa in [@A] to prove the $C_2$-cofiniteness and rationality of several families of $\mathcal{W}$-algebras. Let $\cV^k$ be such a vertex algebra and let $G \subset \text{Aut}(\cV^k)$ be a reductive group of automorphisms. In addition to determining the generic structure of $(\cV^k)^G$, it is important to determine the *nongeneric* set, where the strong generating set does not work. By a general result of [@CL], this set is always finite and consists at most of the poles of the structure constants of the OPE algebra among the generators. Determining this set explicitly is not an easy problem even using a computer, although examples where it has been worked out appear in [@ACL; @ACKL; @AL]. The primary objective of finding the nongeneric points is that it allows us to study orbifolds of the simple quotient $\cV_k$ of $\cV^k$, provided that $k$ is generic in the above sense. The quotient homomorphism $\cV^k \rightarrow \cV_k$ always restricts to a surjective homomorphism $$(\cV^k)^{G} \rightarrow (\cV_k)^{G},$$ so a strong generating set for $(\cV^k)^{G}$ descends to a strong generating set for $(\cV_k)^{G}$. In the examples we consider, the most interesting values of $k$, for which $(\cV)^G$ is highly reducible, turn out to be generic, so we obtain strong generators for $(\cV_k)^{G}$ as well. 
In this paper, we study $V^k(\lie{g})^{\mathbb{Z}_{2}}$. Here $\lie{g}$ is a simple, finite-dimensional Lie algebra and $V^k(\lie{g})$ denotes the universal affine vertex algebra of $\mathfrak{g}$ at level $k$. There is an involution of $\lie{g}$ known as the [*Cartan involution*]{}, and it gives rise to the action of $\mathbb{Z}_2$ on $V^k(\lie{g})$. Let $l = \text{rank}(\lie{g})$ and let $m$ be the number of positive roots, so that $\text{dim}(\lie{g}) = 2m+ l$. Our main result is that for any $\lie{g}$ with $\text{dim}(\lie{g})> 3$, $V^{k}(\lie{g})^{\mathbb{Z}_{2}}$ is of type $$\mathcal{W}\big(1^{m},2^{d+ \binom{d}{2}},3^{ \binom{d}{2}},4\big),$$ for generic values of $k$. Here $d = m + l$. In this notation, we say that a vertex algebra is of type $\mathcal{W}((d_1)^{n_1},\dots (d_r)^{n_r})$ if it has a minimal strong generating set consisting of $n_i$ fields in weight $d_i$, for $i=1,\dots,r$. In the case $\lie{g} = \mathfrak{sl}_2$, there is one extra field in weight $4$, so that $V^k(\lie{g})^{\mathbb{Z}_{2}}$ is of type $\mathcal{W}(1,2^{3},3,4^{2})$ for generic values of $k$. The proof of this result can be done by using a deformation argument [@LII; @CL] in the sense that, $$\lim_{k\rightarrow \infty} V^k(\lie{g})^{\mathbb{Z}_2} \cong \mathcal{H}(m) \otimes \big(\mathcal{H}(d)^{\mathbb{Z}_2}\big).$$ Here $\mathcal{H}(k)$ denotes the rank $k$ Heisenberg vertex algebra, and $\mathbb{Z}_2$ acts on the generators by multiplication by $-1$. Moreover, the limiting structure has a minimal strong generating set of the same type as $V^k(\lie{g})^{\mathbb{Z}_2}$ for generic values of $k$. So the problem of finding a minimal strong generating set for $V^k(\lie{g})^{\mathbb{Z}_2}$ is reduced to finding the minimal strong generating set for $\mathcal{H}(d)^{\mathbb{Z}_2}$ for all $d$. In the case $d = 1$, $\mathcal{H}(1)^{\mathbb{Z}_2}$ is of type $\mathcal{W}(2,4)$ by a celebrated theorem of Dong and Nagatomo [@DNI]. 
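The generic counting stated above is purely combinatorial in $l$ and $m$. A small helper (our own illustration; it encodes only the generic case $\text{dim}(\lie{g})>3$, not the extra weight-4 field present for $\mathfrak{sl}_2$) makes the type explicit:

```python
from math import comb

def orbifold_type(l, m):
    """Generic strong-generator counts for V^k(g)^{Z_2}, dim(g) > 3:
    weights {1: m, 2: d + C(d,2), 3: C(d,2), 4: 1} with d = m + l."""
    d = m + l
    return {1: m, 2: d + comb(d, 2), 3: comb(d, 2), 4: 1}

# g = sl_3: l = 2, m = 3, so d = 5 and the type is W(1^3, 2^15, 3^10, 4)
assert orbifold_type(2, 3) == {1: 3, 2: 15, 3: 10, 4: 1}
```

For $\mathfrak{sl}_2$ the same formula gives $\mathcal{W}(1,2^3,3,4)$, to which one extra weight-4 field must be added by hand.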
We will show that for $d = 2$, $\mathcal{H}(2)^{\mathbb{Z}_2}$ is of type $\mathcal{W}(2^3, 3, 4^2)$, and for $d \geq 3$, $\mathcal{H}(d)^{\mathbb{Z}_2}$ is of type $\mathcal{W}(2^{d+ \binom{d}{2}},3^{ \binom{d}{2}},4)$. To the best of our knowledge, the minimal strong generating set for $\mathcal{H}(d)^{\mathbb{Z}_2}$ has not appeared previously in the literature. However, in [@DNII], the representation theory of $\mathcal{H}(d)^{\mathbb{Z}_2}$ was studied and the irreducible, positive-energy modules of $\mathcal{H}(d)^{\mathbb{Z}_2}$ were classified. Finally, in the case $\lie{g}=\mathfrak{sl}_2$ we determine the set of nongeneric values; it consists only of $\{0,\frac{16}{51},\frac{16}{9},-\frac{32}{3}\}$. It follows that for all other values of $k$, the strong generating set for $V^k(\mathfrak{sl}_2)^{\mathbb{Z}_2}$ will descend to a strong generating set for the simple orbifold $L_k(\mathfrak{sl}_2)^{\mathbb{Z}_2}$. Here $L_k(\mathfrak{sl}_2)$ denotes the simple quotient of $V^k(\mathfrak{sl}_2)$. Preliminaries ============= **Vertex algebras.** The notion of a vertex algebra was introduced by Borcherds [@B] back in the eighties. Since then, it has been the subject of remarkable progress (see for example [@B; @FLM; @K; @FBZ]). We will use the formalism developed in [@LZ] and [@LiI]. Roughly speaking, a *vertex algebra* is a quantum operator algebra $\mathcal{A}$ in which any two elements $a, b$ are local. By *local*, we mean that there exists some positive integer $N$ such that $(z - w)^{N} [a(z), b(w)] = 0.$ Here $\mathcal{A}$ is assumed to be $\mathbb{Z}_{2}$-graded, and $[a(z),b(w)]$ is the super bracket, that is, $[a(z),b(w)]=a(z)b(w)-(-1)^{|a||b|}b(w)a(z).$ There are several equivalent definitions of a vertex algebra; this one is equivalent to the definition given in [@FLM].
Each $a \in \mathcal{A}$ has a unique representation by the following formal distribution: $$a=a(z):=\sum_{n \in \mathbb{Z}} a(n) z^{-n-1} \in \text{End} (V)[[z,z^{-1}]] .$$ The normally ordered product of fields in a vertex algebra $\mathcal{A}$ is called the Wick product, and is defined by $$:a(z)b(z):\ =a(z)_{-} b(z) +(-1)^{|a||b|}b(z)a(z)_{+},$$ where $a(z),b(z) \in \mathcal{A},$ and $$a(z)_{-} =\sum _{n<0}a(n)z^{-n-1}, \enspace \enspace \enspace a(z)_{+} = \sum _{n \geq 0} a(n) z^{-n-1}.$$ The iterated Wick product is defined inductively as follows: $$:a_{1}(z)\cdots a_{n}(z):\ =\ :a_{1}(z)(:a_{2}(z)\cdots a_{n}(z):):,$$ for $a_{1}(z),\dots,a_{n}(z) \in \mathcal{A}.$ The operator product expansion (OPE) formula for $a,b \in \mathcal{A}$ is given by $$a(z)b(w) \sim \sum _{n \geq 0}a(w) \circ _{n} b(w)( z-w)^{-n-1},$$ where $\sim$ means equal modulo terms that are regular at $z=w.$ Here $\circ_{n}$ denotes the $n^{th}$ circle product, which is defined by $$a(w) \circ _{n} b(w)= Res_{z}a(z)b(w)\imath _{|z|>|w|}(z-w)^{n}-(-1)^{|a||b|}Res_{z}b(w)a(z)\imath _{|w|>|z|}(z-w)^{n},$$ where $\imath _{|z|>|w|} f(z,w) \in \mathbb{C}[[z,z^{-1},w,w^{-1}]]$ denotes the power series expansion of a rational function $f$ which converges in the domain $|z|>|w|,$ and $Res_{z}a(z)$ denotes the coefficient of $z^{-1}.$ We write $\partial a(z)$ for the formal derivative $\partial_{z} a(z),$ where $\partial_{z}=\frac{d}{dz}.$ A subset $S=\{a_{i}|i \in I\} \subset \mathcal{A}$ is said to *generate* $\mathcal{A}$ if every $a \in \mathcal{A}$ is a linear combination of words in $a_{i},\circ_{n}$ for $i \in I$ and $n\in \mathbb{Z}.$ Moreover, we say that $S$ *strongly generates* $\mathcal{A}$ if every $a \in \mathcal{A}$ is a linear combination of words in $a_{i},\circ_{n}$ for $n<0.$ Equivalently, $\mathcal{A}$ is spanned by the ordered monomials $$\label{monomial} \{:\partial^{k_{1}} a_{i_{1}} \cdots \partial^{k_{m}} a_{i_{m}}:\ |i_{1},\dots ,i_{m}\in I, \enspace 0\leq k_{1}\leq \cdots \leq
k_{m}\}.$$ If $I$ can be chosen to be finite, then $\mathcal{A}$ is called *strongly finitely generated*. We say that $S$ *freely generates* $\mathcal{ A}$ if there are no nontrivial normally ordered polynomial relations among the generators and their derivatives. Our work hinges mainly on the *universal affine vertex algebras*. Let $\mathfrak{g}$ be a finite-dimensional Lie algebra over $\mathbb{C},$ furnished with a nondegenerate, symmetric, invariant bilinear form $B$. The *affine Kac-Moody algebra* $\hat{\lie{g}}=\lie{g}[t,t^{-1}] \oplus \mathbb{C}{\kappa},$ determined by $B,$ is the one-dimensional central extension of the loop algebra $\lie{g}[t,t^{-1}]=\lie{g}\otimes \mathbb{C}[t,t^{-1}],$ where ${\kappa}$ is a central element. The Lie algebra $\hat{\lie{g}}$ is spanned by $\{\zeta \otimes t^{n}\,|\,\zeta \in \lie{g},\,n \in \mathbb{Z}\}$ together with $\kappa,$ and these generators satisfy the following Lie bracket: $$\label{general lie algebra} [\zeta \otimes t^{n},\eta \otimes t^{m}]=[\zeta,\eta] \otimes t^{n+m}+nB(\zeta, \eta)\delta_{n+m,0}\kappa,\enspace\enspace\enspace [\kappa,\zeta \otimes t^{n}]=0,$$ together with the $\mathbb{Z}$-gradation $\text{deg}(\zeta \otimes t^{n})=n$ and $\text{deg}(\kappa)=0.$ Let $\hat{\lie{g}}_{+}=\sum_{n\geq 0}\hat{\lie{g}}_{n},$ where $\hat{\lie{g}}_{n}$ denotes the subspace of degree $n.$
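As a quick numerical sanity check on the cocycle term in \eqref{general lie algebra} (our own verification sketch, not part of the text), one can realize $\hat{\lie{g}}$ for $\lie{g}=\mathfrak{sl}_2$ with $B$ the trace form, and test antisymmetry and the Jacobi identity on random elements; the cocycle condition reduces to the cyclic invariance of $\text{tr}(A[B,C])$.

```python
import numpy as np

rng = np.random.default_rng(0)

def bracket(x, y):
    """Bracket on sl2[t,t^-1] + C kappa, cocycle n * tr(A B) * delta_{n+m,0}.
    Elements are (dict {loop degree: 2x2 traceless matrix}, central coeff)."""
    out, c = {}, 0.0
    for n, A in x[0].items():
        for m, B in y[0].items():
            out[n + m] = out.get(n + m, np.zeros((2, 2))) + (A @ B - B @ A)
            if n + m == 0:
                c += n * np.trace(A @ B)   # B(A,B) = tr(AB) for sl2
    return out, c

def add(x, y):
    d = dict(x[0])
    for n, A in y[0].items():
        d[n] = d.get(n, np.zeros((2, 2))) + A
    return d, x[1] + y[1]

def rand_elt():
    d = {}
    for n in (-2, -1, 0, 1, 2):                 # a few loop degrees
        A = rng.standard_normal((2, 2))
        A -= np.eye(2) * np.trace(A) / 2        # project to sl2
        d[n] = A
    return d, 0.0

def is_zero(x):
    return all(np.allclose(A, 0) for A in x[0].values()) and abs(x[1]) < 1e-9

x, y, z = rand_elt(), rand_elt(), rand_elt()
jac = add(add(bracket(x, bracket(y, z)), bracket(y, bracket(z, x))),
          bracket(z, bracket(x, y)))
assert is_zero(add(bracket(x, y), bracket(y, x)))  # antisymmetry
assert is_zero(jac)                                # Jacobi, incl. central part
print("affine bracket checks passed")
```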
For $k \in \mathbb{C},$ let $C_{k }$ be the one-dimensional $\hat{\lie{g}}_{+}$-module on which $\zeta \otimes t^n$ acts trivially for $n \geq 0,$ and on which $\kappa$ acts as multiplication by the scalar $k.$ Define $V_{k} = U(\hat{\lie{g}}) \otimes _{U(\hat{\lie{g}}_{+})} C_{k},$ and let $X^{\zeta}(n) \in End(V_{k})$ be the linear operator representing $\zeta \otimes t^{n}$ on $V_{k}.$ Define $X^{\zeta}(z)=\sum_{n \in \mathbb{Z}}X^{\zeta}(n)z^{-n-1},$ which is an even generating field of conformal weight $1$ satisfying the OPE relation $$\label{ope affine universal} X^{\zeta}(z)X^{\eta}(w)\sim kB(\zeta,\eta)(z-w)^{-2}+X^{[\zeta,\eta]}(w)(z-w)^{-1}.$$ The vertex algebra $V^{k}(\lie{g},B)$ generated by $\{X^{\zeta_{i}}|\zeta_{i} \in \lie{g}\}$ is called the universal affine vertex algebra associated to $\mathfrak{g}$ and $B$ at level $k$. It has a PBW basis $$\label{basis g universal} :\partial ^{k_{1}^{1}} X^{\zeta_{1}}\dots \partial ^{k_{s_{1}}^{1}} X^{\zeta_{1}} \dots \partial ^{k_{1}^{m}} X^{\zeta_{m}} \dots \partial^{k_{s_{m}}^{m}} X^{\zeta_{m}} :, \enspace \enspace s_{i}\geq0, \enspace k_{1}^{i} \geq \dots \geq k_{s_{i}}^{i} \geq 0.$$ An important special case is when $\lie{g}$ is a simple Lie algebra and $B$ is the normalized Killing form. The bracket \eqref{general lie algebra} then takes the form $$[\zeta \otimes t^{n},\eta \otimes t^{m}]=[\zeta,\eta] \otimes t^{n+m}+n(\zeta|\eta)\delta_{n+m,0}\kappa,\enspace\enspace\enspace [\kappa,\zeta \otimes t^{n}]=0,$$ for $\zeta, \eta \in \lie{g},\,n,m \in \mathbb{Z}.$ Here $(.|.)$ is *the normalized Killing form*, defined by $$(.|.)=\frac{1}{2h^{\vee}}(.,.)_{\kappa_{\lie{g}}},$$ where $h^{\vee}$ is the dual Coxeter number of $\lie{g}.$ In this case, we denote $V^{k}(\mathfrak{g},B)$ by $V^{k}(\mathfrak{g}).$ According to the Cartan-Killing classification, we have the following list of simple, finite-dimensional Lie algebras over $\mathbb{C}$. **Classical Lie Algebras:** 1.
$\mathfrak{sl}_{n+1},n\geq 1,$ and it has Cartan notation $A_{n},$ 2. $\mathfrak{so}_{2n+1},n\geq 2,$ and it has Cartan notation $B_{n},$ 3. $\mathfrak{sp}_{2n},n\geq 3,$ and it has Cartan notation $C_{n},$ 4. $\mathfrak{so}_{2n},n\geq 4,$ and it has Cartan notation $D_{n}.$ **Exceptional Lie Algebras:** 1. $G_{2},$ 2. $F_{4},$ 3. $E_{6},E_{7},$ or $E_{8}.$ Let $\{\zeta_{1},\dots ,\zeta_{n}\}$ be an orthonormal basis for $\lie{g}$ relative to $(.|.).$ There is a natural conformal structure of central charge $\frac{k \cdot \text{dim}(\lie{g})}{k+h^{\vee}}$ on $V^{k}(\mathfrak{g})$ with the Virasoro element $L(z),$ that is $$L(z)=\frac{1}{2(k+h^{\vee})}\sum_{i=1}^{n}:X^{\zeta_{i}}(z)X^{\zeta_{i}}(z):,$$ where $k\neq -h^{\vee}.$ In this case, the Virasoro element is called the *Sugawara conformal vector.* For $k= -h^{\vee},$ the Virasoro element $L(z)$ does not exist. Finally, consider the case where $\lie{g}$ is an abelian Lie algebra of dimension $n.$ Since $B$ is nondegenerate, $V^{k}(\mathfrak{g},B)$ is just the rank $n$ Heisenberg vertex algebra $\mathcal{H}(n).$ If we choose an orthonormal basis $\{\zeta_{1},\dots,\zeta_{n}\}$ for $\lie{g},$ then $\mathcal{H}(n)$ is generated by $\{\alpha^{i}= X^{\zeta_{i}}|i=1,\dots,n\}.$ **Good increasing filtrations.** [@LiII] A good increasing filtration on a vertex algebra $\mathcal{A}$ is a $\mathbb{Z}_{\geq0}$-filtration $$\label{eq11} \mathcal{A}_{(0)} \subset \mathcal{A}_{(1)}\subset \mathcal{A}_{(2)}\cdots ,\enspace \enspace\enspace \mathcal{A}=\bigcup _{d\geq 0}\mathcal{A}_{(d)}$$ satisfying $\mathcal{A}_{(0)}= \mathbb{C}$, and for all $a \in \mathcal{ A}_{(r)}, b \in \mathcal{ A}_{(s)}$ we have $$a \circ_{n} b \in \mathcal{ A}_{(r+s)}, \enspace \enspace \text{for} \enspace n < 0,$$ $$a \circ_{n} b \in \mathcal{ A}_{(r+s-1)}, \enspace \enspace \text{for} \enspace n \geq 0.$$ Let $\mathcal{A}_{(-1)}=\{0\}.$ An element $a(z)\in \mathcal{A}$ has degree at most $d$ if $a(z) \in \mathcal{A}_{(d)}.$ The associated graded algebra
$\text{gr}(\mathcal{A}) =\bigoplus_{d\geq0} \mathcal{A}_{(d)}/\mathcal{A}_{(d-1)}$ is a $\mathbb{Z}_{\geq0}$-graded associative, (super)commutative algebra with unit 1 under the product induced by the Wick product on $\mathcal{A}.$ It has a derivation $\partial$ of degree zero. For each $r \geq 1,$ we have the projections $$\phi_{r} : \mathcal{A}_{(r)} \rightarrow \mathcal{A}_{(r)}/\mathcal{A}_{(r-1)} \subset \text{gr}(\mathcal{A}).$$ Let $\mathcal{ R}$ be the category of vertex algebras equipped with a good increasing $\mathbb{Z}_{\geq0}$-filtration. For any vertex algebra $\mathcal{A} \in \mathcal{ R},$ the associated graded object $\text{gr}(\mathcal{A})$ is a $\partial$-ring, that is, an abelian vertex algebra. The following reconstruction property is the key feature of $\mathcal{R}$ [@LL]: \[8\] Let $\mathcal{A} \in \mathcal{R},$ and let $\{a_{i}| i \in I\}$ be a collection that generates $\text{gr}(\mathcal{A})$ as a $\partial$-ring, where $a_{i}$ is homogeneous of degree $d_{i}.$ Then $\mathcal{A}$ is strongly generated by the collection $\{a_{i}(z)| i \in I\},$ where $a_{i}(z) \in \mathcal{A}_{(d_{i})}$ is any element such that $\phi_{d_{i}}(a_{i}(z))=a_{i}.$ We define an increasing filtration on $V^{k}(\lie{g})$ for any simple Lie algebra $\lie{g}$ as follows:$$\label{filg} V^{k}(\lie{g})_{(0)} \subset V^{k}(\lie{g})_{(1)} \subset \cdots, \enspace \enspace \enspace V^{k}(\lie{g})=\bigcup_{r\geq 0} V^{k}(\lie{g})_{(r)} ,$$ where $V^{k}(\lie{g})_{(-1)}=\{0\},$ and $V^{k}(\lie{g})_{(r)}$ is spanned by the iterated Wick products of the generators $X^{\zeta_{i}}$ and their derivatives, such that at most $r$ of the generators and their derivatives appear.
So, $V^{k}(\lie{g})$ equipped with this filtration lies in the category $\mathcal{R}.$ The $\mathbb{Z}_{\geq 0}$-associated graded algebra $$\text{gr}(V^{k}(\lie{g}))=\bigoplus_{d\geq 0}V^{k}(\lie{g})_{(d)}/V^{k}(\lie{g})_{(d-1)}$$ is now an abelian vertex algebra freely generated by $X^{\zeta_{i}}.$ Then, $V^{k}(\lie{g})\cong \text{gr}( V^{k}(\lie{g}))$ as linear spaces, and as commutative algebras we have $$\text{gr}(V^{k}(\lie{g}))\cong \mathbb{C}[ X^{\zeta_{i}},\partial X^{\zeta_{i}},\partial^{2}X^{\zeta_{i}},\dots ].$$ The $\mathbb{Z}_{2}$-orbifold of $\mathcal{H}(n)$ ================================================= The rank $n$ Heisenberg vertex algebra $\mathcal{H}(n)$ is the tensor product of $n$ copies of rank $1$ Heisenberg vertex algebra $\mathcal{H}$ with even generating fields $\alpha^{1},\dots,\alpha^{n}.$ These satisfy the OPE relations $$\label{nheisenberg} \alpha^{i}(z)\alpha^{j}(w)\sim \delta _{i,j} (z-w)^{-2}.$$ There is a conformal structure of central charge $n$ on $\mathcal{H}(n)$ with the Virasoro element $L(z)$ $$L(z)=\frac{1}{2}\sum_{i=1}^{n}:\alpha^{i}(z)\alpha^{i}(z):,$$ where each $\alpha^{i}$ is primary of weight $1$. 
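The OPE relation above is equivalent to the mode commutators $[\alpha^{i}(r),\alpha^{j}(s)] = r\,\delta_{i,j}\delta_{r+s,0}$, which can be checked concretely in the standard Fock-space realization where $\alpha(-r)$ acts by multiplication by $x_r$ and $\alpha(r)$ by $r\,\partial/\partial x_r$ for $r>0$. The following illustrative sketch (rank $1$; our own encoding of states as exponent tuples) verifies this on a sample state.

```python
# Fock-space realization of one Heisenberg field alpha:
# alpha(-r) = multiplication by x_r, alpha(r) = r * d/dx_r (r > 0), alpha(0) = 0.
# States: dict mapping exponent tuples of x_1..x_N to coefficients.
N = 6

def mode(r, state):
    out = {}
    for exps, c in state.items():
        e = list(exps)
        if r < 0:                      # creation: multiply by x_{-r}
            e[-r - 1] += 1
            out[tuple(e)] = out.get(tuple(e), 0) + c
        elif r > 0 and e[r - 1] > 0:   # annihilation: r * d/dx_r
            k = e[r - 1]
            e[r - 1] -= 1
            out[tuple(e)] = out.get(tuple(e), 0) + c * r * k
    return out

def commutator(r, s, state):
    a = mode(r, mode(s, state))
    b = mode(s, mode(r, state))
    return {m: a.get(m, 0) - b.get(m, 0) for m in set(a) | set(b)}

vac = {(0,) * N: 1}
psi = mode(-2, mode(-2, mode(-1, vac)))   # the state x_1 * x_2^2
for r in range(1, 4):
    for s in range(-3, 4):
        com = commutator(r, s, psi)
        expect = {m: (r if r + s == 0 else 0) * c for m, c in psi.items()}
        assert all(abs(com.get(m, 0) - expect.get(m, 0)) < 1e-12
                   for m in set(com) | set(expect))
print("[alpha(r), alpha(s)] = r delta_{r+s,0} verified on a sample state")
```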
$\mathcal{H}(n)$ is freely generated by $\alpha^{1},\dots,\alpha^{n}$ and has a PBW basis as follows $$\label{eq22} :\partial ^{k_{1}^{1}} \alpha^{1}\dots \partial ^{k_{s_{1}}^{1}} \alpha^{1} \dots \partial ^{k_{1}^{n}} \alpha^{n} \dots \partial^{k_{s_{n}}^{n}} \alpha ^{n} :, \enspace \enspace s_{i}\geq0, \enspace k_{1}^{i} \geq \dots \geq k_{s_{i}}^{i} \geq 0.$$ **Filtrations on $\mathcal{H}(n)$.** Define an increasing filtration on $\mathcal{H}(n)$ as follows: $$\label{filh} \mathcal{H}(n)_{(0)} \subset \mathcal{H}(n)_{(1)} \subset \mathcal{H}(n)_{(2)} \subset \cdots ,\enspace \enspace \enspace \mathcal{H}(n)=\bigcup _{d\geq 0} \mathcal{H}(n)_{(d)} ,$$ where $\mathcal{H}(n)_{(-1)}=\{0\},$ and $\mathcal{H}(n)_{(r)}$ is spanned by the iterated Wick products of the generators $\alpha^{i}$ and their derivatives such that at most $r$ of $\alpha^{i}$ and their derivatives appear. By the OPE relation \eqref{nheisenberg}, this is a good increasing filtration, and so $\mathcal{H}(n)$ equipped with it lies in the category $\mathcal{R}.$ In the associated graded algebra, the OPE relation \eqref{nheisenberg} is replaced with $\alpha^{i}(z)\alpha^{j}(w)\sim 0.$ The $\mathbb{Z}_{\geq 0}$-associated graded algebra $$\text{gr}(\mathcal{H}(n))=\bigoplus_{d\geq0} \mathcal{H}(n)_{(d)}/\mathcal{H}(n)_{(d-1)}$$ is now an abelian vertex algebra freely generated by $\alpha^{i}.$ Then, $\mathcal{H}(n)\cong \text{gr}( \mathcal{H}(n))$ as linear spaces, and as commutative algebras, we have $$\text{gr}(\mathcal{H}(n))\cong \mathbb{C}[ \partial^{a} \alpha^{i}|a\geq0,i=1,\dots,n].$$ The subgroup $\mathbb{Z}_{2}$ of the automorphism group of $\mathcal{H}(n)$ is generated by the nontrivial involution $\theta,$ which acts on the generators as follows: $$\label{action of Z2H(n)} \theta(\alpha^{i})= -\alpha^{i}.$$ This action preserves the OPE relations, that is, $$\alpha^{i}
\circ_{m}\alpha^{j}=\theta(\alpha^{i})\circ_{m} \theta( \alpha^{j})$$ for all $m,$ as well as the filtration \eqref{filh}, and hence induces an action of $\mathbb{Z}_{2}$ on $\text{gr}(\mathcal{H}(n)).$ Going back to $\mathcal{H}(n)^{\mathbb{Z}_{2}},$ it is spanned by all normally ordered monomials of the form \eqref{eq22}, where the length $s_1 + \cdots + s_n$ is even. Since $\mathcal{H}(n)$ is freely generated by $\alpha^{i}$, these monomials form a basis for $\mathcal{H}(n)^{\mathbb{Z}_{2}}$, and the normal form is unique. The filtration on $\mathcal{H}(n)^{\mathbb{Z}_{2}}$ is obtained from the filtration \eqref{filh} by restriction as follows: $$(\mathcal{H}(n)^{\mathbb{Z}_{2}})_{(0)} \subset (\mathcal{H}(n)^{\mathbb{Z}_{2}})_{(1)}\subset \cdots ,\enspace \enspace \enspace \mathcal{H}(n)^{\mathbb{Z}_{2}}=\bigcup _{d\geq 0} (\mathcal{H}(n)^{\mathbb{Z}_{2}})_{(d)},$$ where $(\mathcal{H}(n)^{\mathbb{Z}_{2}})_{(r)}=\mathcal{H}(n)^{\mathbb{Z}_{2}} \cap \mathcal{H}(n)_{(r)}.$ The action of $\mathbb{Z}_{2}$ on $\mathcal{H}(n)$ descends to an action on $\text{gr}(\mathcal{H}(n))$, and so we have an isomorphism $ \mathcal{H}(n)^{\mathbb{Z}_{2}}\cong \text{gr}(\mathcal{H}(n)^{\mathbb{Z}_{2} })$ of linear spaces. Similarly, $\mathbb{Z}_{2}$ acts on $\text{gr}(\mathcal{H}(n))\cong \mathbb{C}[\partial^{a} \alpha^{i}|a\geq 0,i=1,\dots ,n],$ and we have an isomorphism $$\label{grZ2} \text{gr}(\mathcal{H}(n)^{\mathbb{Z}_{2}}) \cong \text{gr}(\mathcal{H}(n))^{\mathbb{Z}_{2}} \cong \mathbb{C}[\partial^{a} \alpha^{i}|a\geq0,i=1,\dots ,n]^{\mathbb{Z}_{2}}$$ of commutative algebras.
The isomorphisms \eqref{grZ2} preserve weight and degree, where $\text{wt}(\partial^{a}\alpha^{i})=a+1.$ Recall that $\text{gr}(\mathcal{H}(n))^{\mathbb{Z}_{2}}$ is the commutative algebra spanned by the monomials of even degree, and that it carries a derivation $\partial$ of degree zero, which acts by the product rule; writing $\alpha^{i}_{a}$ for the image of $\partial^{a}\alpha^{i}$ in $\text{gr}(\mathcal{H}(n)),$ we have $$\begin{aligned} \partial (\alpha_{a}^{i} \alpha_{b}^{j}) =\alpha_{a+1}^{i} \alpha_{b}^{j}+\alpha_{a}^{i} \alpha_{b+1}^{j}.\end{aligned}$$ Define $$q^{i,i}_{a,b}=\alpha_{a}^{i} \alpha_{b}^{i},\enspace\enspace\enspace q^{i,j}_{a,b}=\alpha_{a}^{i} \alpha_{b}^{j},$$ as generators for $\text{gr}(\mathcal{H}(n))^{\mathbb{Z}_{2}}.$ The action of $\partial $ on these generators is given by $$\label{action of derivation Hn} \partial(q^{i,i}_{a,b}) =q^{i,i}_{a+1,b}+ q^{i,i}_{a,b+1}, \enspace \enspace \enspace \partial (q_{a,b}^{i,j}) =q^{i,j}_{a+1,b}+ q^{i,j}_{a,b+1}.$$ Since the action of $\mathbb{Z}_{2}$ on $\text{gr}(\mathcal{H}(n))$ is given by $\theta(\alpha_{a}^{i} )=-\alpha_{a}^{i},$ the invariant algebra $\text{gr}(\mathcal{H}(n))^{\mathbb{Z}_{2}}$ is generated by the subset $\{q^{i,i}_{a,b},q^{i,j}_{a,b}| a,b\geq0,\enspace 1 \leq i, j\leq n \}.$ Since $q^{i,i}_{a,b}=q^{i,i}_{b,a}$ and $q^{i,j}_{a,b}=q^{j,i}_{b,a},$ $\text{gr}(\mathcal{H}(n))^{\mathbb{Z}_{2}}$ is in fact generated by the subset $$\label{generating set} \{q^{i,i}_{a,b}| 0 \leq a \leq b,\enspace i=1,\dots ,n\} \bigcup \{q^{i,j}_{a,b}| 0 \leq a , b,\enspace 1 \leq i < j \leq n\}.$$ Among these generators, the ideal of relations is generated by $$\label{ideal'} q^{i,j}_{r,s} q^{k,l}_{t,u} - q_{r,u}^{i,l} q_{s,t}^{j,k},\enspace i,j,k,l=1,\dots,n, \enspace 0 \leq r,s,t,u.$$ Under the projection $$\phi_{2}:(\mathcal{H}(n)^{\mathbb{Z}_{2}})_{(2)}\rightarrow (\mathcal{H}(n)^{\mathbb{Z}_{2}})_{(2)}/(\mathcal{H}(n)^{\mathbb{Z}_{2}})_{(1)}\subset \text{gr}(\mathcal{H}(n)^{\mathbb{Z}_{2}}),$$ the generators $q^{i,i}_{a,b},q_{a,b}^{i,j}$ of $\text{gr}(\mathcal{H}(n))^{\mathbb{Z}_{2}}$ correspond to
fields $ \omega ^{i,i}_{a,b}, \omega _{a,b}^{i,j},$ respectively, defined by $$\omega_{a,b}^{i,i}= \ :\partial^{a} \alpha^{i}(z) \partial^{b} \alpha^{i}(z):\ \in (\mathcal{H}(n)^{\mathbb{Z}_{2}})_{(2)},\enspace \enspace \enspace 0\leq a \leq b ,\enspace \enspace i = 1,\dots, n,$$ $$\omega^{i,j}_{a,b}= \ :\partial^{a} \alpha^{i}(z) \partial^{b} \alpha^{j}(z): \ \in (\mathcal{H}(n)^{\mathbb{Z}_{2}})_{(2)},\enspace \enspace \enspace a,b \geq 0, \enspace\enspace 1\leq i < j \leq n.$$ The fields $ \omega _{a,b}^{i,i}, \omega _{a,b}^{i,j}$ satisfy $\phi_{2}(\omega _{a,b}^{i,i})=q^{i,i}_{a,b}$ and $\phi_{2}(\omega _{a,b}^{i,j})=q_{a,b}^{i,j},$ respectively, and have weight $a+b+2.$ Note that $ \sum_{i=1}^{n}\omega_{0,0}^{i,i}=2L,$ where $L$ is the Virasoro element. The subspace $(\mathcal{H}(n)^{\mathbb{Z}_{2}})_{(2)}$ consists of elements of degree at most $2,$ and has basis $\{1\} \cup \{ \omega^{i,i} _{a,b},\omega _{a,b}^{i,j}\}.$ Moreover, for $m \geq 0,$ the operators $\omega _{a,b}^{i,j} \circ_{m}$ preserve this vector space [@LI].
For $a,b,c \geq0,\enspace 0 \leq m \leq a+b+c+1,$ and $i < j,$ we have $$\omega _{a,b}^{i,j} \circ _{m} \partial^{c} \alpha ^{i}=(-1)^{a} \frac{(a+c+1)!}{(a+c+1-m)!} \partial ^{a+b+c+1-m} \alpha ^{j},$$ $$\omega _{a,b}^{i,j} \circ _{m} \partial^{c} \alpha ^{j}=(-1)^{b} \frac{(b+c+1)!}{(b+c+1-m)!} \partial ^{a+b+c+1-m} \alpha ^{i},$$ and $$\omega _{a,b}^{i,i} \circ _{m} \partial^{c} \alpha ^{i}=\lambda _{a,b,c,m} \partial ^{a+b+c+1-m} \alpha ^{i},$$ where $$\lambda _{a,b,c,m}=(-1)^{b} \frac{(b+c+1)!}{(b+c+1-m)!}+(-1)^{a}\frac{(a+c+1)!}{(a+c+1-m)!}.$$ It follows that for $m \leq a+b+c+1,$ and $i < j < k$ we have $$\begin{aligned} \omega _{a,b}^{i,j} \circ _{m} \omega _{c,d}^{i,j} &= (-1)^{a} \frac{(a+c+1)!}{(a+c+1-m)!} \omega _{a+b+c+1-m,d}^{j,j} \notag \\ &+(-1)^{b} \frac{(b+d+1)!}{(b+d+1-m)!}\omega _{a+b+d+1-m,c}^{i,i} ,\end{aligned}$$ $$\label{wijjk} \omega _{a,b}^{i,j} \circ _{m} \omega _{c,d}^{j,k} = (-1)^{b} \frac{(b+c+1)!}{(b+c+1-m)!} \omega _{a+b+c+1-m,d}^{i,k} ,$$ $$\label{wiiij} \omega _{a,b}^{i,i} \circ _{m} \omega _{c,d}^{i,j} = \lambda_{a,b,c,m} \omega _{a+b+c+1-m,d}^{i,j} ,$$ $$\label{wjjij} \omega _{a,b}^{j,j} \circ _{m} \omega _{c,d}^{i,j} = \lambda_{a,b,d,m} \omega _{c,a+b+d+1-m}^{i,j},$$ $$\label{wiiii} \omega _{a,b}^{i,i} \circ _{m} \omega ^{i,i}_{c,d}= \lambda _{a,b,c,m} \omega^{i,i} _{a+b+c+1-m,d} +\lambda _{a,b,d,m}\omega^{i,i} _{c,a+b+d+1-m}.$$ As a differential algebra with derivation $\partial$, some of the generators in the generating set \eqref{generating set} for $\text{gr}(\mathcal{H}(n))^{\mathbb{Z}_{2}}$ can be eliminated using the relations \eqref{action of derivation Hn}. For $m\geq0,$ let $$A_{m}=\text{span}\{\omega _{a,b}^{i,i}|a+b=m\}$$ be the vector space which is homogeneous of weight $m+2$.
Using the relation $\partial \omega _{a,b}^{i,i}=\omega_{a+1,b}^{i,i}+\omega _{a,b+1}^{i,i}$, we see that $\text{dim}(A_{2m})=m+1=\text{dim}(A_{2m+1}),$ for $m \geq 0.$ Moreover, $\partial(A_{m}) \subset A_{m+1},$ and $$\text{dim}(A_{2m}/\partial(A_{2m-1}))=1, \enspace \enspace \enspace \text{dim}(A_{2m+1}/\partial(A_{2m}))=0.$$ Thus, $A_{2m}$ has a decomposition $$A_{2m}=\partial(A_{2m-1}) \oplus \langle \omega_{0,2m}^{i,i} \rangle = \partial^{2}(A_{2m-2}) \oplus \langle \omega_{0,2m}^{i,i} \rangle,$$ where $\langle \omega_{0,2m}^{i,i}\rangle$ is the linear span of $ \omega^{i,i}_{0,2m}.$ Similarly, $A_{2m+1}$ has a decomposition $$A_{2m+1}=\partial^{2}(A_{2m-1}) \oplus \langle \partial \omega_{0,2m}^{i,i} \rangle = \partial^{3}(A_{2m-2}) \oplus \langle \partial \omega_{0,2m}^{i,i} \rangle.$$ Therefore, $$\text{span} \{\omega^{i,i}_{a,b}|a+b=2m\}=\text{span}\{\partial ^{2k} \omega^{i,i}_{0,2m-2k}|0\leq k\leq m\}$$ and $$\text{span} \{\omega^{i,i}_{a,b}|a+b=2m+1\}=\text{span}\{\partial ^{2k+1} \omega^{i,i}_{0,2m-2k}|0\leq k\leq m\}$$ are bases of $A_{2m}$ and $A_{2m+1},$ respectively, and so each $\omega_{a,b}^{i,i}\in A_{2m}$ and each $\omega_{c,d}^{i,i} \in A_{2m+1}$ can be written uniquely in the form $$\label{linear} \omega_{a,b}^{i,i}=\sum_{k= 0}^{m} \lambda_{k} \partial^{2k} \omega^{i,i}_{0,2m-2k}, \enspace \enspace \omega_{c,d}^{i,i}=\sum_{k= 0}^{m} \mu_{k} \partial^{2k+1} \omega^{i,i}_{0,2m-2k},$$ for constants $\lambda_{k},\mu_{k}.$ Similarly, for $m\geq 0$, let $A'_{m}=\text{span}\{\omega _{a,b}^{i,j}|a+b=m\},$ and use the relation $\partial \omega _{a,b}^{i,j}=\omega_{a+1,b}^{i,j}+\omega _{a,b+1}^{i,j}.$ We have $\text{dim}(A'_{m})=m+1,$ for $m \geq0.$ Moreover, $\partial(A'_{m}) \subset A'_{m+1},$ and $$\text{dim}(A'_{m}/\partial(A'_{m-1}))=1.$$ Hence, $A'_{m}$ has a decomposition $$A'_{m}=\partial(A'_{m-1}) \oplus \langle \omega^{i,j}_{0,m}\rangle$$ where $\langle \omega^{i,j}_{0,m}\rangle$ is the linear span of $ \omega^{i,j}_{0,m}.$ Therefore,
$$\text{span} \{\omega^{i,j}_{a,b}|a+b=m\}=\text{span}\{\partial ^{k} \omega^{i,j}_{0,m-k}|0\leq k\leq m\}$$ is a basis of $A'_{m}.$ It follows that each $\omega_{r,m-r}^{i,j} \in A'_{m}$ can be written uniquely in the form $$\label{linear combination of w} \omega_{r,m-r}^{i,j}=\sum_{k= 0}^{r} (-1)^{r+k}\binom {r}{k}\partial^{k} \omega^{i,j}_{0,m-k},$$ where $r=0,\dots, m.$ The following lemma gives a strong generating set for $\mathcal{H}(n)^{\mathbb{Z}_{2}}$, which as we shall see is far from minimal. \[strong generators for H(n)\] $\mathcal{H}(n)^{\mathbb{Z}_{2}}$ is strongly generated as a vertex algebra by the subset $$\label{geneh} \{\omega_{0,2m}^{i,i}\,|\,m\geq0,\enspace i=1,\dots, n\}\bigcup \{\omega^{i,j}_{0,m}\,|\,m\geq0, \enspace 1\leq i<j \leq n \}.$$ Since $\text{gr}(\mathcal{H}(n))^{\mathbb{Z}_{2}}=\text{gr}(\mathcal{H}(n)^{\mathbb{Z}_{2}})$ is generated by the subset $$\{q_{0,2m}^{i,i}\,|\,m\geq 0 ,\enspace i=1,\dots ,n\} \bigcup \{q_{0,m}^{i,j}\,|\,m\geq0, \enspace 1\leq i<j \leq n\}$$ as a $\partial$-ring, Lemma \[8\] shows that the corresponding set strongly generates $\mathcal{H}(n)^{\mathbb{Z}_{2}}$ as a vertex algebra. Minimal strong generating set for $\mathcal{H}(n)^{\mathbb{Z}_2}$ {#decoupling hEIENBERG relations} ================================================================= In this section, we give a minimal strong generating set for $\mathcal{H}(n)^{\mathbb{Z}_2}$. First, we recall the case $n=1$, which is due to Dong and Nagatomo [@DNI]. For simplicity of notation, we write $\omega_{a,b} = \omega^{1,1}_{a,b}$ and $q_{a,b} = q^{1,1}_{a,b}$ in this case, and we include the proof for the benefit of the reader.
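The purely combinatorial identity \eqref{linear combination of w} can be verified mechanically using only the relation $\partial\omega_{a,b}=\omega_{a+1,b}+\omega_{a,b+1}$. The following short script (our own encoding of $\omega^{i,j}_{a,b}$ as pairs $(a,b)$, with $i<j$ fixed) checks it for all small $m$.

```python
from math import comb

# omega_{a,b} represented as dict {(a, b): coeff}; i < j fixed throughout.
def d(v):
    """derivation: d omega_{a,b} = omega_{a+1,b} + omega_{a,b+1}"""
    out = {}
    for (a, b), c in v.items():
        out[(a + 1, b)] = out.get((a + 1, b), 0) + c
        out[(a, b + 1)] = out.get((a, b + 1), 0) + c
    return out

def dpow(v, k):
    for _ in range(k):
        v = d(v)
    return v

def expansion(r, m):
    """RHS of the claimed identity for omega_{r, m-r}"""
    out = {}
    for k in range(r + 1):
        term = dpow({(0, m - k): 1}, k)
        for key, c in term.items():
            out[key] = out.get(key, 0) + (-1) ** (r + k) * comb(r, k) * c
    return {key: c for key, c in out.items() if c != 0}

for m in range(1, 8):
    for r in range(m + 1):
        assert expansion(r, m) == {(r, m - r): 1}
print("identity verified for all m < 8")
```

For example, with $r=1$ the right-hand side is $-\omega_{0,m}+\partial\omega_{0,m-1}=\omega_{1,m-1}$, as expected.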
(Dong-Nagatomo) $\mathcal{H}(1)^{\mathbb{Z}_2}$ has a minimal strong generating set $ \{\omega_{0,0},\omega_{0,2}\}$ and is of type $\mathcal{W}(2,4).$ Among the generators $\{q_{0,2m}|\ m\geq0\}$ of $\text{gr}(\mathcal{H}(1))^{\mathbb{Z}_{2}}$, the first relation of the form \eqref{ideal'} occurs at minimal weight $6$, and has the form $$\label{deal'1} q_{0,0} q _{1,1} -q_{0,1} q_{0,1}=0.$$ This relation is unique up to scalar. The corresponding element $: \omega_{0,0} \omega_{1,1} :-: \omega_{0,1}\omega _{0,1} :$ lies in $(\mathcal{H}(1)^{\mathbb{Z}_{2}})_{(2)}$. This element does not vanish, but it has a correction of the form $$\label {H1 rank} : \omega_{0,0} \omega _{1,1} :-: \omega_{0,1}\omega _{0,1}: \ =-\frac{5}{4} \omega _{0,4}+\frac{7}{4} \partial ^{2} \omega_{0,2}-\frac{7}{24} \partial ^{4} \omega_{0,0} .$$ Furthermore, we have $$\label{w0111} \omega_{0,1}=\frac{1}{2} \partial \omega_{0,0},\qquad \omega_{1,1}= - \omega_{0,2}+\frac {1}{2} \partial^{2} \omega_{0,0}.$$ Thus, \eqref{H1 rank} can be rewritten in the form $$\label{PH14} \omega_{0,4}=P_4(\omega_{0,0},\omega_{0,2}),$$ where $P_4(\omega_{0,0},\omega_{0,2})$ is a normally ordered polynomial in $\omega_{0,0}, \omega_{0,2}$, and their derivatives. This is called a *decoupling relation*, as $\omega _{0,4}$ is thereby expressed as a normally ordered polynomial in $\omega_{0,0},\omega_{0,2}$ and their derivatives. Next, we can construct decoupling relations $$\label{n=1:higherdecoup} \omega_{0,2m} = P_{2m}(\omega_{0,0}, \omega_{0,2}),$$ expressing $\omega_{0,2m}$ as a normally ordered polynomial in $\omega_{0,0}, \omega_{0,2}$ and their derivatives, for all $m>2$. We need the calculation $$\omega_{0,2} \circ_{1} \omega_{0,2k}=(8+4k) \omega_{0,2k+2} +\partial^{2} \mu,$$ where $\mu$ is a linear combination of $\partial ^{2r}\omega_{0,2k-2r}$ for $r=0,\dots ,k$. We can then construct the relations \eqref{n=1:higherdecoup} inductively by applying the operator $\omega_{0,2} \circ_1$ repeatedly to \eqref{PH14}.
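The leading coefficient $8+4k$ in the last calculation can be recovered from \eqref{wiiii}: the circle product produces $\lambda_{0,2,0,1}\,\omega_{2,2k}+\lambda_{0,2,2k,1}\,\omega_{0,2k+2}$, and modulo total derivatives $\omega_{a,b}\equiv(-1)^{a}\omega_{0,a+b}$, since $\partial\omega_{a,b}=\omega_{a+1,b}+\omega_{a,b+1}$. A short numerical check (ours, for illustration):

```python
from math import factorial

def lam(a, b, c, m):
    """lambda_{a,b,c,m} appearing in the circle-product formulas"""
    return ((-1) ** b * factorial(b + c + 1) // factorial(b + c + 1 - m)
            + (-1) ** a * factorial(a + c + 1) // factorial(a + c + 1 - m))

for k in range(0, 12):
    # omega_{0,2} o_1 omega_{0,2k}
    #   = lam(0,2,0,1) * w_{2,2k} + lam(0,2,2k,1) * w_{0,2k+2},
    # and w_{2,2k} == (+1) * w_{0,2k+2} modulo total derivatives.
    coeff = lam(0, 2, 0, 1) + lam(0, 2, 2 * k, 1)
    assert coeff == 8 + 4 * k
print("leading coefficient 8 + 4k confirmed")
```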
It follows that $\mathcal{H}(1)^{\mathbb{Z}_2}$ is strongly generated by $\{\omega_{0,0}, \omega_{0,2}\}$. To see that this is a [*minimal*]{} strong generating set, it suffices to observe that no decoupling relations for $\omega_{0,0},\omega_{0,2}$ can be found, since there are no relations of weight less than $6$ in $\text{gr}(\mathcal{H}(1))^{\mathbb{Z}_{2}}$ of the form \eqref{ideal'}. The main result in this section is \[strong generators for Hn\] 1. For $n=2,$ $\mathcal{H}{(2)}^{\mathbb{Z}_{2}}$ has a minimal strong generating set $$\label{strong generating set h2} \{\omega^{1,1}_{0,0},\omega^{1,2}_{0,0}, \omega^{1,2}_{0,1},\omega^{1,2}_{0,2},\omega^{2,2}_{0,0},\omega^{2,2}_{0,2}\},$$ and is of type $\mathcal{W}(2^{3},3,4^{2})$. 2. For $n\geq3,$ $\mathcal{H}{(n)}^{\mathbb{Z}_{2}}$ has a minimal strong generating set $$\label{strong generating set hn} \{\omega_{0,0}^{i,i}|\ i=1,\dots ,n\} \bigcup \{\omega^{i,j}_{0,0}, \omega^{i,j}_{0,1}|\ 1\leq i< j\leq n\} \bigcup \{\omega_{0,2}^{1,1}\},$$ and is of type $\mathcal{W}(2^{n+\binom{n}{2}},3^{\binom{n}{2}},4).$ First, we consider the case $n = 2$. By replacing $\omega_{0,2m}$ with $\omega^{i,i}_{0,2m}$ in \eqref{PH14} and \eqref{n=1:higherdecoup} for $i = 1,2$, we obtain decoupling relations $$\omega^{i,i}_{0,2m} = P_{2m}(\omega^{i,i}_{0,0}, \omega^{i,i}_{0,2})$$ for $i = 1,2$ and all $m\geq 2$.
Then by Lemma \[strong generators for H(n)\], $\mathcal{H}{(2)}^{\mathbb{Z}_{2}}$ has a strong generating set $$\{\omega^{1,1}_{0,0},\omega^{1,1}_{0,2},\omega^{2,2}_{0,0},\omega^{2,2}_{0,2}\} \bigcup \{\omega^{1,2}_{0,m},m\geq 0\}.$$ Next, we have the following relation at weight $5$ among the generators of $\text{gr}(\mathcal{H}(2)^{\mathbb{Z}_{2}})$ $$\label{deal1} q^{1,2}_{0,0} q ^{2,2}_{0,1} -q^{1,2}_{0,1} q^{2,2}_{0,0} =0.$$ The corresponding element $: \omega^{1,2}_{0,0} \omega ^{2,2}_{0,1} :-: \omega^{1,2}_{0,1} \omega ^{2,2}_{0,0} :$ in $(\mathcal{H}(2)^{\mathbb{Z}_{2}})_{(2)}$ has a correction of the form $$\label{eq3} : \omega^{1,2}_{0,0} \omega ^{2,2}_{0,1} :-: \omega^{1,2}_{0,1} \omega ^{2,2}_{0,0} : \ =-\frac{1}{2} \omega ^{1,2}_{0,3}+{2} \partial \omega^{1,2}_{0,2}-\frac{5}{2} \partial ^{2} \omega^{1,2}_{0,1} + \partial ^{3} \omega^{1,2}_{0,0}.$$ This can be rewritten as follows: $$\label{eqQ4} \omega ^{1,2}_{0,3}= Q_3( \omega_{0,0}^{2,2},\omega^{1,2}_{0,0},\omega^{1,2}_{0,1},\omega^{1,2}_{0,2}),$$ where $Q_3( \omega_{0,0}^{2,2},\omega^{1,2}_{0,0},\omega^{1,2}_{0,1},\omega^{1,2}_{0,2})$ is a normally ordered polynomial in $\omega^{2,2}_{0,0}, \omega^{1,2}_{0,0},\omega^{1,2}_{0,1},\omega^{1,2}_{0,2}$, and their derivatives. Next, by applying the operator $\omega^{2,2}_{0,1}\circ_{1}$ repeatedly, we can construct decoupling relations $$\label{n=2:higherdecoup} \omega^{1,2}_{0,m} = Q_m(\omega_{0,0}^{2,2},\omega_{0,2}^{2,2},\omega^{1,2}_{0,0},\omega^{1,2}_{0,1},\omega^{1,2}_{0,2}),$$ for all $m>3$. This follows from the calculation $$\omega^{2,2}_{0,1}\circ_{1} \omega^{1,2}_{0,k}= -\omega^{1,2}_{0,k+1}.$$ Therefore $\omega^{1,2}_{0,m}$ for $m \geq 3$ are not necessary.
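The last calculation is an instance of \eqref{wjjij}: $\omega^{j,j}_{a,b}\circ_{m}\omega^{i,j}_{c,d}=\lambda_{a,b,d,m}\,\omega^{i,j}_{c,\,a+b+d+1-m}$ with $(a,b)=(0,1)$, $(c,d)=(0,k)$, $m=1$, where $\lambda_{0,1,k,1}=-(k+2)+(k+1)=-1$. A one-line numerical check (ours, for illustration):

```python
from math import factorial

def lam(a, b, c, m):
    """lambda_{a,b,c,m} from the circle-product lemma"""
    return ((-1) ** b * factorial(b + c + 1) // factorial(b + c + 1 - m)
            + (-1) ** a * factorial(a + c + 1) // factorial(a + c + 1 - m))

# w^{2,2}_{0,1} o_1 w^{1,2}_{0,k} = lam(0,1,k,1) * w^{1,2}_{0, 0+1+k+1-1}
for k in range(0, 12):
    assert lam(0, 1, k, 1) == -1
    assert 0 + 1 + k + 1 - 1 == k + 1
print("omega^{2,2}_{0,1} o_1 omega^{1,2}_{0,k} = -omega^{1,2}_{0,k+1} confirmed")
```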
Finally, we have the relation $$:\omega^{1,2}_{0,0}\omega^{1,2}_{0,0}:\ =\frac{1}{2}\omega^{1,1}_{0,2}+\frac{1}{2}\omega^{2,2}_{0,2}+:\omega^{1,1}_{0,0}\omega^{2,2}_{0,0}:.$$ This shows that $\omega^{1,1}_{0,2}$ is unnecessary, hence the set \eqref{strong generating set h2} suffices to strongly generate $\mathcal{H}(2)^{\mathbb{Z}_2}$. The fact that this set is a minimal strong generating set is clear, since the only relation of the form \eqref{ideal'} of weight less than $5$ is the one just used. Finally, we consider the case $n\geq 3$. As above, for $i = 1,\dots, n$ and $m \geq 2$ we have relations $$\omega^{i,i}_{0,2m} = P_{2m}(\omega^{i,i}_{0,0}, \omega^{i,i}_{0,2}).$$ So $\omega^{i,i}_{0,2m}$ can be eliminated for all $m\geq 2$. For $n\geq 3$, we can do a bit better than the relations \eqref{eqQ4} and \eqref{n=2:higherdecoup}, since we have the following relations at weight $4$ in $\text{gr}(\mathcal{H}(n)^{\mathbb{Z}_2})$ for all $i<j<k$, $$\label{deal2} q^{i,j}_{0,0} q ^{j,k}_{0,0} -q^{i,k}_{0,0} q^{j,j}_{0,0} =0.$$ The corresponding element $: \omega_{0,0}^{i,j} \omega _{0,0}^{j,k} :-: \omega_{0,0}^{i,k} \omega _{0,0}^{j,j} :$ lies in $(\mathcal{H}(n)^{\mathbb{Z}_{2}})_{(2)},$ and has a correction of the form $$\label{3copies} : \omega_{0,0}^{i,j} \omega _{0,0}^{j,k} :-: \omega_{0,0}^{i,k} \omega _{0,0}^{j,j} : \ =\frac{1}{2}\omega^{i,k}_{0,2}-\partial \omega^{i,k}_{0,1}+\frac{1}{2}\partial^{2}\omega^{i,k}_{0,0}.$$ We can clearly rewrite \eqref{3copies} in the form $$\label{equationQ'4} \omega^{i,k}_{0,2} = T_{2}( \omega_{0,0}^{j,j},\omega_{0,0}^{i,j} ,\omega _{0,0}^{j,k} ,\omega^{i,k}_{0,0},\omega^{i,k}_{0,1}),$$ where $T_{2}( \omega_{0,0}^{j,j},\omega_{0,0}^{i,j} ,\omega _{0,0}^{j,k} ,\omega^{i,k}_{0,0},\omega^{i,k}_{0,1})$ is a normally ordered polynomial in $ \omega_{0,0}^{j,j},\omega_{0,0}^{i,j} ,\omega _{0,0}^{j,k} ,\omega^{i,k}_{0,0},\omega^{i,k}_{0,1}$, and their derivatives.
As above, by applying the operator $\omega^{k,k}_{0,1} \circ_1$ repeatedly, we can construct relations $$\omega^{i,k}_{0,m} = T_m(\omega_{0,0}^{j,j},\omega_{0,0}^{i,j} ,\omega _{0,0}^{j,k},\omega _{0,1}^{j,k} ,\omega^{i,k}_{0,0},\omega^{i,k}_{0,1})$$ for all $m\geq 2$. This shows that $\omega^{i,k}_{0,m}$ can be eliminated for all $m\geq 2$. Finally, for all $j$ with $1<j \leq n$, we have the relation $$:\omega^{1,j}_{0,0}\omega^{1,j}_{0,0}:\ =\frac{1}{2}\omega^{1,1}_{0,2}+\frac{1}{2}\omega^{j,j}_{0,2}+:\omega^{1,1}_{0,0}\omega^{j,j}_{0,0}:.$$ This shows that $\omega^{j,j}_{0,2}$ can be eliminated for $1< j\leq n$. It follows that \eqref{strong generating set hn} strongly generates $\mathcal{H}(n)^{\mathbb{Z}_2}$ for $n\geq 3$. The fact that it is a minimal strong generating set is again clear, since there are no further relations of weight less than $5$ of the form \eqref{ideal'}. Notice that $\omega_{0,0}^{i,i},\omega_{0,2}^{i,i},\omega^{i,j}_{0,0}, \omega^{i,j}_{0,1}, \omega^{i,j}_{0,2}$ are not primary fields with respect to the Virasoro field $$L(z)=\frac{1}{2}\sum_{i= 1}^{n} :\alpha^{i}(z)\alpha^{i}(z): = \frac{1}{2}\sum_{i= 1}^{n} \omega^{i,i}_{0,0}.$$ It is easy to correct them to primary fields by adding normally ordered polynomials in the previous set and their derivatives. By a computer calculation, we obtain the following primary fields: $$\label{10} \begin{aligned} C^{k}&=\frac{1}{2}(\omega^{1,1}_{0,0}-\omega^{k,k}_{0,0}),\enspace \text{where}\enspace k=2,\dots ,n,\\ C^{i,i}_{0,2}&=\omega^{i,i}_{0,2}-\frac{2}{9}:\omega^{i,i}_{0,0}\omega^{i,i}_{0,0}:-\frac{1}{6}\partial^{2}\omega^{i,i}_{0,0},\enspace \text{where}\enspace i = 1,\dots ,n,\\ C^{i,j}_{0,0}&=\omega^{i,j}_{0,0},\\ C^{i,j}_{0,1}&=\omega^{i,j}_{0,1}-\frac{1}{2}\partial \omega^{i,j}_{0,0},\\ C^{i,j}_{0,2}&=\omega^{i,j}_{0,2}-\frac{4}{9}:\omega^{i,j}_{0,0}\omega^{j,j}_{0,0}:+\frac{5}{9}\partial^{2}\omega^{i,j}_{0,0}-\frac{13}{9}\partial\omega^{i,j}_{0,1}.
\end{aligned}$$ The Cartan involution and its extension to $V^k(\lie{g})$ ========================================================= Let us consider a simple Lie algebra $\lie{g}$ as above, with $l = \text{rank}(\lie{g})$ and $m$ the number of positive roots. With respect to a choice of base for the root system $\Phi$, we have the triangular decomposition $$\lie{g}=\lie{h}\oplus \lie{n}_{+} \oplus \lie{n}_{-},$$ where $\lie{h}$ is the Cartan subalgebra with basis $h_r$, $r = 1,\dots, l$, and $\lie{n}_+$ has basis $x_{\beta_i}$ for $i = 1,\dots, m$, and $\lie{n}_-$ has basis $y_{\beta_i}$ for $i = 1,\dots, m$. The Cartan involution $\theta$ of $\lie{g}$ is defined as follows: $$\theta(x_{\beta_{i}})=-y_{\beta_{i}}, \enspace \enspace \enspace \theta(y_{\beta_{i}})=-x_{\beta_{i}}, \enspace \enspace \enspace \theta(h_{r})=-h_{r}.$$ Since $\theta$ preserves the Lie bracket as well as the normalized Killing form, it extends to an automorphism of the vertex algebra $V^k(\lie{g})$ given by the same formula, where $h_r, x_{\beta_i}, y_{\beta_i}$ are now considered as the generating fields for $V^k(\lie{g})$. To replace the generators $h_r, x_{\beta_i}, y_{\beta_i}$ of $V^k(\lie{g})$ with a set of eigenvectors for $\theta,$ we apply the linear change of variables $$E_{\beta_{i}}= x_{\beta_{i}}+y_{\beta_{i}}, \enspace \enspace \enspace F_{\beta_{i}}=x_{\beta_{i}}-y_{\beta_{i}},$$ while the generators $h_{r}$ are unchanged. The action of $\theta$ on the new generators is as follows: $$\theta(E_{\beta_{i}})=-E_{\beta_{i}},\enspace \enspace \enspace \theta(F_{\beta_{i}})=F_{\beta_{i}}, \enspace \enspace \enspace \theta(h_{r})=-h_{r}.$$ There is a PBW basis consisting of normally ordered monomials in the new generators and their derivatives, since the new generators are related to the old ones by a linear change of variables. Note that the fields $F_{\beta_i}$ lie in the orbifold $V^k(\lie{g})^{\mathbb{Z}_2}$.
Define additional generators of $V^{k}(\lie{g})^{\mathbb{Z}_2}$ as follows: $$\begin{aligned} Q_{a,b}^{\beta_{i},\beta_{j}}&= \ :\partial^{a} E_{\beta_{i}}(z) \partial^{b} E_{\beta_{j}}(z):,\\ Q_{a,b}^{h_{r},\beta_{i}}&= \ : \partial^{a} h_{r}(z)\partial^{b} E_{\beta_{i}}(z):,\\ Q_{a,b}^{h_{r},h_{s}}&= \ :\partial^{a} h_{r}(z) \partial^{b} h_{s}(z):, \end{aligned}$$ each of which has weight $a+b+2.$ In the special case $\lie{g} = \mathfrak{sl}_2$, there is only one positive root $\beta.$ An immediate consequence is that there is one element $F_{\beta}$, one element $E_{\beta}$, and one basis vector $h$ for $\lie{h}$. The above elements can be given as follows: $$\begin{aligned} Q_{a,b}^{\beta,\beta}&= \ :\partial^{a} E_{\beta}(z) \partial^{b} E_{\beta}(z):,\\ Q_{a,b}^{h,\beta}&= \ : \partial^{a} h(z)\partial^{b} E_{\beta}(z):,\\ Q_{a,b}^{h,h}&= \ :\partial^{a} h(z) \partial^{b} h(z):. \end{aligned}$$ The $\mathbb{Z}_2$-orbifold of $V^k(\lie{g})$ ============================================= At this point, we are ready to state our main theorem, which describes $V^{k}(\lie{g})^{\mathbb{Z}_{2}}$ for generic values of $k$. \[ch5:mainresult\] Let $\lie{g}$ be a simple, finite-dimensional Lie algebra, let $l = \text{rank}(\lie{g})$ and $m$ be the number of positive roots, and set $d = m+l$. 1. For $\lie{g} \neq \mathfrak{sl}_2$, and $k$ generic, $V^k(\lie{g})^{\mathbb{Z}_{2}}$ has a minimal strong generating set $$\{F_{\beta_{i}},Q_{0,0}^{\beta_{a},\beta_{b}},Q_{0,2}^{\beta_{1},\beta_{1}},Q_{0,0}^{h_{r},h_{s}},Q_{0,0}^{h_{t},\beta_{u}},Q_{0,1}^{h_{t},\beta_{u}}\} ,$$ for $1\leq i \leq m$, $1\leq a \leq b \leq m$, $1\leq r \leq s\leq l$, $1\leq t \leq l$, and $1\leq u \leq m$. In particular, $V^k(\lie{g})^{\mathbb{Z}_{2}}$ is of type $\mathcal{W}(1^{m},2^{d+ \binom{d}{2}},3^{ \binom{d}{2}},4)$. 2.
For $\lie{g}=\mathfrak{sl}_{2}$ and $k$ generic, $V^{k}(\lie{g})^{\mathbb{Z}_{2}}$ has a minimal strong generating set $$\{F,Q^{\beta,\beta}_{0,0},Q^{h,h}_{0,0},Q^{h,h}_{0,2},Q^{\beta,h}_{0,0},Q^{\beta,h}_{0,1},Q^{\beta,h}_{0,2}\} ,$$ and in particular, is of type $\mathcal{W}(1,2^{3},3,4^{2}).$ Let $n = 2m+l = \text{dim}(\lie{g})$. By the deformation argument of [@LII], we have $$\lim_{k \to \infty}V^{k}(\lie{g}) \cong \mathcal{H}(n).$$ Here $\mathcal{H}(n)$ is the rank $n$ Heisenberg algebra with generators $F_{\beta_i}, E_{\beta_i}$ and $h_r$. We shall use the same symbols for the limits of these fields when no confusion can arise. Moreover, $\mathbb{Z}_2$ acts trivially on the rank $m$ Heisenberg subalgebra generated by $\{F_{\beta_i}\ |\ i = 1,\dots, m\}$, and it acts by $-1$ on the rank $d = m+ l$ Heisenberg algebra with generators $\{E_{\beta_i}, h_r\ |\ i = 1,\dots, m,\ \ r = 1,\dots, l\}$. We get $$\lim_{k \to \infty}V^{k}(\lie{g})^{\mathbb{Z}_2} \cong \mathcal{H}(m) \otimes \big(\mathcal{H}(d)^{\mathbb{Z}_2}\big).$$ In the limit $k\rightarrow \infty$, the fields $F_{\beta_i}$ are the generators of $\mathcal{H}(m)$, and the remaining quadratic fields are precisely the generators for $\mathcal{H}(d)^{\mathbb{Z}_2}$. The claim in both cases then follows by Theorem \[strong generators for Hn\]. For the reader’s convenience, the following table shows the dimension, rank $l$, and number of positive roots $m$ for each simple Lie algebra in the Cartan-Killing classification.
  Lie algebra              Dimension       Rank $l$   Number of positive roots $m$
  ------------------------ --------------- ---------- ------------------------------
  $\mathfrak{sl}_{n+1}$    $(n+1)^{2}-1$   $n$        $\frac{n^{2}+n}{2}$
  $\mathfrak{so}_{2n+1}$   $n(2n+1)$       $n$        $n^{2}$
  $\mathfrak{sp}_{2n}$     $n(2n+1)$       $n$        $n^{2}$
  $\mathfrak{so}_{2n}$     $n(2n-1)$       $n$        $n^{2}-n$
  $G_{2}$                  $14$            $2$        $6$
  $F_{4}$                  $52$            $4$        $24$
  $E_{6}$                  $78$            $6$        $36$
  $E_{7}$                  $133$           $7$        $63$
  $E_{8}$                  $248$           $8$        $120$
  ------------------------ --------------- ---------- ------------------------------

The nongeneric set for $V^{k}(\mathfrak{sl}_{2})^{\mathbb{Z}_{2}}$ ================================================================== In this section we work in the usual root basis for $\mathfrak{sl}_{2},$ so we change our notation here. Let $\{x,y,h \}$ be an ordered basis of $\mathfrak{sl}_{2}$ satisfying the following commutation relations: $$[x,y]=h,\enspace [h,x]=2x, \enspace [h,y]=-2y.$$ Let $X^{x}, X^{y},X^{h}$ be the generating fields for $V_{k}(\mathfrak{sl}_{2}),$ each of conformal weight $1,$ satisfying the OPE relations $$\label{opesl2} X^{x}(z) X^{y}(w) \sim k (z-w)^{-2}+X^{h}(w)(z-w)^{-1},$$ $$X^{h}(z)X^{x}(w) \sim 2 X^{x}(w) (z-w)^{-1},$$ $$X^{h}(z)X^{y}(w)\sim -2 X^{y}(w) (z-w)^{-1},$$ $$\label{opesl2'} X^{h}(z)X^{h}(w)\sim 2k (z-w)^{-2}.$$ The affine vertex algebra $V_{k}(\mathfrak{sl}_{2})$ is freely generated by the even generators $X^{x}, X^{y},X^{h}$, and in particular it has a PBW basis as follows: $$\label{basissl2} :\partial^{k^{1}_{1}} X^{x}\cdots \partial^{k^{1}_{s_{1}}} X^{x}\partial^{k^{2}_{1}} X^{y}\cdots \partial^{k^{2}_{s_{2}}} X^{y} \partial^{k^{3}_{1}} X^{h}\cdots \partial^{k^{3}_{s_{3}}} X^{h}:,$$ where $s_{i} \geq 0$ and $k^{i}_{1}\geq \cdots \geq k^{i}_{s_{i}} \geq 0$ for $i=1,2,3.$ The action of $\theta$ is given by $$\theta(X^{x})= -X^{y},\enspace \enspace\enspace \enspace \theta(X^{y})= -X^{x},\enspace
\enspace\enspace \enspace \theta(X^{h})=-X^{h}.$$ Changing to the basis of eigenvectors yields: $$\label{newbasis} G=X^{x}+ X^{y},\enspace \enspace \enspace \enspace F=X^{x}-X^{y}, \enspace \enspace \enspace \enspace H=X^{h}.$$ The nontrivial involution $\theta$ acts on the new generators as follows: $$\theta(G)=- G,\enspace \enspace\enspace \enspace \theta(F)= F,\enspace \enspace\enspace \enspace \theta(H)=-H.$$ Define $$\begin{aligned} Q_{i,j}&= \ :\partial^{i} G(z) \partial^{j} G(z):\ \in (V^{k}(\mathfrak{sl}_{2})^{\mathbb{Z}_{2}})_{(2)},\\ U_{i,j}&= \ :\partial^{i} H(z) \partial^{j}H(z):\ \in (V^{k}(\mathfrak{sl}_{2})^{\mathbb{Z}_{2}})_{(2)},\\ V_{i,j}&= \ :\partial^{i} H(z) \partial^{j}G(z):\ \in (V^{k}(\mathfrak{sl}_{2})^{\mathbb{Z}_{2}})_{(2)} \end{aligned}$$ as new generators for $V^{k}(\mathfrak{sl}_{2}),$ each of which has weight $i+j+2.$ By Theorem \[ch5:mainresult\], the strong generators for $V^k(\mathfrak{sl}_2)^{\mathbb{Z}_{2}}$ are $\{F,Q_{0,0},U_{0,0},U_{0,2},V_{0,0},V_{0,1},V_{0,2}\}$. These fields close under OPE in the sense that for any $\alpha_1, \alpha_2$ in the above set, each term in the OPE of $\alpha_1(z) \alpha_2(w)$ can be expressed as a linear combination of normally ordered monomials in these generators. The coefficients of these monomials are called the [*structure constants*]{} of the OPE algebra, and they are all rational functions of $k$. We can now determine the set of nongeneric values of $k$ at which this strong finite generating set for $V^{k}(\mathfrak{sl}_{2})^{\mathbb{Z}_{2}}$ fails. By Theorem 5.3 of [@CL], the only nongeneric values of $k$ are the poles of these structure constants. Clearly there are at most finitely many such poles. Using K. Thielemans’ Mathematica package [@T], the full OPE algebra among these generators was calculated, and we find that the poles of the structure constants lie in the set $$\{0,\frac{16}{51},\frac{16}{9},-\frac{32}{3}\}.$$ It follows that all other values of $k$ are generic.
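As a quick consistency check on these OPE computations (a verification added here, not part of the original calculation), the OPEs of the eigenvector basis $G, F, H$ follow by bilinearity from the OPE relations of $X^x, X^y, X^h$ stated above:

```latex
\begin{aligned}
G(z)G(w) &\sim 2k(z-w)^{-2},     & F(z)F(w) &\sim -2k(z-w)^{-2},\\
H(z)H(w) &\sim 2k(z-w)^{-2},     & G(z)F(w) &\sim -2H(w)(z-w)^{-1},\\
H(z)G(w) &\sim 2F(w)(z-w)^{-1},  & H(z)F(w) &\sim 2G(w)(z-w)^{-1}.
\end{aligned}
```

For example, in $G(z)F(w)$ the double pole vanishes because the pairing $(x+y,\,x-y)$ is zero, while the simple pole carries $[x+y,\,x-y]=-2h$.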
An immediate consequence is the following: For $k\neq0,\frac{16}{51},\frac{16}{9},-\frac{32}{3},$ $V^{k}(\lie{sl}_{2})^{\mathbb{Z}_{2}}$ is of type $\mathcal{W}(1,2^3, 3, 4^2)$. Acknowledgements ================ I would like to express my deep appreciation and gratitude to Prof. Linshaw for his immense knowledge, useful discussions, and valuable suggestions throughout this work. [ABKS]{} T. Arakawa, *Rationality of $\cW$-algebras: principal nilpotent cases*, Ann. Math. vol. 182, no. 2 (2015), 565-604. T. Arakawa, T. Creutzig, and A. Linshaw, *Cosets of Bershadsky-Polyakov algebras and rational $\cW$-algebras of type $A$*, Selecta Math. New Series, 23, No. 4 (2017), 2369-2395. T. Arakawa, T. Creutzig, K. Kawasetsu, and A. Linshaw, *Orbifolds and cosets of minimal $\cW$-algebras*, Comm. Math. Phys. 355, No. 1 (2017), 339-372. M. Al-Ali and A. Linshaw, *The $\mathbb{Z}_{2}$-orbifold of the $\cW_{3}$-algebra*, Comm. Math. Phys. 353, No. 3 (2017), 1129-1150. R. Borcherds, *Vertex operator algebras, Kac-Moody algebras and the monster*, Proc. Nat. Acad. Sci. USA 83 (1986) 3068-3071. T. Creutzig and A. Linshaw, *Cosets of affine vertex algebras inside larger structures*, arXiv:1407.8512v4. S. Carnahan and M. Miyamoto, *Regularity of fixed-point vertex operator algebras*, arXiv:1603.05645. L. Dixon, J. Harvey, C. Vafa, and E. Witten, *Strings on orbifolds*, Nucl. Phys. B 261 (1985) 678-686. C. Dong, H. Li, and G. Mason, *Compact automorphism groups of vertex operator algebras*, Int. Math. Res. Not. 18 (1996), 913-921. C. Dong, H. Li, and G. Mason, *Twisted representations of vertex operator algebras*, Math. Ann. 310 (1998), 571-600. C. Dong and G. Mason, *On quantum Galois theory*, Duke Math. J. 86 (1997), 305-321. C. Dong and K. Nagatomo, *Classification of irreducible modules for the vertex operator algebra $M(1)_{+}$*, J. Algebra 216 (1999) no. 1, 384-404. C. Dong and K. Nagatomo, *Classification of irreducible modules for the vertex operator algebra $M(1)^+$ II. 
Higher rank*, J. Algebra 240 (2001) no. 1, 289-325. C. Dong, L. Ren, and F. Xu, *On orbifold theory*, Adv. Math. 321 (2017), 1-30. R. Dijkgraaf, C. Vafa, E. Verlinde, and H. Verlinde, *The operator algebra of orbifold models*, Comm. Math. Phys. 123 (1989), 485-526. E. Frenkel and D. Ben-Zvi, *Vertex Algebras and Algebraic Curves*, Math. Surveys and Monographs, Vol. 88, American Math. Soc., 2001. I. B. Frenkel, J. Lepowsky, and A. Meurman, *Vertex Operator Algebras and the Monster*, Academic Press, New York, 1988. I. B. Frenkel and Y. C. Zhu, *Vertex operator algebras associated to representations of affine and Virasoro algebras*, Duke Math. J, Vol. 66, No. 1, (1992), 123-168. V. Kac, *Vertex Algebras for Beginners*, University Lecture Series, Vol. 10. American Math. Soc., 1998. H. Li, *Local systems of vertex operators, vertex superalgebras and modules*, J. Pure Appl. Algebra 109 (1996), no. 2, 143-195. H. Li, *Vertex algebras and vertex Poisson algebras*, Commun. Contemp. Math. 6 (2004) 61-110. A. Linshaw, *Invariant theory and the Heisenberg vertex algebra*, Int. Math. Res. Notices, 17 (2012), 4014-4050. A. Linshaw, *Invariant subalgebras of affine vertex algebras*, Adv. Math. 234 (2013), 61-84. B. Lian and A. Linshaw, *Howe pairs in the theory of vertex algebras*, J. Algebra 317, 111-152 (2007). B. Lian and G. Zuckerman, *Commutative quantum operator algebras*, J. Pure Appl. Algebra 100 (1995) no. 1-3, 117-139. M. Miyamoto, *$C_2$-cofiniteness of cyclic orbifold models*, Comm. Math. Phys. 335 (2015) 1279-1286. K. Thielemans, *A Mathematica package for computing operator product expansions*, Int. Jour. Mod. Phys. C2 (1991) p.787.
--- abstract: 'Mining tasks over sequential data, such as clickstreams and gene sequences, require a careful design of embeddings usable by learning algorithms. Recent research in feature learning has been extended to sequential data, where each instance consists of a sequence of heterogeneous items with a variable length. However, many real-world applications often involve *attributed sequences*, where each instance is composed of both a sequence of categorical items and a set of attributes. In this paper, we study this new problem of *attributed sequence embedding*, where the goal is to learn the representations of attributed sequences in an unsupervised fashion. This problem is core to many important data mining tasks ranging from user behavior analysis to the clustering of gene sequences. This problem is challenging due to the dependencies between sequences and their associated attributes. We propose a deep multimodal learning framework, called , to produce embeddings of attributed sequences. The embeddings are task independent and can be used on various mining tasks of attributed sequences. We demonstrate the effectiveness of our embeddings of attributed sequences in various unsupervised learning tasks on real-world datasets.' author: - - bibliography: - 'ref.bib' title: Attributed Sequence Embedding --- Sequence, Embedding, Attributed sequence. Introduction {#intro} ============ Sequential data arise naturally in a wide range of applications [@bechet2015sequence; @miliaraki2013mind; @wang2016unsupervised; @wei2013effective]. Examples of sequential data include click streams of web users, purchase histories of online customers, and DNA sequences of genes. Different from conventional multidimensional data [@pedersen1999multidimensional], sequential data [@yang2003cluseq] are not represented as feature vectors of continuous values, but as sequences of categorical items with variable lengths.
Many real-world applications involve mining tasks over sequential data [@wei2013effective; @tajer2014outlying; @wang2016unsupervised]. For example, in online ticketing systems, administrators are interested in finding fraudulent sequences from the clickstreams of users. In user profiling systems, researchers are interested in grouping purchase histories of customers into clusters. Motivated by these real-world applications, sequential data mining has received considerable attention in recent years [@miliaraki2013mind; @bechet2015sequence]. Sequential data usually requires a careful design of its embedding before being fed to data mining algorithms. One of the feature learning problems on sequential data is called sequence embedding [@cho2014learning; @sutskever2014sequence], where the goal is to transform a sequence into a fixed-length embedding. ![An example of attributed sequences. The three types of dependencies in an attributed sequence: [ item]{} dependencies, [ attribute]{} dependencies and [attribute]{}-[sequence]{} dependencies. []{data-label="fig-complexrel"}](complex-relations.pdf){width="0.9\linewidth"} Conventional methods on sequence embedding focus on learning from sequential data alone [@kalchbrenner2013recurrent; @cho2014learning; @sutskever2014sequence; @luong2015multi]. However, in many real-world applications, sequences are often associated with a set of attributes. We define such data as *attributed sequences*, where each instance is represented by a set of *attributes* associated with a *sequence*. For example, in online ticketing systems as shown in Fig. \[fig-complexrel\], each user transaction includes both a sequence of user actions (*e.g.*, “`login`”, “`search`” and “`pick seats`”) and a set of attributes (*e.g.*, “`user name`”, “`browser`” and “`IP address`”) indicating the context of the transaction. 
In gene function analysis, each gene can be represented by both a DNA sequence and a set of attributes indicating the expression levels of the gene in different types of cells. Motivated by the recent success in attributed graph embedding [@gibert2012graph; @perozzi2014deepwalk], in this paper, we study the problem of attributed sequence embedding. Building embeddings for attributed sequences (as shown in Fig. \[fig-sota-nas\]) corresponds to transforming an attributed sequence into a fixed-length embedding with continuous values. Different from the work in [@zhuang2018one; @zhuang2019amas], we do not have labels for any attributed sequence instances in the embedding task. Sequence embedding problems are particularly challenging with additional attributes. In sequence embedding problems (as shown in Fig. \[fig-sota-seqonly\]), conventional methods focus on modeling the *item dependencies*, [[*i.e.*]{}]{}, the dependencies between different items within a sequence. However, in attributed sequences, the dependencies between items can be different if the sequence is observed under different contexts (attributes). Even the same ordering of the items can have different meanings if associated with different attribute values. In this paper, instead of building embeddings to model only the dependencies between items in each single sequence, we aim to model three types of dependencies in an attributed sequence jointly: (1) *item dependencies*, (2) *attribute dependencies* ([[*i.e.*]{}]{}, the dependencies between different attributes) and (3) *attribute-sequence dependencies* ([[*i.e.*]{}]{}, the dependencies between attributes and items in a sequence).

![Comparison of different embedding problems (panels: sota-seq.pdf, sota-attr.pdf, sota-ts.pdf, sota-nas.pdf).](sota-seq.pdf){width="100.00000%"} \[fig-as-setting\]

Despite its relevance, the problem of producing attributed sequence embeddings in an unsupervised setting remains open. We summarize the major research challenges as follows: 1. **Heterogeneous Dependencies.** The bipartite structure of attributed sequences poses unique challenges in feature learning. As shown in Fig. \[fig-complexrel\], there exist three types of possible dependencies in an attributed sequence: item dependencies, attribute dependencies and attribute-sequence dependencies. **Motivating Example 1.** In Fig. \[fig-as2vec\], we present an example of fraud detection from a user privilege management system in [@amadeus]. This system logs each user session as an attributed sequence (denoted as $J_1 \sim J_5$). Each attributed sequence consists of a sequence of user activities and a set of attributes derived from metadata values. The attributes ([[*e.g.*]{}]{}, “`IP`”, “`OS`” and “`Browser`”) are recorded when a user logs into the system and remain unchanged during each user session. We use different shapes and colors to denote different user activities, [[*e.g.*]{}]{}, “`Reset password`”, “`Delete a user`”. In real-world applications like this, the attributes and the associated sequences are already saved within one integrated record. An important step in this fraud detection system is to “*red flag*” suspicious user sessions for potential security breaches. In Fig. \[fig-as2vec\], we observe three groups of embeddings learned from the [ ]{}application logs. For each group, we use a dendrogram to demonstrate the similarities between embeddings within that group. Neither the embeddings using only sequences nor those using only attributes detect any outliers, due to the lack of consideration of attribute-sequence dependencies.
However, user session $J_5$ is discovered to be fraudulent using a learning algorithm that incorporates all three types of dependencies. ![Dendrograms of embeddings learned from attributed sequences for fraud detection tasks. $J_5$ is a user committing fraud. However, it is considered a normal user session by the embedding generated using either only attributes or only sequences. $J_5$ can only be caught as a fraud instance using the embedding learned using both attributes and sequences.[]{data-label="fig-as2vec"}](motivation-psa.pdf){width="0.7\linewidth"} 2. **Lack of Labeled Data.** With the continuously growing volume of data and the high labor cost of manually labeling data, it is rare to find attributed sequences from real-world applications with labels ([[*e.g.*]{}]{}, *fraud*, *normal*) attached. Without proper labels, it is challenging to learn an embedding function that is capable of transforming attributed sequences into compact embeddings respecting the three types of dependencies. **Motivating Example 2.** Continuing with our Motivating Example 1, the [ ]{}records user activities and their session metadata in the log files. Due to the large volume of entries and the complexity of user sessions, the log files do not have labels depicting whether a user session is fraudulent or not. Only when an embedding function is capable of transforming the unlabeled user sessions $J_1 \sim J_5$ while respecting the differences between them can an anomaly detection algorithm identify $J_5$ as a fraudulent session. In this paper, we focus on the *generic* problem of embedding attributed sequences in an unsupervised fashion. We propose a novel framework (called ) using deep learning models to address the above challenges. This paper offers the following contributions: - [We study the problem of attributed sequence embedding without any labels available. ]{} - We propose a framework and a training strategy to exploit the dependencies among the attributed sequences.
- We evaluate the embeddings generated by the [ ]{}framework on real-world datasets using outlier detection tasks. We also conduct case studies of user behavior analysis and demonstrate the usefulness of [ ]{}in real-world applications. Problem Formulation {#prob-form} =================== Preliminaries ------------- [\[plain-sequence\] Given a set of $r$ categorical items $\mathcal{I} = \{e_1,\!\cdots, e_r\}$, the $k$-th sequence in the dataset $S_k = \left( \alpha_k^{(1)}, \cdots, \alpha_k^{(l_k)} \right)$ is an ordered list of items, where $\alpha_k^{(t)} \in \mathcal{I}, \forall t = 1,\cdots, l_k$.]{} Different sequences can have a varying number of items. For example, the number of user click activities varies between different user sessions. The meanings of items differ between datasets. For example, in user behavior analysis from clickstreams, each item represents one action in a user’s click stream ([[*e.g.*]{}]{}, $\mathcal{I}=\{$`search`, `select`$\}$, where $r=2$). Similarly, in DNA sequencing, each item represents one canonical base ([[*e.g.*]{}]{}, $\mathcal{I}\!=\!\{\texttt{A},\! \texttt{T},\! \texttt{G},\! \texttt{C}\}$, where $r=4$). There are dependencies between items in a sequence. Without loss of generality, we use the one-hot encoding of $S_k$, denoted as $ \mathbf{S}_k = \big({\bm{\alpha}}_k^{(1)}, \cdots, {\bm{\alpha}}_k^{(l_k)}\big) \in \mathbb{R}^{l_k \times r} $, where each item ${\bm{\alpha}}_k^{(t)}\in \mathbb{R}^r$ in $\mathbf{S}_k$ is a one-hot vector corresponding to the original item $\alpha_k^{(t)}$ in the sequence $S_k$. Additionally, each sequence is associated with a set of *attributes*. Each attribute value can be either categorical or numerical. The attribute values are denoted using a vector $\mathbf{x}_k \in \mathbb{R}^{u}$, where $u$ is the number of attributes in $\mathbf{x}_k$. For example, in a dataset where each instance has two attributes “`IP`” and “`OS`”, $u=2$.
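As a concrete illustration of the notation above, the one-hot encoding $\mathbf{S}_k$ can be sketched as follows (a minimal sketch; the item set and the session are illustrative, not taken from the datasets used in this paper):

```python
import numpy as np

def one_hot_sequence(seq, items):
    """Encode a sequence of categorical items as an (l_k x r) one-hot matrix S_k."""
    r = len(items)
    index = {item: i for i, item in enumerate(items)}
    S = np.zeros((len(seq), r))
    for t, item in enumerate(seq):
        S[t, index[item]] = 1.0  # row t is the one-hot vector alpha_k^(t)
    return S

# Illustrative item set I (r = 2) and one user session S_k (l_k = 3).
items = ["search", "select"]
S_k = one_hot_sequence(["search", "search", "select"], items)
# Each row is a one-hot vector; rows may repeat, and l_k varies per sequence.
```

Different sessions simply produce matrices with different numbers of rows, which is why the model downstream must handle variable-length inputs.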
With the attributes and sequences, we now formally define the *attributed sequences* (Def. \[attributed-sequence\]) and the *attribute-sequence dependencies* (Def. \[def-asr\]). [ \[attributed-sequence\] Given a vector of attribute values $\mathbf{x}_k$ and a sequence $\mathbf{S}_k$, an attributed sequence $J_k\! =\! (\mathbf{x}_k,\! \mathbf{S}_k)$ is an ordered pair of the attribute value vector $\mathbf{x}_k$ and the sequence $\mathbf{S}_k$.]{} [\[def-asr\] Given an attributed sequence $J_k \!=\! (\!\mathbf{x}_k,\! \mathbf{S}_k\!)$, the log likelihood of $J_k$ is $\log \Pr(\!\mathbf{x}_k,\! \mathbf{S}_k\!)$.]{} Problem Definition ------------------ The goal of attributed sequence embedding is to learn an embedding function that transforms each attributed sequence with a variable-length sequence of categorical items and a set of attributes into a compact representation in the form of a vector. However, these representations are only valuable if an embedding function is capable of learning all three types of dependencies. Hence, given a set of attributed sequences, we define the learning objective of the embedding function as a minimization of the aggregated negative log likelihood of all three types of dependencies. [\[def-main\] Given a dataset of attributed sequences $\mathcal{J}=\{J_1, \cdots, J_n\}$, the problem of attributed sequence embedding is to find an embedding function $\Theta$ with a set of parameters (denoted as $\theta$) that produces embeddings for $J_k$ in the form of vectors. The problem is formulated as: $$\label{equation-main} {\operatorname*{minimize}_{\theta}}\sum_{k=1}^{n} \sum_{t=1}^{l_k}-\log\Pr \left({\bm{\alpha}}_k^{(t)} | {\bm{\beta}}_k^{(t)}, \mathbf{x}_k\right) $$ where ${\bm{\beta}}_k^{(t)}\! = \!\left(\! {\bm{\alpha}}_k^{(t-1)},\! \cdots,\! {\bm{\alpha}}_k^{(1)}\right), \forall t = 2,\! \cdots,\! l_k$ represents the items prior to ${\bm{\alpha}}_k^{(t)}$ in the sequence. 
]{} Our problem can be interpreted as: we want to minimize the prediction error of the ${\bm{\alpha}}_k^{(t)}$ in each attributed sequence given the attribute values $\mathbf{x}_k$ and all the items prior to ${\bm{\alpha}}_k^{(t)}$. The [ ]{}Framework {#the-model} ================== Attribute Network {#rep-attr} ----------------- A fully connected neural network [@liou2014autoencoder] is capable of modeling the dependencies among its inputs while at the same time reducing the dimensionality. Fully connected neural networks have been widely used [@liou2008modeling; @liou2014autoencoder; @phan2016differential] for unsupervised representation learning, including tasks such as dimensionality reduction and generative data modeling. With the high-dimensional sparse input attribute values $\mathbf{x}_k \in \mathbb{R}^u$, it is ideal to use such a network to learn the attribute dependencies. We design our attribute network as: $$ \label{autoencoder-formula} \begin{split} \vspace{-2pt} \mathbf{V}_k^{(1)} &= \rho\left(\mathbf{W}_A^{(1)}\mathbf{x}_k + \mathbf{b}_A^{(1)}\right)\\[-10pt] &\vdots\\[-4pt] \mathbf{V}_k^{(M+1)} &= \sigma\left(\mathbf{W}_A^{(M+1)}\mathbf{V}_k^{(M)} + \mathbf{b}_A^{(M+1)}\right)\\[-10pt] &\vdots\\[-4pt] \widehat{\mathbf{x}_{k}} &= \sigma\left(\mathbf{W}_A^{(2M)}\mathbf{V}_k^{(2M-1)} + \mathbf{b}_A^{(2M)}\right) \end{split} $$ where $\rho$ and $\sigma$ are two activation functions. In this attribute network, we use the `ReLU` function proposed in [@nair2010rectified] (defined as $\rho(z) = \max(0,z)$) and the `sigmoid` function (defined as $\sigma(z) = \frac{1}{1 + e^{-z}}$). The attribute network is an encoder-decoder stack with $2M$ layers, where the first $M$ layers compose the *encoder* and the next $M$ layers form the *decoder*. With $d_M$ hidden units in the $M$-th layer, the input attribute vector $\mathbf{x}_k \in \mathbb{R}^u$ is first transformed to $\mathbf{V}_k^{(M)}\in \mathbb{R}^{d_M}, d_M \ll u$ by the encoder.
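The forward pass of the smallest such stack ($M=1$) can be sketched as follows (a minimal numpy sketch; the sizes $u$ and $d_M$ and the random weights are illustrative only):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attribute_net_forward(x, W_enc, b_enc, W_dec, b_dec):
    """Forward pass of the smallest (M = 1) encoder-decoder stack."""
    V = relu(W_enc @ x + b_enc)         # encoder: u -> d_M, with d_M << u
    x_hat = sigmoid(W_dec @ V + b_dec)  # decoder: d_M -> u reconstruction
    return V, x_hat

# Illustrative sizes: u = 8 attributes compressed to d_M = 3 hidden units.
rng = np.random.default_rng(0)
u, d_M = 8, 3
V, x_hat = attribute_net_forward(
    rng.random(u),
    rng.standard_normal((d_M, u)), np.zeros(d_M),
    rng.standard_normal((u, d_M)), np.zeros(u),
)
```

Training then pushes `x_hat` toward `x`, so that the bottleneck `V` captures the attribute dependencies.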
Then the decoder attempts to reconstruct the input and produce the reconstruction result $\widehat{\mathbf{x}_k}\in \mathbb{R}^{u}$. An ideal attribute network should be able to reconstruct the input from $\mathbf{V}_k^{(M)}$. The smallest attribute network is built with $M=1$, where there is one layer of encoder and one layer of decoder. Sequence Network {#rep-multimodal} ---------------- The proposed sequence network is a variation of the long short-term memory model (LSTM) [@hochreiter1997long]. The sequence network takes advantage of the conventional LSTM to learn the dependencies between items in sequences. The conventional LSTM model [@hochreiter1997long] is defined as $$\label{seqnet-formula-a} \begin{split} \mathbf{i}_k^{(t)} &= \sigma\left(\mathbf{W}_i{\bm{\alpha}}_k^{(t)} + \mathbf{U}_i\mathbf{h}_k^{(t-1)} + \mathbf{b}_i\right) \\[-3pt] \mathbf{f}_k^{(t)} &= \sigma\left(\mathbf{W}_f{\bm{\alpha}}_k^{(t)} + \mathbf{U}_f\mathbf{h}_k^{(t-1)} + \mathbf{b}_f\right) \\[-3pt] \mathbf{o}_k^{(t)} &= \sigma\left(\mathbf{W}_o{\bm{\alpha}}_k^{(t)} + \mathbf{U}_o\mathbf{h}_k^{(t-1)} + \mathbf{b}_o\right) \\[-3pt] \mathbf{g}_k^{(t)} &= \tanh\left(\mathbf{W}_g{\bm{\alpha}}_k^{(t)} + \mathbf{U}_g\mathbf{h}_k^{(t-1)} + \mathbf{b}_g\right) \\[-3pt] \mathbf{c}_k^{(t)} &= \mathbf{f}_k^{(t)} \odot \mathbf{c}_k^{(t-1)} + \mathbf{i}_k^{(t)} \odot \mathbf{g}_k^{(t)} \\[-3pt] \mathbf{h}_k^{(t)} &= \mathbf{o}_k^{(t)} \odot \tanh\left(\mathbf{c}_k^{(t)}\right) \\[-4pt] \end{split}$$ where $\odot$ denotes the element-wise product, $\sigma$ is a `sigmoid` activation function, and $\mathbf{i}_k^{(t)}, \mathbf{f}_k^{(t)}, $ $\mathbf{o}_k^{(t)}$ and $\mathbf{g}_k^{(t)}$ are the internal gates of an LSTM. The cell states (denoted as $\mathbf{c}_k^{(t)}$) and hidden states (denoted as $\mathbf{h}_k^{(t)}$), which store the information of the sequential data, are two important components in the LSTM model.
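One time step of these gate equations can be sketched in numpy as follows (a minimal sketch; the dimensions and random weights are illustrative, and the candidate gate uses $\tanh$ as in the conventional LSTM of [@hochreiter1997long]):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(alpha_t, h_prev, c_prev, W, U, b):
    """One LSTM time step following the gate equations above.

    W, U, b are dicts holding the parameters of gates i, f, o, g.
    """
    i = sigmoid(W["i"] @ alpha_t + U["i"] @ h_prev + b["i"])  # input gate
    f = sigmoid(W["f"] @ alpha_t + U["f"] @ h_prev + b["f"])  # forget gate
    o = sigmoid(W["o"] @ alpha_t + U["o"] @ h_prev + b["o"])  # output gate
    g = np.tanh(W["g"] @ alpha_t + U["g"] @ h_prev + b["g"])  # candidate
    c = f * c_prev + i * g          # element-wise products (the "odot" above)
    h = o * np.tanh(c)
    return h, c

# Illustrative dimensions: r = 4 items, d = 3 hidden units.
rng = np.random.default_rng(0)
r, d = 4, 3
W = {k: rng.standard_normal((d, r)) for k in "ifog"}
U = {k: rng.standard_normal((d, d)) for k in "ifog"}
b = {k: np.zeros(d) for k in "ifog"}
h, c = lstm_step(np.eye(r)[0], np.zeros(d), np.zeros(d), W, U, b)
```

Iterating this step over the rows of $\mathbf{S}_k$ threads the cell state through the whole sequence, which is what lets the network accumulate item dependencies.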
Without loss of generality, we denote the dimension of the cell states and the hidden states as $d$. **Integration of Attribute Network and Sequence Network.** Different from the conventional LSTM, our proposed sequence network also accepts the output from the attribute network to condition the sequence network. In particular, we have redesigned the function of the *hidden states* to integrate the information from the attribute network by conditioning the sequence network at the first time step as $$\small \label{seqnet-formula-b} \mathbf{h}_k^{(t)} = \mathbf{o}_k^{(t)} \odot \tanh\left(\mathbf{c}_k^{(t)}\right) + \mathds{1}(t=1)\odot\mathbf{V}_k^{(M)}$$ This integration requires the attribute network and the sequence network to have the same number of hidden units ([*i.e.*]{}, $d_M = d$). Since the attributed sequences are unlabeled, we designed the sequence network to predict *the next item in the sequence* as the training strategy. The prediction is carried out by an output layer applying a `softmax` function on the hidden states as $$\small \label{seqnet-formula-c} \mathbf{y}_k^{(t)} = \delta\left(\mathbf{W}_y\mathbf{h}_k^{(t)} + \mathbf{b}_y\right) \vspace{-2pt}$$ where $\mathbf{y}_k^{(t)} \in \mathbb{R}^{r}$ is the predicted next item in the sequence computed using the `softmax` function $\delta$, and $\mathbf{W}_y$ and $\mathbf{b}_y$ are the weights and bias of this output layer. With the `softmax` activation function, $\mathbf{y}_k^{(t)}$ can be interpreted as a probability distribution over the $r$ items. Training {#sec-backprop} -------- ### Training Objectives We use two different learning objectives for the attribute network and the sequence network, targeting the unique characteristics of attribute and sequence data. 1. The attribute network aims at minimizing the difference between the input and the reconstructed attribute values.
The learning objective function of the attribute network is defined as $$\small \label{loss-att-net} L_{A} = \|\mathbf{x}_k - \widehat{\mathbf{x}_k}\|_2^2$$ 2. The sequence network aims at minimizing the negative log likelihood of the next item at each time step. Thus, the sequence network learning objective function can be formulated using the categorical cross-entropy as $$\label{loss-seq-net} \small L_{S} = -\sum_{t=1}^{l_k} {\bm{\alpha}}_k^{(t)}\log\mathbf{y}_k^{(t)}$$ ### Embedding After the model is trained, we use the parameters of the attribute network and the sequence network to embed each attributed sequence. Specifically, the attributed sequences are used as inputs to the *trained* model with only one forward pass, during which the parameters of the model remain unchanged. After the last time step of an attributed sequence $J_k$, the cell state of the sequence network, namely $\mathbf{c}_k^{(l_k)}$, is used as the embedding of $J_k$. Experimental Evaluation {#experiments} ======================= In this section, we evaluate the [ ]{}framework using real-world application logs from [ ]{}and public datasets from Wikispeedia [@west2012human; @west2009wikispeedia]. We evaluate the quality of the embeddings generated by different methods by measuring the performance of outlier detection algorithms using each of the embeddings. Experimental Setup ------------------ ### Data Collection We use four datasets in our experiments: two from [ ]{}application log files and two from Wikispeedia[^1]. The numbers of attributed sequences in all four datasets vary between $\sim$58k and $\sim$106k. - <span style="font-variant:small-caps;">AMS-A/B</span>: We extract $\sim$164k instances from log files of an Amadeus internal application. Each record is composed of a profile containing information ranging from system configurations to office name, and a sequence of functions invoked by click activities on the web interface.
There are 288 distinct functions and 57,270 distinct profiles in this dataset. The average length of the sequences is 18. - <span style="font-variant:small-caps;">Wiki-A/B</span>: This dataset is sampled from the Wikispeedia dataset, which originated from a human-computation game called Wikispeedia [@west2009wikispeedia]. We use a subset of $\sim$3.5k paths from Wikispeedia, with an average path length of 6. We also extract 11 sequence contexts ([*e.g.*]{}, the category of the source page, average time spent on each page) as attributes. ![image](auc_k.pdf){width="\figwidth\linewidth"} ![image](auc_epoch.pdf){width="\figwidth\linewidth"} ![The performance of $k$-NN outlier detection ($k=5$). The *methods not using embeddings* are placed on the left. We vary the number of dimensions on the right. A higher score is better. We observe that the combinations of $k$-NN and [ ]{}embeddings have the best performance on the four datasets. []{data-label="fig-outlier-perf"}](auc_perf_bar.pdf){width="\linewidth"} ### Compared Methods To study [ ]{}performance on attributed sequences in real-world applications, we use the following compared methods in our experiments. - <span style="font-variant:small-caps;">LEN</span> [@akata2013label]: The attributes are encoded and directly used in the mining algorithm. - <span style="font-variant:small-caps;">MCC</span> [@bernhard2016clickstream]: <span style="font-variant:small-caps;">MCC</span> uses the sequence component of an attributed sequence as input and produces a log likelihood for each sequence. - <span style="font-variant:small-caps;">SEQ</span> [@sutskever2014sequence]: Only the sequence inputs are used by an LSTM to generate fixed-length embeddings. - <span style="font-variant:small-caps;">ATR</span> [@wang2014generalized]: A two-layered fully connected neural network is used to generate attribute embeddings.
- <span style="font-variant:small-caps;">EML</span> [@yager2014probabilistically]: Aggregates the <span style="font-variant:small-caps;">MCC</span> and <span style="font-variant:small-caps;">LEN</span> scores. - <span style="font-variant:small-caps;">CSA</span> [@ngiam2011multimodal]: The attribute embedding and the sequence embedding are independently generated by <span style="font-variant:small-caps;">ATR</span> and <span style="font-variant:small-caps;">SEQ</span>, and then concatenated. ### Network Parameters Following the previous work in [@glorot2010understanding], we initialize the weight matrices $\mathbf{W}_A$ and $\mathbf{W}_S$ using the uniform distribution. The recurrent matrix $\mathbf{U}_S$ is initialized as an orthogonal matrix, as suggested by [@saxe2013exact]. All bias vectors are initialized with the zero vector $\pmb0$. We use stochastic gradient descent as the optimizer with a learning rate of 0.01. We use a two-layer encoder-decoder stack as our attribute network. Outlier Detection Tasks {#outlier} ----------------------- We use outlier detection tasks to evaluate the quality of embeddings produced by . We select the $k$-NN outlier detection algorithm because it has only one important parameter ([*i.e.*]{}, the $k$ value). We use ROC AUC as the metric in this set of experiments. For each of the <span style="font-variant:small-caps;">AMS-A</span> and <span style="font-variant:small-caps;">AMS-B</span> datasets, we ask domain experts to select two users as the inlier and the outlier. These two users have completely different behaviors ([[*i.e.*]{}]{}, sequences) and metadata ([[*i.e.*]{}]{}, attributes). The percentages of outliers in <span style="font-variant:small-caps;">AMS-A</span> and <span style="font-variant:small-caps;">AMS-B</span> are 1.5% and 2.5% of all attributed sequences, respectively.
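As a concrete illustration of the initialization scheme described in the Network Parameters section, the following NumPy sketch (the dimensions are illustrative assumptions, not values from the paper) draws a weight matrix from a Glorot-style uniform distribution, builds the recurrent matrix as an orthogonal matrix via a QR decomposition, and zero-initializes the biases:

```python
import numpy as np

rng = np.random.default_rng(0)

def glorot_uniform(fan_in, fan_out):
    # Uniform initialization on [-limit, limit] with
    # limit = sqrt(6 / (fan_in + fan_out)), following Glorot & Bengio (2010).
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

def orthogonal(n):
    # Orthogonal initialization for a square recurrent matrix (Saxe et al. 2013):
    # QR-decompose a random Gaussian matrix and keep the orthogonal factor.
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))  # sign fix per column

u, d = 32, 15               # illustrative input / hidden dimensions
W_S = glorot_uniform(u, d)  # input-to-hidden weights of one gate
U_S = orthogonal(d)         # hidden-to-hidden (recurrent) matrix
b_S = np.zeros(d)           # biases start at the zero vector

# U_S is orthogonal: U_S @ U_S.T equals the identity (up to rounding).
assert np.allclose(U_S @ U_S.T, np.eye(d))
```

The sign correction on the QR factor is the standard trick to make the orthogonal factor uniformly distributed; a deep-learning framework's built-in initializers would serve the same purpose.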
For the <span style="font-variant:small-caps;">Wiki-A</span> and <span style="font-variant:small-caps;">Wiki-B</span> datasets, each path is labeled based on the category of the source page. As with the previous two datasets, we select paths with two labels as inliers and outliers, where the percentage of outlier paths is 2%. The feature used to label paths is excluded from the learning and embedding process. **Performance.** The goal of this set of experiments is to compare the performance of outlier detection across all the compared methods. Each method is trained with all the instances. For <span style="font-variant:small-caps;">SEQ</span>, <span style="font-variant:small-caps;">ATR</span> and , the number of learning epochs is set to 10 and we vary the number of embedding dimensions $d$ from 15 to 30. We set $k=5$ in the outlier detection tasks for <span style="font-variant:small-caps;">LEN</span>, <span style="font-variant:small-caps;">SEQ</span>, <span style="font-variant:small-caps;">ATR</span>, <span style="font-variant:small-caps;">CSA</span> and . Choosing the *optimal* $k$ value for outlier detection is beyond the scope of this work, so we omit this discussion. We summarize the performance results in Fig. \[fig-outlier-perf\]. **Analysis.** We find that the results based on the embeddings generated by [ ]{}are superior to those of the other methods. In particular, [ ]{}outperforms the other state-of-the-art algorithms by up to 32.9%, 27.5%, 44.8% and 48% on the <span style="font-variant:small-caps;">AMS-A</span>, <span style="font-variant:small-caps;">AMS-B</span>, <span style="font-variant:small-caps;">Wiki-A</span> and <span style="font-variant:small-caps;">Wiki-B</span> datasets, respectively. It is worth mentioning that [ ]{}outperforms the similar baseline method <span style="font-variant:small-caps;">CSA</span> by incorporating the information of attribute-sequence dependencies.
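A minimal sketch of the evaluation protocol behind these AUC numbers — scoring each embedded point by the distance to its $k$-th nearest neighbor and computing ROC AUC via the rank (Mann-Whitney) formulation — is given below; the embeddings and outlier labels are toy stand-ins for the learned embeddings and the expert labels:

```python
import numpy as np

def knn_outlier_scores(X, k=5):
    # Score each point by the distance to its k-th nearest neighbor:
    # points far from their neighbors receive high outlier scores.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d_sorted = np.sort(d, axis=1)  # column 0 is the self-distance (0)
    return d_sorted[:, k]

def roc_auc(labels, scores):
    # ROC AUC as the probability that a random outlier scores
    # higher than a random inlier (rank formulation).
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels.astype(bool)
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(1)
inliers = rng.normal(0.0, 0.5, size=(97, 15))   # toy 15-d embeddings
outliers = rng.normal(4.0, 0.5, size=(3, 15))   # ~3% outliers
X = np.vstack([inliers, outliers])
y = np.r_[np.zeros(97), np.ones(3)]             # 1 = outlier

auc = roc_auc(y, knn_outlier_scores(X, k=5))
print(f"ROC AUC = {auc:.2f}")
```

In practice a library routine (e.g. scikit-learn's `roc_auc_score`) would replace the hand-rolled AUC; it is written out here only to make the metric explicit.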
**Parameter Study.** There are two *key* parameters in our evaluations, [*i.e.*]{}, the $k$ value for the $k$-NN algorithm and the number of learning epochs. In Fig. \[exp-auc-k\], we first show that the embeddings (dimension $d=15$) generated by our [ ]{}help the $k$-NN outlier detection algorithm achieve superior performance over a wide range of $k$ values ($k=5, 10, 15, 20, 25$). Selecting the optimal $k$ value is beyond the scope of this work, so we omit a detailed discussion. Fig. \[exp-auc-epoch\] presents the results when we fix $k=5, d=15$ and vary the number of epochs in the learning phase. We notice that, compared to its competitors, the embeddings generated by [ ]{}achieve a higher AUC even with relatively few learning epochs. Compared to the other neural network-based methods ([*i.e.*]{}, <span style="font-variant:small-caps;">SEQ</span>, <span style="font-variant:small-caps;">ATR</span> and <span style="font-variant:small-caps;">CSA</span>), [ ]{}has more stable performance. The [ ]{}performance gain is not due merely to using both attributes and sequences, but to taking the various dependencies into account, as the other two competitors ([*i.e.*]{}, <span style="font-variant:small-caps;">CSA</span> and <span style="font-variant:small-caps;">EML</span>) also utilize the information from both attributes and sequences. Related Work {#related} ============ **Sequence Mining.** Much sequence mining work focuses on frequent sequential pattern mining. Recent work in [@miliaraki2013mind] targets finding subsequences of possibly non-consecutive actions constrained by a gap within sequences. [@egho2015parameter] aims at solving pattern-based sequence classification problems with a parameter-free algorithm that works in the model space: it defines rule pattern models and a prior distribution on the model space. [@fowkes2016subsequence] builds a subsequence interleaving model for mining the most relevant sequential patterns.
**Deep Learning.** Sequence-to-sequence learning [@sutskever2014sequence] uses the long short-term memory model in machine translation: the hidden representations of sentences in the source language are transferred to a decoder to reconstruct them in the target language. The idea is that the hidden representation can serve as a compact representation that transfers similarities between sequences. Multi-task learning in [@luong2015multi] examines three multi-task settings for sequence-to-sequence models that share either the encoder or the decoder in an encoder-decoder setting. Although the above work is capable of learning the dependencies within a sequence, none of it focuses on learning the dependencies between attributes and sequences. Attributed sequences, as a new bipartite data type, pose the new challenge of heterogeneous dependencies to sequence models such as RNNs and LSTMs. Multimodal deep neural networks [@karpathy2015deep; @ngiam2011multimodal; @xu2015show] are designed for information sharing across multiple neural networks, but none of these works focuses on our attributed sequence embedding problem. Conclusion ========== In this paper, we study the problem of *unsupervised attributed sequence embedding*. Different from conventional feature learning approaches, which work on either sequences or attributes without considering the *attribute-sequence dependencies*, we identify three types of dependencies in attributed sequences. We propose a novel framework, called , to learn the heterogeneous dependencies and embed unlabeled attributed sequences. Empirical studies on real-world tasks demonstrate that the proposed [ ]{}effectively boosts the performance of outlier detection tasks compared to baseline methods. [^1]: Personal identity information is not collected for privacy reasons.
--- author: - 'E. Sissa' - 'R. Gratton' - 'S. Desidera' - 'A. F. Martinez Fiorenzano' - 'A. Bonfanti' - 'E. Carolo' - 'D. Vassallo' - 'R.U. Claudi' - 'M. Endl' - 'R. Cosentino' bibliography: - 'biblio.bib' date: 'Received / Accepted ' title: '[H$_\alpha$]{}-activity and ages for stars in the SARG survey[^1]$^,$[^2] ' --- Introduction ============ Studying the variation in the radial velocity (RV) induced by chromospheric activity is important in order to distinguish it from the Keplerian motion of the star that may be caused by a planet [see e.g. @Queloz01; @Dumusque11; @2014Robertson]. On long timescales, active regions can modify the measured RVs by introducing a signal related to the stellar activity cycle, while on short timescales the rotational period can become evident. The most widely used activity indicators are based on the Ca II H&K lines [@2010Isaacson; @Lovis11; @Gomes11], which have been shown to correlate with the radial velocity jitter. Other lines have been investigated, and it was found that the [H$_\alpha$]{} line can be a good alternative [@1990Robinson; @1990Stassmeier; @2010Santos; @Gomes11]. However, the correlation of [H$_\alpha$]{} with the Ca II H&K indices is high for the most active stars, but decreases at lower activity levels and sometimes becomes an anti-correlation [@Gomes11]. Similar results were also found by [@2007Cincunegui], who added, using simultaneous observations of stars with spectral types later than F, that the correlation is lost when studying individual spectra of single stars and that there is no dependence on activity. The correlation between the averaged fluxes of the Ca II and [H$_\alpha$]{} lines can be clarified by considering the dependence of the two indices on the stellar colour or the spectral type, while the absence of a general relation between the simultaneous Ca II and [H$_\alpha$]{} indices can be due to differences in the formation regions of the two lines [@2007Cincunegui; @Gomes14].
Studying the solar spectrum as a prototype and extrapolating the results to other stars, [@2009Meunier] discovered that plages and filaments in the chromosphere contribute differently to the Ca II and [H$_\alpha$]{} lines: while plages contribute to the emission in all these lines, the absorption due to filaments is remarkable only in [H$_\alpha$]{}. Therefore, the saturation of the plage filling factor seems to enhance the correlation between the two indices in the case of high stellar activity and a low filament contribution. On the other hand, the anti-correlation between the emission in Ca II and [H$_\alpha$]{} for low-activity stars seems to depend only on a strong filament contrast, provided the filaments are well correlated with plages [see also @Gomes14]. In past years, a search for planets around the components of wide binaries was performed with SARG (Spettrografo Alta Risoluzione Galileo) at the Telescopio Nazionale Galileo (TNG). Two planetary companions were detected around and [@Desidera11; @Desidera12]. @Carolo14 found strong variations in the RVs of that could not be explained by a stable planetary system, but that were well correlated with a [H$_\alpha$]{}-based activity indicator, showing that they are due to a $\sim$1100-day activity cycle. Stimulated by this finding, we started a systematic analysis of [H$_\alpha$]{} in the binaries of the SARG sample to identify activity-induced RV variations and distinguish them from planetary signatures. We report here the main results of the activity study made within this survey. We also include measurements for additional stars observed by our group in other programs carried out with SARG.\ Observation and data reduction {#sec:data} ============================== SARG, the High Resolution Spectrograph at TNG, now decommissioned, worked for about 12 years beginning in 2000 [@Gratton01].
The SARG survey was the first planet-search program entirely dedicated to binary systems; it aimed to determine the frequency of giant planets up to a few AU separation from their star in nearly equal-mass visual binaries using high-precision radial velocities. The sample of the survey included 47 pairs of stars from the Hipparcos Multiple Star Catalog [@1997Perryman], considering binaries in the magnitude range 7.0 $<$ V $<$ 10.0, with magnitude differences between the components $\Delta$V $<$ 1.0, projected separations larger than 2" (to avoid strong contamination of the spectra), parallaxes larger than 10 mas with errors smaller than 5 mas, B - V $>$ 0.45 mag, and spectral types later than F7. For more details on the sample see [@Desidera2007]. The stars are typically at distances $<$ 50 pc from the Sun. Between September 2000 and April 2012 we collected up to 81 spectra per star, with a typical exposure time of 900 s, for a total of more than 6000 science images. In this work we also include six bright stars that were observed with SARG in a search for hot Neptunes [@GrattonHotNeptunes]. For these stars the integration time was set to 600 s, except for and , for which it was shorter to avoid saturation of the images because of their higher luminosity. , and were used as RV reference stars during the survey, and their signal-to-noise ratio (S/N) is typically greater than 270. In addition, was observed as a benchmark active star [@2005Martinez]. We decided to include this star in our sample as well. Our data set therefore consists of two subsamples: the binary sample and the bright star sample. The former is unbiased with respect to activity (except for , which was excluded because of its high rotation), while the latter is biased toward low-activity stars, except for .
For all the observations we used the SARG Yellow Grism (spectral range 4600-7900 Å) and the 0.27 arcsec slit, obtaining a resolution $R=144000$ with $2\times1$ pixel binning. The observed spectral range was covered by two chips. The blue chip included the spectral range used for the RV determination: the accuracy was provided by an iodine cell that superimposes a forest of absorption lines used as a reference for the AUSTRAL code [@Endl2000], as shown in @Desidera11. The red chip data are affected by fringing effects at wavelengths longer than $\sim7000$ Å; these were not used in our analysis. The depth of the iodine lines decreases toward longer wavelengths, and the lines are negligible at the wavelength of [H$_\alpha$]{}. Data reduction was performed with standard IRAF[^3] procedures. Stellar parameters {#sec:parameters} ================== [l\*[5]{}[c]{}rc]{} \ Star & V & B-V & [$\log R'_{HK}$]{}& method & $v\sin i$ & $T_{eff}$\ & & & & & \[km/s\] & \[K\]\ \ Star & V & B-V & [$\log R'_{HK}$]{}& method & $v\sin i$ & $T_{eff}$\ & & & & & \[km/s\] & \[K\]\ BD 182366A & 9.370 & 0.83 & -4.56 & U & 2.0 & 5308\ BD 182366B & 9.427 & 0.93 & -4.57 & U & 2.4 & 5290\ BD 222706A & 9.594 & 0.62 & -4.97 & D & 2.3 & 5943\ BD 222706B & 9.828 & 0.69 & -4.51 & U & 2.3 & 5674\ BD 231978A & 9.395 & 0.83 & -4.46 & D & 3.5 & 4886\ BD 231978B & 9.530 & 0.75 & -4.44 & U & 3.2 & 4911\ HD 105421A & 7.827 & 0.51 & -4.70 & U & 4.6 & 6324\ HD 105421B & 8.358 & 0.57 & -4.65 & U & 0.9 & 6102\ HD 106515A & 7.960 & 0.79 & -5.04 & D & 1.7 & 5314\ HD 106515B & 8.234 & 0.83 & -5.07 & D & 1.8 & 5157\ HD 108421A & 8.870 & 0.90 & -4.57 & X & 2.6 & 4700\ HD 108421B & 9.274 & 0.88 & -4.53 & X & 3.2 & 4779\ HD 108574 & 7.418 & 0.56 & -4.49 & D & 4.9 & 6205\ HD 108575 & 7.972 & 0.67 & -4.43 & X & 5.1 & 5895\ HD 109628A & 9.073 & 0.57 & -4.51 & U & 3.0 & 6109\ HD 109628B & 9.087 & 0.55 & -4.50 & U & 3.2 & 6127\ HD 117963A & 8.639 & 0.55 & -4.65 & U & 6.2 & 6180\ HD 117963B & 8.924 & 0.49 & -4.61 & U & 3.3 & 6097\ HD 
118328A & 9.147 & 0.62 & -4.61 & U & 0.7 & 5943\ HD 118328B & 9.426 & 0.69 & -4.59 & U & 0.7 & 5887\ HD 121298A & 8.604 & 0.50 & -4.91 & D & 1.9 & 6353\ HD 121298B & 8.937 & 0.52 & -4.87 & D & 1.3 & 6266\ HD 123963A & 8.758 & 0.62 & -4.63 & U & 1.6 & 5873\ HD 123963B & 9.511 & 0.60 & -4.55 & U & 1.4 & 5438\ HD 124054A & 8.399 & 0.58 & -4.97 & D & 2.7 & 6081\ HD 124054B & 8.785 & 0.64 & -5.02 & D & 2.5 & 5896\ HD 126246A & 7.466 & 0.54 & -4.40 & D & 7.9 & 6223\ HD 126246B & 7.697 & 0.60 & -4.51 & D & 3.9 & 6074\ HD 128041A & 8.059 & 0.71 & -4.53 & U & 3.4 & 5663\ HD 128041B & 8.827 & 0.78 & -4.51 & U & 3.2 & 5192\ HD 132563A & 8.948 & 0.54 & -4.62 & U & 3.9 & 6168\ HD 132563B & 9.402 & 0.57 & -4.62 & U & 3.4 & 5985\ HD 132844A & 9.022 & 0.55 & -4.66 & U & 3.4 & 5878\ HD 132844B & 9.114 & 0.63 & -4.61 & U & 2.4 & 5809\ HD 13357A & 8.180 & 0.67 & -4.70 & D & 1.7 & 5615\ HD 13357B & 8.647 & 0.73 & -4.61 & D & 1.8 & 5341\ HD 135101A & 6.656 & 0.69 & -4.99 & D & 2.3 & 5631\ HD 135101B & 7.500 & 0.75 & -5.07 & D & 1.1 & 5491\ HD 139569A & 8.482 & 0.54 & -4.55 & U & 8.5 & 6223\ HD 139569B & 8.783 & 0.57 & -4.52 & U & 5.6 & 5922\ HD 143144A & 8.856 & 0.62 & -4.61 & U & 1.9 & 5943\ HD 143144B & 9.025 & 0.61 & -4.59 & U & 1.2 & 5894\ HD 146413A & 9.260 & 0.88 & -4.68 & D & 2.1 & 4779\ HD 146413B & 9.492 & 0.87 & -4.60 & X & 2.0 & 4818\ HD 17159A & 8.775 & 0.54 & -4.64 & U & 3.4 & 6155\ HD 17159B & 8.923 & 0.53 & -4.62 & U & 2.9 & 6051\ HD 186858A & 8.368 & 0.96 & -4.73 & D & 3.6 & 4910\ HD 186858B & 8.578 & 0.93 & -4.62 & X & 3.0 & 4885\ HD 190042A & 8.755 & 0.73 & -4.71 & U & 3.5 & 5474\ HD 190042B & 8.778 & 0.80 & -4.72 & U & 3.8 & 5406\ HD 19440A & 7.874 & 0.47 & -4.73 & U & 4.5 & 6308\ HD 19440B & 8.574 & 0.53 & -4.66 & U & 2.9 & 6108\ HD 200466A & 8.399 & 0.74 & -4.77 & D & 2.0 & 5633\ HD 200466B & 8.528 & 0.76 & -4.69 & D & 2.1 & 5583\ HD 201936A & 8.648 & 0.48 & -4.55 & X & 8.8 & 6441\ HD 201936B & 8.851 & 0.50 & -4.53 & X & 15.5 & 6452\ HD 209965A & 7.980 & 0.55 & 
-4.96 & D & 4.2 & 6180\ HD 209965B & 8.414 & 0.57 & -4.59 & U & 2.1 & 6115\ HD 213013A & 8.982 & 0.81 & -4.59 & U & 1.7 & 5402\ HD 213013B & 9.612 & 0.93 & -4.53 & U & 2.3 & 4990\ HD 215812A & 7.275 & 0.64 & -4.66 & U & 1.3 & 5688\ HD 215812B & 7.576 & 0.71 & -4.64 & U & 1.5 & 5586\ HD 216122A & 8.062 & 0.58 & -4.73 & U & 6.5 & 6067\ HD 216122B & 8.186 & 0.58 & -4.71 & U & 4.6 & 6066\ HD 219542A & 8.174 & 0.64 & -5.07 & D & 2.1 & 5849\ HD 219542B & 8.547 & 0.72 & -4.81 & D & 1.9 & 5691\ HD 2770A & 9.566 & 0.61 & -4.39 & X & 2.8 & 5970\ HD 2770B & 9.660 & 0.73 & -4.39 & X & 3.9 & 5844\ HD 30101A & 8.782 & 0.82 & -4.72 & D & 1.9 & 5143\ HD 30101B & 8.848 & 0.91 & -4.79 & D & 2.2 & 5061\ HD 33334A & 8.023 & 0.70 & -4.99 & D & 1.9 & 5650\ HD 33334B & 8.857 & 0.80 & -4.63 & U & 1.7 & 5201\ HD 66491A & 9.253 & 0.75 & -4.65 & D & 2.5 & 5497\ HD 66491B & 9.312 & 0.67 & -4.58 & D & 2.4 & 5492\ HD 76037A & 7.688 & 0.50 & -5.14 & D & 7.9 & 6353\ HD 76037B & 8.269 & 0.50 & -5.03 & D & 9.5 & 6442\ HD 8009A & 8.819 & 0.64 & -4.96 & D & 0.2 & 5688\ HD 8009B & 9.724 & 0.82 & -4.95 & D & 0.0 & 5291\ HD 8071A & 7.312 & 0.57 & -4.74 & U & 5.5 & 6218\ HD 8071B & 7.573 & 0.60 & -4.71 & U & 6.0 & 6142\ HD 85441A & 8.907 & 0.70 & -4.60 & X & 1.3 & 5701\ HD 85441B & 9.284 & 0.71 & -4.56 & X & 1.6 & 5537\ HD 86057A & 8.839 & 0.60 & -4.49 & X & 6.0 & 6012\ HD 86057B & 9.676 & 0.73 & -4.40 & X & 4.6 & 5629\ HD 87743A & 8.734 & 0.62 & -4.71 & D & 2.5 & 5943\ HD 87743B & 8.890 & 0.60 & -4.59 & D & 3.0 & 5905\ HD 94399A & 9.407 & 0.61 & -4.54 & X & 3.2 & 5970\ HD 94399B & 9.306 & 0.71 & -4.56 & X & 3.6 & 6017\ HD 9911A & 9.428 & 0.90 & -4.60 & U & 1.3 & 5000\ HD 9911B & 9.448 & 0.89 & -4.60 & U & 1.3 & 4968\ HD 99121A & 8.162 & 0.46 & -4.67 & U & 6.7 & 6501\ HD 99121B & 9.018 & 0.47 & -4.57 & U & 5.0 & 6374\ HIP 104687A & 8.144 & 0.64 & -4.41 & D & 3.0 & 5870\ HIP 104687B & 8.189 & 0.71 & -4.48 & D & 3.4 & 5801\ 14 Her & 6.610 & 0.88 & -5.06 & D & 1.6 & 5388\ 40 Eri & 4.430 & 0.65 & -4.90 & D & 
0.5 & 5151\ 51 Peg & 5.450 & 0.67 & -5.08 & D & 2.0 & 5787\ 61 Cyg B & 6.030 & 1.31 & -4.95 & D & 1.7 & 4077\ 83 LeoA & 6.490 & 1.00 & -4.84 & D & 1.4 & 5502\ GJ 380 & 6.610 & 1.33 & -4.72 & D & 1.9 & 3876\ GJ 580A & 6.580 & 0.78 & -5.11 & D & 2.1 & 5174\ HD 166435 & 6.840 & 0.58 & -4.27 & D & 7.6 & 5964\ $\rho$ CrB & 5.390 & 0.61 & -5.08 & D & 1.0 & 5823\ $\tau$ Cet & 3.490 & 0.73 & -4.98 & D & 1.0 & 5283\ [l\*[5]{}[c]{}rrr]{} \ Star & n. obs & $JD_0$& $JD_F$&$<H\alpha>$ & $\Delta H\alpha$ & $\sigma_{H\alpha}$ & RV & RMS(RV)\ & & & & & & & \[km/s\] & \[m/s\]\ \ Star & n. obs & $JD_0$& $JD_F$&$<H\alpha>$ & $\Delta H\alpha$ & $\sigma_{H\alpha}$ & RV & RMS(RV)\ & & & & & & & \[km/s\] & \[m/s\]\ BD+182366A & 20 & 1985.4166 & 4251.3874 & 0.255 & 0.039 & 0.025 & 11.00 & 12.67\ BD+182366B & 18 & 1985.4317 & 4251.3989 & 0.246 & 0.029 & 0.021 & 11.40 & 11.48\ BD+222706A & 18 & 2011.6073 & 4309.4345 & 0.225 & 0.008 & 0.019 & -4.10 & 25.41\ BD+222706B & 18 & 2011.6264 & 4962.4638 & 0.258 & 0.045 & 0.012 & 2.41 & 21.55\ BD+231978A & 14 & 1825.7278 & 4398.6806 & 0.371 & 0.135 & 0.027 & 20.50 & 28.41\ BD+231978B & 13 & 1825.7152 & 4398.6929 & 0.383 & 0.149 & 0.025 & 23.50 & 35.19\ HD 105421A & 21 & 2011.4704 & 4902.4194 & 0.237 & 0.010 & 0.017 & 7.40 & 16.87\ HD 105421B & 19 & 2011.4840 & 4902.4333 & 0.290 & 0.069 & 0.014 & 0.44 & 12.76\ HD 106515A & 31 & 1986.5327 & 6026.5634 & 0.214 & -0.002 & 0.006 & 0.43 & 6.00\ HD 106515B & 30 & 1986.5442 & 6026.5757 & 0.218 & -0.003 & 0.007 & 18.80 & 8.69\ HD 108421A & 17 & 1986.5975 & 4250.4681 & 0.296 & 0.044 & 0.007 & 2.00 & 21.82\ HD 108421B & 13 & 2012.4862 & 4250.4796 & 0.360 & 0.116 & 0.018 & 2.00 & 38.30\ HD 108574 & 22 & 1913.7615 & 4251.4385 & 0.287 & 0.064 & 0.008 & -2.10 & 18.09\ HD 108575 & 22 & 1913.7846 & 4251.4499 & 0.307 & 0.091 & 0.010 & -1.50 & 33.09\ HD 109628A & 14 & 1986.5683 & 4961.3994 & 0.214 & -0.007 & 0.012 & 0.00 & 10.18\ HD 109628B & 13 & 1986.5799 & 4961.4117 & 0.215 & -0.007 & 0.010 & 0.00 & 16.16\ HD 
117963A & 15 & 2012.5413 & 5968.6519 & 0.226 & 0.003 & 0.012 & -5.80 & 33.22\ HD 117963B & 13 & 2012.5543 & 5968.6641 & 0.233 & 0.012 & 0.010 & 4.18 & 66.04\ HD 118328A & 15 & 2013.6231 & 4252.5454 & 0.212 & -0.006 & 0.013 & 19.20 & 14.38\ HD 118328B & 14 & 2013.6353 & 4250.5043 & 0.217 & 0.001 & 0.013 & 18.40 & 15.93\ HD 121298A & 14 & 1912.7867 & 4161.5587 & 0.229 & 0.001 & 0.009 & 0.00 & 8.28\ HD 121298B & 12 & 1912.7733 & 4161.5702 & 0.231 & 0.006 & 0.018 & 0.00 & 12.58\ HD 123963A & 15 & 2011.5410 & 4309.4080 & 0.222 & 0.006 & 0.011 & -24.40 & 12.23\ HD 123963B & 13 & 2011.5537 & 4309.4202 & 0.238 & 0.024 & 0.014 & -24.40 & 17.28\ HD 124054A & 13 & 2011.5702 & 4251.4849 & 0.222 & 0.002 & 0.004 & -14.60 & 8.25\ HD 124054B & 14 & 2011.5833 & 4251.4964 & 0.218 & 0.002 & 0.021 & -13.40 & 10.99\ HD 126246A & 18 & 2012.5729 & 4488.7666 & 0.343 & 0.119 & 0.008 & 0.80 & 28.88\ HD 126246B & 16 & 2012.5846 & 4311.3850 & 0.312 & 0.092 & 0.012 & 1.70 & 14.71\ HD 128041A & 23 & 2013.4984 & 4276.4738 & 0.210 & -0.003 & 0.020 & -74.70 & 7.45\ HD 128041B & 21 & 2013.5115 & 4276.4852 & 0.230 & 0.010 & 0.034 & -73.60 & 16.31\ HD 132563A & 63 & 2013.6508 & 5968.6857 & 0.227 & 0.005 & 0.021 & 1.80 & 16.47\ HD 132563B & 56 & 2013.6645 & 5968.7008 & 0.221 & 0.003 & 0.025 & 1.65 & 13.39\ HD 132844A & 27 & 2012.6152 & 4311.4244 & 0.259 & 0.044 & 0.016 & -3.20 & 11.87\ HD 132844B & 26 & 2012.6027 & 4311.4359 & 0.318 & 0.103 & 0.012 & -2.00 & 18.84\ HD 13357A & 29 & 1801.6950 & 4849.4490 & 0.235 & 0.021 & 0.013 & 26.20 & 10.55\ HD 13357B & 25 & 1801.7086 & 4849.4612 & 0.262 & 0.046 & 0.018 & 25.40 & 13.83\ HD 135101A & 14 & 1982.7540 & 4488.7807 & 0.202 & -0.011 & 0.007 & 0.00 & 4.84\ HD 135101B & 12 & 1982.7697 & 4311.4614 & 0.212 & -0.002 & 0.011 & 0.00 & 5.80\ HD 139569A & 18 & 2012.6615 & 4339.4064 & 0.281 & 0.057 & 0.013 & -29.40 & 24.53\ HD 139569B & 20 & 2012.6733 & 4339.4179 & 0.279 & 0.062 & 0.015 & -29.80 & 29.57\ HD 143144A & 19 & 1798.3625 & 4339.3805 & 0.223 & 0.006 & 
0.015 & -78.50 & 9.39\ HD 143144B & 18 & 1798.3768 & 4339.3923 & 0.221 & 0.004 & 0.017 & -78.80 & 15.29\ HD 146413A & 20 & 2012.6910 & 4962.5618 & 0.350 & 0.105 & 0.013 & 4.20 & 8.61\ HD 146413B & 19 & 2012.7035 & 4962.5734 & 0.350 & 0.108 & 0.020 & 5.30 & 15.32\ HD 17159A & 28 & 1797.6565 & 4819.3558 & 0.219 & -0.003 & 0.011 & 11.40 & 21.43\ HD 17159B & 28 & 1797.6727 & 4819.3679 & 0.219 & -0.000 & 0.016 & 10.20 & 15.88\ HD 186858A & 44 & 1798.4744 & 4962.5879 & 0.310 & 0.076 & 0.010 & -0.63 & 8.90\ HD 186858B & 41 & 1798.4601 & 4962.6015 & 0.297 & 0.061 & 0.013 & 1.54 & 7.64\ HD 190042A & 23 & 1825.4814 & 4783.3459 & 0.210 & -0.004 & 0.010 & -4.60 & 5.77\ HD 190042B & 22 & 1825.4615 & 4783.3593 & 0.212 & -0.002 & 0.012 & -3.50 & 7.74\ HD 19440A & 19 & 1828.6588 & 4339.6564 & 0.231 & 0.005 & 0.008 & -15.40 & 12.31\ HD 19440B & 19 & 1828.6716 & 4339.6679 & 0.219 & -0.002 & 0.019 & -15.90 & 9.70\ HD 200466A & 79 & 1801.5721 & 5807.6025 & 0.251 & 0.038 & 0.019 & -8.00 & 15.89\ HD 200466B & 71 & 1801.5850 & 5807.6137 & 0.247 & 0.034 & 0.014 & -0.22 & 8.37\ HD 201936A & 15 & 2042.6381 & 4398.3978 & 0.289 & 0.058 & 0.015 & 3.70 & 32.87\ HD 201936B & 15 & 2042.6554 & 4398.4092 & 0.317 & 0.086 & 0.023 & 2.50 & 47.51\ HD 209965A & 26 & 2145.5472 & 4783.3759 & 0.223 & 0.001 & 0.008 & -19.40 & 20.60\ HD 209965B & 22 & 2145.5634 & 4783.3878 & 0.218 & -0.003 & 0.011 & 0.11 & 24.31\ HD 213013A & 34 & 1827.4669 & 4369.5700 & 0.245 & 0.031 & 0.016 & -24.70 & 10.98\ HD 213013B & 32 & 1827.4540 & 4369.5824 & 0.264 & 0.035 & 0.024 & -24.70 & 14.68\ HD 215812A & 29 & 1798.4923 & 4398.4798 & 0.210 & -0.004 & 0.005 & 8.43 & 32.68\ HD 215812B & 18 & 1798.5063 & 4398.4912 & 0.212 & -0.001 & 0.007 & 0.95 & 20.25\ HD 216122A & 24 & 1801.6229 & 4962.6771 & 0.220 & 0.000 & 0.007 & -13.30 & 18.74\ HD 216122B & 27 & 1801.6366 & 4962.6895 & 0.225 & 0.006 & 0.012 & -1.04 & 13.92\ HD 219542A & 43 & 1825.5176 & 4664.6807 & 0.216 & 0.001 & 0.007 & -12.50 & 7.43\ HD 219542B & 48 & 1825.5048 & 
4664.6931 & 0.230 & 0.016 & 0.011 & -11.50 & 7.54\ HD 2770A & 21 & 1856.5704 & 4338.6438 & 0.301 & 0.084 & 0.017 & -5.00 & 21.13\ HD 2770B & 21 & 1856.5841 & 4338.6587 & 0.313 & 0.098 & 0.023 & -6.40 & 32.54\ HD 30101A & 33 & 1825.6514 & 5952.4781 & 0.252 & 0.030 & 0.021 & -18.20 & 25.26\ HD 30101B & 33 & 1825.6652 & 5952.4943 & 0.252 & 0.027 & 0.023 & -18.00 & 13.62\ HD 33334A & 57 & 1801.7517 & 5952.5179 & 0.207 & -0.007 & 0.013 & 83.20 & 22.28\ HD 33334B & 51 & 1801.7439 & 5952.5299 & 0.216 & -0.003 & 0.015 & 83.70 & 23.26\ HD 66491A & 24 & 1853.7409 & 4398.7585 & 0.264 & 0.051 & 0.023 & 48.40 & 18.04\ HD 66491B & 21 & 1853.7557 & 4161.4142 & 0.271 & 0.058 & 0.030 & 49.10 & 23.30\ HD 76037A & 35 & 1828.7406 & 5952.6152 & 0.228 & -0.000 & 0.012 & 22.02 & 114.52\ HD 76037B & 34 & 1853.7833 & 5952.6280 & 0.240 & 0.009 & 0.014 & -0.05 & 38.46\ HD 8009A & 33 & 2116.6201 & 4819.3811 & 0.217 & 0.004 & 0.022 & -42.10 & 12.05\ HD 8009B & 26 & 2116.6334 & 4819.3929 & 0.223 & 0.006 & 0.020 & -41.80 & 17.66\ HD 8071A & 12 & 1797.6224 & 4339.6225 & 0.217 & -0.007 & 0.005 & 5.67 & 18.94\ HD 8071B & 9 & 1797.6397 & 3246.6791 & 0.210 & -0.012 & 0.005 & 9.00 & 64.80\ HD 85441A & 15 & 1826.7408 & 4754.7425 & 0.246 & 0.032 & 0.022 & -19.80 & 12.32\ HD 85441B & 15 & 1826.7535 & 4754.7538 & 0.264 & 0.051 & 0.016 & -19.80 & 13.42\ HD 86057A & 18 & 1985.5083 & 4251.3612 & 0.324 & 0.105 & 0.018 & 11.80 & 24.94\ HD 86057B & 18 & 1985.5216 & 4251.3726 & 0.360 & 0.147 & 0.025 & 11.20 & 36.64\ HD 87743A & 23 & 2012.3936 & 4849.6255 & 0.249 & 0.032 & 0.023 & 0.00 & 19.33\ HD 87743B & 25 & 2012.3801 & 4849.6371 & 0.282 & 0.065 & 0.031 & 3.00 & 19.35\ HD 94399A & 19 & 1986.4614 & 4962.3837 & 0.346 & 0.128 & 0.026 & -6.20 & 20.16\ HD 94399B & 17 & 1986.4731 & 4962.3953 & 0.338 & 0.119 & 0.014 & -3.80 & 57.20\ HD 9911A & 22 & 1801.6656 & 4339.6330 & 0.236 & 0.007 & 0.030 & -56.60 & 11.53\ HD 9911B & 20 & 1801.6528 & 4339.6445 & 0.223 & -0.008 & 0.021 & -56.30 & 11.36\ HD 99121A & 24 & 1986.5086 
& 4250.4427 & 0.223 & -0.010 & 0.015 & -4.40 & 23.66\ HD 99121B & 20 & 1986.5205 & 4250.4552 & 0.217 & -0.012 & 0.017 & -3.10 & 30.57\ HIP 104687A & 30 & 2070.6622 & 4309.6015 & 0.300 & 0.084 & 0.015 & -20.60 & 23.82\ HIP 104687B & 29 & 2070.6751 & 4309.6148 & 0.294 & 0.079 & 0.013 & -21.20 & 14.20\ 14 Her & 144 & 4515.7409 & 4902.6461 & 0.223 & 0.009 & 0.003 & -2.97 & 4.06\ 40 Eri & 42 & 4515.3574 & 4819.4971 & 0.232 & 0.011 & 0.002 & -42.20 & 7.19\ 51 Peg & 44 & 1774.6139 & 4783.4387 & 0.206 & -0.009 & 0.003 & 0.57 & 6.00\ 61 Cyg B & 127 & 2570.3207 & 4693.6690 & 0.343 & -0.001 & 0.008 & -0.29 & 2.92\ 83 Leo A & 121 & 4512.5548 & 4819.6320 & 0.223 & 0.009 & 0.003 & -2.90 & 6.54\ GJ 380 & 145 & 4512.4711 & 4819.5728 & 0.403 & 0.013 & 0.012 & -26.10 & 5.39\ GJ 580 A & 158 & 4512.6957 & 4694.3972 & 0.213 & -0.008 & 0.004 & -67.90 & 7.34\ HD 166435 & 18 & 2775.6448 & 3872.7162 & 0.411 & 0.193 & 0.010 & -13.70 & 95.27\ $\rho$ CrB & 46 & 2011.7355 & 4663.5609 & 0.210 & -0.006 & 0.004 & -1.32 & 6.09\ $\tau$ Cet & 225 & 1773.7347 & 4819.3086 & 0.211 & -0.006 & 0.003 & -16.40 & 4.86\ For a proper interpretation of the [H$_\alpha$]{} measurements that we derived in Sect. \[sec:HAindex\], some stellar parameters were considered. We describe here the adopted sources or procedures to measure them. Differential radial velocities were derived in @CaroloPHD and have a typical uncertainty of about 4 m/s for stars in the binary survey and less than 2 m/s for the bright stars. We considered measurements of [$\log R'_{HK}$]{} from the literature, with preference for studies including multi-epoch measurements to take temporal variations of activity into account. Overall, we retrieved [$\log R'_{HK}$]{} for 36 stars from @2004ApJS..152..261W, @2010Isaacson, , and @2003AJ....126.2048G. Finally, for the components of , , , and , the value of [$\log R'_{HK}$]{} was derived from HIRES spectra available in the Keck[^4] archive following the procedure described in @Carolo14.
For stars without [$\log R'_{HK}$]{} values in the literature we estimated the value from the ratio of X-ray to bolometric luminosity, using the calibration by @2008ApJ...687.1264M. This latter quantity was derived following the procedure described in @Carolo14 and @CaroloPHD for the sources identified in the ROSAT All Sky Survey within 30$"$ from our target stars. For the binaries composing most of our sample, the components are not spatially resolved by ROSAT; we therefore assumed equal X-ray luminosity for the two components. For stars that are not detected in the ROSAT All Sky Survey, this procedure yields an upper limit on [$\log R'_{HK}$]{}. The values of [$\log R'_{HK}$]{}, or the upper limits, together with the other parameters we used, are listed in Table \[additionalparam\]. The projected rotational velocity, $v \sin i$, was obtained from a calibration of the full width at half maximum (FWHM) of the cross-correlation function of SARG spectra. Details will be presented elsewhere. For the single stars we adopted the $v \sin i$ from literature sources such as @2005ApJS..159..141V. The effective temperature $T_\mathrm{eff}$ of the primaries was derived from the B-V colour using the calibration by [@1996Alonso] and assuming no reddening, while for the secondaries we relied on the high-precision temperature differences measured as part of the differential abundance analysis of 23 binary systems in and preliminary results by [@tesivassallo] for the others. For the single stars (standard stars and targets of the hot-Neptune program) we adopted the effective temperature from high-quality spectroscopic studies [e.g. @2005ApJS..159..141V]. [H$_\alpha$]{} index {#sec:HAindex} ==================== Since the Ca II H&K line wavelengths are not included in the SARG yellow grism spectral range, we defined a new activity index based on the [H$_\alpha$]{} line to study the activity of the stars in this sample.
We built an IDL procedure optimized for the SARG spectra format: we measured the instrumental flux (not corrected for the blaze function) in a wavelength interval centred on the line core, $F_H$, and in two additional intervals symmetrically located with respect to the centre, $F_{c1}$ and $F_{c2}$. [H$_\alpha$]{} is defined as $H_\alpha=2 F_H/(F_{c1}+F_{c2})$, where $F_{c1}=flux$\[6558.80Å–6559.80Å\], $F_H=flux$\[6562.60Å–6563.05Å\], and $F_{c2}=flux$\[6565.20Å–6566.20Å\]. Since the SARG spectrograph was not designed for observations in the [H$_\alpha$]{} spectral range, this line appears twice, close to the edges of two orders (the blue edge of order 93 and the red edge of order 94), at positions that depend on the RV of the star. Choosing wider windows for $F_{c1}$ and $F_{c2}$, or increasing their distance from $F_H$, increases the number of spectra in which the selected wavelengths fall off the detector. Our choice is the best compromise. For the same reason we were unable to use the [H$_\alpha$]{} index adopted by other authors : for each order, one of the two continuum reference windows used by these authors falls outside the region covered by the detector. Furthermore, we were unable to use a reference continuum to estimate the continuum flux, because it is difficult to define the proper blaze function given the extended wings of the photospheric [H$_\alpha$]{} absorption. To make our measurement more reliable, we used the weighted mean of the two [H$_\alpha$]{} values when the fluxes of all these spectral bands could be measured in both orders. Error estimation ---------------- We then analysed the possible sources of error. ### - Internal noise {#internal-noise .unnumbered} We estimated the errors on the fluxes assuming photon noise.
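As a concrete illustration, the band-integration scheme defined above can be sketched as follows. This is a schematic Python re-implementation (window edges taken from the definitions in the text), not the original IDL procedure; function names are ours:

```python
import numpy as np

# Wavelength windows (Angstrom) from the index definition in the text;
# instrumental fluxes are summed within each band.
BANDS = {"c1": (6558.80, 6559.80), "H": (6562.60, 6563.05), "c2": (6565.20, 6566.20)}

def band_flux(wave, flux, lo, hi):
    """Sum the (blaze-uncorrected) flux falling inside [lo, hi]."""
    m = (wave >= lo) & (wave <= hi)
    if not m.any():
        return None  # window fell off the detector for this order
    return flux[m].sum()

def halpha_index(wave, flux):
    """H_alpha = 2*F_H / (F_c1 + F_c2); None if any band is missing."""
    f = {k: band_flux(wave, flux, lo, hi) for k, (lo, hi) in BANDS.items()}
    if any(v is None for v in f.values()):
        return None
    return 2.0 * f["H"] / (f["c1"] + f["c2"])
```

For a flat spectrum the index is simply twice the ratio of the line-window width to the summed continuum-window widths (about 0.45 for the windows above); core absorption lowers it, core emission raises it.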
The error on the [H$_\alpha$]{} index was then derived by error propagation: $$err_{H\alpha}=H_\alpha \sqrt{\left(\frac{1}{SN_H}\right)^2+\frac{1/c_{1err}^2+1/c_{2err}^2}{\left(F_{c1}+F_{c2}\right)^2}}$$ where $SN_i=\sqrt{ gain \cdot F_i}$, $c_{1err}=SN_{c1}/F_{c1}$, and $c_{2err}=SN_{c2}/F_{c2}$. We note that because of the lower value of the echelle blaze function, the [H$_\alpha$]{} indexes from order 94 have a lower weight on average. , , , and have high absolute radial velocities (RV $< -50$ km/s), so that their spectra are markedly blueshifted. Their [H$_\alpha$]{} indexes have a higher uncertainty because the [H$_\alpha$]{} line is shifted out of the order 93 spectra; we were therefore only able to use SARG order 94, which yields poorer results. ### - Systematic error {#systematic-error .unnumbered} We also considered that several other sources of noise can introduce errors in the [H$_\alpha$]{} index: flat fielding, background subtraction, bad pixels, instrumental instability, fringing, etc. All these contributions, added to the possible intrinsic variations of activity, increase the standard deviation of the [H$_\alpha$]{} values ($\sigma_{H\alpha}$). $\tau$ Cet was used as a test target for this purpose. It is very bright and its [$\Delta H\alpha$]{} variation is lower than 0.005 dex (peak to valley), with low levels of variability in [$\log R'_{HK}$]{} reported in the literature. We studied the variation of $\tau$ Cet night by night. We note that the standard deviation of [H$_\alpha$]{} is about 10 times the photon-noise error, therefore we decided to add a jitter term to our error budget. Errors significantly larger than the photon-noise error have been reported in other cases of 1 Å wide activity indices from echelle spectra; see for instance @2004ApJS..152..261W. We found that this increase does not depend on the activity level of the star. It is instead described by a relation with the stellar magnitude as shown in Fig.
\[V\_sigma\]: $ \sigma_{jitter}=\sqrt{(0.0028)^2+(5.27\cdot 10^{0.4 V -6})^2}$. ![Relation between V magnitude of the stars and the standard deviation $\sigma_{H\alpha}$ of the [H$_\alpha$]{} index. Open symbols indicate quiet stars (see Sect. \[subsec:teff\] for details), green diamonds are the bright stars sample. The continuous line represents the $\sigma_{jitter}$ we adopted, while the orange asterisks indicate the mean photon noise value for each star.[]{data-label="V_sigma"}](V_sigma.eps){width="\columnwidth"} Our adopted jitter is compatible with the single-night variations of $\tau$ Ceti, and we rescaled it for the other stars according to their magnitude. The dependence on magnitude is that expected for error sources such as background subtraction. Finally, the error applied to each measurement of [H$_\alpha$]{} is the sum of the photon-noise error and the instrumental jitter as derived above. As the jitter is significantly larger than the photon noise, the individual errors on the [H$_\alpha$]{} index of a given star are very similar. Therefore the estimate of the jitter term has a very limited effect on the periodogram analysis presented in Sect. \[sec:HAtimeseries\]. ### - Contamination by telluric lines {#contamination-by-telluric-lines .unnumbered} In the spectral range of [H$_\alpha$]{} we considered, there are several telluric lines, mainly due to water vapour. These lines can fall in our $F_{c1}$, $F_{c2}$ and $F_H$ intervals and influence the [H$_\alpha$]{} index value. The strongest line is H$_2$O at 6564.206 Å. If this telluric line enters $F_H$, the [H$_\alpha$]{} index will decrease by about 1.5%, introducing an error of about 0.005 for a quiet star. This can occur when the geocentric velocity of the star is between $52$ and $75$ km/s. Therefore only a few of our spectra are affected, and none of those discussed below. If this line enters $F_{c2}$ instead, the effect is about 0.001 dex, which is negligible.
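The error budget described above (photon-noise propagation plus the magnitude-dependent jitter calibrated on $\tau$ Cet) can be sketched numerically as follows. This is a schematic Python translation; the variable names are ours:

```python
import math

def halpha_error(h_alpha, F_H, F_c1, F_c2, gain=1.0):
    """Photon-noise error on H_alpha, propagating SN_i = sqrt(gain*F_i)
    through H_alpha = 2*F_H/(F_c1+F_c2), following the formula in the text."""
    sn_H = math.sqrt(gain * F_H)
    c1err = math.sqrt(gain * F_c1) / F_c1   # SN_c1 / F_c1
    c2err = math.sqrt(gain * F_c2) / F_c2
    return h_alpha * math.sqrt(
        (1.0 / sn_H) ** 2
        + (1.0 / c1err ** 2 + 1.0 / c2err ** 2) / (F_c1 + F_c2) ** 2
    )

def sigma_jitter(V):
    """Magnitude-dependent jitter term adopted in the text."""
    return math.sqrt(0.0028 ** 2 + (5.27 * 10.0 ** (0.4 * V - 6.0)) ** 2)

def total_error(h_alpha, F_H, F_c1, F_c2, V, gain=1.0):
    """Per-measurement error: photon noise plus instrumental jitter."""
    return halpha_error(h_alpha, F_H, F_c1, F_c2, gain) + sigma_jitter(V)
```

For bright stars the jitter relation flattens at the 0.0028 floor, while for faint stars the magnitude-dependent term dominates, as in the figure.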
The H$_2$O line at 6560.555 Å can also enter the $F_H$ interval, with a comparable contribution, for geocentric radial velocities between $-90$ and $-123$ km/s. These few spectra were rejected. ### - Contamination estimation {#contamination-estimation .unnumbered} Even though the slit was oriented perpendicular to the separation of the components during the observations of the binaries, some spectra are strongly contaminated by the companion star and were rejected. Furthermore, we modelled the contamination for each consecutive observation of the companions assuming a Moffat-like shape for the point spread function (PSF) and taking into account the separation, the magnitudes and the seeing. We find that the contribution of the contamination to [H$_\alpha$]{} is lower than 1% in the majority of cases and is therefore negligible. We found instead that for six systems the variation induced by the contamination is greater than the intrinsic variation (Fig. \[contam\]). ![Relation between the standard deviations of the RVs (after the correction for known Keplerian motions) and that induced by the contamination. Systems with a non-negligible contamination are highlighted.[]{data-label="contam"}](RVrmsnokepnocont_testconta.eps){width="\columnwidth"} For example, is a very close binary system ($\rho = 2.183"$ according to Hipparcos) and the primary star is a spectroscopic binary with an amplitude of a few km/s. The effect of contamination on the RV is further modulated by the velocity of the primary at the observing epoch. This causes the RV to vary around the true value by up to a few hundred m/s in a quite unpredictable way. For more details see [@2005Martinez]. ![Time evolution of the [H$_\alpha$]{} values of all the spectra normalised to the median value for each star, monthly bins.
The red points correspond to the $\tau$ Cet data series.[]{data-label="stab"}](primoanno4_pag2.eps){width="\columnwidth"} ![GLSP of the synodic monthly binned [H$_\alpha$]{} values of the SARG sample (top) compared to $\tau$ Cet (bottom).[]{data-label="medieGLSP"}](all_indexHalphaGLS_conf.eps){width="\columnwidth"} Stability of the instrument --------------------------- The stability of the instrument during the survey was tested: for stars in the binary sample, we normalised the [H$_\alpha$]{} value of each spectrum to the median value of its star. We then binned these values into synodic monthly means over the different stars and compared them to the same results for the $\tau$ Cet data series (see Fig. \[stab\]). For the $\tau$ Cet data we found that the points are located around zero with $\sigma_{H\alpha}=0.003$. For the stars in the binary sample, the last two years of the campaign were devoted to observing mainly a few stars with candidate companions and/or RV trends, therefore the [H$_\alpha$]{} monthly means depend on the variability of the individual targets, as in the case of HD 200466 [@Carolo14]. We also verified the presence of periodicity by applying the generalized Lomb-Scargle periodogram (GLSP) [@GLSP] [^5] to the two sequences of binned values: the whole-sample sequence shows no significant peak and differs from the $\tau$ Ceti sequence, which shows a long-term trend (see Fig. \[medieGLSP\]).\ Dependence on T$_{eff}$ and [$\Delta H\alpha$]{} definition {#subsec:teff} ----------------------------------------------------------- We divided our sample into two subgroups: stars with [$\log R'_{HK}$]{} greater than [-4.80]{} are called active stars, the others quiet stars.
Since our [H$_\alpha$]{} index is defined as the ratio between the flux in the line centre and the flux in the wings, we expect that different stars with the same activity level can have different [$\langle H_\alpha \rangle$]{} values because of their different photospheric spectra. Therefore we compared effective temperature and [$\langle H_\alpha \rangle$]{} to determine the appropriate relation for quiet stars (Fig. \[T\_ha\]). ![Relation between the temperature and the median value of [H$_\alpha$]{} for each star. Colours are given according to the $\log R_{HK}$ value sources: values from the literature are plotted in red, while for the blue dots the values are derived from the X-ray luminosity. The blue triangles indicate that the [$\log R'_{HK}$]{} value for a star is only an upper limit. Green diamonds indicate the bright stars sample. Open symbols correspond to quiet stars. The line shows the best fit for the quiet stars; only the binaries were used in the fit, to have a sample unbiased by activity. The position of the Sun is also shown with $\odot$.[]{data-label="T_ha"}](T_ha.eps){width="\columnwidth"} Most of the quiet stars lie at $\langle H_\alpha \rangle\sim0.22$ for $T_{eff}>5000$ K. At lower temperatures, the [$\langle H_\alpha \rangle$]{} index for quiet stars seems to increase. Active stars scatter mostly at higher [$\langle H_\alpha \rangle$]{}. We also made a comparison with the Sun: it has an effective temperature of 5780 K and its [$\langle H_\alpha \rangle$]{} is 0.217, as measured in the solar flux atlas [@1984sfat.book.....K], in agreement with the lower envelope for quiet stars. We describe the distribution of the quiet stars in this lower envelope with a third-degree polynomial and define as the [H$_\alpha$]{}-excess ([$\Delta H\alpha$]{}) the distance of a point from this line: [$\Delta H\alpha$]{} is the difference in [H$_\alpha$]{} index of a star with respect to a quiet star that has the same effective temperature.
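The excess index just defined can be sketched as follows: fit a third-degree polynomial to the quiet-star envelope and subtract it at each star's temperature. The quiet-star values in the test are synthetic placeholders, not the paper's data:

```python
import numpy as np

def delta_halpha(teff, halpha, quiet_teff, quiet_halpha):
    """H_alpha excess: difference between a star's H_alpha index and the
    third-degree polynomial fit to the quiet-star lower envelope,
    evaluated at the star's effective temperature."""
    coeffs = np.polyfit(quiet_teff, quiet_halpha, deg=3)
    return halpha - np.polyval(coeffs, teff)
```

By construction a quiet star sitting on the envelope has an excess near zero, so stars of different temperatures can be compared on a common activity scale.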
Therefore we decided to use [$\Delta H\alpha$]{} as the activity index; it is more robust than [H$_\alpha$]{} because it allows us to compare the activity of stars with different temperatures. Sample analysis {#sec:sampleanalysis} =============== Correlation with [$\log R'_{HK}$]{} and rotation ----------------------------------------------- ![Relation between [$\log R'_{HK}$]{} and [$\Delta H\alpha$]{}. Colours are the same as in Fig. \[T\_ha\]. Bright stars and stars with upper limits for [$\log R'_{HK}$]{} were not considered, to have a sample unbiased by activity.[]{data-label="RHK_Haex"}](RHK_Haexcess.eps){width="\columnwidth"} [$\Delta H\alpha$]{} correlates quite well with [$\log R'_{HK}$]{} (reduced $\chi^2=2.26$, Fig. \[RHK\_Haex\]). Active stars are more scattered but typically show an excess in the [H$_\alpha$]{} index ([$\Delta H\alpha$]{}$> 0$). All the stars for which [$\log R'_{HK}$]{} has been derived from the X-ray luminosity are in the active portion of the diagram. This is due to the flux limit of the ROSAT All Sky Survey, which is only sensitive to active stars at the typical distance of our program stars. The stars for which only upper limits are derived populate the lower envelope of the distribution in most cases: this is consistent with a low activity level. This new index appears to show that the stars are distributed in two groups, which suggests the presence of the Vaughan-Preston gap at $\Delta H\alpha=0.02$ [@1980PASP...92..385V]. The results of also show the presence of a gap between $\log R_{HK} = -4.7$ and $-5.0$. This corresponds to the interval $\Delta H_\alpha \sim [0.01, 0.04]$.
We also found a weak relation between [$\Delta H\alpha$]{} and its standard deviation: the intrinsic variation of the [H$_\alpha$]{} index and the internal errors both contribute to the scatter of the [H$_\alpha$]{} measurements of each star, but since the scatter is dominated by internal errors for fainter stars, only the deviation seen in brighter stars is dominated by intrinsic variability. We also checked the well-known relation between rotation and activity. We found, as expected, that a moderate rotation is enough to cause a high activity level in cool stars, in which case the $v \sin i$ value increases with activity, while the hottest stars only have high activity values if $v\sin i$ is high: this behaviour can be related to the decrease in thickness of the convective envelope as the stars become hotter [@Charbonneau2012]. Binary components ----------------- ![Relation between the [$\Delta H\alpha$]{} of the two companion stars. The solid line corresponds to the best fit, the dashed line to equality.[]{data-label="HAab"}](HAexa_b.eps){width="\columnwidth"} We can compare the [$\Delta H\alpha$]{} index of the two components in each binary system: we find a very good relation between the indexes of the two stars, that is $ \Delta H\alpha_B =(1.11\pm 0.08)\Delta H\alpha_A+0.004\pm0.004$, as shown in Fig. \[HAab\]. The value of the reduced $\chi^2$ suggests that the scatter is dominated by the measurement error. We verified that long-term activity cycles (like the solar cycle) induce a variation in [H$_\alpha$]{} that is weaker than our adopted measurement error. HD 108421, HD 132844 and HD 105421 lie above the relation, but we found no evidence of errors in our analysis for these stars, so the discrepancy seems to be real and the two stars of these systems could be in different activity phases.
For HD 126246, which lies below the relation, the difference in the [H$_\alpha$]{} activity level between the two components qualitatively agrees with the [$\log R'_{HK}$]{} and $v \sin i$ differences found by , supporting an intrinsic rotation and activity difference between the two components. Age-activity relation --------------------- Prompted by this result, we tested whether our [H$_\alpha$]{} index could be an age indicator for these stars. We computed the ages of the binary systems with the isochrone fitting algorithm developed by . The implementation details can be found in . Here we recall that it enables recovering the isochronal age of a field star when at least its \[Fe/H\], $T_\mathrm{eff}$ and $\log g$ are available. In our case we also considered [$\log R'_{HK}$]{} as an input parameter, which allowed us to disregard unlikely very young isochrones, so that we could better constrain the stellar age. Since the evolution of low-mass stars is extremely slow, this method works well for stars with $T_\mathrm{eff} > 5500$ K; for cooler (less massive) stars, uncertainties in the exact location of a star on the Hertzsprung-Russell (HR) diagram lead to errors so large that practically all ages from 0 up to the age of the Universe are possible. We therefore did not consider such stars in our test. From the differential abundance analysis, $T_\mathrm{eff}$ and $\log g$ have typical uncertainties of $\sim50$ K and $\sim0.15$ dex, respectively, while the differences $\Delta T_{eff}=T_{effA}-T_{effB}$ and $\Delta \log g=\log g_A -\log g_B$ are more reliable, with reference uncertainties estimated at $\sim20$ K and $\sim 0.06$ dex, respectively. We therefore constructed a grid in $T_\mathrm{eff}$ and $\log g$ for each binary component, with step sizes of 25 K and 0.05 dex, respectively.
We discarded all the points in the grid where the relations $\Delta T_{eff}- \delta T_{eff} < \vert T_{eff A}-T_{eff B} \vert < \Delta T_{eff}+ \delta T_{eff}$ and $\Delta \log g- \delta \log g < | \log g_A-\log g_B | < \Delta \log g +\delta \log g$ were not satisfied. We computed the ages of each component for each remaining point in the grid and retained only those for which the stars could be considered coeval ($\vert \log t_A - \log t_B \vert <0.05$; 0.05 is the resolution of the isochrone grids). For each analysed star, we built a catalogue reporting the plausible input parameters and the resulting age that was coeval with that of its companion. For each binary system, we synthesised these data by providing the youngest and oldest feasible ages of the system and the median age. ![Activity as a function of age. Blue circles represent the stars in [@Desidera04] for which we have solid constraints on the temperature; orange crosses show the other stars. The two systems with an uncertain parallax are highlighted in cyan. []{data-label="age"}](eta_Ha_BVT75_all.eps){width="1.05\columnwidth"} In Fig. \[age\] we plot, for each star hotter than $T_{eff} = 5500$ K, its [$\Delta H\alpha$]{} as a function of the age of the system. We divided the systems into two subsamples according to the reliability of the input parameters, in particular the $T_\mathrm{eff}$: blue dots represent the systems analysed in [@Desidera04], which are more accurate, while orange crosses correspond to preliminary results for the systems analysed in [@tesivassallo]. The result shows that the majority of the active stars are younger than 1.5 Gyr, while for older stars the distribution is flattened around zero, that is, they are inactive.\ We found that the activity of young stars is anti-correlated with age, confirming that the relation between the [$\Delta H\alpha$]{} of the components in systems younger than 1.5 Gyr is mainly due to age.
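The grid-filtering and coevality step described above can be sketched as follows. The `grid_A`/`grid_B` log-ages stand in for real isochrone interpolation, and all the numbers in the test are invented for illustration:

```python
import statistics

def coeval_system_ages(grid_A, grid_B, dT, ddT, dg, ddg, tol=0.05):
    """Retain grid-point pairs consistent with the measured Teff and logg
    differences (dT +/- ddT, dg +/- ddg) and with coevality
    (|log t_A - log t_B| < tol), then summarize the surviving ages.

    grid_A, grid_B: dicts {(teff, logg): log10(age)} -- placeholder ages,
    not a real isochrone grid. Returns (youngest, median, oldest) or None.
    """
    kept = []
    for (tA, gA), ageA in grid_A.items():
        for (tB, gB), ageB in grid_B.items():
            if not (dT - ddT < abs(tA - tB) < dT + ddT):
                continue  # inconsistent with measured Delta Teff
            if not (dg - ddg < abs(gA - gB) < dg + ddg):
                continue  # inconsistent with measured Delta logg
            if abs(ageA - ageB) < tol:
                kept.append(0.5 * (ageA + ageB))
    if not kept:
        return None
    return min(kept), statistics.median(kept), max(kept)
```

The tolerance `tol=0.05` mirrors the resolution of the isochrone grids quoted in the text.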
The positions of the pairs HD132844A and B and HD13357A and B in the diagram of Fig. \[age\] do not follow the general trend: the position of HD132844 below the main sequence in the colour-magnitude diagram [see @Desidera04] is indicative of a substantial error in the trigonometric parallax. The two Hipparcos solutions for the parallax of HD13357 are inconsistent with each other. In both cases we can conclude that there is an underestimated error in the parallax. Indeed, the adopted parameters (especially the gravity) depend on the adopted trigonometric parallax: in the abundance analysis the effective temperatures were derived from ionization equilibrium and the stellar gravities from luminosities, masses and temperatures, using iterative procedures. It seems therefore that a well-defined activity-age relation persists only for objects younger than $\sim 1.5$ Gyr, and that after this age [H$_\alpha$]{} seems to be less efficient as an age indicator. Our data do not show a significant correlation between these quantities at older ages: owing to the lack of data at such ages, we cannot conclude whether there is a discontinuity or whether the activity of the stars decreases smoothly with time. The activity-age anti-correlation for younger stars confirms results from [@1988ApJBarry], for example, and the apparent flatness of the plot for older stars seems to agree with ; but owing to the uncertainty on our ages, we cannot confirm or reject the idea that the activity decreases with age also for older stars, with a different slope, as found by [@Mamajek2008], for instance. Finally, we found that a large fraction (15 of 35) of the stars in our sample with age estimates from the isochrone method are younger than 1.5 Gyr: this could be due to the recent bump in the star formation rate in the solar neighbourhood claimed by [@1988ApJBarry] or to a bias in the age distribution of the stars in the Hipparcos Multiple Stellar Catalog.
Future results from the Gaia satellite may clarify this question.\ Activity vs RV scatter ---------------------- Finally, we recover the well-known relation between the activity of a star and the standard deviation of its radial velocities . In addition, when considering the contamination of the spectra, we found that it is not negligible, especially for the systems HD 8071, HD 99121, HD 108421 and HD 209965, which were omitted from this discussion and are detailed below. There are also a number of cases for which the spread in RV during the survey is high ($> 80$ m/s) and which have a relatively low activity level. ![Relation between [$\Delta H\alpha$]{} and the RV standard deviation in the survey. The green dots indicate the RV standard deviation of the stars with a known Keplerian trend that is due to a companion; in red we plot the RV standard deviation of stars with a known companion. In both cases we correct the data for the known RV variation.[]{data-label="HAex_RVrms"}](RVrmsnokepnocont_HAex.eps){width="\columnwidth"} Most of these objects have known RV trends of Keplerian origin, and after the RV variation induced by the companion was removed, they became part of the main trend (Fig. \[HAex\_RVrms\]). In addition, at least four stars are left outside the general trend. Since these are potentially very interesting objects, we examine them in more detail. and B: this is a wide binary composed of two F-type stars. The SARG spectra show that the primary star is a long-period, partially blended SB2, and we therefore conclude that the excess scatter in its RVs is due to the blending of the spectra of the two components. For the secondary, the excess RV scatter is fairly large even after removing the long-term trend with time that indicates the presence of a low-mass companion; in addition, the [H$_\alpha$]{} index also shows a trend with time, most likely related to a cycle.
The Hipparcos Multiple Stellar Catalog indicates that the system has a separation wide enough to rule out contamination effects ($\rho=3.493"$). is a spectroscopic binary and some of the spectra were taken at low S/N (Desidera et al. in prep.). We cannot exclude Keplerian motion as the origin of the scatter for both stars; a deeper analysis with the acquisition of additional data would therefore be required. H$\alpha$ index time-series analysis {#sec:HAtimeseries} ==================================== By analogy with the Sun, emission in the core of [H$_\alpha$]{} is expected to show time variability mainly modulated by stellar rotation, over periods of the order of days, and by the activity cycle, over periods of hundreds or thousands of days. In addition, secular variations in the activity level similar to the Maunder minimum can be present. Therefore the different properties of the time series of our objects should be taken into account. Stars in the hot-Neptunes program were observed for a single season with a moderately dense sampling. In this case rotation periods could be found, but periodicities due to the activity cycle cannot be reliably identified. On the other hand, for the SARG survey objects the observational campaign was longer and less dense. Only for a few targets do we have a larger number of spectra, because during the survey they were suspected to host a planet. This was the case of [@Desidera12] and [@Desidera11], for example. In addition, we already know that for , the RV variations seen are mainly due to an activity cycle [@Carolo14]. It is known that more active stars have irregular periods that are not easy to determine with periodogram analysis. In spite of this, we computed the GLSPs of the [H$_\alpha$]{} index using the [@GLSP] procedure.
To evaluate the significance of these periodicities, the false alarm probability (FAP) of the highest peak of the periodogram was estimated through a bootstrap method with 1000 permutations. We used the spectral window function to rule out that a periodicity is due to the sampling. The results for the most interesting objects are listed in Table \[HAresults\]. We found a signature of periodic variations (rotational periods or activity cycles) in 19 stars, whereas 10 stars show a clear overall trend in [H$_\alpha$]{} with time. The stars for which we were able to find evidence of activity cycles all have moderate activity excess and temperatures between 4800 and 6000 K. The stars showing a long-term trend are hotter than average. It is noteworthy that of the binary stars that show promising cycles, only are quiet and show a long-term trend.\ Of the bright stars, was used as an RV standard to monitor the instrument performance during the binary program. The quite good temporal coverage of the data allowed us to detect a significant long-term period of about seven years with a FAP of 0.6%. In addition to this signal, we also found a periodicity of 86.49 d, which corresponds to an alias, with one synodic month, of the $21.9\pm 0.4$ d period found by [@2010MNRAS.408.1666S]. This shorter period thus seems to be the rotational signal. We obtained a similar result for : the GLSP peaks at 16.44 d, which is an alias of the $\sim 37$ d period . shows a periodicity of 22.38 d. In this case the spectral window is complex and we cannot rule out that this period is spurious. [@2004ApJS..152..261W] estimated a rotational period of 48 days from the [$\log R'_{HK}$]{} mean value, but this was not detected by [@2010MNRAS.408.1666S].\ All the [H$_\alpha$]{} time series are presented in Table 5, only available in electronic form at the CDS.
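The permutation-bootstrap FAP estimate can be sketched as follows. A simple least-squares sinusoid periodogram stands in here for the full GLSP, and the 1000 permutations of the text are reduced in the test for speed:

```python
import numpy as np

def max_power(t, y, freqs):
    """Highest peak of a least-squares sinusoid periodogram: fractional
    chi^2 reduction of an offset+sinusoid fit versus the constant model."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    chi2_0 = np.sum((y - y.mean()) ** 2)
    best = 0.0
    for f in freqs:
        X = np.column_stack(
            [np.ones_like(t), np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)]
        )
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        best = max(best, 1.0 - np.sum((y - X @ coef) ** 2) / chi2_0)
    return best

def bootstrap_fap(t, y, freqs, n_perm=1000, seed=0):
    """FAP of the highest peak: fraction of shuffled-y periodograms whose
    best power reaches the observed one (shuffling keeps the time sampling
    but destroys any coherent phase)."""
    rng = np.random.default_rng(seed)
    observed = max_power(t, y, freqs)
    y = np.asarray(y, float)
    n_exceed = sum(
        max_power(t, rng.permutation(y), freqs) >= observed for _ in range(n_perm)
    )
    return n_exceed / n_perm
```

Because the permutations preserve the observing epochs, aliases introduced by the sampling still have to be checked separately against the spectral window, as done in the text.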
------------- --- ---------------------- ---------- ------- -------------- ----------
Star              [$\Delta H\alpha$]{}   Rotation   Cycle   Amplitude      FAP
                                         \[d\]      \[d\]                  \[%\]
HD 186858   A     0.077                  7.68       –       0.010          1.0
            B     0.062                  –          2030    0.014          0.3
HD 200466   A     0.038                  –          1500    0.024          $<0.1$
            B     0.034                  –          trend   >0.015         $<0.1$
BD+182366   A     0.039                  –          1432    0.037          0.1
            B     0.029                  –          –       –
HD 139569   A     0.057                  –          trend   >0.21          2.2
            B     0.062                  –          trend   >0.30          0.6
HD 76037    A     -0.002                 –          trend   >0.014         1.0
            B     0.014                  –          trend   >0.022         1.3
HD 201936   A     0.059                  13.70      –       0.020          0.7
            B     0.087                  –          –
HD 213013   A     0.031                  –          –       –
            B     0.035                  3.59       –       0.019          2.1
14 Her            0.009                  22.38      –       0.002          $<0.1$
51 Peg            0.003                  86.49      2069    0.001, 0.003   0.8, 0.6
61 Cyg B          -0.001                 16.44      –       0.011          $<0.1$
GJ 380            0.013                  –          trend   >0.017         $<0.1$
$\tau$ Ceti       -0.006                 –          trend   >0.003         $<0.1$
------------- --- ---------------------- ---------- ------- -------------- ----------
Correlation between RV and [H$_\alpha$]{} {#sec:correlation} ========================================= The high uncertainty of the single measurements of [H$_\alpha$]{} prevents us from properly studying the correlation with the RVs. However, this was possible in some particular cases, such as spectra with high S/N or stars with a relevant trend in [H$_\alpha$]{}. We used the Spearman correlation coefficient $\rho_S$ and its significance [$\sigma$]{} to quantify the correlation between RV and the [H$_\alpha$]{} index (Table \[tab:correlation\]): we obtained an extremely high significance for [see @Carolo14]. For four other objects, the probability that the correlation is the result of a random effect is lower than 0.0075. and are active stars with a signature of an activity cycle; spectra have a high S/N and show a probable long-term cycle. Plots are presented in the Appendix. In , the anti-correlation simply shows that both RV and activity are time-dependent on long scales. We can therefore rule out a strong physical connection between these two quantities for this star.
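The rank statistic used here can be sketched as follows; this is a tie-free Spearman implementation for illustration (a full analysis would average tied ranks):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    This sketch assumes no tied values (no rank averaging)."""
    def ranks(a):
        order = np.argsort(a)
        r = np.empty(len(a))
        r[order] = np.arange(1, len(a) + 1)
        return r
    return float(np.corrcoef(ranks(np.asarray(x)), ranks(np.asarray(y)))[0, 1])
```

Because only ranks enter, the statistic is insensitive to the shape of the RV-activity relation: any monotonic dependence, linear or not, gives $|\rho_S|$ near one.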
Star         $\rho_s$   $\sigma$   $n_\sigma$
------------ ---------- ---------- ------------
HD 200466A   0.556      0.000      -4.817
HD 76037A    -0.665     0.000      3.702
HD 99121A    0.6511     0.002      -2.84
HD 213013A   0.478      0.008      -2.576
GJ 380       0.535      0.018      -2.270
------------ ---------- ---------- ------------
: Spearman rank correlation coefficient $\rho_s$ between [H$_\alpha$]{} and RV for the stars of the sample, and its significance. Column 3 reports the false-alarm probability and the last column reports the $n_\sigma$ value. Only stars with significance $<$ 0.02 are indicated. []{data-label="tab:correlation"} Conclusions {#sec:conclusions} =========== The activity of 104 stars observed with the SARG spectrograph was studied using an index based on the [H$_\alpha$]{} line. We found that this index, [$\Delta H\alpha$]{}, correlates well with the index based on the Ca II lines, [$\log R'_{HK}$]{}, and can therefore be used to estimate the average activity level, confirming previous results. It also correlates with the rotation of the star: low activity corresponds to slow rotation, especially for cool stars. After removing a few targets for which contamination of the spectra by their companion is the dominant source of RV scatter, we found that [$\Delta H\alpha$]{} also correlates with the scatter in RV. We find that a low-mass companion might be the source of the high residual RV scatter at least for . We also found a strong correlation between the average activity levels [$\langle H_\alpha \rangle$]{} of the two components in each binary system, and that roughly half of our systems are active. Finally, we showed that the activity as measured by [$\Delta H\alpha$]{} is correlated with the age derived from isochrone fitting. Although these ages have large error bars due to the uncertainties in temperatures and parallaxes, we found that active stars are typically younger than 1.5 Gyr, while older stars are typically inactive.
We then analysed the time series of the stars: 11 stars ($\sim 8.5$%) of the SARG sample show a periodicity in [H$_\alpha$]{} with a false-alarm probability $<0.5\%$. All these stars have a moderate activity level ($0.029 < \Delta H\alpha < 0.077$), except for the pair HD 76037A and B, but in these cases we only have a hint of a long-term period or magnetic cycle. When we focus on the long-term cycles, the temperature interval of these stars is also limited, to late-G and early-K stars. Other stars show variability on temporal scales clearly different from the rotational periods. In the bright stars sample, we found five stars out of ten with significant periodic variations in [H$_\alpha$]{}. In some cases the physical origin of this type of signal is unclear. Only five stars show a significant correlation between [H$_\alpha$]{} and the RVs. We conclude that, if care is taken, [H$_\alpha$]{} is a useful activity indicator and can be a good alternative to the Ca II [$\log R'_{HK}$]{} index for studies based on radial velocity techniques, especially for solar-type stars.\ *Acknowledgements*. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. This research has made use of the Keck Observatory Archive (KOA), which is operated by the W.M. Keck Observatory and the NASA Exoplanet Science Institute (NExScI), under contract with the National Aeronautics and Space Administration. We thank the TNG staff for contributing to the observations and the TNG TAC for the generous allocation of observing time. This work was partially funded by PRIN-INAF 2008 “Environmental effects in the formation and evolution of extrasolar planetary systems”.
Stars with RV-[H$_\alpha$]{} correlation {#app .unnumbered} ======================================== ![RV as a function of the [H$_\alpha$]{} index for HD 76037A.[]{data-label="hd76037bHalphaGLS"}](hd76037a_corr.eps){width="\columnwidth"} ![RV as a function of the [H$_\alpha$]{} index for HD 213013A.[]{data-label="hd213013a_corr"}](hd213013a_corr.eps){width="\columnwidth"} ![Decontaminated RV as a function of the [H$_\alpha$]{} index for HD 99121A.[]{data-label="gj380_corr"}](hd99121a_corr_decont.eps){width="\columnwidth"} ![RV as a function of the [H$_\alpha$]{} index for GJ 380.[]{data-label="gj380RV_corr"}](gj380_corr.eps){width="\columnwidth"} [^1]: Based on observations made with the Italian Telescopio Nazionale Galileo (TNG) operated on the island of La Palma by the Fundacion Galileo Galilei of the INAF (Istituto Nazionale di Astrofisica) at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias [^2]: Table 5 is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/ [^3]: [@IRAF] [^4]: https://koa.ipac.caltech.edu/cgi-bin/KOA/nph-KOAlogin [^5]: <https://github.com/callumenator/idl/blob/master/Routines/Periodogram/generalised_lomb_scargle.pro>
--- abstract: | It is known that the so-called Bercovici-Pata bijection can be explained in terms of certain Hermitian random matrix ensembles $\left( M_{d}\right) _{d\geq1}$ whose asymptotic spectral distributions are free infinitely divisible. We investigate Hermitian Lévy processes with jumps of rank one associated to these random matrix ensembles, introduced in [@BG] and [@CD]. A sample path approximation by covariation processes for these matrix Lévy processes is obtained. As a general result we prove that any $d\times d$ complex matrix subordinator with jumps of rank one is the quadratic variation of a $\mathbb{C}^{d}$-valued Lévy process. In particular, we have the corresponding result for matrix subordinators with jumps of rank one associated to the random matrix ensembles $\left( M_{d}\right) _{d\geq1}$. **Key words**: infinitely divisible random matrix, matrix subordinator, Bercovici-Pata bijection, matrix semimartingale, matrix compound Poisson. **AMS 2010 Subject Classification**: 60B20; 60E07; 60G51; 60G57. author: - | J. Armando Domínguez-Molina[^1]\ Facultad de Ciencias Físico-Matemáticas\ Universidad Autónoma de Sinaloa, México - | Víctor Pérez-Abreu[^2]\ Departamento de Probabilidad y Estadística\ CIMAT, Guanajuato, México - | Alfonso Rocha-Arteaga[^3]\ Facultad de Ciencias Físico-Matemáticas\ Universidad Autónoma de Sinaloa, México title: '**Covariation representations for Hermitian Lévy process ensembles of free infinitely divisible distributions**' ---

Introduction
============

New models of infinitely divisible random matrices have emerged in recent years from both applications and theory. On the one hand, they have been important in multivariate financial Lévy modelling, where stochastic volatility models have been proposed using Lévy and Ornstein-Uhlenbeck matrix-valued processes; see [@BNSe07], [@BNSe09], [@BNS11] and [@PiSe09a].
A key role in these models is played by positive-definite matrix processes and more general matrix covariation processes. On the other hand, in the context of free probability, Bercovici and Pata [@BP] introduced a bijection $\Lambda$ from the set of classical infinitely divisible distributions to the set of free infinitely divisible distributions. This bijection was explained in terms of random matrix ensembles by Benaych-Georges [@BG] and Cabanal-Duvillard [@CD], making the bijection $\Lambda$ more concrete and producing a new kind of infinitely divisible random matrix ensemble. Moreover, the results in [@BG] and [@CD] generalize Wigner’s result for the Gaussian Unitary Ensemble and give an alternative, simple, infinitely divisible random matrix model for the Marchenko-Pastur distribution, in contrast to the Wishart and other empirical covariance matrix ensembles, which are not infinitely divisible. More specifically, it is shown in [@BG] and [@CD] that for any one-dimensional infinitely divisible distribution $\mu$ there is an ensemble of Hermitian random matrices $(M_{d})_{d\geq1}$ whose empirical spectral distribution converges weakly almost surely to $\Lambda(\mu)$ as $d$ goes to infinity. Moreover, for each $d\geq1$, $M_{d}$ has a unitarily invariant matrix distribution which is also infinitely divisible in the matrix sense. From now on we call these models BGCD matrix ensembles. We consider additional facts about BGCD models in Section 3. A problem of further interest is to understand the matrix Lévy processes $\left \{ M_{d}(t)\right \} _{t\geq0}$ associated to the BGCD matrix ensembles. It was pointed out in [@DRA], [@PAS] that the Lévy measures of these models are concentrated on rank one matrices.
This means that the random matrix $M_{d}$ is a realization, at time one, of a matrix-valued Lévy process $\left \{ M_{d}(t)\right \} _{t\geq0}$ with rank one jumps $\Delta M_{d}(t)=M_{d}(t)-M_{d}(t-).$ The purpose of this paper is to study the structure of a $d\times d$ Hermitian Lévy process $\left \{ L_{d}(t)\right \} _{t\geq0}$ with rank one jumps. It is shown in Section 4 that if $L_{d}$ is a $d\times d$ complex matrix subordinator, then it is the quadratic variation of a $\mathbb{C}^{d}$-valued Lévy process $X_{d}$; this is a converse and an extension of a known result in dimension one, see [@CT Example 8.5]. The process $X_{d}$ is constructed via its Lévy-Itô decomposition. In Section 5 we consider new realizations in terms of covariations of $\mathbb{C}^{d}$-valued Lévy processes for matrix compound Poisson processes, as well as sample path approximations for Lévy processes associated to general BGCD ensembles. New insight into Marchenko-Pastur-type results for empirical covariance matrix ensembles was recently given in [@BGCD] by considering compound Poisson models (hence infinitely divisible). In this direction our results show the role of covariations of $d$-dimensional Lévy processes as an alternative to empirical covariance processes. For the convenience of the reader, and since the material and notation in the literature are dispersed and incomplete, we include in Section 2 a review of preliminaries on complex matrix semimartingales and matrix-valued Lévy processes that are used later in this paper.
Preliminaries on matrix semimartingales and matrix Lévy processes
=================================================================

Let $\mathbb{M}_{d\times q}=\mathbb{M}_{d\times q}\left( \mathbb{C}\right)$ denote the linear space of $d\times q$ matrices with complex entries, with scalar product $\left \langle A,B\right \rangle =\mathrm{tr}\left( AB^{\ast}\right)$ and the Frobenius norm $\left \Vert A\right \Vert =\left[ \mathrm{tr}\left( AA^{\ast}\right) \right] ^{1/2}$, where $\mathrm{tr}$ denotes the (non-normalized) trace. If $q=d,$ we write $\mathbb{M}_{d}=\mathbb{M}_{d\times d}$. The set of Hermitian matrices in $\mathbb{M}_{d}$ is denoted by $\mathbb{H}_{d}$. Likewise, let $\mathbb{U}_{d\times q}=\mathbb{U}_{d\times q}\left( \mathbb{C}\right) =\left \{ U\in \mathbb{M}_{d\times q}:U^{\ast}U=\mathrm{I}_{q}\right \} .$ If $q=d,$ $\mathbb{U}_{d}=\mathbb{U}_{d\times d}$. We denote by $\mathbb{H}_{d(1)}$ the set of matrices in $\mathbb{H}_{d}$ of rank one and by $\mathbb{H}_{d}^{+}$ ($\overline{\mathbb{H}}_{d}^{+}$) the set of positive (nonnegative) definite matrices in $\mathbb{H}_{d}$. Likewise $\mathbb{H}_{d(1)}^{+}=\mathbb{H}_{d(1)}\cap \overline{\mathbb{H}}_{d}^{+}$ is the closed cone of $d\times d$ nonnegative definite matrices of rank one. Let $\mathbb{S}(\mathbb{H}_{d(1)})$ denote the unit sphere of $\mathbb{H}_{d(1)}$.

\[Decomp\](a) Every $V\in \mathbb{H}_{d(1)}^{+}$ can be written as $V=xx^{\ast}$ where $x\in \mathbb{C}^{d}$. One can see that $x$ is unique if we restrict $x$ to the set $C_{+}^{d}=\{x=\left( x_{1},x_{2},\ldots ,x_{d}\right) :x_{1}\geq 0,\ x_{j}\in \mathbb{C},\ j=2,\ldots,d\}$.

\(b) Every $V\in \mathbb{H}_{d\left( 1\right) }$ can be written as $V=\lambda uu^{\ast}$, where $\lambda$ is the nonzero eigenvalue of $V$ and $u$ is a unit vector in $\mathbb{C}^{d}$.
In this representation the $d\times d$ matrix $uu^{\ast}$ is unique.

#### Covariation of complex matrix semimartingales

An $\mathbb{M}_{d\times q}$-valued process $X=\left \{ (x_{ij})(t)\right \} _{t\geq0}$ is a matrix semimartingale if $x_{ij}(t)$ is a complex semimartingale for each $i=1,...,d,$ $j=1,...,q.$ Let $X=\left \{ (x_{ij})(t)\right \} _{t\geq0}\in \mathbb{M}_{d\times q}$ and $Y=\left \{ (y_{ij})(t)\right \} _{t\geq0}\in \mathbb{M}_{q\times r}$ be semimartingales. Similar to the case of matrices with real entries in [@BNSe07], we define the matrix covariation of $X$ and $Y$ as the $\mathbb{M}_{d\times r}$-valued process $\left[ X,Y\right] :=\left \{ \left[ X,Y\right] (t):t\geq 0\right \} $ with entries$$\left[ X,Y\right] _{ij}(t)=\sum \limits_{k=1}^{q}\left[ x_{ik},y_{kj}\right] (t)\text{,} \label{DefCov}$$ where $\left[ x_{ik},y_{kj}\right] (t)$ is the covariation of the $\mathbb{C}$-valued semimartingales $\left \{ x_{ik}(t)\right \} _{t\geq0}$ and $\left \{ y_{kj}(t)\right \} _{t\geq0}$; see [@Pr04 p. 83]. One has the decomposition into a continuous part and a pure jump part as follows$$\left[ X,Y\right] (t)=\left[ X^{c},Y^{c}\right] (t)+\sum_{s\leq t}\left( \Delta X(s)\right) \left( \Delta Y(s)\right) \text{,} \label{ForCov}$$ where $\left[ X^{c},Y^{c}\right] _{ij}(t):=\sum \nolimits_{k=1}^{q}\left[ x_{ik}^{c},y_{kj}^{c}\right] (t).$ We recall that for any semimartingale $x$, the process $x^{c}$ is the a.s. unique continuous local martingale $m$ such that $\left[ x-m\right] $ is purely discontinuous.
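The rank-one decompositions of Remark \[Decomp\] are easy to check numerically. The sketch below (an illustration, not part of the paper) factors a matrix $V\in \mathbb{H}_{d(1)}^{+}$ as $V=\lambda uu^{\ast}$ and as $V=xx^{\ast}$ with $x$ in the cone $C_{+}^{d}$, fixing the global phase so that $x_{1}\geq0$:

```python
import numpy as np

def rank_one_factor(V, tol=1e-10):
    """Factor a rank-one Hermitian V as lam * u u*; when V is nonnegative
    definite, also return x in C_+^d (first entry real, >= 0) with V = x x*."""
    w, U = np.linalg.eigh(V)
    k = np.argmax(np.abs(w))                  # index of the nonzero eigenvalue
    lam, u = w[k], U[:, k]
    assert np.allclose(V, lam * np.outer(u, u.conj()), atol=tol)
    x = None
    if lam >= 0:
        x = np.sqrt(lam) * u
        if x[0] != 0:                         # rotate the global phase
            x = x * (np.conj(x[0]) / abs(x[0]))
    return lam, u, x

rng = np.random.default_rng(2)
z = rng.normal(size=4) + 1j * rng.normal(size=4)
V = np.outer(z, z.conj())                     # a random element of H_{4(1)}^+
lam, u, x = rank_one_factor(V)
```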
We will use the facts that $\left[ X\right] =\left[ X,X^{\ast}\right] $ is a nonnegative definite $d\times d$ matrix, that $\left[ X,Y\right] ^{\top }=\left[ Y^{\top},X^{\top}\right] $, and that for any nonrandom matrices $A\in \mathbb{M}_{m\times d},C\in \mathbb{M}_{r\times n}$ and semimartingales $X\in \mathbb{M}_{d\times q},Y\in \mathbb{M}_{q\times r}$,$$\left[ AX,YC\right] =A\left[ X,Y\right] C\text{.} \label{CovBil}$$ The natural example of a continuous semimartingale is the standard complex $d\times q$ matrix Brownian motion $B=\left \{ B(t)\right \} _{t\geq 0}=\left \{ b_{jl}(t)\right \} _{t\geq0}$, consisting of independent $\mathbb{C}$-valued Brownian motions $b_{jl}(t)=\operatorname{Re}(b_{jl}(t))+\mathrm{i}\operatorname{Im}(b_{jl}(t))$ where $\operatorname{Re}(b_{jl}(t)),\operatorname{Im}(b_{jl}(t))$ are independent one-dimensional Brownian motions with common variance $t/2$. Then we have $\left[ B,B^{\ast }\right] _{ij}(t)=\sum \nolimits_{k=1}^{q}\left[ b_{ik},\overline{b}_{jk}\right] (t)=qt\delta_{ij}$ and hence the matrix quadratic variation of $B$ is given by the $d\times d$ matrix process$$\left[ B,B^{\ast}\right] (t)=qt\mathrm{I}_{d}\text{.} \label{CovCB}$$ The case $q=1$ corresponds to the $\mathbb{C}^{d}$-valued standard Brownian motion $B$. We note that this corresponds to $\left[ B,B^{\ast}\right] (t)=t\mathrm{I}_{d}$, instead of the $2t\mathrm{I}_{d}$ commonly used in the literature. Other examples of complex matrix semimartingales are the Lévy processes considered next.
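The identity $\left[ B,B^{\ast}\right](t)=qt\,\mathrm{I}_{d}$ in (\[CovCB\]) can be checked by simulating the discrete quadratic variation $\sum_{k}\Delta B_{k}\,\Delta B_{k}^{\ast}$ on a fine grid. The sketch below (illustrative only) uses increments whose real and imaginary parts each have variance $\mathrm{d}t/2$, as in the definition above:

```python
import numpy as np

rng = np.random.default_rng(3)
d, q, T, n = 3, 4, 1.0, 20000
dt = T / n

# Increments of a standard complex d x q matrix Brownian motion: real and
# imaginary parts are independent with variance dt/2 each.
dB = (rng.normal(scale=np.sqrt(dt / 2), size=(n, d, q))
      + 1j * rng.normal(scale=np.sqrt(dt / 2), size=(n, d, q)))

# Discrete quadratic variation sum_k dB_k dB_k*, which should approximate
# q T I_d  (a d x d matrix).
QV = np.einsum('kij,klj->il', dB, dB.conj())
```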
#### Complex matrix Lévy processes

An infinitely divisible random matrix $M$ in $\mathbb{M}_{d\times q}$ is characterized by the Lévy-Khintchine representation of its Fourier transform $\mathbb{E}\mathrm{e}^{\mathrm{itr}(\Theta^{\ast}M)}=\exp(\psi(\Theta))$ with Lévy exponent $$\psi(\Theta)={}\mathrm{itr}(\Theta^{\ast}\Psi \text{ }){}-{}\frac{1}{2}\mathrm{tr}\left( \Theta^{\ast}\mathcal{A}\Theta^{\ast}\right) {}+{}\int_{\mathbb{M}_{d\times q}}\left( \mathrm{e}^{\mathrm{itr}(\Theta^{\ast}\xi)}{}-1{}-\mathrm{i}\frac{\mathrm{tr}(\Theta^{\ast}\xi)}{1+\left \Vert \xi \right \Vert ^{2}}{}\right) \nu(\mathrm{d}\xi),\ \Theta \in \mathbb{M}_{d\times q}, \label{LKRgen}$$ where $\mathcal{A}:\mathbb{M}_{q\times d}\rightarrow \mathbb{M}_{d\times q}$ is a positive symmetric linear operator $($i.e. $\mathrm{tr}\left( \Phi^{\ast }\mathcal{A}\Phi^{\ast}\right) \geq0$ for $\Phi \in \mathbb{M}_{d\times q}$ and $\mathrm{tr}\left( \Theta_{2}^{\ast}\mathcal{A}\Theta_{1}^{\ast}\right) =\mathrm{tr}\left( \Theta_{1}^{\ast}\mathcal{A}\Theta_{2}^{\ast}\right) $ for $\Theta_{1},\Theta_{2}\in \mathbb{M}_{d\times q})$, $\nu$ is a measure on $\mathbb{M}_{d\times q}$ (the Lévy measure) satisfying $\nu(\{0\})=0$ and $\int_{\mathbb{M}_{d\times q}}(1\wedge \left \Vert x\right \Vert ^{2})\nu(\mathrm{d}x)<\infty$, and $\Psi \in \mathbb{M}_{d\times q}$. The triplet $(\mathcal{A},\nu,\Psi)$ uniquely determines the distribution of $M$.

\[ObsGaPart\]The notation $\mathcal{A}\Theta^{\ast}$ means the linear operator $\mathcal{A}$ from $\mathbb{M}_{q\times d}$ to $\mathbb{M}_{d\times q}$ acting on $\Theta^{\ast}\in \mathbb{M}_{q\times d}$. Some interesting examples of $\mathcal{A}$ and the corresponding matrix Gaussian distributions are:

\(a) $\mathcal{A}\Theta^{\ast}=\Theta$. This corresponds to a Gaussian matrix distribution invariant under left and right unitary transformations in $\mathbb{U}_{d}$ and $\mathbb{U}_{q}$, respectively.
\(b) $\mathcal{A}\Theta^{\ast}=\Sigma_{1}\Theta \Sigma_{2}$ for $\Sigma_{1}\in \overline{\mathbb{H}}_{d}^{+}$ and $\Sigma_{2}\in \overline{\mathbb{H}}_{q}^{+}$. In this case the corresponding matrix Gaussian distribution is denoted by $\mathrm{N}_{d\times q}(0,\Sigma_{1}\otimes \Sigma_{2})$ and $\Sigma_{1}\otimes \Sigma_{2}$ is called a Kronecker covariance. It holds that if $N$ has the distribution $\mathrm{N}_{d\times q}(0,\mathrm{I}_{d}{}\otimes \mathrm{I}_{q})$, then $\Sigma_{1}^{1/2}N\Sigma_{2}^{1/2}$ has distribution $\mathrm{N}_{d\times q}(0,\Sigma_{1}\otimes \Sigma_{2})$.

\(c) When $q=d$, $\mathcal{A}\Theta^{\ast}=\mathrm{tr}(\Theta)\mathrm{I}_{d}$ is the covariance operator of the Gaussian random matrix $g\mathrm{I}_{d}$, where $g$ is a one-dimensional random variable with a standard Gaussian distribution.

Let $\mathbb{S}_{d\times q}$ be the unit sphere of $\mathbb{M}_{d\times q}$ and let $\mathbb{M}_{d\times q}^{0}=\mathbb{M}_{d\times q}\backslash \{0\}$. If $\nu$ is a Lévy measure on $\mathbb{M}_{d\times q}$, then there exist a measure $\lambda$ on $\mathbb{S}_{d\times q}$ with $\lambda(\mathbb{S}_{d\times q})\geq0$ and a measure $\nu_{\xi}$ for each $\xi \in \mathbb{S}_{d\times q}$ with $\nu_{\xi}((0,\infty))>0$ such that $$\nu(E)=\int_{\mathbb{S}_{d\times q}}\lambda(\mathrm{d}\xi)\int_{(0,\infty )}1_{E}(u\xi)\nu_{\xi}(\mathrm{d}u),\qquad E\in \mathcal{B}(\mathbb{M}_{d\times q}^{0}).$$ We call $(\lambda,\nu_{\xi})$ a polar decomposition of $\nu$. When $d=q=1$, $\nu$ is a Lévy measure on $\mathbb{R}$ and $\lambda$ is a measure on the unit sphere $\mathbb{S}_{1\times1}=\left \{ -1,1\right \} $ of $\mathbb{R}$.
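A Monte Carlo check of the Kronecker covariance construction in (b): since $\mathbb{E}[ZAZ^{\ast}]=\mathrm{tr}(A)\mathrm{I}_{d}$ for a standard complex Gaussian matrix $Z$, the matrix $N=\Sigma_{1}^{1/2}Z\Sigma_{2}^{1/2}$ satisfies $\mathbb{E}[NN^{\ast}]=\mathrm{tr}(\Sigma_{2})\Sigma_{1}$. The sketch below (illustrative only, random $\Sigma_{1}$, $\Sigma_{2}$) verifies this empirically:

```python
import numpy as np

rng = np.random.default_rng(4)
d, q, m = 3, 2, 100000

def msqrt(S):
    """Hermitian square root of a nonnegative definite matrix."""
    w, U = np.linalg.eigh(S)
    return U @ np.diag(np.sqrt(np.clip(w, 0, None))) @ U.conj().T

# Fixed positive definite Sigma_1 (d x d) and Sigma_2 (q x q).
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
Sigma1 = A @ A.conj().T + np.eye(d)
B = rng.normal(size=(q, q)) + 1j * rng.normal(size=(q, q))
Sigma2 = B @ B.conj().T + np.eye(q)

# N = Sigma1^{1/2} Z Sigma2^{1/2} with Z standard complex Gaussian
# (E|z_ij|^2 = 1), i.e. N ~ N_{d x q}(0, Sigma1 (x) Sigma2).
Z = (rng.normal(size=(m, d, q)) + 1j * rng.normal(size=(m, d, q))) / np.sqrt(2)
N = np.einsum('ab,kbc,cd->kad', msqrt(Sigma1), Z, msqrt(Sigma2))

emp = np.einsum('kij,klj->il', N, N.conj()) / m   # estimate of E[N N*]
```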
Any $\mathbb{M}_{d\times q}$-valued Lévy process $L=\left \{ L(t)\right \} _{t\geq0}$ with triplet $(\mathcal{A},\nu,\Psi)$ is a semimartingale with the Lévy-Itô decomposition $$L(t)=t\Psi+B_{\mathcal{A}}(t)+\int_{[0,t]}\int_{\left \Vert V\right \Vert \leq 1}V\widetilde{J}_{L}(\mathrm{d}s\mathrm{,d}V)+\int_{[0,t]}\int_{\left \Vert V\right \Vert >1}VJ_{L}(\mathrm{d}s,\mathrm{d}V)\text{, }t\geq0, \label{LID}$$ where:

\(a) $\left \{ B_{\mathcal{A}}(t)\right \} _{t\geq0}$ is a $\mathbb{M}_{d\times q}$-valued Brownian motion with covariance $\mathcal{A}$, i.e. it is a Lévy process with continuous sample paths (a.s.) and each $B_{\mathcal{A}}(t)$ is centered Gaussian with $$\mathbb{E}\left \{ \mathrm{tr(}\Theta_{1}^{\ast}B_{\mathcal{A}}(t){})\mathrm{tr}\left( \Theta_{2}^{\ast}B_{\mathcal{A}}(s){}\right) {}\right \} =\min(s,t)\mathrm{tr}\left( \Theta_{1}^{\ast}\mathcal{A}\Theta_{2}^{\ast }\right) {}\text{ for each }\Theta_{1},\Theta_{2}\in \mathbb{M}_{d\times q},$$

\(b) $J_{L}(\cdot,\cdot)$ is the Poisson random measure of jumps on $[0,\infty)\times \mathbb{M}_{d\times q}^{0}$. That is, $J_{L}(t,E)=\# \{0\leq s\leq t:\Delta L(s)\in E\},$ $E\in \mathcal{B}(\mathbb{M}_{d\times q}^{0}),$ with intensity measure $Leb\otimes \nu$, independent of $\left \{ B_{\mathcal{A}}(t)\right \} _{t\geq0}$,

\(c) $\widetilde{J}_{L}$ is the compensator measure of $J_{L}$, i.e. $$\widetilde{J}_{L}(\mathrm{d}t,\mathrm{d}V)=J_{L}(\mathrm{d}t,\mathrm{d}V)-\mathrm{d}t\nu(\mathrm{d}V);$$ see for example [@Ap07] for the most general case of Lévy processes with values in infinite-dimensional Banach spaces.
An $\mathbb{M}_{d\times q}$-valued Lévy process $L=\left \{ L(t)\right \} _{t\geq0}$ has bounded variation if and only if its Lévy-Itô decomposition takes the form $$L(t)=t\Psi_{0}+\int_{[0,t]}\int_{\mathbb{M}_{d\times q}^{0}}VJ_{L}(\mathrm{d}s,\mathrm{d}V)=t\Psi_{0}+\sum_{s\leq t}\Delta L(s)\text{, }t\geq0, \label{LIFV}$$ where $\Psi_{0}=\Psi-\int_{\left \Vert V\right \Vert \leq1}V\nu (\mathrm{d}V).$ The matrix quadratic variation (\[ForCov\]) of $L$ is given by the $\overline{\mathbb{H}}_{d}^{+}$-valued process $$\lbrack L](t)=\left[ B_{\mathcal{A}},B_{\mathcal{A}}^{\ast}\right] (t)+\int_{[0,t]}\int_{\mathbb{M}_{d\times q}^{0}}VV^{\ast}J_{L}(\mathrm{d}s,\mathrm{d}V)=\left[ B_{\mathcal{A}},B_{\mathcal{A}}^{\ast}\right] (t)+\sum_{s\leq t}\Delta L(s)\Delta L(s)^{\ast}. \label{QVLP}$$ In Section 4 we prove a partial converse of the last result in the case $q=1.$

\[ObsGaPartqv\] Along the lines of Remark \[ObsGaPart\] we have the following observations on the quadratic variation of the continuous part in (\[QVLP\]):

\(a) When $\mathcal{A}\Theta^{\ast}=\Theta,$ $\left[ B_{\mathcal{A}},B_{\mathcal{A}}^{\ast}\right] (t)=qt\mathrm{I}_{d}$. This follows from (\[CovCB\]) since $B_{\mathcal{A}}(t)$ is a standard complex $d\times q$ matrix Brownian motion.

\(b) When $\mathcal{A}\Theta^{\ast}=\Sigma_{1}\Theta \Sigma_{2}$ for $\Sigma _{1}\in \overline{\mathbb{H}}_{d}^{+}$ and $\Sigma_{2}\in \overline {\mathbb{H}}_{q}^{+}$, we have $B_{\mathcal{A}}(t)=\Sigma_{1}^{1/2}B(t)\Sigma_{2}^{1/2}$ where $B=\left \{ B(t)\right \} _{t\geq0}$ is a standard complex $d\times q$ matrix Brownian motion.
Then, using (\[CovBil\]) we have $$\left[ B_{\mathcal{A}},B_{\mathcal{A}}^{\ast}\right] (t)=\left[ \Sigma _{1}^{1/2}B\Sigma_{2}^{1/2},\Sigma_{2}^{1/2}B^{\ast}\Sigma_{1}^{1/2}\right] (t)=\Sigma_{1}^{1/2}\left[ B\Sigma_{2}^{1/2},\Sigma_{2}^{1/2}B^{\ast}\right] (t)\Sigma_{1}^{1/2}=t\mathrm{tr}(\Sigma_{2})\Sigma_{1},$$ where we have also used the easily checked fact $\left[ B\Sigma_{2}^{1/2},\Sigma_{2}^{1/2}B^{\ast}\right] (t)=t\mathrm{tr}(\Sigma_{2})I_{d}$.

\(c) When $q=d$ and $\mathcal{A}\Theta^{\ast}=\mathrm{tr}(\Theta )\mathrm{I}_{d}$, we have $\left[ B_{\mathcal{A}},B_{\mathcal{A}}^{\ast }\right] (t)=t\mathrm{I}_{d}$ since $B_{\mathcal{A}}(t)=b(t)\mathrm{I}_{d}$ where $b=\left \{ b(t)\right \} _{t\geq0}$ is a one-dimensional Brownian motion.

The extension of the notion of a real subordinator to the matrix case relies on cones. A cone $K$ is a nonempty, closed, convex subset of $\mathbb{M}_{d\times q}$ such that $A\in K$ and $\alpha \geq0$ imply $\alpha A\in K$. A cone $K$ determines a partial order in $\mathbb{M}_{d\times q}$ by defining $V_{1}\leq_{K}V_{2}$ for $V_{1},V_{2}\in \mathbb{M}_{d\times q}$ whenever $V_{2}-V_{1}\in K$. A $\mathbb{M}_{d\times q}$-valued Lévy process $L=\left \{ L(t)\right \} _{t\geq0}$ is $K$-increasing if $L(t_{1})\leq _{K}L(t_{2})$ for every $t_{1}<t_{2}$ almost surely. A $K$-increasing Lévy process with values in $\mathbb{M}_{d\times q}$ is called a matrix subordinator. It is easy to see that a Lévy process $L=\left \{ L(t)\right \} _{t\geq0}$ in $\mathbb{M}_{d\times q}$ is a subordinator if and only if $L$ takes values in $K$. In this sense the matrix quadratic variation Lévy process in (\[QVLP\]), with values in the cone $\overline{\mathbb{H}}_{d}^{+}$, is a matrix subordinator.

#### Approximation of Lévy processes

The following are useful results on the sample path approximation of complex matrix Lévy processes; see [@Ka Th 15.17] and [@Sato1 Th. 8.7].
They follow from their corresponding real vector case by the usual identification of $\mathbb{M}_{d\times q}\rightarrow \mathbb{R}^{2dq}$ via $A\rightarrow \mathrm{vec}(A),$ $A\in \mathbb{M}_{d\times q}$, and the fact that $\mathrm{tr}\left( A^{\ast}B\right) =\mathrm{vec}(A)^{\ast}\mathrm{vec}(B)$, where $\mathrm{vec}(A)$ is the $dq$-dimensional complex column vector obtained by stacking the columns of $A$ one below the other.

\[convproc\]Let $L$ and $L^{n}$, $n=1,2,\ldots$, be complex matrix Lévy processes in $\mathbb{M}_{d\times q}$ with $L^{n}(1)\overset {\mathcal{L}}{\rightarrow}L(1)$. Then there exist processes $\tilde{L}^{n}$ with the same distribution as $L^{n}$ such that$$\sup_{0\leq s\leq t}\left \vert \tilde{L}^{n}(s)-L(s)\right \vert \overset{\Pr }{\longrightarrow}0,\quad \forall t\geq0.$$

\[convternas\]Let $M^{n}$, $n=1,2,\ldots$, be infinitely divisible random matrices in $\mathbb{M}_{d\times q}$ with triplets $(\mathcal{A}^{n},\nu ^{n},\Psi^{n})$. Let $M$ be a random matrix in $\mathbb{M}_{d\times q}$.
Then $M^{n}\overset{\mathcal{L}}{\rightarrow}M$ if and only if $M$ is infinitely divisible with triplet $(\mathcal{A},\nu,\Psi)$ satisfying the following three conditions:

\(a) If $f:\mathbb{M}_{d\times q}\rightarrow \mathbb{M}_{d\times q}$ is a bounded continuous function vanishing in a neighborhood of $0$, then$$\lim_{n\rightarrow \infty}\int \nolimits_{\mathbb{M}_{d\times q}}f(\xi)\nu ^{n}(\mathrm{d}\xi)=\int \nolimits_{\mathbb{M}_{d\times q}}f(\xi)\nu (\mathrm{d}\xi)\text{.}$$

\(b) Define the positive symmetric operator $\mathcal{A}^{n,\varepsilon}:\mathbb{M}_{q\times d}\rightarrow \mathbb{M}_{d\times q}$ by$$\mathrm{tr}\left( \Theta^{\ast}\mathcal{A}^{n,\varepsilon}\Theta^{\ast}\right) =\mathrm{tr}\left( \Theta^{\ast}\mathcal{A}^{n}\Theta^{\ast}\right) +\int \nolimits_{\left \Vert \xi \right \Vert \leq \varepsilon}\left \vert \mathrm{tr}\left( \Theta^{\ast}\xi \right) \right \vert ^{2}\nu^{n}(\mathrm{d}\xi)\quad \text{for }\Theta \in \mathbb{M}_{d\times q}\text{.}$$ Then $$\lim_{\varepsilon \downarrow0}\limsup_{n\rightarrow \infty}\left \vert \mathrm{tr}\left( \Theta^{\ast}\mathcal{A}^{n,\varepsilon}\Theta^{\ast}\right) -\mathrm{tr}\left( \Theta^{\ast}\mathcal{A}\Theta^{\ast}\right) \right \vert =0,\quad \text{for }\Theta \in \mathbb{M}_{d\times q}\text{.}$$

\(c) $\Psi^{n}\rightarrow \Psi$.

BGCD random matrix ensembles
============================

We now consider the matrix Lévy processes associated to the BGCD matrix ensembles $(M_{d})_{d\geq1}$ mentioned in the introduction. When $\mu$ is the standard Gaussian distribution, $M_{d}$ is a Gaussian unitarily invariant random matrix, $\Lambda(\mu)$ is the semicircle distribution and $\left \{ M_{d}(t)\right \} _{t\geq0}$ is the Hermitian matrix-valued process given by $M_{d}(t)=\left( 1/\sqrt{d+1}\right) (B(t)+g(t)\mathrm{I}_{d})$, where $B\left( t\right) $ is a $d\times d$ Hermitian matrix Brownian motion independent of the one-dimensional Brownian motion $g(t)$; see [@BG Remark 3.5].
Likewise, if $\mu$ is the Poisson distribution with parameter $\lambda>0,$ $\left \{ M_{d}(t)\right \} _{t\geq0}$ is the $d\times d$ matrix compound Poisson process $M_{d}(t)=\sum_{k=1}^{N(t)}u_{k}^{d}u_{k}^{d\ast}$, where $\left \{ u_{k}^{d}\right \} _{k\geq1}$ is a sequence of independent uniformly distributed random vectors on the unit sphere of $\mathbb{C}^{d}$, independent of the Poisson process $\left \{ N(t)\right \} _{t\geq0}$, and $\Lambda(\mu)$ is the Marchenko-Pastur distribution of parameter $\lambda>0$; see [@BG Remark 3.2]. We observe that in this case $\left \{ M_{d}(t)\right \} _{t\geq0}$ is a matrix covariation (quadratic) process rather than a covariance matrix process as in the Wishart or other empirical covariance processes.

Proposition \[polar\] below collects computations in [@BG], [@CD] and [@DRA] to summarize the Lévy triplet of a general BGCD matrix ensemble in an explicit manner. For any Lévy measure $\nu$, let $\nu|_{(0,\infty)}$ and $\nu |_{(-\infty,0)}$ denote its restrictions to $\left( 0,+\infty \right) $ and $\left( -\infty,0\right) $, respectively.

\[polar\]Let $\mu$ be an infinitely divisible distribution on $\mathbb{R}$ with Lévy triplet $(a^{2},\psi,\nu)$ and let $(M_{d})_{d\geq1}$ be a BGCD matrix ensemble for $\Lambda(\mu)$. Then, for each $d\geq1$, $M_{d}$ has the Lévy-Khintchine representation (\[LKRgen\]) with Lévy triplet $(\mathcal{A}_{d},\nu_{d},\Psi_{d})$, where

a\) $\Psi_{d}=\psi \mathrm{I}_{d}$

b\) $$\mathcal{A}_{d}\Theta=a^{2}\frac{1}{d+1}(\Theta+\mathrm{tr}(\Theta )\mathrm{I}_{d}),\quad \Theta \in \mathbb{H}_{d}. 
\label{GPBGCD}$$

c\) $$\nu_{d}\left( E\right) =d\int_{\mathbb{S}(\mathbb{H}_{d(1)})}\int _{0}^{\infty}1_{E}\left( rV\right) \nu_{V}\left( \mathrm{d}r\right) \Pi \left( \mathrm{d}V\right) \text{,\quad}E\in \mathcal{B}\left( \mathbb{H}_{d}\backslash \left \{ 0\right \} \right) \text{,} \label{PDBGCD}$$ where $\nu_{V}=\nu|_{(0,\infty)}$ or $\nu|_{(-\infty,0)}$ according to whether $V\geq0$ or $V\leq0$, and $\Pi$ is a measure on $\mathbb{S}(\mathbb{H}_{d(1)})$ such that $$\Pi \left( D\right) =\int \limits_{\mathbb{S}(\mathbb{H}_{d(1)})\cap \overline{\mathbb{H}}_{d}^{+}}\int \limits_{\left \{ -1,1\right \} }1_{D}\left( tV\right) \lambda \left( \mathrm{d}t\right) \omega_{d}\left( \mathrm{d}V\right) \text{,\quad}D\in \mathcal{B}\left( \mathbb{S}(\mathbb{H}_{d(1)})\right) \text{,} \label{pi}$$ where $\lambda$ is the spherical measure of $\nu$ and $\omega_{d}$ is the probability measure on $\mathbb{S}(\mathbb{H}_{d(1)})\cap \overline{\mathbb{H}}_{d}^{+}$ induced by the transformation $u\rightarrow V=uu^{\ast}$, where $u$ is a uniformly distributed column random vector in the unit sphere of $\mathbb{C}^{d}$.

Part (a) follows from the first term in the Lévy exponent of $M_{d}$ on page $635$ of [@CD], where the notation $\Lambda_{d}(\mu)$ is used for the distribution of $M_{d}$. For (b), the form of the covariance operator $\mathcal{A}_{d}$ was implicitly computed in the first example in Section II.C of [@CD]. Finally, the polar decomposition of the Lévy measure (\[PDBGCD\]) was found in [@DRA].
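The compound Poisson BGCD ensemble can be simulated directly. In the sketch below we take $\mathrm{Poisson}(\lambda d)$ rank-one jumps $u_{k}u_{k}^{\ast}$ up to time one — the factor $d$ is our reading of the scaling in (\[PDBGCD\]), an assumption of this illustration — and check that the eigenvalues of $M_{d}(1)$ fall in the Marchenko-Pastur support $[(1-\sqrt{\lambda})^{2},(1+\sqrt{\lambda})^{2}]$:

```python
import numpy as np

rng = np.random.default_rng(5)
d, lam = 300, 1.0

# Rank-one jumps u_k u_k* with u_k uniform on the unit sphere of C^d.
# Poisson(lam * d) jumps up to time one (the factor d is an assumption,
# read off from the scaling of the Levy measure).
n = rng.poisson(lam * d)
U = rng.normal(size=(n, d)) + 1j * rng.normal(size=(n, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)
M = np.einsum('ki,kj->ij', U, U.conj())       # M_d(1) = sum_k u_k u_k*

w = np.linalg.eigvalsh(M)
# Marchenko-Pastur with lam = 1: mean 1, support [0, 4].
```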
The Lévy-Itô decomposition of the Lévy process associated to the BGCD model $M_{d}$ is given by $$M_{d}(t)=\psi t\mathrm{I}_{d}+B_{\mathcal{A}_{d}}(t)+\int _{[0,t]}\int_{\left \{ \left \Vert V\right \Vert \leq1\right \} \cap \mathbb{H}_{d(1)}}V\widetilde{J}_{d}(\mathrm{d}s\mathrm{,d}V)+\int_{[0,t]}\int_{\left \{ \left \Vert V\right \Vert >1\right \} \cap \mathbb{H}_{d(1)}}VJ_{d}(\mathrm{d}s,\mathrm{d}V)\text{,} \label{LIDbgcd}$$ where $t\geq0$, $\mathcal{A}_{d}\Theta=a^{2}\frac{1}{d+1}(\Theta +\mathrm{tr}(\Theta)\mathrm{I}_{d})$, and $J_{d}(t,E)=\# \left \{ 0\leq s\leq t:\Delta M_{d}(s)\in E\right \} =J_{d}(t,E\cap \mathbb{H}_{d(1)})$ for any measurable $E\subset \mathbb{H}_{d}\backslash \{0\}$. Its quadratic variation is obtained from (\[QVLP\]) as the matrix subordinator $$\left[ M_{d}\right] (t)=a^{2}t\mathrm{I}_{d}+\int_{[0,t]}\int_{\mathbb{H}_{d(1)}\backslash \{0\}}VV^{\ast}J_{d}(\mathrm{d}s,\mathrm{d}V)=a^{2}t\mathrm{I}_{d}+\sum_{s\leq t}\Delta M_{d}(s)\left( \Delta M_{d}(s)\right) ^{\ast}.\nonumber$$

It is possible to obtain BGCD models of symmetric random matrices rather than Hermitian ones. Indeed, slight changes in the proof of [@BG Theorem 3.1] give, for each $d\geq1$, a $d\times d$ real symmetric random matrix $M_{d}$ with an orthogonally invariant infinitely divisible matrix distribution. The asymptotic spectral distribution of the corresponding Hermitian and symmetric ensembles is the same, just as the semicircle distribution is the asymptotic spectral distribution of both the Gaussian Unitary Ensemble and the Gaussian Orthogonal Ensemble.

Bounded variation case
======================

It is well known that the quadratic variation of a one-dimensional Lévy process is a subordinator; see [@CT Example 8.5]. The following result gives a converse and a generalization to matrix subordinators with rank one jumps. The one-dimensional case is given in [@Se10 Lemma 6.5].
\[Sub\] Let $L_{d}=\left \{ L_{d}(t):t\geq0\right \} $ be a Lévy process in $\overline{\mathbb{H}}_{d}^{+}$ whose jumps are of rank one almost surely. Then there exists a Lévy process $X=\left \{ X(t):t\geq0\right \} $ in $\mathbb{C}^{d}$ such that $L_{d}(t)=\left[ X\right] (t)$.

We construct $X$ via its Lévy-Itô decomposition. Using (\[LIFV\]), for each $d\geq1$, $L_{d}$ is an $\overline{\mathbb{H}}_{d}^{+}$-valued process of bounded variation with Lévy-Itô decomposition$$L_{d}(t)=\Psi_{0}t+\int \nolimits_{\left[ 0,t\right] }\int_{\mathbb{H}_{d\left( 1\right) }^{+}\backslash \{0\}}VJ_{L_{d}}(\mathrm{d}s,\mathrm{d}V)\text{, }t\geq0\text{,}$$ where $\Psi_{0}\in \overline{\mathbb{H}}_{d}^{+}$ and $J_{L_{d}}$ is the Poisson random measure of $L_{d}$. Let $Leb\otimes \nu_{L_{d}}$ denote the intensity measure of $J_{L_{d}}$. Consider the cone $C_{+}^{d}=\left \{ x=\left( x_{1},x_{2},\ldots ,x_{d}\right) :x_{1}\geq0,\text{ }x_{j}\in \mathbb{C},\text{ }j=2,...,d\right \} $ and let $\varphi_{+}:\mathbb{R}_{+}\times \mathbb{H}_{d(1)}^{+}\rightarrow \mathbb{R}_{+}\times C_{+}^{d}$ be defined as $\varphi_{+}\left( t,V\right) =(t,x)$ where $V=xx^{\ast}$ and $x\in C_{+}^{d}$. Let $\overline{\varphi}_{+}:\mathbb{H}_{d(1)}^{+}\rightarrow C_{+}^{d}$ be defined by $\overline{\varphi}_{+}\left( V\right) =x$ for $V=xx^{\ast}$ and $x\in C_{+}^{d}$. By Remark \[Decomp\] (a) the functions $\varphi_{+}$ and $\overline{\varphi}_{+}$ are well defined. Let us define $J(\mathrm{d}s,\mathrm{d}x)=\left( J_{L_{d}}\circ \varphi _{+}^{-1}\right) \left( \mathrm{d}s,\mathrm{d}x\right) $, the random measure induced by the transformation $\varphi_{+}$, which is a Poisson random measure on $\mathbb{R}_{+}\times C_{+}^{d}$.
Observe that $\mathbb{E}\left[ J(t,F)\right] =\mathbb{E}\left[ J_{L_{d}}\circ \varphi_{+}^{-1}\left( \left \{ t\right \} \times F\right) \right] =t\nu_{L_{d}}\left( \overline{\varphi}_{+}\left( F\right) \right) =t\left( \nu_{L_{d}}\circ \overline{\varphi}_{+}^{-1}\right) \left( F\right) $ for $F\allowbreak \in \allowbreak \mathcal{B(}\allowbreak C_{+}^{d}\allowbreak \backslash \left \{ 0\right \} )$. Let us denote $\nu=\nu_{L_{d}}\circ \overline{\varphi}_{+}^{-1}$ which is a Lévy measure on $C_{+}^{d}$ since$$\int_{C_{+}^{d}\backslash \left \{ 0\right \} }\left( 1\wedge \left \vert x\right \vert ^{2}\right) \nu(\mathrm{d}x)=\int_{C_{+}^{d}\backslash \left \{ 0\right \} }\left( 1\wedge \left \vert x\right \vert ^{2}\right) \nu_{L_{d}}\circ \overline{\varphi}_{+}^{-1}(\mathrm{d}x)$$$$=\int_{C_{+}^{d}\backslash \left \{ 0\right \} }\left( 1\wedge \mathrm{tr}\left( xx^{\ast}\right) \right) \nu_{L_{d}}\circ \overline{\varphi}_{+}^{-1}(\mathrm{d}x)=\int_{\mathbb{H}_{d(1)}^{+}\backslash \left \{ 0\right \} }\left( 1\wedge \mathrm{tr}\left( V\right) \right) \left( \nu_{L_{d}}\circ \overline{\varphi}_{+}^{-1}\right) \circ f^{-1}(\mathrm{d}V)$$$$=\int_{\mathbb{H}_{d(1)}^{+}\backslash \left \{ 0\right \} }\left( 1\wedge \mathrm{tr}\left( V\right) \right) \nu_{L_{d}}(\mathrm{d}V)<\infty \text{,}$$ where $\left( \nu_{L_{d}}\circ \overline{\varphi}_{+}^{-1}\right) \circ f^{-1}=\nu_{L_{d}},$ with $f\left( x\right) =xx^{\ast}$ and we have used that $\mathrm{tr}\left( V\right) \leq \alpha \left \Vert V\right \Vert $ for some constant $\alpha>0$. Thus $Leb\otimes \nu$ is the intensity measure of the Poisson random measure $J$. 
Let us take the Lévy process in $\mathbb{C}^{d}$$$X(t)=\left \vert \Psi_{0}\right \vert ^{1/2}B_{I}(t)+\int \nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\cap \{ \left \vert x\right \vert \leq 1\}}x\widetilde{J}(\mathrm{d}s,\mathrm{d}x)+\int \nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\cap \{ \left \vert x\right \vert >1\}}xJ(\mathrm{d}s,\mathrm{d}x)\text{, }t\geq0\text{,} \label{LIX}$$ where $B_{I}$ is a $\mathbb{C}^{d}$-valued standard Brownian motion with quadratic variation $tI_{d}$, (i.e. (\[CovCB\]) with $q=1$). Thus the quadratic variation of $X$ is given by$$\left[ X\right] (t)=\left[ \left \vert \Psi_{0}\right \vert ^{1/2}B_{I},B_{I}^{\ast}\left \vert \Psi_{0}\right \vert ^{1/2}\right] (t)+\int \nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\backslash \{0\}}xx^{\ast }J(\mathrm{d}s,\mathrm{d}x)$$$$=\Psi_{0}t+\int \nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\backslash \{0\}}xx^{\ast}J_{L_{d}}\circ \varphi_{+}^{-1}(\mathrm{d}s,\mathrm{d}x)=\Psi_{0}t+\int \nolimits_{\left[ 0,t\right] }\int _{\mathbb{H}_{d(1)}^{+}\backslash \left \{ 0\right \} }VJ_{L_{d}}\circ \varphi_{+}^{-1}\circ h^{-1}(\mathrm{d}s,\mathrm{d}V)$$$$=\Psi_{0}t+\int \nolimits_{\left[ 0,t\right] }\int_{\mathbb{H}_{d(1)}^{+}\backslash \left \{ 0\right \} }VJ_{L_{d}}(\mathrm{d}s,\mathrm{d}V)=L_{d}(t),$$ where $J_{L_{d}}\circ \varphi_{+}^{-1}\circ h^{-1}=J_{L_{d}},$ with $h\left( t,x\right) =\left( t,xx^{\ast}\right) .$ For the general bounded variation case we have the following Wiener-Hopf type decomposition. \[Bdv\]Let $L_{d}=\left \{ L_{d}(t):t\geq0\right \} $ be a Lévy process in $\mathbb{H}_{d}$ of bounded variation whose jumps are of rank one almost surely. Then there exist Lévy processes $X=\left \{ X(t):t\geq 0\right \} $ and $Y=\left \{ Y(t):t\geq0\right \} $ in $\mathbb{C}^{d}$ such that $$L_{d}(t)=\left[ X\right] (t)-\left[ Y\right] (t). 
\label{WHTD}$$ Moreover, $\left \{ \left[ X\right] (t):t\geq0\right \} $ and $\left \{ \left[ Y\right] (t):t\geq0\right \} $ are independent processes. For each $d\geq1$, $L_{d}$ is an $\mathbb{H}_{d}$-process of bounded variation with Lévy-Itô decomposition$$L_{d}(t)=\Psi t+\int \nolimits_{\left[ 0,t\right] }\int_{\mathbb{H}_{d(1)}\backslash \{0\}}VJ_{L_{d}}(\mathrm{d}s,\mathrm{d}V)\text{, }t\geq0\text{,} \label{LIDBV}$$ where $\Psi \in \mathbb{H}_{d}$ and $J_{L_{d}}$ is the Poisson random measure of $L_{d}$. Let $Leb\otimes \nu_{L_{d}}$ denote the intensity measure of $J_{L_{d}}$. First we prove that $L_{d}=L_{d}^{1}-L_{d}^{2}$ where $L_{d}^{1}$ and $L_{d}^{2}$ are the Lévy processes in $\overline{\mathbb{H}}_{d}^{+}$ given by (\[Quad1decomp\]) and (\[Quad2decomp\]). Every $V\in \mathbb{H}_{d\left( 1\right) }$ can be written as $V=\lambda uu^{\ast}$, where $\lambda$ is the eigenvalue of $V$ and $u$ is a unit vector in $\mathbb{C}^{d}$. Let us define $\left \vert V\right \vert =\left \vert \lambda \right \vert uu^{\ast}$ and $V^{+}=\lambda^{+}uu^{\ast}$, $V^{-}=\lambda^{-}uu^{\ast}$, where $\lambda^{+}=\lambda$ if $\lambda \geq0$ and $\lambda^{+}=0$ otherwise, while $\lambda^{-}=-\lambda$ if $\lambda<0$ and $\lambda^{-}=0$ otherwise. Let $\varphi_{+}:\mathbb{R}_{+}\times \mathbb{H}_{d(1)}\rightarrow \mathbb{R}_{+}\times \mathbb{H}_{d(1)}^{+}$ and $\varphi_{-}:\mathbb{R}_{+}\times \mathbb{H}_{d(1)}\rightarrow \mathbb{R}_{+}\times \mathbb{H}_{d(1)}^{+}$ be defined as $\varphi_{+}\left( t,V\right) =(t,V^{+})$ and $\varphi_{-}\left( t,V\right) =(t,V^{-})$ respectively. Let $\overline {\varphi}_{+}:\mathbb{H}_{d(1)}\rightarrow \mathbb{H}_{d(1)}^{+}$ and $\overline{\varphi}_{-}:\mathbb{H}_{d(1)}\rightarrow \mathbb{H}_{d(1)}^{+}$ be defined as $\overline{\varphi}_{+}(V)=V^{+}$ and $\overline{\varphi}_{-}(V)=V^{-}$ respectively. By Remark \[Decomp\] (b) the functions $\varphi_{+},$ $\overline{\varphi}_{+},$ $\varphi_{-}$ and $\overline{\varphi }_{-}$ are well defined and hence $V=\overline{\varphi}_{+}(V)-\overline {\varphi}_{-}(V)$.
Let us define $J^{+}(\mathrm{d}s,\mathrm{d}x)=\left( J_{L_{d}}\circ \varphi_{+}^{-1}\right) \left( \mathrm{d}s,\mathrm{d}x\right) $ and $J^{-}(\mathrm{d}s,\mathrm{d}x)=\left( J_{L_{d}}\circ \varphi_{-}^{-1}\right) \left( \mathrm{d}s,\mathrm{d}x\right) $ the random measures induced by the transformations $\varphi_{+}$ and $\varphi_{-}$ respectively, which are Poisson random measures both on $\mathbb{R}_{+}\times \mathbb{H}_{d(1)}^{+}$. Observe that $\mathbb{E}\left[ J^{+}(t,F)\right] =\mathbb{E[}J_{L_{d}}\circ \allowbreak \varphi_{+}^{-1}(\allowbreak \left \{ t\right \} \allowbreak \times \allowbreak F)]=t\nu_{L_{d}}\left( \overline{\varphi}_{+}^{-1}\left( F\right) \right) =t\left( \nu_{L_{d}}\circ \overline{\varphi}_{+}^{-1}\right) \left( F\right) $ for $F\in \mathcal{B}\left( \mathbb{H}_{d(1)}^{+}\backslash \left \{ 0\right \} \right) $ and similarly $\mathbb{E}\left[ J^{-}(t,F)\right] =t\left( \nu_{L_{d}}\circ \overline{\varphi}_{-}^{-1}\right) \left( F\right) $. Let us denote $\nu_{L_{d}}^{+}=\nu_{L_{d}}\circ \overline{\varphi}_{+}^{-1}$ and $\nu_{L_{d}}^{-}=\nu_{L_{d}}\circ \overline{\varphi}_{-}^{-1}$. Note that $\nu_{L_{d}}^{+}$ is a Lévy measure on $\mathbb{H}_{d(1)}^{+}$ since$$\begin{aligned} \infty & >\int_{\mathbb{H}_{d(1)}\backslash \left \{ 0\right \} }\left( 1\wedge \left \Vert V\right \Vert \right) \nu_{L_{d}}(\mathrm{d}V)\geq \int_{\mathbb{H}_{d(1)}\backslash \left \{ 0\right \} }\left( 1\wedge \left \Vert \overline{\varphi}_{+}(V)\right \Vert \right) \nu_{L_{d}}(\mathrm{d}V)\\ & =\int_{\mathbb{H}_{d(1)}^{+}\backslash \left \{ 0\right \} }\left( 1\wedge \left \Vert W\right \Vert \right) \nu_{L_{d}}^{+}(\mathrm{d}W)\text{.}$$ Hence $Leb\otimes \nu_{L_{d}}^{+}$ is the intensity measure of $J^{+}$. Similarly, one can see that $Leb\otimes \nu_{L_{d}}^{-}$ is the intensity measure of $J^{-}$. There exist $\Psi^{+}$ and $\Psi^{-}$ in $\mathbb{H}_{d}^{+}$ such that $\Psi=\Psi^{+}-\Psi^{-}$. 
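The decomposition $\Psi=\Psi^{+}-\Psi^{-}$ is the usual spectral splitting into positive and negative parts; the following `numpy` sketch (with a randomly generated Hermitian matrix standing in for $\Psi$, for illustration only) makes it concrete:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3

# A generic Hermitian matrix standing in for the drift Psi (illustrative only).
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
Psi = (A + A.conj().T) / 2

# Spectral decomposition: split the eigenvalues into positive and negative parts.
lam, U = np.linalg.eigh(Psi)
Psi_plus = (U * np.maximum(lam, 0)) @ U.conj().T
Psi_minus = (U * np.maximum(-lam, 0)) @ U.conj().T

assert np.allclose(Psi, Psi_plus - Psi_minus)           # Psi = Psi+ - Psi-
assert np.all(np.linalg.eigvalsh(Psi_plus) >= -1e-10)   # Psi+ >= 0
assert np.all(np.linalg.eigvalsh(Psi_minus) >= -1e-10)  # Psi- >= 0
```

Since $\Psi^{+}$ and $\Psi^{-}$ share the eigenbasis of $\Psi$, they commute and in fact $\Psi^{+}\Psi^{-}=0$.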
Let us take the Lévy processes $X$ and $Y$ in $\mathbb{C}^{d}$$$X(t)=\left \vert \Psi^{+}\right \vert ^{1/2}B_{I}(t)+\int \nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\cap \{ \left \vert x\right \vert \leq 1\}}x\widetilde{J}^{+}(\mathrm{d}s,\mathrm{d}x)+\int \nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\cap \{ \left \vert x\right \vert >1\}}xJ^{+}(\mathrm{d}s,\mathrm{d}x)\text{, }t\geq0\text{,}$$$$Y(t)=\left \vert \Psi^{-}\right \vert ^{1/2}B_{I}(t)+\int \nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\cap \{ \left \vert x\right \vert \leq 1\}}x\widetilde{J}^{-}(\mathrm{d}s,\mathrm{d}x)+\int \nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\cap \{ \left \vert x\right \vert >1\}}xJ^{-}(\mathrm{d}s,\mathrm{d}x)\text{, }t\geq0\text{,}$$ where $B_{I}$ is a $\mathbb{C}^{d}$-valued standard Brownian motion with quadratic variation $tI_{d}$. Observe that$$\left[ X\right] (t)=\Psi^{+}t+\int \nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\backslash \{0\}}xx^{\ast}J^{+}(\mathrm{d}s,\mathrm{d}x)=\Psi^{+}t+\int \nolimits_{\left[ 0,t\right] }\int_{\mathbb{H}_{d(1)}^{+}\backslash \{0\}}VJ_{L_{d}}(\mathrm{d}s,\mathrm{d}V) \label{Quad1decomp}$$ and$$\begin{aligned} \left[ Y\right] (t) & =\Psi^{-}t+\int \nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\backslash \{0\}}xx^{\ast}J^{-}(\mathrm{d}s,\mathrm{d}x)=\Psi^{-}t-\int \nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\backslash \{0\}}\left( -xx^{\ast}\right) J_{L_{d}}(\mathrm{d}s,\mathrm{d}x)\nonumber \\ & =\Psi^{-}t-\int \nolimits_{\left[ 0,t\right] }\int_{\mathbb{H}_{d(1)}^{-}\backslash \{0\}}VJ_{L_{d}}(\mathrm{d}s,\mathrm{d}V), \label{Quad2decomp}$$ where $\mathbb{H}_{d(1)}^{-}$ denotes the cone of negative (nonpositive) definite matrices of rank one in $\mathbb{H}_{d}$. The first assertion follows from (\[LIDBV\]).
Finally, since $J_{L_{d}}$ is a Poisson random measure and $\mathbb{H}_{d(1)}^{+}\backslash \{0\}$ and $\mathbb{H}_{d(1)}^{-}\backslash \{0\}$ are disjoint sets, from the last expressions in (\[Quad1decomp\]) and (\[Quad2decomp\]) we have that $\left[ X\right] $ and $\left[ Y\right] $ are independent processes, although $X$ and $Y$ are not. Next we consider the matrix Lévy processes associated to the BGCD matrix ensembles $(M_{d})_{d\geq1}$. We have the following two consequences of the former results. \[corBG\]Let $M_{d}=\left \{ M_{d}(t):t\geq0\right \} $ be the matrix Lévy process associated to the BGCD random matrix ensembles. a\) Let $\mu$ be the infinitely divisible distribution with triplet $\left( 0,\psi,\nu \right) $ associated to $M_{d}$ such that $$\int_{\left \vert x\right \vert \leq1}\left( 1\wedge x\right) \nu (\mathrm{d}x)<\infty,\ \ \nu((-\infty,0])=0\ \text{ and\ }\mathcal{\psi}_{0}:=\mathcal{\psi}-\int_{x\leq1}x\nu(\mathrm{d}x)\geq0.$$ Let us consider the Lévy-Itô decomposition of $M_{d}(t)$ in $\overline{\mathbb{H}}_{d}^{+}$ $$M_{d}(t)=\mathcal{\psi}_{0}tdI_{d}+\int \nolimits_{\left[ 0,t\right] }\int_{\mathbb{H}_{d(1)}^{+}\backslash \{0\}}VJ_{M_{d}}(\mathrm{d}s,\mathrm{d}V).$$ Then there exists a Lévy process $X=\left \{ X(t):t\geq0\right \} $ in $\mathbb{C}^{d}$ such that $M_{d}(t)=\left[ X\right] (t)$, where $$X(t)=\left \vert \mathcal{\psi}_{0}\right \vert ^{1/2}B_{I}(t)+\int \nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\cap \{ \left \vert x\right \vert \leq1\}}x\widetilde{J}(\mathrm{d}s,\mathrm{d}x)+\int \nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\cap \{ \left \vert x\right \vert >1\}}xJ(\mathrm{d}s,\mathrm{d}x)\text{, }t\geq0\text{,}$$ $B_{I}$ is a $\mathbb{C}^{d}$-valued standard Brownian motion with quadratic variation $tI_{d}$, and the Poisson random measure $J$ is given by $J=J_{M_{d}}\circ \varphi_{+}^{-1}$. 
b\) If $M_{d}$ has bounded variation then there exist Lévy processes $X=\left \{ X(t):t\geq0\right \} $ and $Y=\left \{ Y(t):t\geq0\right \} $ in $\mathbb{C}^{d}$ such that $M_{d}(t)=\left[ X\right] (t)-\left[ Y\right] (t),$ where $\left \{ \left[ X\right] (t):t\geq0\right \} $ and $\left \{ \left[ Y\right] (t):t\geq0\right \} $ are independent. Covariation matrix processes approximation ========================================== We now consider the approximation of general BGCD ensembles by BGCD matrix compound Poisson processes which are covariations of $\mathbb{C}^{d}$-valued Lévy processes. The following result gives realizations of BGCD ensembles of compound Poisson type as the covariation of two $\mathbb{C}^{d}$-valued Lévy processes. Its proof is straightforward. \[BGCDCPP\] Let $\mu$ be a compound Poisson distribution on $\mathbb{R}$ with Lévy measure $\nu$ and drift $\mathcal{\psi}\in \mathbb{R}$ and let $(M_{d})_{d\geq1}$ be the BGCD matrix ensemble for $\Lambda(\mu).$ For each $d\geq1$, assume that i\) $(\beta_{j})_{j\geq1}$ is a sequence of i.i.d. random variables with distribution $\nu/\nu \left( \mathbb{R}\right) $. ii\) $(u_{j})_{j\geq1}$ is a sequence of i.i.d. random vectors with uniform distribution on the unit sphere of $\mathbb{C}^{d}$. iii\) $\left \{ N(t)\right \} _{t\geq0}$ is a Poisson process with parameter one. Assume that $(\beta_{j})_{j\geq1}$, $(u_{j})_{j\geq1}$ and $\left \{ N(t)\right \} _{t\geq0}$ are independent. Then a\) $M_{d}$ has the same distribution as $M_{d}(1)$, where $$M_{d}(t)=\mathcal{\psi}tI_{d}+\sum_{j=1}^{N(t)}\beta_{j}u_{j}u_{j}^{\ast },\quad t\geq0.
\label{BGCP1}$$ b\) $M_{d}(\cdot)=[X_{d},Y_{d}](\cdot)$ where $X_{d}=\left \{ X_{d}(t)\right \} _{t\geq0},$ $Y_{d}=\left \{ Y_{d}(t)\right \} _{t\geq0}$ are the $\mathbb{C}^{d}$-valued Lévy processes$$X_{d}(t)=\sqrt{\left \vert \mathcal{\psi}\right \vert }B(t)+\sum_{j=1}^{N(t)}\sqrt{\left \vert \beta_{j}\right \vert }u_{j},\quad t\geq0, \label{BGCP2}$$$$Y_{d}(t)=\mathrm{sign}\left( \mathcal{\psi}\right) \sqrt{\left \vert \mathcal{\psi}\right \vert }B(t)+\sum_{j=1}^{N(t)}\mathrm{sign}\left( \beta_{j}\right) \sqrt{\left \vert \beta_{j}\right \vert }u_{j},\quad t\geq0, \label{BGCP3}$$ and $B=\left \{ B(t)\right \} _{t\geq0}$ is a $\mathbb{C}^{d}$-valued standard Brownian motion independent of $(\beta_{j})_{j\geq1}$, $(u_{j})_{j\geq1}$ and $\left \{ N(t)\right \} _{t\geq0}$. For the general case we have the following sample path approximation by covariation processes for Lévy processes generated by the BGCD matrix ensembles. \[General\] Let $\mu$ be an infinitely divisible distribution on $\mathbb{R}$ with triplet $(a^{2},\psi,\nu)$ and let $(M_{d})_{d\geq1}$ be the corresponding BGCD matrix ensemble for $\Lambda(\mu).$ Let $d\geq1$ be fixed and assume that for $n\geq1$ i\) $(\beta_{j}^{n})_{j\geq1}$ is a sequence of i.i.d. random variables with distribution $\mu^{\ast \frac{1}{n}}$. ii\) $(u_{j}^{n})_{j\geq1}$ is a sequence of i.i.d. random vectors with uniform distribution on the unit sphere of $\mathbb{C}^{d}$. iii\) $N^{n}=\left \{ N^{n}(t)\right \} _{t\geq0}$ is a Poisson process with parameter $n$. iv\) $B^{n}=\left \{ B^{n}(t)\right \} _{t\geq0}$ is a $\mathbb{C}^{d}$-valued standard Brownian motion. v\) $(\beta_{j}^{n})_{j\geq1}$, $(u_{j}^{n})_{j\geq1},N^{n}$ and $B^{n}$ are independent.
Let $$X_{d}^{n}(t)=\sqrt{\left \vert \mathcal{\psi}\right \vert }B^{n}(t)+\sum _{j=1}^{N^{n}(t)}\sqrt{\left \vert \beta_{j}^{n}\right \vert }u_{j}^{n},\quad t\geq0, \label{Gen1}$$$$Y_{d}^{n}(t)=\mathrm{sign}\left( \mathcal{\psi}\right) \sqrt{\left \vert \mathcal{\psi}\right \vert }B^{n}(t)+\sum_{j=1}^{N^{n}(t)}\mathrm{sign}\left( \beta_{j}^{n}\right) \sqrt{\left \vert \beta_{j}^{n}\right \vert }u_{j}^{n},\quad t\geq0. \label{Gen2}$$ Then for each $d\geq1$ there exist $\mathbb{M}_{d}$-valued processes $\widetilde{M}_{d}^{n}=\left \{ \widetilde{M}_{d}^{n}(t)\right \} _{t\geq0}$ such that $\widetilde{M}_{d}^{n}\overset{\mathcal{L}}{=}[X_{d}^{n},Y_{d}^{n}]$,$$\sup_{0<s\leq t}\left \Vert \widetilde{M}_{d}^{n}(s)-M_{d}(s)\right \Vert \underset{n\rightarrow \infty}{\overset{\Pr}{\longrightarrow}}0\text{,\quad }\forall t\geq0\text{,}$$ where $\left \{ M_{d}(t):t\geq0\right \} $ is the $\mathbb{M}_{d}$-valued Lévy process associated to $(M_{d})_{d\geq1}$. By the compound Poisson approximation for infinitely divisible distributions on $\mathbb{R}$ (see [@Sato1 p. 45]), we choose an infinitely divisible distribution $\mu_{n}$ such that $\mu_{n}\longrightarrow \mu,$ where we take the triplet of $\mu_{n}$ as $\left( 0,\psi^{n},\nu^{n}\right) ,$ $\psi ^{n}=\int \frac{x}{1+\left \vert x\right \vert ^{2}}\nu^{n}\left( dx\right) $ and $\nu^{n}=n\mu^{\ast \frac{1}{n}}$, satisfying (see [@Sato1 Theorem 8.7]) that for every bounded continuous function $f$ vanishing in a neighborhood of zero$$\int_{\mathbb{R}}f\left( r\right) \nu^{n}\left( dr\right) \longrightarrow \int_{\mathbb{R}}f\left( r\right) \nu \left( dr\right) \text{ as }n\rightarrow \infty \text{,} \label{lev}$$ for each $\varepsilon>0$$$\int_{\left \vert r\right \vert \leq \varepsilon}r^{2}\nu^{n}\left( dr\right) \longrightarrow a^{2}\text{ as }n\rightarrow \infty, \label{gau}$$ and $\psi^{n}\rightarrow \psi$.
A proof similar to that of Proposition \[BGCDCPP\] gives$$M_{d}^{n}(t):=\left[ X_{d}^{n},Y_{d}^{n\ast}\right] (t)=\mathcal{\psi}t\mathrm{I}_{d}+\sum_{j=1}^{N^{n}(t)}\beta_{j}^{n}u_{j}^{n}u_{j}^{n\ast},$$ which is a matrix-valued compound Poisson process with triplet $\left( \mathcal{A}_{d}^{n},\psi_{d}^{n},\nu_{d}^{n}\right) $ given by $\mathcal{A}_{d}^{n}=0,\ \psi_{d}^{n}=\psi \mathrm{I}_{d}$ and$$\nu_{d}^{n}\left( E\right) =d\int_{\mathbb{S}(\mathbb{H}_{d(1)})}\int _{0}^{\infty}1_{E}\left( rV\right) \nu_{V}^{n}\left( \mathrm{d}r\right) \Pi \left( \mathrm{d}V\right) ,\quad E\in \mathcal{B}\left( \mathbb{H}_{d}\backslash \left \{ 0\right \} \right) \text{,} \label{nudn}$$ where $\nu_{V}^{n}=\nu^{n}|_{(0,\infty)}$ or $\nu^{n}|_{(-\infty,0)}$ according to $V\geq0$ or $V\leq0$ and $\Pi$ is the measure on $\mathbb{S}(\mathbb{H}_{d(1)})$ in (\[pi\]). We will prove that $M_{d}^{n}\overset{\mathcal{L}}{\longrightarrow}M_{d}$ by showing that the triplet $\left( \mathcal{A}_{d}^{n},\psi_{d}^{n},\nu_{d}^{n}\right) $ converges to the triplet $\left( \mathcal{A}_{d},\psi_{d},\nu_{d}\right) $ of the BGCD matrix ensemble in Proposition \[polar\] in the sense of Proposition \[convternas\]: We observe that $\psi_{d}^{n}=\psi \mathrm{I}_{d}$ for each $n$. Let $f:\mathbb{H}_{d(1)}\longrightarrow \mathbb{R}$ be a continuous bounded function vanishing in a neighborhood of zero. Using the polar decomposition (\[PDBGCD\]) for $\nu_{d}^{n}$ we have$$\begin{aligned} \int_{\mathbb{H}_{d(1)}}f\left( \xi \right) \nu_{d}^{n}\left( d\xi \right) & =d\int_{\mathbb{S}(\mathbb{H}_{d(1)})}\int_{0}^{\infty}f\left( rV\right) \nu_{V}^{n}\left( dr\right) \Pi \left( dV\right) \nonumber \\ & =d\int_{\mathbb{S}(\mathbb{H}_{d(1)})\cap \overline{\mathbb{H}}_{d}^{+}}\int_{\left \{ -1,1\right \} }\int_{0}^{\infty}f\left( trV\right) \nu _{V}^{n}\left( dr\right) \lambda^{n}\left( dt\right) \omega_{d}\left( dV\right) .
\label{intfb}$$ For $V\in \mathbb{S}(\mathbb{H}_{d(1)})\cap \overline{\mathbb{H}}_{d}^{+}$ fixed, $$\begin{aligned} \int_{\left \{ -1,1\right \} }\int_{0}^{\infty}f\left( trV\right) \nu _{V}^{n}\left( dr\right) \lambda^{n}\left( dt\right) & =\lambda ^{n}\left( \left \{ 1\right \} \right) \int_{0}^{\infty}f\left( rV\right) \nu^{n}\left( dr\right) \\ & +\lambda^{n}\left( \left \{ -1\right \} \right) \int_{-\infty}^{0}f\left( rV\right) \nu^{n}\left( dr\right) \text{.}$$ As a function of $r$, $f\left( rV\right) $ is a real valued continuous bounded function vanishing in a neighborhood of zero, hence using (\[lev\])$$\lambda^{n}\left( \left \{ 1\right \} \right) \int_{0}^{\infty}f\left( rV\right) \nu^{n}\left( dr\right) \longrightarrow \lambda \left( \left \{ 1\right \} \right) \int_{0}^{\infty}f\left( rV\right) \nu \left( dr\right)$$ and$$\lambda^{n}\left( \left \{ -1\right \} \right) \int_{-\infty}^{0}f\left( rV\right) \nu^{n}\left( dr\right) \longrightarrow \lambda \left( \left \{ -1\right \} \right) \int_{-\infty}^{0}f\left( rV\right) \nu \left( dr\right) .$$ Then from (\[intfb\])$$\begin{aligned} \int_{\mathbb{H}_{d(1)}}f\left( \xi \right) \nu_{d}^{n}\left( d\xi \right) & \longrightarrow d\int_{\mathbb{S}(\mathbb{H}_{d(1)})\cap \overline {\mathbb{H}}_{d}^{+}}\int_{\left \{ -1,1\right \} }\int_{0}^{\infty}f\left( trV\right) \nu_{V}\left( dr\right) \lambda \left( dt\right) \omega _{d}\left( dV\right) \\ & =d\int_{\mathbb{S}(\mathbb{H}_{d(1)})}\int_{0}^{\infty}f\left( rV\right) \nu_{d}\left( dr\right) \Pi \left( dV\right) =\int_{\mathbb{H}_{d(1)}}f\left( \xi \right) \nu_{d}\left( d\xi \right) .\end{aligned}$$ Next, we verify the convergence of the Gaussian part. 
Let us define, for each $\varepsilon>0$ and $n\geq1,$ the operator $\mathcal{A}^{n,\varepsilon}:\mathbb{H}_{d}\longrightarrow \mathbb{H}_{d}$ by $$\mathrm{tr}\left( \Theta \mathcal{A}^{n,\varepsilon}\Theta \right) =\int_{\left \Vert \xi \right \Vert \leq \varepsilon}\left \vert \mathrm{tr}\left( \Theta \xi \right) \right \vert ^{2}\nu_{d}^{n}\left( d\xi \right) .$$ From (\[nudn\]) we get $$\begin{aligned} & \int_{\left \Vert \xi \right \Vert \leq \varepsilon}\left \vert \mathrm{tr}\left( \Theta \xi \right) \right \vert ^{2}\nu_{d}^{n}\left( d\xi \right) =d\int_{\mathbb{S}(\mathbb{H}_{d(1)})}\int_{0}^{\infty}\mathbf{1}_{\left \{ \left \Vert rV\right \Vert \leq \varepsilon \right \} }\left( rV\right) \left \vert \mathrm{tr}\left( r\Theta V\right) \right \vert ^{2}\nu_{V}^{n}\left( dr\right) \Pi \left( dV\right) \\ & =d\int_{\mathbb{S}(\mathbb{H}_{d(1)})\cap \overline{\mathbb{H}}_{d}^{+}}\int_{\left \{ -1,1\right \} }\int_{0}^{\infty}\mathbf{1}_{\left \{ r\leq \varepsilon \right \} }\left( rtV\right) r^{2}\left \vert \mathrm{tr}\left( \Theta V\right) \right \vert ^{2}\nu_{V}^{n}\left( dr\right) \lambda \left( dt\right) \omega_{d}\left( dV\right) \\ & =d\int_{\mathbb{S}(\mathbb{H}_{d(1)})\cap \overline{\mathbb{H}}_{d}^{+}}\int_{\mathbb{R}}\mathbf{1}_{\left \{ r\leq \varepsilon \right \} }\left( rV\right) r^{2}\left \vert \mathrm{tr}\left( \Theta V\right) \right \vert ^{2}\nu^{n}\left( dr\right) \omega_{d}\left( dV\right) \\ & =d\int_{\mathbb{S}(\mathbb{H}_{d(1)})\cap \overline{\mathbb{H}}_{d}^{+}}\left \vert \mathrm{tr}\left( \Theta V\right) \right \vert ^{2}\int_{\left \vert r\right \vert \leq \varepsilon}r^{2}\nu^{n}\left( dr\right) \omega_{d}\left( dV\right) .\end{aligned}$$ Then using (\[gau\]),$$\int_{\left \Vert \xi \right \Vert \leq \varepsilon}\left \vert \mathrm{tr}\left( \Theta \xi \right) \right \vert ^{2}\nu_{d}^{n}\left( d\xi \right) \longrightarrow da^{2}E_{u}\left \vert \mathrm{tr}\left( \Theta uu^{\ast }\right) \right \vert ^{2}\text{,}$$ where $u$ is a 
uniformly distributed random column vector on the unit sphere of $\mathbb{C}^{d}$. Finally $$da^{2}E_{u}\left \vert \mathrm{tr}\left( \Theta uu^{\ast}\right) \right \vert ^{2}=\frac{a^{2}}{d+1}\left( \mathrm{tr}\left( \Theta^{2}\right) +\left( \mathrm{tr}\left( \Theta \right) \right) ^{2}\right) =\mathrm{tr}\left( \Theta \mathcal{A}_{d}\Theta \right) , \label{covgau}$$ where $\mathcal{A}_{d}$ is as in (\[GPBGCD\]) and the first equality in (\[covgau\]) follows from page $637$ in [@CD]. Thus $M_{d}^{n}\overset{\mathcal{L}}{\longrightarrow}M_{d}$ and the conclusion follows from Proposition \[convproc\]. Final remarks ============= 1. For the present work we do not have a specific financial application in mind. However, infinitely divisible nonnegative definite matrix processes with rank-one jumps, as characterized in Theorem \[Sub\], might be useful in the study of multivariate high-frequency data using realized covariation, where matrix covariation processes appear; see for example [@BNSh04]. Moreover, it seems interesting to explore the construction of financially oriented matrix Lévy-based models as in [@BNSe09] for the specific case of rank-one jump matrix processes of bounded variation. 2. In the direction of free probability, it is well known that the so-called Hermitian Brownian motion matrix ensemble $\left \{ B_{d}(t):t\geq0\right \} $, $d\geq1,$ is a realization of the free Brownian motion. It is an open question whether the matrix Lévy processes from BGCD models $\left \{ M_{d}(t):t\geq0\right \} $, $d\geq1$, are realizations of free Lévy processes. A first step in this direction would be to prove that the increments of a BGCD ensemble become freely independent. A second step, more related to our work, would be to gain insight into the implications of the rank-one condition of the matrix Lévy BGCD process in Corollary \[corBG\] as a realization of a positive free Lévy process. These two problems are the subjects of current research of one of the coauthors.
3. In [@BG07] a new Bercovici-Pata bijection for a certain free convolution $\boxplus_{c}$ is established, and a $d\times d^{\prime}$ random matrix model for this bijection, very close to the one given by the BGCD random matrix model, is constructed. It can be seen that the Lévy measures of these rectangular BGCD random matrices are supported in the subset of $d\times d^{\prime}$ complex matrices of rank one, in a similar way as done in [@DRA] for the BGCD case. It would be of interest to have the analogous results on bounded variation of Section 4 for the Lévy processes associated to these rectangular BGCD random matrices, considering an appropriate nonnegative definite notion for rectangular matrices. **Acknowledgement**. *This work was done while Victor Pérez-Abreu was visiting Universidad Autónoma de Sinaloa in January and May of 2012. The authors thank two referees for the very careful and detailed reading of a previous version of the manuscript and for their comments that improved Theorems \[Sub\] and \[Bdv\] and the presentation of the present version of the manuscript.* [99]{} D. Applebaum (2007): Lévy processes and stochastic integrals in Banach spaces, *Probab. Math. Statist.* **27**, 75-88. O. E. Barndorff-Nielsen and N. Shephard (2004): Econometric analysis of realised covariation: high frequency covariance, regression and correlation in financial economics. *Econometrica* **72**, 885-925. O. E. Barndorff-Nielsen and R. Stelzer (2007): Positive-definite matrix processes of finite variation. *Probab. Math. Statist.* **27**, 3-43. O. E. Barndorff-Nielsen and R. Stelzer (2011): The multivariate supOU stochastic volatility model. *Math. Finance.* To appear. O. E. Barndorff-Nielsen and R. Stelzer (2011): Multivariate supOU processes. *Ann. Appl. Probab.* **21**, 140-182. F. Benaych-Georges (2005): Classical and free infinitely divisible distributions and random matrices. *Ann. Probab.,* **33**, 1134-1170. F.
Benaych-Georges (2007): Infinitely divisible distributions for rectangular free convolution: classification and matricial interpretation. *Probab. Theory Relat. Fields*, **139,** 143-189. F. Benaych-Georges and T. Cabanal-Duvillard (2012): Marchenko-Pastur theorem and Bercovici-Pata bijections for heavy-tailed or localized vectors. Preprint. H. Bercovici and V. Pata, with an appendix by P. Biane (1999): Stable laws and domains of attraction in free probability theory. *Ann. of Math.*, **149**, 1023-1060. T. Cabanal-Duvillard (2005): A matrix representation of the Bercovici-Pata bijection. *Electron. J. Probab.*, **10**, 632-661 (electronic). R. Cont and P. Tankov (2004): *Financial Modelling with Jump Processes*. Chapman & Hall/CRC. J. A. Domínguez-Molina and A. Rocha-Arteaga (2012): Random matrix models of stochastic integrals type for free infinitely divisible distributions. *Periodica Mathematica Hungarica.* **64** (2), 145-160. O. Kallenberg (2002): *Foundations of Modern Probability. Second Edition.* Springer. V. Pérez-Abreu and N. Sakuma (2008): Free generalized gamma convolutions. *Elect. Comm. in Probab*. **13**, 526-539. C. Pigorsch and R. Stelzer (2009): On the definition, stationary distribution and second order structure of positive-semidefinite Ornstein-Uhlenbeck type processes. *Bernoulli* **15**, 754-773. P. Protter (2004): *Stochastic Integration and Differential Equations.* Stoch. Model. Appl. Probab. **21**, Springer. K. Sato (1999): *Lévy Processes and Infinitely Divisible Distributions.* Cambridge Univ. Press, Cambridge. R. Stelzer (2010): Multivariate COGARCH(1, 1) processes. *Bernoulli* **16** (1), 80-115. [^1]: jadguez@uas.edu.mx [^2]: pabreu@cimat.mx [^3]: arteaga@uas.edu.mx
--- abstract: 'The nonlinear dynamics of resistive flow with a chemical reaction is studied. Proceeding from the Lagrangian description, the influence of a chemical reaction on the development of fluid singularities is considered.' author: - | A.R. Karimov\ [^1] Institute for High Temperatures,\ Russian Academy of Sciences,\ Izhorskaya 13/19, Moscow 127412, Russia\ and Department of Electrophysical Facilities,\ National Research Nuclear University MEPhI,\ Kashirskoye shosse 31, Moscow, 115409, Russia\ A.M. Korshunov\ Department of Electrophysical Facilities,\ National Research Nuclear University MEPhI,\ Kashirskoye shosse 31, Moscow, 115409, Russia\ V.V. Beklemishev\ Department of Electrophysical Facilities,\ National Research Nuclear University MEPhI,\ Kashirskoye shosse 31, Moscow, 115409, Russia date: title: Influence of chemical reactions on the nonlinear dynamics of dissipative flows --- Introduction ============ The clustering of matter under different physical conditions remains a topic of interest in modern science because of its importance for understanding the behavior of natural and artificial systems far from equilibrium [@lev]-[@kur]. The process of clustering typically develops in particle ensembles with long-range forces, such as gravitational or electric interactions, and it may lead to the formation of stable, spatially inhomogeneous structures [@zel]-[@dn]. However, there is another possibility of structure formation in dissipative systems; a typical example of such a system is a flow with chemical reactions. In this paper, we examine some special features of the nonlinear dynamics of such flows. The macroscopic dynamics of dissipative systems with chemical reactions is, as a rule, described by diffusion equations [@lev]-[@nic]. However, such a description is valid only for systems whose microscopic transport occurs near some equilibrium state.
If this is not the case and we wish to consider a flow far from equilibrium, such an approach is inadequate, since the dynamics can involve several time and space scales that usually differ by several orders of magnitude. As a result, a full nonlinear treatment on the basis of hydrodynamic equations with chemical sources will be required to investigate the evolution of such a system. Here we consider a particularly simple form of nonlinear dynamics of a dissipative, one-dimensional flow moving in an external field with model chemical terms. We present a class of exact solutions of the fully nonlinear hydrodynamic equations describing a flow with a chemical reaction, which may be useful for identifying the direction of the real system's behavior. Based on this description, we compare the formation of the density profile in dissipative flows with a chemical reaction with the corresponding evolution in flows without chemical reactions. Model ===== We shall study the one-dimensional, time-dependent pattern for an infinite medium containing mobile particles of type $A$ and immobile particles of type $B$, between which there is a chemical reaction leading to the formation of a new immobile component $AB$. Such a situation can be realized when $m_A \ll m_B$, where $m_A$ and $m_B$ are the masses of the $A$ and $B$ particles, respectively. It is assumed that the motion of component $A$ is caused by an external force (for definiteness, let this be the gravitational field, with all particles neutral) and the friction (Stokes) force only.
The macroscopic model of these processes can be presented in a uniform form as $$% ------------ (1_chem) ------------- {\partial n_A \over \partial t} + {\partial \over \partial x}(n_A u) = -W\/, \label{1_chem}$$ $$% ------------ (1a_chem) ------------- {\partial n_B \over \partial t} = -W\/, \label{1a_chem}$$ $$% ------------ (1b_chem) ------------- {\partial n_{AB} \over \partial t} = W\/, \label{1b_chem}$$ $$% ------------ (2_chem) ------------- {\partial u \over \partial t} + u {\partial u \over \partial x} = -\left(\nu + {W\over n_A}\right) u + g\/, \label{2_chem}$$ where $u$ is the velocity of the $A$ component, $n_A$, $n_B$ and $n_{AB}$ are the densities of the $A$, $B$ and $AB$ components, respectively, $\nu$ is the collision frequency, $g$ is the acceleration of free fall (note that the direction of the $0x$ axis is assumed to coincide with the direction of the acceleration of free fall $g$) and $W$ is the rate of the chemical reaction in the system. The specific form of $W$ is determined by the order of the reaction with respect to the components $A$ and $B$. Here we consider only reactions of first ($s=1$) and second ($s=2$) order, in which case we have $$W = \left\{\begin{array}{ll} k_1 n_A, & s=1, \\ k_2 n_A n_B, & s=2, \end{array} \right.$$ where $k_s$ is the chemical reaction constant. For simplicity, here we neglect the dependence of the friction coefficient $\nu$ on the medium parameters, as well as that of the reaction constants $k_s$, setting $k_s=\mathrm{const}$ and $\nu=\mathrm{const}$, since these dependences are not important for our study of the qualitative properties of the flow dynamics. Lagrangian frame ================ In order to see how these factors manifest themselves in the system dynamics, it is convenient to pass from the Euler description of the original system (\[1\_chem\])-(\[2\_chem\]) to the Lagrangian frame.
According to the definition (see, for example, [@sch_2; @dav]) $$% ------------ (4_chem) ------------- \tau=t,\hspace{11mm} \xi=x-\int_0^t u(\xi,t^{\prime})dt^{\prime}\/, \label{4_chem}$$ where $x(\xi, t)$ satisfies the initial condition $$x( \xi, 0)=\xi$$ and provides $$% ------------ (U) ------------- u(\xi, \tau) =\left({\partial x \over \partial \tau} \right)_{\xi}\/, \label{U_nature}$$ the temporal and spatial derivatives are transformed as $$% ------------ (6_chem) ------------- {\partial \over \partial t} \to {\partial \over \partial \tau} - {u \over J} {\partial \over \partial \xi}, \hspace*{11mm} {\partial \over \partial x} \to {1 \over J} {\partial \over \partial \xi} \label{6_chem}$$ with Jacobian $$% ------------ (5_chem) ------------- J(\tau, \xi ) = {\partial x\over \partial \xi} > 0\/, \label{5_chem}$$ moreover, the condition that $J( \xi,\tau)$ conserve its sign eliminates singularities of the flow. Under the transformation (\[6\_chem\]), Eq. (\[2\_chem\]) reduces to $$% ------------ (9_chem) ------------- {\partial u \over \partial \tau} + \gamma u = g\/, \label{9_chem}$$ where $$% ------------ (g_chem) ------------- \gamma= W/n_A +\nu\/. \label{g_chem}$$ At the same time, the continuity equation (\[1\_chem\]) is transformed into $$% ------------ (14_chem) ------------- {\partial \over \partial \tau}\ln(J n_A)=-{W \over n_A}\/. \label{14_chem}$$ In this form, the equation will be needed in the following sections. Dynamics for $s=1$ ================== Following [@chm], we start from the flow with a chemical reaction of first order. In fact, if $\gamma$=const., then Eq. (\[9\_chem\]) is the Newton equation for a particle moving under the influence of gravity and friction. Such a situation occurs when the chemical reaction is of first order, i.e. $s=1$. Physically this means that the concentration of one component is excessive; for example, in our model $n_A \ll n_B$. In this case Eq.
(\[9\_chem\]) has the exact solution $$% ------------ (10_chem) ------------- u(\xi,\tau) = u_0(\xi) e^{-\gamma \tau} + {g \over \gamma} \left(1 - e^{-\gamma \tau}\right)\/, \label{10_chem}$$ where $u_0(\xi)$ is the initial velocity and $\gamma= k_1 +\nu$. With the help of relation (\[U\_nature\]), bearing in mind that $x(\xi, \tau=0) = \xi$, we may deduce the path of a fluid element $$% ------------ (12_chem) ------------- x(\xi,\tau) = \xi + {g\over \gamma}\tau + {1\over \gamma} \left( u_0 - {g \over \gamma}\right)(1 - e^{-\gamma \tau})\/, \label{12_chem}$$ from which we find $$% ------------ (J_chem) ------------- J(\xi,\tau) = 1 + {u_0^{\prime}\over \gamma}(1 - e^{-\gamma \tau})\/. \label{J_chem}$$ Finally, substitution of this relation and $W$ for $s=1$ into (\[14\_chem\]) leads to $$% ------------ (Na_chem) ------------- n_A(\xi, \tau) = {\gamma n_{0A} e^{-k_1 \tau}\over \gamma + (1 - e^{-\gamma \tau}) u_0^{\prime}}\/, \label{na_chem}$$ where $n_{0A} = n_A(\xi, \tau=0)$. Also, from the relation $${\partial u\over \partial x} = {1 \over J} {\partial u \over \partial \xi}$$ taking into account (\[10\_chem\]) and (\[J\_chem\]) we get $$% ------------ (u_chem) ------------- {\partial u\over \partial x} = {\gamma u_0^{\prime} e^{-\gamma \tau}\over \gamma + (1 - e^{-\gamma \tau}) u_0^{\prime}}\/. \label{u_chem}$$ As is seen from (\[na\_chem\]) and (\[u\_chem\]), the chemical reaction affects the flow via the parameter $\gamma$. It is worth noting that in the limit $\gamma \to 0$ the relations (\[na\_chem\]) and (\[u\_chem\]) reduce to $$% ------------ (15b_chem) ------------- n_A = {n_{0A} e^{-k_1 \tau}\over 1 +\tau u_0^{\prime}}, \hspace{11mm} {\partial u\over \partial x} = {u_0^{\prime} \over 1 + \tau u_0^{\prime}}\/. \label{15b_chem}$$ In the absence of a chemical reaction ($W \to 0$) and/or friction ($\nu \to 0$) these relations describe inertial flow in a collisionless neutral gas. When $ u_0^{\prime} <0 $, they predict singular behavior in finite time.
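The closed-form expressions (\[10\_chem\]) and (\[12\_chem\]) are easy to verify numerically; the following Python sketch (with arbitrarily chosen illustrative values of $\gamma$, $g$, $u_0$ and $\xi$) integrates the Lagrangian equations $du/d\tau=-\gamma u+g$, $dx/d\tau=u$ with an explicit Euler scheme and compares the result with the exact solution:

```python
import math

# Check the closed-form Lagrangian solution (hypothetical parameter values):
# du/dtau = -gamma*u + g,  u(0) = u0,  x(0) = xi.
gamma, g, u0, xi = 0.7, 9.8, -0.3, 1.0

def u_exact(tau):
    # Eq. (10_chem)
    return u0 * math.exp(-gamma * tau) + (g / gamma) * (1 - math.exp(-gamma * tau))

def x_exact(tau):
    # Eq. (12_chem)
    return xi + (g / gamma) * tau + (u0 - g / gamma) * (1 - math.exp(-gamma * tau)) / gamma

# Crude explicit Euler integration of the same ODEs for comparison.
n, T = 200000, 2.0
dt = T / n
u, x = u0, xi
for _ in range(n):
    u, x = u + dt * (-gamma * u + g), x + dt * u

assert abs(u - u_exact(T)) < 1e-3
assert abs(x - x_exact(T)) < 1e-3
```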
A discontinuous pattern is a well-known intrinsic feature of fluids, especially in the inviscid, pressureless limit. As is seen from (\[12\_chem\]), for times $\tau < \gamma^{-1}$ the motion of fluid elements proceeds in a similar way. However, in our model of dissipative flow with a chemical reaction some new peculiarities appear. In the case when $\gamma \neq 0$ and $W \neq 0$ the collapse condition is $$% ------------ (16_chem) ------------- 1 + {1 - e^{-\gamma \tau}\over \gamma} u_0^{\prime} = 0 \label{16_chem}$$ from which follows the collapse time $$\tau_*= -{1\over \gamma }\ln\left(1 + {\gamma \over u_0^{\prime}}\right)\/.$$ If $\gamma \to 0$, this estimate reduces to $\tau_* = -1/ u_0^{\prime}$, in agreement with Eq. (\[15b\_chem\]). It is clear that $\tau_*$ for the dissipative flow with a chemical reaction is larger than the corresponding value for the collisionless flow. Thus, the chemical reaction and the friction impose some restrictions on the parameters of the problem but do not prevent the development of a singularity in the system. The physically admissible parameters $u_0$ and $\gamma$ are determined by the condition that the density should be positive for all time, i.e. the following inequalities must be fulfilled $$% ------------ (17_chem) ------------- u_0^{\prime} <0, \hspace{11mm} \mid u_0^{\prime} \mid \geq \mid \gamma \mid\/. \label{17_chem}$$ It should be noted that in this simplest case, for some initial conditions, the nonlinear dynamics can lead to the formation of dynamical space structures [@ks; @kss], in which the initial conditions play the role of a driving parameter that precludes the formation of singularities but provides for the formation of transient dynamical structures. Dynamics for $s=2$ ================== Now we move on to the more realistic case $s=2$. For simplicity, in this section we shall restrict our consideration to the limit $W/n_A \ll \nu$, when we may use the relation (\[J\_chem\]) for our estimations. Eq.
(\[14\_chem\]) then reduces to $$% ------------ (18_chem) ------------- {\partial \over \partial \tau}\ln(J n_A)=- k_2 n_B\/. \label{18_chem}$$ Differentiating relation (\[18\_chem\]) with respect to $\tau$ and using (\[1a\_chem\]), we get $$% ------------ (19_chem) ------------- {\partial^2 \over \partial \tau^2}\ln(J n_A)= k_2^2 n_B n_A\/. \label{19_chem}$$ Then, eliminating $n_B$ from (\[19\_chem\]) with the help of (\[18\_chem\]), we obtain $$% ------------ (20_chem) ------------- J{\partial^2 y\over \partial \tau^2} + k_2 {\partial e^y \over \partial \tau}=0\/, \label{20_chem}$$ where we introduce $$% ------------ (21_chem) ------------- y = \ln(J n_A)\/. \label{21_chem}$$ It should be noted that, within the framework of the model developed here, Eq. (\[20\_chem\]) is exact and holds for any $J$. Now we have to define the initial conditions for Eq. (\[20\_chem\]), which follow from the initial conditions for $n_A$, $n_B$ and $J$. Without much loss of generality, we can set $n_{0A}=1$ and $n_{0B}=\varepsilon<1$, so we have $y(\tau=0)=0$. Proceeding from the obvious relations $${\partial y\over \partial \tau}= {J\dot{n}_A+n_A\dot{J} \over J n_A}, \hspace{7mm} {\partial J\over \partial \tau} = u_0^{\prime}\/,$$ where the overhead dot now denotes the derivative with respect to $\tau$, and Eq. (\[18\_chem\]) rewritten for $\tau=0$ in the form $$\left.{\partial n_A\over \partial \tau}\right|_{\tau=0}=-n_{0A} u_0^{\prime} - k_2n_{0A}n_{0B}$$ we get $$\left.{\partial y\over \partial \tau}\right|_{\tau=0}=-k_2\varepsilon\/.$$ Thus, Eq. (\[20\_chem\]) together with the initial conditions $$% ------------ (22_chem) ------------- y(\tau=0)=0, \hspace{11mm} \left.{\partial y\over \partial \tau}\right|_{\tau=0}=-k_2\varepsilon \label{22_chem}$$ defines the evolution of the flow with a second-order chemical reaction.
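The initial-value problem (\[20\_chem\])-(\[22\_chem\]) can be checked against the underlying kinetics. In the sketch below (plain Python; all parameter values are illustrative; the rate equation $\dot n_B=-k_2 n_A n_B$ is our reading of Eq. (\[1a\_chem\]), which lies outside this excerpt; $J=1+u_0^{\prime}\tau$ is the form of the Jacobian used below in (\[25\_chem\])) a forward-Euler integration of the kinetic system confirms the derived initial slope $y^{\prime}(0)=-k_2\varepsilon$ and that the residual of Eq. (\[20\_chem\]) vanishes along the trajectory:

```python
import math

# Illustrative parameters; J = 1 + u0'*tau as in the estimates below.
k2, eps, u0p = 1.0, 0.5, -1e-3
dt, T = 1e-4, 2.0

def J(tau):
    return 1.0 + u0p * tau

# Kinetic system behind Eqs. (18_chem)-(19_chem): y = ln(J*nA) obeys
# y' = -k2*nB, while nB' = -k2*nA*nB (our reading of Eq. (1a_chem)),
# with nA recovered from y as nA = e^y / J.
y, nB, tau = 0.0, eps, 0.0          # n0A = 1, n0B = eps, so y(0) = 0
ys = [y]
while tau < T:
    nA = math.exp(y) / J(tau)
    y += dt * (-k2 * nB)            # Eq. (18_chem)
    nB += dt * (-k2 * nA * nB)      # reconstructed Eq. (1a_chem)
    tau += dt
    ys.append(y)

# The derived initial slope (22_chem): y'(0) = -k2*eps.
assert abs((ys[1] - ys[0]) / dt - (-k2 * eps)) < 1e-8

# Check Eq. (20_chem), J*y'' + k2*(e^y)' = 0, by finite differences mid-run.
i = len(ys) // 2
ypp = (ys[i+1] - 2*ys[i] + ys[i-1]) / dt**2
eyp = (math.exp(ys[i+1]) - math.exp(ys[i-1])) / (2*dt)
assert abs(J(i*dt) * ypp + k2 * eyp) < 1e-3
```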
Unfortunately, this equation cannot be solved analytically, but, proceeding from the Chaplygin comparison theorem for nonlinear differential equations [@bel], we are able to obtain a priori estimates for the solutions of problem (\[20\_chem\])-(\[22\_chem\]). Applying this theorem to our problem, functions $y_{min}(\tau)$ and $y_{max}(\tau)$ satisfying the initial conditions (\[22\_chem\]) obey the relation $$y_{min} \leq y \leq y_{max}$$ in the interval $0 \leq \tau \leq \tau_*$ if the following inequalities are fulfilled $$\Lambda(y_{min}) <0, \hspace{11mm} \Lambda(y_{max}) >0$$ where $$% ------------ (23_chem) ------------- \Lambda(y) = J{\partial^2 y\over \partial \tau^2} + k_2 {\partial y \over \partial \tau} e^y\/. \label{23_chem}$$ As such functions we can take $$% ------------ (24_chem) ------------- y_{min}= - k_2 \varepsilon \tau, \hspace{11mm} y_{max}={a \over 2}\tau^2 - k_2 \varepsilon \tau \/, \label{24_chem}$$ where $a$ is a parameter to be determined. Indeed, for $y_{min}$ we get $$\Lambda(y_{min}) = -k_2^2\varepsilon \exp(-k_2\varepsilon \tau) <0\/,$$ which means that we can consider $y_{min}$ as a lower bound of the exact solution of Eq. (\[20\_chem\]). We now look for a value of the parameter $a$ for which $y_{max}$ becomes an upper bound of the exact solution of Eq. (\[20\_chem\]). Substitution of $y_{max}$ into (\[23\_chem\]) yields $$% ------------ (25_chem) ------------- \Lambda(y_{max}) = (1 + \tau u_0^{\prime}) a - k_2(k_2\varepsilon - a \tau) \exp\left({a\over 2}\tau^2 - k_2\varepsilon \tau\right)\/. \label{25_chem}$$ If we set $$% ------------ (26_chem) ------------- a > k_2^2\varepsilon, \hspace{11mm} a > k_2\varepsilon \mid u_0^{\prime} \mid\/, \label{26_chem}$$ then the curve $y_1(\tau) = k_2^2\varepsilon - k_2a \tau$ is always lower than the curve $y_2(\tau) = a + u_0^{\prime}a\tau$ in the interval $\tau \geq 0$.
Moreover, it is easy to see that $$k_2(k_2\varepsilon - a \tau) \exp\left({a\over 2}\tau^2 - k_2\varepsilon \tau\right) \leq k_2(k_2\varepsilon - a \tau)$$ in the interval $0 \leq \tau \leq k_2\varepsilon /a$. For $\tau \geq k_2\varepsilon /a$ the second term of (\[25\_chem\]) becomes positive and the condition $\Lambda(y_{max})>0$ always holds if the parameter $a$ satisfies the relation (\[26\_chem\]). This implies that the functions $y_{min}(\tau)$ and $y_{max}(\tau)$ have no singularities in the interval $0 \leq \tau \leq \tau_*$. Then from relation (\[21\_chem\]) it follows that in the present case there may be only one hydrodynamic singularity, associated with $J = 0$, similar to the situation occurring in the flow with a first-order reaction. ![image](fig_a.eps){width="9.cm"} \ \ As an illustration of the exact dynamics of the flow for the case $s=2$, in Fig.1 we present the numerical solution of (\[20\_chem\]) for $k_2 =1$, $u_0^{\prime}=-10^{-3}$ and $k_2\varepsilon=0.96$. This solution exhibits regular behavior up to the moment of development of the hydrodynamic collapse $\tau_*$, and the function $y(\tau)$ remains negative in the interval $0 \leq \tau < \tau_*$. Thus, this particular case indicates that our conclusion about the character of $y(\tau)$ remains valid and that collapse can be caused only by the flow hydrodynamics. ![image](fig_b.eps){width="9.cm"} \ \ In Fig.2 we graph this exact solution with the lower $y_{min}$ and upper $y_{max}$ bounds for small times. As is seen from these graphs, the bounds approximate the exact solution well near the initial state, while for large times they give only a rough estimate. However, such estimates are sufficient for studying the effect of chemical reactions on the formation of singularities and the influence of initial conditions on the dynamics of the system.
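The bounds (\[24\_chem\]) can also be verified directly. The sketch below (plain Python, classical RK4; the values $k_2=1$, $\varepsilon=0.96$, $u_0^{\prime}=-10^{-3}$ match the example above, and $a=1$ satisfies both inequalities (\[26\_chem\])) integrates (\[20\_chem\]) from the initial conditions (\[22\_chem\]) and checks $y_{min}\leq y\leq y_{max}$ on an initial interval:

```python
import math

# Illustrative values matching the figure discussion; a = 1 satisfies (26_chem).
k2, eps, u0p, a = 1.0, 0.96, -1e-3, 1.0
assert a > k2**2 * eps and a > k2 * eps * abs(u0p)

def rhs(tau, y, v):
    """Eq. (20_chem) as a first-order system, y'' = -(k2/J) e^y y',
    with J = 1 + u0'*tau (the form of the Jacobian used in (25_chem))."""
    J = 1.0 + u0p * tau
    return v, -(k2 / J) * math.exp(y) * v

# RK4 from the initial conditions (22_chem): y(0) = 0, y'(0) = -k2*eps.
y, v, tau, dt = 0.0, -k2 * eps, 0.0, 1e-4
while tau < 1.0:
    a1y, a1v = rhs(tau, y, v)
    a2y, a2v = rhs(tau + dt/2, y + dt/2*a1y, v + dt/2*a1v)
    a3y, a3v = rhs(tau + dt/2, y + dt/2*a2y, v + dt/2*a2v)
    a4y, a4v = rhs(tau + dt, y + dt*a3y, v + dt*a3v)
    y += dt * (a1y + 2*a2y + 2*a3y + a4y) / 6
    v += dt * (a1v + 2*a2v + 2*a3v + a4v) / 6
    tau += dt
    # Chaplygin bounds (24_chem) must bracket the solution:
    assert -k2*eps*tau - 1e-9 <= y <= 0.5*a*tau**2 - k2*eps*tau + 1e-9
```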
Conclusion ========== In this paper we have focused on the influence of a chemical reaction on the dynamics of a dissipative, one-dimensional flow. In order to get a full analytical description we have considered reactions of first and second order. These belong to the simplest types of possible reactions. However, the present results indicate that the same features can be observed in more complex systems. In particular, we have shown that wave breaking may occur in the flow under analysis \[see Eq. (\[14\_chem\])\] similarly to what happens in a collisionless flow \[see Eq. (\[15b\_chem\])\]. In the present example the dissipative physical processes changed only the kind of singular dynamics but did not eliminate the phenomenon itself. It is important to stress that the above-described behavior can be observed only for the initial conditions and the parameters stated in Eq. (\[17\_chem\]). However, in the present case the evolution of the resistive flow is not restricted by any physical mechanism, such as the pressure gradient, which is usually presumed to limit the growth of the density peak (see, for example, [@sch_2; @k02]). This type of solution represents a collapse-like class of nonlinear solutions that may arise in different physical situations. In particular, the results describing the formation of time-dependent structures in dissipative flow may be of interest for some biophysical experiments in laboratory conditions [@es; @ep; @van] and for aerosol applications [@reist; @smir]. Moreover, we believe that our present approach can be generalized, for instance, to inhomogeneous, many-component fluid or plasma flows in gravitational or electrostatic fields [@kur; @sch_2; @ep; @smir; @ss09; @kys12; @inh]. It should thus be of interest to examine such flows in cylindrical and spherical geometries, which have more natural rotational degrees of freedom (see, for example, [@es; @sch_2; @s96]).
Owing to the different kinds of interactions which may exist in such multidimensional systems, and due to the variety of their initial states, different modes of collective motion are possible [@ss09]-[@k09]. Therefore, in higher spatial dimensions, a large variety of nonlinear dynamical structures can be expected [@sch_2; @nat]. However, it should be noted that, depending on the ratio between nonlinearity and dispersion, one can expect the formation of hydrodynamic collapses or nonlinear wave structures (see, for example, [@dav; @nat; @ir]). In our case, the present results for reactions of first and second order (in the limit $W/n_A \ll \nu$) indicate that there is a relatively weak effect of the chemical terms on the flow dynamics, since the nonlinearity is so strong that the system exhibits collapse-like behavior at small times. Proceeding from this point, one may expect that singularities form cellular structures in multidimensional geometry. As a result, the intensity of the chemical reactions can increase strongly at these points. Such special features are expected to play an important role, in particular, for the understanding of basic properties of aerosol systems in various environments and laboratory conditions [@smir; @smir_2]. This, however, remains only a conjecture. In order to demonstrate the realization of this hypothetical mechanism, one should study the above-outlined scenario for dissipative flows with a chemical reaction in multidimensional geometry. [99]{} Levich V G 1977 [*Physicochemical Hydrodynamics*]{} (London: Adv. Publications Ltd.) Gray P and Scott S K 1990 [*Chemical Oscillations and Instabilities*]{} (Oxford: Oxford University Press) Nicolis G and Prigogine I 1977 [*Self-organization in Nonequilibrium Systems*]{} (New York: Wiley) Kuramoto Y 1984 [*Chemical Oscillations, Waves and Turbulence*]{} (Berlin: Springer) Shandarin S F and Zeldovich Ya B 1989 [*Rev. Mod. Phys.*]{} [**61**]{} 185 Sack Ch and Schamel H 1987 [*Phys.
Rep.*]{} [**156**]{} 311 Dubin D H E and O$^{\prime}$Neil T M 1999 [*Rev. Mod. Phys.*]{} [**71**]{} 20 Epstein I R and Pojman J A 1998 [*An Introduction to Nonlinear Chemical Dynamics: Oscillations, Waves, Patterns, and Chaos*]{} (New York: Oxford University Press) Schamel H 2004 [*Phys. Rep.*]{} [**392**]{} 279 Davidson R C 1972 [*Methods in Nonlinear Plasma Theory*]{} (New York: Academic Press) Karimov A R 2005 [*J. Russ. Laser Res.*]{} [**26**]{} 283 Karimov A R and Schamel H 2001 [*Phys. Plasmas*]{} [**8**]{} 1180 Karimov A R, Schamel H and Shcheglov V A 2000 [*Phys. Lett. A*]{} [**272**]{} 193 Beckenbach E F and Bellman R 1961 [*Inequalities*]{} (Berlin: Springer-Verlag) Karimov A R 2002 [*Phys. Scripta*]{} [**65**]{} 356 Epstein I R and Showalter K 1996 [*J. Phys. Chem.*]{} [**100**]{} 13132 Vanag V K 2004 [*Phys. Usp.*]{} [**47**]{} 923 Reist P C 1984 [*Introduction to Aerosol Science*]{} (New York: Macmillan) Smirnov B M 2000 [*Clusters and Small Particles in Gases*]{} (New York: Springer) Stenflo L and Shukla P K 2009 [*J. Plasma Phys.*]{} [**75**]{} 841 (2009) Karimov A R, Yu M Y and Stenflo L 2012 [*Phys. Plasmas*]{} [**19**]{} 092118 Karimov A R, Yu M Y and Stenflo L 2011 [*Phys. Lett. A*]{} [**375**]{} 2629 Stenflo L 1996 [*Phys. Scripta*]{} [**T63**]{} 59 Stenflo L and Yu M Y 1996 [*Nature*]{} [**384**]{} 224 Stenflo L and Yu M Y 1998 [*Phys. Plasmas*]{} [**5**]{} 3122. Karimov A R 2009 [*J. Plasma Physics*]{} [**75**]{} 817 Infeld E and Rowlands G 2000 [*Nonlinear Waves, Solitons and Chaos*]{} (Cambridge: Cambridge University Press) Smirnov B M 2014 [*Phys. Usp.*]{} [**57**]{} 1041 [^1]: E-mail:alexanderkarimov999@gmail.com, arkarimov@mephi.ru
--- abstract: 'In recent years, Marengo and the second author have developed a geometric model of social choice for the case in which choice takes place among bundles of interdependent elements, showing that by bundling and unbundling the same set of constituent elements an authority has the power of determining the social outcome. In this paper we will tie the model above to tournament theory, solving some of the mathematical problems arising in their work and opening new questions which are interesting not only from a mathematical and a social choice point of view, but also from an economic and a genetic one. In particular, we will introduce the notion of u-local optima and we will study it from both a theoretical and a numerical/probabilistic point of view; we will also describe an algorithm that computes the universal basin of attraction of a social outcome in $O(M^3\log M)$ time (where $M$ is the number of social outcomes).' author: - 'Gennaro [Amendola]{}[^1]' - 'Simona [Settepanella]{}[^2]' title: Modularity and Optimality in Social Choice --- [[**Keywords**]{}:\ Social rule, modularity, object, optimum,\ hyperplane arrangement, tournament, algorithm.]{} [[**MSC (2010)**]{}: 05C20, 05C85, 52C35.]{} [[**JEL Classification**]{}: D03, D71, D72.]{} [[**ACM CCS (1998)**]{}: G.2.2.]{} Introduction {#introduction .unnumbered} ============ In [@Arrow] Arrow created [*modern social choice theory*]{}, a rigorous melding of social ethics and voting theory with an economic flavor. The central aim of social choice theory is to analyze the aggregation of preferences. Assume there is a society of $m$ agents indexed by $i = 1,\dotsc,m$. Each agent has his own well-behaved preference $\succeq_i$ over some space of possibilities (or [*social outcomes*]{}) $X$, i.e. a total order on the set $X$. Let ${\cal P}$ be the set of well-behaved preferences. The element $(\succeq_1,\dotsc,\succeq_m) \in {\cal P}^m$ is the [*profile*]{} of a society.
The goal is to put all of these preferences together to come up with a single [*system of social preferences*]{} or [*social rule*]{}, i.e. a total order on the set $X$, to decide matters of policy and to evaluate welfare. Namely, a [*social choice function*]{} (or [*social decision rule*]{}) $${{\mathcal{R}}}: {\cal P}^m \longrightarrow {\cal P}$$ is needed. This social decision rule should fulfill the following properties. [ ]{} <span style="font-variant:small-caps;">Completeness and Transitivity</span> By this economists mean that society can make a decision about any social outcome and can rank all social outcomes. (Obviously, this property is intrinsic in the definition of the function ${\cal R}$.) <span style="font-variant:small-caps;">Paretianity</span> If everyone unanimously prefers $x$ to $y$ then so should society. <span style="font-variant:small-caps;">Universal Domain Property</span> No matter what kind of wacky preferences people may have, so long as they are well-behaved, ${\cal R}$ has to be able to deal with them. In other words, there are no restrictions on the profiles of preferences, i.e. on the elements in ${\cal P}^m$. <span style="font-variant:small-caps;">Independence of Irrelevant Alternatives</span> Whether or not society prefers $x$ to $y$ does not depend on what people think of any other [*irrelevant alternative*]{} $z$. This can be formally stated by saying that if there are two profiles of individual preferences $(\succeq_1,\dotsc,\succeq_m)$ and $(\succeq'_1,\dotsc,\succeq'_m)$ such that, for every agent $i$, $$x \succeq_i y\quad \text{if and only if}\quad x \succeq'_i y,$$ then $$x\ {\cal R}(\succeq_1, \ldots , \succeq_m)\ y\quad \text{if and only if}\quad x\ {\cal R}(\succeq'_1, \ldots , \succeq'_m)\ y.$$ <span style="font-variant:small-caps;">Nondictatorship</span> An agent $a_i$ is said to be [*dictatorial*]{} if, for all $x,y \in X$, whenever $a_i$ prefers $x$ to $y$ society prefers $x$ to $y$, i.e. ${\cal R}$ is the projection on the $i$-th component.
The social decision rule ${\cal R}$ is said to be [*nondictatorial*]{} if it is not a projection map. Arrow [@Arrow] proved that such a function does not exist. Therefore, in order to overcome this problem, in social choice theory it is a customary convention to drop the transitivity requirement. Mathematicians and economists have studied this problem over the last 50 years with different approaches. For example, tournament (and, in general, graph) theory turned out to be strictly connected to voting and social choice problems, ever since Landau started to study this subject [@LandauI; @LandauII; @LandauIII]. In the works of Eckmann [@ecman], Eckmann, Ganea and Hilton [@ecmannb], and Weinberger [@wein] there has been an “unexpected application of algebraic topology to a different field of intellectual enterprise,”[^3] i.e. the social choice theory. Topology has also been used to study social choice problems, as, for instance, Chichilnisky [@cici1; @cici2; @cici3] and Baryshnikov [@bary] have done. Very recently, Saari [@saari94; @Saarib] used geometry to analyze the matter of voting. Moreover, Terao [@terao] introduced an admissible map of chambers of a real central arrangement, which is a generalization of a social welfare function. Social choice theory usually assumes that agents are faced with a set of exogenously given and mutually exclusive alternatives. These alternatives are “simple”, in the sense that they are one-dimensional objects or, even when they are multidimensional, they are simply points in some portion of the homogeneous $\mathbb{R}^n$ space and they lack an internal structure that limits the set of possible alternatives. Many choices in real-life situations depart substantially from this simple setting. Choices are often made among bundles of interdependent elements. These bundles may be formed in a variety of ways, which in turn affect the selection process of a social outcome.
For instance, in the typical textbook example of social choice, where a group of friends decides what to do for the evening, the choice set is {movie, concert, restaurant, dinner at home,…}. However, on closer scrutiny, these alternatives are neither primitive nor exogenously given, because they are labels for bundles of elements (e.g. with whom, where, when,…) and the preferences are unlikely to be expressed before the labels get specified in their constituting elements. Moreover, a member of the group could easily obtain a social outcome close to the one he or she prefers by carefully crafting the objects and possibly designing a new set of objects. Other examples are candidates and parties in political elections (which stand for complex bundles of interdependent policies and personality traits) or packages of policies on which committees and boards are called upon to decide. In [@MarengoSette] Marengo and the second author develop a model of social choice among bundles of elements, which they call [*objects*]{}. They show that the outcome of the social choice process is highly dependent on the way these bundles are formed. By bundling and unbundling the same set of constituent elements (they call this the [*object construction power*]{}) an authority may have the power to determine the social outcome. The object construction power is stronger than the agenda power (i.e. the power to decide the order in which the social outcomes are decided on), traditionally studied in the literature (for instance, by McKelvey [@mckelvey76]). Moreover, in their approach, objects decompose the computationally complex search space into quasi-separable subspaces (see Simon [@simon82]), simplifying the computational task and making decisions possible. They also show that by appropriately designing objects it is possible to break almost all intransitive cycles, which frequently characterize social choice.
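Such intransitive cycles arise already for plain pairwise majority voting over transitive individual preferences (the classical Condorcet cycle). A minimal self-contained illustration:

```python
# Three agents, each with a well-behaved (total-order) preference on
# X = {a, b, c}; this profile is the classical Condorcet example.
profile = [("a", "b", "c"),   # agent 1: a > b > c
           ("b", "c", "a"),   # agent 2: b > c > a
           ("c", "a", "b")]   # agent 3: c > a > b

def majority_prefers(x, y):
    """Society prefers x to y iff a strict majority ranks x above y."""
    votes = sum(1 for p in profile if p.index(x) < p.index(y))
    return votes > len(profile) / 2

# Pairwise majority is complete on this profile, but transitivity fails:
assert majority_prefers("a", "b")    # 2 of 3 agents rank a above b
assert majority_prefers("b", "c")    # 2 of 3 agents rank b above c
assert majority_prefers("c", "a")    # yet 2 of 3 rank c above a: a cycle
```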
In order to formally analyze the properties of a social choice model with object construction and achieve general results, they use geometric properties of hyperplane arrangements and link them to graph theory by means of Salvetti’s Complex. In this respect, the model of Marengo and the second author is a novel contribution to the analysis of the relation between discrete problems of social choice and their topological structure. It provides a bridge between a geometrical and a topological representation of a social choice problem, creating a more general framework in which the topological space is manipulable through object construction. A local study is strictly connected to the geometric structure of the hyperplane arrangement and to the “local” structure of the graph, while global properties depend also on the whole graph. Therefore, in the search for global properties, combinatorial and computational problems also arise. In this paper we tie the model described in [@MarengoSette] to tournament theory. This new link allows us to get results and opens new problems. Tournaments are relevant in many fields of science, and they were studied extensively by mathematicians, especially between 1940 and 1970, when most of the fundamental results were obtained. We translate the voting and social choice problems arising in [@MarengoSette] into new tournament-theory problems that are interesting for mathematicians too. Moreover, modularity plays a fundamental role in many Natural Complex Systems [@Modularity] and hence we believe that this model can be applied to other fields of science, e.g. genetics (see Stadler [@Stadler]). We plan to go into this subject in a subsequent paper. In Section \[sec:preliminaries\] we will recall some basic notions on Hyperplane Arrangements, Salvetti’s Complex and Tournaments. In Section \[sect:model\] we will give basic notions on social rules, we will describe Marengo and the second author’s model, and we will also recall their main results.
In the last part of the section we will define the notion of [*u-local optimum*]{} and ([*u-*]{})[*deepness*]{} of a social outcome. The former is a compromise between the notions of local and global optimum. In order to obtain a particular local optimum after the voting process it is enough to have the power of deciding the [*status quo*]{} from which the voting process starts. In contrast, u-local optima are characterized by the property of being obtainable after the voting process by means of the object construction power only. This is significant, because it may happen that whoever has the object construction power does not have the power of deciding the [*status quo*]{} from which the voting process starts. The deepness and the u-deepness measure the length of voting processes. In Section \[sec:results\] we will give results that tie the model described in [@MarengoSette] to tournament theory. This link will allow us to prove results in order to compute the universal basin of attraction of a given social outcome (a well-studied open problem in economics). In Section \[sec:probability\] we will start studying the problem of probability on tournaments with the extra module structure. We will compute the maximum number of local optima that a given social rule can have and the probability of having a given number of local optima in a two-dimensional social rule. This probability is related to the phenomenon (very important and much studied in economics) of the trade-off between [*decidability*]{} (i.e. the possibility of reaching some social optimum in a feasible time) and [*non-manipulability*]{} (i.e. the convergence of the social decision process to a unique global outcome that does not depend upon initial conditions and agenda). It would then be very interesting to generalize this result to an $n$-dimensional space of features.
We will also define a function to measure the gain in using Marengo and the second author’s model instead of the classical one by means of the probabilities that a social outcome is an optimum in the two models. In Sections \[sec:algorithm\] and \[sec:numerical\] we will approach the far more difficult problem of understanding when a local optimum is a u-local or a global one. In the former section we will give an algorithm that computes the universal basin of attraction of a given social outcome in $O(M^3\log M)$ time, where $M$ is the number of social outcomes. We point out that this problem deals with both object constructions and agendas. Since there could be infinitely many agendas, the problem is not finite [*a priori*]{}. It is not difficult to reduce the problem to a finite one, but a simple brute-force algorithm would take far more than exponential time. This algorithm has been implemented by the first author, who has written the computer program [FOSoR]{} [@FOSoR]. In Section \[sec:numerical\] we will give numerical data obtained by means of the computer program [FOSoRStat]{} [@FOSoRStat] (created by the first author) that computes statistics on the number of social rules with a given number of (u-)local optima. The last section is devoted to one example. It is treated in some detail because it is the smallest one in which all kinds of optima (local, u-local and global) appear. #### Acknowledgements The authors are grateful to Prof. Luigi Marengo for his useful comments and corrections. The first author is grateful to Antonio Caruso for useful discussions on, and help with, computer science problems during the beautiful period spent at the Department of Mathematics in Lecce. He would also like to thank the Department of Mathematics and Applications in Milano for the nice welcome.
Preliminaries {#sec:preliminaries} ============= Hyperplane arrangements and Salvetti’s complex ---------------------------------------------- In this section we will recall some basic notions from the theory of hyperplane arrangements. The interested reader is referred to, for instance, Orlik and Terao [@OT92] for a much more detailed and extended study. #### Hyperplane arrangements In geometry and combinatorics, an [*arrangement of hyperplanes*]{} is a finite set ${\mathcal{A}}$ of hyperplanes in a linear, affine, or projective space $S$. The [*cardinality*]{} $\left|{\mathcal{A}}\right|$ of the arrangement ${\mathcal{A}}$ is the number of hyperplanes in ${\mathcal{A}}$. One is normally interested both in the real and in the complex case, hence let ${\mathbb{K}}$ be either ${\mathbb{R}}$ or ${\mathbb{C}}$ and let $V$ be either ${\mathbb{R}}^n$ or ${\mathbb{C}}^n$. Thus, given the canonical basis $\{e_1,\dotsc ,e_n\}$ in $V$, each hyperplane $H \in {\mathcal{A}}$ is the kernel of a degree-1 polynomial $\alpha_H \in {\mathbb{K}}[x_1,\dotsc, x_n]$, defined up to a constant. The product $${\mathcal{Q}}({\mathcal{A}})= \prod_{H \in {\mathcal{A}}} \alpha_H$$ is called a [*defining polynomial*]{} of ${\mathcal{A}}$. If ${\mathcal{B}}$ is a subset of ${\mathcal{A}}$, it is called a [*subarrangement*]{} of ${\mathcal{A}}$. The [*intersection semilattice*]{} of ${\mathcal{A}}$, denoted by $L({\mathcal{A}})$, is the set of all non-empty intersections of elements of ${\mathcal{A}}$, i.e. $$L({\mathcal{A}})=\left\{\textstyle{\bigcap_{H \in {\mathcal{B}}} H} \mid {\mathcal{B}} \subseteq {\mathcal{A}} \right\}.$$ These subspaces are called the [*flats*]{} of ${\mathcal{A}}$. The set $L({\mathcal{A}})$ is partially ordered by reverse inclusion. The [*complement*]{} of ${\mathcal{A}}$ is defined as $$M({\mathcal{A}}) = V \setminus \bigcup_{H \in {\mathcal{A}}} H .$$ The complement of an arrangement ${\mathcal{A}}$ in ${\mathbb{R}}^n$ is clearly disconnected.
It is made up of separate pieces called [*chambers*]{} or [*regions*]{}, each of which may be either bounded or unbounded. Each flat of ${\mathcal{A}}$ is also divided into sections by the hyperplanes that do not contain the flat; these sections are called the [*faces*]{} of ${\mathcal{A}}$. Chambers are faces because the whole space is a flat. The faces of codimension $1$ may be called the [*facets*]{} of ${\mathcal{A}}$. The [*face semilattice*]{} of an arrangement is the set of all faces, ordered by inclusion. The arrangement ${\mathcal{A}}$ is said to be [*essential*]{} if the minimal dimensional flats are points (that we call *vertices* of the arrangement). Every arrangement ${\mathcal{A}}_{\mathbb{R}}$ in ${\mathbb{R}}^n$ also generates an arrangement over ${\mathbb{C}}$. Let ${\mathcal{Q}}({\mathcal{A}}_{{\mathbb{R}}})=\prod_{H \in {\mathcal{A}}_{{\mathbb{R}}}} \alpha_H$ be the defining (real) polynomial of ${\mathcal{A}}_{\mathbb{R}}$ in ${\mathbb{R}}^n$. The [*${\mathbb{C}}$-extended*]{} arrangement ${\mathcal{A}}_{\mathbb{C}}$ is the arrangement in ${\mathbb{C}}^n$ that consists of the hyperplanes that are the kernel of the polynomials $\alpha_H$ in ${\mathbb{C}}^n$ (instead of ${\mathbb{R}}^n$). The arrangement ${\mathcal{A}}_{\mathbb{C}}$ is also called the [*complexification*]{} of ${\mathcal{A}}_{\mathbb{R}}$. #### Salvetti’s complex As shown in [@Sal87], if the arrangement ${{\mathcal{A}}}_{{\mathbb{C}}}$ is the complexification of a real one ${{\mathcal{A}}}_{{\mathbb{R}}}$, there is a regular CW-complex ${\mathcal{S}}({{\mathcal{A}}}_{{\mathbb{R}}})$ having the homotopy type of the complement $M({{\mathcal{A}}}_{{\mathbb{C}}})$. We recall here briefly the construction of this complex, which is called [*Salvetti’s complex*]{}. Let ${{\mathcal{A}}}_{{\mathbb{R}}}=\{H_{{\mathbb{R}}}\}$ be an essential finite affine hyperplane arrangement in ${\mathbb{R}}^n$.
Let $M({{\mathcal{A}}}_{{\mathbb{C}}}) ={\mathbb{C}}^n \setminus \bigcup_{H_{{\mathbb{R}}} \in{\cal A}_{{\mathbb{R}}}} H_{{\mathbb{C}}}$ be the complement to the complexified arrangement. The CW-complex ${\mathcal{S}}({{\mathcal{A}}}_{{\mathbb{R}}})$ can be characterized as follows. Let $\mathbf{S}{\mathrel{\mathop:}=}\{F^k\}$ be the stratification of ${\mathbb{R}}^n$ into facets $F^k$ that is induced by the arrangement [@bourbaki68], where the exponent $k$ stands for codimension. Then $\mathbf S$ has a standard partial ordering defined by $$F^i \ <_{\mathbf S} F^j \quad \text{if}\quad \operatorname{clos}(F^i)\supset F^j$$ where $\operatorname{clos}(F^i)$ is the closure of $F^i$. The $k$-cells of ${\mathcal{S}}({{\mathcal{A}}}_{{\mathbb{R}}})$ bijectively correspond to the pairs $[C<_{\mathbf S} F^k]$, where $C$ is a chamber of $\mathbf S$. A $k$-cell $[C<_{\mathbf S} F^k]$ is in the boundary of a $j$-cell $[D<_{\mathbf S} G^j]$, with $k<j$, if - $F^k<_{\mathbf S} G^j$, - the chambers $C$ and $D$ are contained in the same chamber of the sub-arrangement $$\{H_{{\mathbb{R}}}\in{{\mathcal{A}}}_{{\mathbb{R}}}\mid F\subset H_{{\mathbb{R}}}\}.$$ The previous conditions are equivalent to saying that $C$ is the chamber of ${{\mathcal{A}}}_{{\mathbb{R}}}$ “closest” to $D$ among those containing $F^k$ in their closure. It is possible to realize $\cal{S}({{\mathcal{A}}}_{{\mathbb{R}}})$ inside ${\mathbb{C}}^n$ with explicitly given attaching maps of the cells (see [@Sal87]). Graphs and tournaments {#sec:graphs_tournaments} ---------------------- We will give here a short summary of graph theory to fix notation. For a complete discussion we refer the reader to Chartrand and Lesniak [@Chartrand-Lesniak] and Moon [@Moon]. #### Graphs We will only take oriented simple graphs into account. 
Hence, throughout the paper, a [*graph*]{} will be a pair $({{\cal V}},{{\cal E}})$, where ${{\cal V}}$ is the set of [*nodes*]{} and ${{\cal E}}$ is the set of [*arcs*]{}, such that each pair of nodes $\{p,q\}$ is connected by at most one arc (either ${\overrightarrow{pq}}$ or ${\overleftarrow{pq}}$). If the arc ${\overrightarrow{pq}}$ (or ${\overleftarrow{qp}}$) is in ${{\cal E}}$, the node $p$ is said to [*dominate*]{} $q$. A [*sub-graph*]{} of $({{\cal V}},{{\cal E}})$ is a graph $({{\cal V}}',{{\cal E}}')$ such that ${{\cal V}}'\subset{{\cal V}}$ and ${{\cal E}}'\subset{{\cal E}}$. A [*path*]{} ${P\left(p,q\right)}$ from $p$ to $q$ is a sequence of arcs of the type ${\overrightarrow{pp_1}},{\overrightarrow{p_1p_2}},\dotsc,{\overrightarrow{p_{k}q}}$. A [*domination path*]{} ${DP\left(p,q\right)}$ from $p$ to $q$ is a sequence of arcs of the type ${\overleftarrow{pp_1}},{\overleftarrow{p_1p_2}},\dotsc,{\overleftarrow{p_{k}q}}$. A [*cycle*]{} ${P\left(p,p\right)}$ (resp. a [*domination cycle*]{} ${DP\left(p,p\right)}$) is a path (resp. a domination path) from $p$ to itself. The [*length*]{} of a (domination) path is the number of arcs it contains; a cycle of length $k$ is called a $k$-cycle.

#### Tournaments

A [*tournament*]{} is a complete graph (i.e. each pair of nodes $\{p,q\}$ is connected by an arc). By ${{\cal T}}$ we will always denote a tournament with $M$ nodes. A [*sub-tournament*]{} of ${{\cal T}}$ is a sub-graph of ${{\cal T}}$ that is a tournament. A tournament is said to be [*reducible*]{} if it is possible to partition its nodes into two non-empty subsets ${{\cal V}}_1$ and ${{\cal V}}_2$ in such a way that all the nodes in ${{\cal V}}_1$ dominate all the nodes in ${{\cal V}}_2$; otherwise it is called [*irreducible*]{}. A tournament is irreducible if and only if each pair of nodes is contained in a cycle. There is no bound on the length of such a cycle; indeed, every node of an irreducible tournament is contained in a $k$-cycle for all $k=3,4,\dotsc,M$.
In particular, any irreducible tournament contains a hamiltonian cycle. A path is [*hamiltonian*]{} if it passes through all nodes. A tournament is [*transitive*]{} if it contains no cycle. Every tournament contains a hamiltonian path; if the tournament is transitive the hamiltonian path is unique. An [*irreducible component*]{} ${{\cal T}}_i$ of ${{\cal T}}$ is a maximal irreducible sub-tournament of ${{\cal T}}$. The nodes of these irreducible components form a partition of the nodes of ${{\cal T}}$. Moreover, all the nodes of a component ${{\cal T}}_i$ either dominate or are dominated by all the nodes of another component ${{\cal T}}_j$. The transitive tournament ${\widetilde{{{\cal T}}}}$ whose nodes are the irreducible components of ${{\cal T}}$ and whose arcs are deduced by any arc between the two irreducible components (see Figure \[fig:derived\_tournament\]) is called the [*condensation*]{} of ${{\cal T}}$. $\xymatrix@R=35pt{ {\mbox{${{\cal T}}$}} & {*=<14pt>[o][F-]{p_5}} {\ar @{-} [ld] |-(0.57){\SelectTips{cm}{}\object@{>}}} {\ar @{-} [rd] |-(0.57){\SelectTips{cm}{}\object@{>}}} {\ar @{-} [d] |-(0.57){\SelectTips{cm}{}\object@{>}}} {\ar @{-} @/^55pt/ [dd] |-(.38){\SelectTips{cm}{}\object@{>}}} {\ar@{.} +UL(1.5);+UR(1.5) \ar@{.} +UR(1.5);+DR(1.5) \ar@{.} +DR(1.5);+DL(1.5) \ar@{.} +DL(1.5);+UL(1.5) }{\ar@{.} +R(1.5);[rrr] |-{\object@{>}}} & & & {*=<14pt>[F-]{{{\cal T}}_3}} {\ar @{-} [d] |-(0.57){\SelectTips{cm}{}\object@{>}}} {\ar @{-} @/^15pt/ [dd] |-(.4){\SelectTips{cm}{}\object@{>}}} & {\mbox{${\widetilde{{{\cal T}}}}$}} \\ {*=<14pt>[o][F-]{p_2}} {\ar @{-} [r] |-(0.57){\SelectTips{cm}{}\object@{>}}} {\ar @{-} [rd] |-(0.57){\SelectTips{cm}{}\object@{>}}} & {*=<14pt>[o][F-]{p_3}} {\ar @{-} [r] |-(0.57){\SelectTips{cm}{}\object@{>}}} {\ar @{-} [d] |-(0.57){\SelectTips{cm}{}\object@{>}}} & {*=<14pt>[o][F-]{p_4}} {\ar @{-} @/_15pt/ [ll] |-(.65){\SelectTips{cm}{}\object@{>}}} {\ar @{-} [ld] |-(0.57){\SelectTips{cm}{}\object@{>}}} {\ar@{.} 
[ll]+UL(1.5);[]+UR(1.5) \ar@{.} []+UR(1.5);[]+DR(1.5) \ar@{.} []+DR(1.5);[ll]+DL(1.5) \ar@{.} [ll]+DL(1.5);[ll]+UL(1.5) } {\ar@{.} +R(1.5);[rr] |-{\object@{>}}} & & {*=<14pt>[F-]{{{\cal T}}_2}} {\ar @{-} [d] |-(0.57){\SelectTips{cm}{}\object@{>}}} \\ & {*=<14pt>[o][F-]{p_1}} {\ar@{.} +UL(1.5);+UR(1.5) \ar@{.} +UR(1.5);+DR(1.5) \ar@{.} +DR(1.5);+DL(1.5) \ar@{.} +DL(1.5);+UL(1.5) }{\ar@{.} +R(1.5);[rrr] |-{\object@{>}}} & & & {*=<14pt>[F-]{{{\cal T}}_1}} }$ Without loss of generality, we will choose the subscripts so that if $i>j$ then ${{\cal T}}_i$ dominates ${{\cal T}}_j$. The maximal component of ${{\cal T}}$ will be denoted by ${{{\cal T}}_{\mathrm{MAX}}}$.

#### Score

The number of nodes dominated by a node $p$ is called the [*score*]{} of $p$. The sequence of the scores of the nodes of a tournament is called the [*score sequence*]{} of ${{\cal T}}$. Up to a relabeling of the nodes, we can suppose that the score sequence of ${{\cal T}}$ is non-decreasing. A tournament is transitive if and only if its score sequence is $0,1,\dotsc,M-1$. A non-decreasing sequence $s_1,s_2,\dotsc,s_M$ of nonnegative integers is the score sequence of a tournament if and only if $\sum_{i=1}^{k}s_i\geqslant\binom{k}{2}$ for each $k<M$ and $\sum_{i=1}^{M}s_i=\binom{M}{2}$ hold. A tournament is irreducible if and only if all the $M-1$ inequalities above are strict. In order to find the irreducible components (and hence the condensation) of ${{\cal T}}$, the following very simple algorithm, having complexity $O(M^2)$, can be applied.

1. Find the smallest $k$ such that $\sum_{i=1}^{k}s_i=\binom{k}{2}$; the sub-tournament ${{\cal T}}_1$ made up of the $k$ nodes with smallest score is an irreducible component of ${{\cal T}}$.

2. Remove ${{\cal T}}_1$ from ${{\cal T}}$ and repeat Step 1 until no node is left.

Note that if the aim is to find the maximal component ${{{\cal T}}_{\mathrm{MAX}}}$, one can start “from above” and take into account $M-1-s_i$ instead of $s_i$ (i.e.
the number of nodes that dominate the $i$-th node), so that only one step is needed. Another algorithm finding the irreducible components of any graph and having complexity $O(M^2)$ is shown in Kocay and Kreher [@Kocay-Kreher].

#### Cycles

A measure of how far a tournament is from being transitive is the number of 3-cycles. If the score sequence of ${{\cal T}}$ is $s_1,s_2,\dotsc,s_M$, then the number of $3$-cycles is $\binom{M}{3}-\sum_{i=1}^{M}\binom{s_i}{2}$, which is at most $\left\{\begin{array}{ll}\frac{M^3-M}{24} & \mbox{if $2\nmid M$}\\[1pt] \frac{M^3-4M}{24} & \mbox{if $2\mid M$}\end{array}\right.$. If ${{\cal T}}$ is irreducible, the number of $3$-cycles is at least $M-2$.

#### Number of tournaments

The number of tournaments with $M$ nodes (up to relabeling) is $T(M)=\displaystyle{\sum_{(d)}\frac{2^D}{N}}$, where

- $(d)=(d_1,d_2,\dotsc,d_M)$ is a multi-index with $d_{2i}=0$, $d_{2i+1}\geq0$ and $\sum_{i=1}^{M}i\cdot d_i=M$,

- $D$ is $\frac12\left(\sum_{i,j=1}^{M}d_id_j\gcd(i,j)-\sum_{i=1}^{M}d_i\right)$,

- $N$ is $\prod_{i=1}^M i^{d_i}d_i!$.

The values of $T(M)$ for $M\leqslant12$ are given in Table \[tab:num\_tournaments\].

  ----- -------------- ----- -------------- ----- --------------
  $M$   $T(M)$         $M$   $T(M)$         $M$   $T(M)$
  1     1              5     12             9     191536
  2     1              6     56             10    9733056
  3     2              7     456            11    903753248
  4     4              8     6880           12    154108311168
  ----- -------------- ----- -------------- ----- --------------

  : The number of tournaments.[]{data-label="tab:num_tournaments"}

As $M$ tends to infinity, $T(M)\rightarrow\infty$ and $T(M)\sim\displaystyle{\frac{2^{\binom{M}{2}}}{M!}}$ hold. The probability $P(M)$ that a tournament with $M$ nodes is irreducible can be computed recursively by the formula $$P(M)=1-\sum_{i=1}^{M-1}\binom{M}{i}\frac{P(i)}{2^{i(M-i)}}.$$ The values of $P(M)$ for $M\leqslant16$ are given in Table \[tab:prob\_irred\_tournament\].
  ----- ---------- ----- ------------ ----- ------------ ----- ------------
  $M$   $P(M)$     $M$   $P(M)$       $M$   $P(M)$       $M$   $P(M)$
  1     1          5     0.53125      9     0.931702     13    0.993671
  2     0          6     0.681152     10    0.961589     14    0.996587
  3     0.25       7     0.799889     11    0.978720     15    0.998171
  4     0.375      8     0.881115     12    0.988343     16    0.999024
  ----- ---------- ----- ------------ ----- ------------ ----- ------------

  : The probability that a tournament is irreducible.[]{data-label="tab:prob_irred_tournament"}

As $M$ tends to infinity, $P(M)\rightarrow1$ and $P(M)\sim1-\displaystyle{\frac{M}{2^{M-2}}}$ hold.

Definitions and structure of the model {#sect:model}
======================================

#### Social decision rules

Consider a population of $\nu$ [*agents*]{}. Each agent $i$ is characterized by a [*system of transitive preferences*]{} $\succeq_i$ over the set of social outcomes $X$. The set of systems of transitive preferences $\succeq$ is denoted by ${\mathcal{P}}$. A [*social decision rule*]{} ${\mathcal{R}}$ is a function: $$\begin{matrix} {\mathcal{R}}: & {\mathcal{P}}^{\nu} & \longrightarrow & \overline{{\mathcal{P}}} \\ & (\succeq_1 ,\dotsc,\succeq_\nu) & \longmapsto & \succeq_{{\mathcal{R}} (\succeq_1 ,\dotsc,\succeq_\nu)} \end{matrix}$$ which determines a [*system of social preferences*]{} or [*social rule*]{} $\succeq_{{\mathcal{R}} (\succeq_1 ,\dotsc,\succeq_\nu)}$ from the preferences of the $\nu$ individual agents. With $\overline{{\mathcal{P}}}$ we denote the set of systems of (not necessarily transitive) social preferences; indeed, we note that the social rule $\succeq_{{\mathcal{R}} (\succeq_1 ,\dotsc,\succeq_\nu)}$ is not, in general, transitive.
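The loss of transitivity is already visible in the classical Condorcet profile. The following minimal sketch (our own illustration, not part of the model of [@MarengoSette]; the function name `majority_rule` and the three rankings are ours) builds the pairwise-majority social rule from three transitive individual rankings and exhibits a social cycle:

```python
from itertools import combinations

# Three agents, each with a transitive ranking of the outcomes a, b, c
# (most preferred first): the classical Condorcet profile.
rankings = [
    ["a", "b", "c"],
    ["b", "c", "a"],
    ["c", "a", "b"],
]

def majority_rule(rankings):
    """Strict pairwise-majority rule: x is socially preferred to y when a
    strict majority of agents ranks x above y.  Returns the set of ordered
    pairs (x, y) with x socially preferred to y."""
    outcomes = rankings[0]
    social = set()
    for x, y in combinations(outcomes, 2):
        votes_for_x = sum(r.index(x) < r.index(y) for r in rankings)
        social.add((x, y) if votes_for_x > len(rankings) / 2 else (y, x))
    return social

# Every individual ranking is transitive, yet the social rule cycles:
# a > b, b > c and c > a (a Condorcet cycle).
assert majority_rule(rankings) == {("a", "b"), ("b", "c"), ("c", "a")}
```

The resulting graph of social preferences is exactly the irreducible $3$-cycle tournament on three nodes.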
If $\Delta$ is the diagonal of the cartesian product $X \times X$, the element $\succeq_{{\mathcal{R}}} \in \overline{{\mathcal{P}}}$ defines a subset $${\mathcal{Y}}_{1,\succeq_{{\mathcal{R}}}}=\{ (x,y)\in X\times X \setminus \Delta \mid x \succeq_{{\mathcal{R}}} y \}$$ and the set of [*relevant*]{} social outcomes $${\mathcal{Y}}_{0,\succeq_{{\mathcal{R}}}}=\{x \in X \mid \exists y\in{X}\ \text{such that}\ (x,y) \in {\mathcal{Y}}_{1,\succeq_{{\mathcal{R}}}}\ \text{or}\ (y,x) \in {\mathcal{Y}}_{1,\succeq_{{\mathcal{R}}}} \}.$$ If ${\mathcal{Y}}_{0,\succeq_{{\mathcal{R}}}}$ is the whole $X$, the social rule is said to be [*complete*]{}. A complete social rule is said to be [*strict*]{} if for each pair of social outcomes $x$ and $y$ the two conditions $x \succeq_{{\mathcal{R}}} y$ and $y \succeq_{{\mathcal{R}}} x$ are mutually exclusive (i.e. either the social outcome $x$ is preferred to the social outcome $y$ or the converse holds). For the sake of simplicity, we will consider only strict social rules. This restriction is almost always unnecessary, but it simplifies both the investigation and the presentation. Therefore, from now on, $\succ$ will always denote a complete strict social rule; unless explicitly stated, it will be considered as fixed. For the sake of shortness, we will always drop the words “complete” and “strict”.

#### The graph

The sets ${\mathcal{Y}}_{0,\succ}$ and ${\mathcal{Y}}_{1,\succ }$ are, respectively, the sets of nodes and arcs of a graph ${\mathcal{Y}}_{\succ}$. Two nodes $x$ and $y$ in ${\mathcal{Y}}_{0,\succ}$ are connected by an arc if $(x,y) \in {\mathcal{Y}}_{1,\succ}$ or $(y,x) \in {\mathcal{Y}}_{1,\succ}$; the orientation is from $x$ to $y$ in the former case and from $y$ to $x$ in the latter. For the sake of simplicity, we will use the same symbol $x$ for the nodes of ${\mathcal{Y}}_{\succ}$ and $(x,y)$ for its arcs. We deliberately use a notation for this graph different from that used for tournaments, to keep track of the different theories from which the two objects come.
Note that the completeness assumption on social rules guarantees that the graph ${\mathcal{Y}}_{\succ}$ is connected. A cycle $$(x_1,x_2),(x_2,x_3),\dotsc ,(x_h,x_1)$$ in the graph ${\mathcal{Y}}_{\succ}$ corresponds to a cycle [*à la*]{} Condorcet-Arrow [@Condorcet-Arrow], i.e. to the sequence $$x_1 \succ x_2 \succ \dotsb \succ x_h \succ x_1.$$

#### Features

Let $F=\{f_1,\dotsc ,f_n\}$ be a bundle of elements, called [*features*]{}, the $i$-th of which takes $m_i$ values, i.e. $f_i \in \{0,1,2,\dotsc, m_i-1\}$ with $i=1,\dotsc,n$. Denote by $m=(m_1,\dotsc,m_n)$ the multi-index of the numbers of values of the features. From now on, a [*social outcome*]{} (or [*configuration*]{}) will be an $n$-tuple $(v_1,\dotsc,v_n)$ of values such that $0\leqslant v_i<m_i$. For the sake of shortness, it will also be denoted by $v_1\dotsm v_n$. The set of all social outcomes will be denoted by ${X}$. The cardinality of ${X}$ is $\prod_{i=1}^n m_i$ and will be denoted by $M$.

#### The hyperplane arrangement

There is a correspondence [@MarengoSette] between the set $X$ of social outcomes and the set ${\mathcal{C}}$ of the chambers of the arrangement $${\mathcal{A}}_{n,m}=\left\{H_{i,j} \mid \alpha_{H_{i,j}}=\lambda_i-j\right\}_{{1 \leqslant i \leqslant n} \atop {0 \leqslant j < m_i-1} }.$$ Namely, $x=v_1\dotsm v_n$ corresponds to the chamber $C$ that contains the open set $$\left\{(\lambda_1,\dotsc ,\lambda_n) \in {\mathbb{R}}^n \mid v_j-1 < \lambda_j < v_j,\ j=1, \dotsc ,n\right\}.$$

#### Salvetti’s complex

There is a correspondence [@MarengoSette] between the oriented graph ${\mathcal{Y}}_{\succ}$ and a subcomplex of the $1$-skeleton of Salvetti’s complex ${\mathcal{S}}({\mathcal{A}}_{n,m})$ as follows. Namely, there is a one-to-one correspondence between the $0$-skeleton ${\mathcal{S}}_0({\mathcal{A}}_{n,m})$ and the set of chambers in ${\mathcal{A}}_{n,m}$, i.e. the set of social outcomes $X$ by means of the correspondence above.
The generators of the $1$-skeleton can be described as $${\mathcal{S}}_1({\mathcal{A}}_{n,m})=\{(x,y) \in X \times X \setminus \Delta \mid x\ \text{and}\ y\ \text{are adjacent}\}$$ where two chambers $C$ and $D$ are said to be [*adjacent*]{} if they are separated by only one hyperplane. Given a subset of consecutive elements $$\{(x_1,x_2),(x_2,x_3),\dotsc, (x_{k-2},x_{k-1}),(x_{k-1},x_k)\}$$ in ${\mathcal{S}}_1({\mathcal{A}}_{n,m})$, their [*formal sum*]{} is $$(x_1,x_k)=\sum_{j=1}^{k-1} (x_j,x_{j+1}).$$ It follows that, given a social rule $\succ$, any arc $(x,y) \in {\mathcal{Y}}_{1,\succ}$ can be written as a formal sum of a minimal number of consecutive elements in ${\mathcal{S}}_1({\mathcal{A}}_{n,m})$. The number of elements is exactly the number of hyperplanes that separate the two social outcomes $x,y\in{X}$. Let $(x,y) \in {\mathcal{Y}}_{1,\succ}$ be an arc given by a formal sum with coefficients $1$ of arcs that are in ${\mathcal{Y}}_{1,\succ}$. If the social rule is transitive the arc can be deleted, because it can be reconstructed by means of the other arcs. Saari has greatly contributed to establishing general geometric representations of voting models and voting paradoxes [@saari94; @saari00a; @saari00b]. Salvetti’s complex is a CW-complex in ${\mathbb{C}}^n$, but it has an underlying real structure which is a purely simplicial complex. Moreover, vertices in this complex can be freely chosen inside each chamber. This structure can be used in order to recast and generalize some existing geometric models of voting such as those provided by Saari.

#### Objects schemes

Given a non-empty subset $I\subseteq\{1,\dotsc, n\}$, the [*object*]{} ${\mathcal{A}}_I$ is the subset $${\mathcal{A}}_I=\{H_{i,j}\}_{{i \in I} \atop {0 \leqslant j < m_i-1}}$$ of the arrangement ${\mathcal{A}}_{n,m}$. The cardinality of ${\mathcal{A}}_I$ is called the [*size*]{} of the object ${\mathcal{A}}_I$ and is denoted by $|{\mathcal{A}}_I|$.
The complement of a set $I$ in $\{1,\dotsc, n\}$ will be denoted (as usual) by $I^c$, hence the complement ${\mathcal{A}}_I^c={\mathcal{A}}_{n,m} \setminus {\mathcal{A}}_I$ of the arrangement ${\mathcal{A}}_I$ in ${\mathcal{A}}_{n,m}$ turns out to equal ${\mathcal{A}}_{I^c}$. The [*object instantiation*]{} $x({\mathcal{A}}_I)$ of a social outcome $x$ is the chamber of the subarrangement ${\mathcal{A}}_I$ that contains the chamber corresponding to $x$. An [*objects scheme*]{} is a set of objects $A=\{{\mathcal{A}}_{I_1},\dotsc, {\mathcal{A}}_{I_k}\}$ such that $\bigcup_{j=1}^k I_j=\{1,\dotsc,n\}$. Note that the sets $I_j$ may have non-empty intersection. The [*size*]{} of an objects scheme is the size of its largest object, $$| A | = \mbox{max}\{| {\mathcal{A}}_{I_1} | ,\dotsc,| {\mathcal{A}}_{I_k} | \}.$$

#### Neighbors of a social outcome

Let an objects scheme $A=\{{\mathcal{A}}_{I_1},\dotsc, {\mathcal{A}}_{I_k}\}$ be given. A social outcome $y$ is said to be a [*preferred neighbor*]{} of a social outcome $x$ with respect to an object ${\mathcal{A}}_{I_h} \in A$ if the following conditions hold:

1) $y \succ x$,

2) $y({\mathcal{A}}_{I_h^c})=x({\mathcal{A}}_{I_h^c})$, i.e. $x$ and $y$ belong to the same chamber of the arrangement ${\mathcal{A}}_{I_h^c}$,

3) \[cond:def\_pref\_neigh\] $y({\mathcal{A}}_{I_h}) \neq x({\mathcal{A}}_{I_h})$, i.e. $x$ and $y$ belong to different chambers of the arrangement ${\mathcal{A}}_{I_h}$.

Note that Condition \[cond:def\_pref\_neigh\] is a direct consequence of the first two, but we keep it for the sake of consistency with the non-strict case. The set of all preferred neighbors of the social outcome $x$ with respect to ${\mathcal{A}}_{I_h} \in A$ is denoted by $\Phi(x,{\mathcal{A}}_{I_h})$. The set of all preferred neighbors of the social outcome $x$ is denoted by $\Phi(x,A)=\bigcup_{j=1}^k \Phi(x,{\mathcal{A}}_{I_j})$.
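As a concrete illustration (a toy instance of our own, not taken from [@MarengoSette]): with $n=2$ binary features the outcomes are the tuples $00,01,10,11$, an object is determined by a set $I$ of feature indices, and a strict social rule can be encoded as a ranking of the $M=4$ outcomes. The sketch below computes the set of preferred neighbors $\Phi(x,{\mathcal{A}}_I)$ directly from the defining conditions; the names `ranking` and `preferred_neighbors` are ours:

```python
from itertools import product

# Toy instance: n = 2 binary features, so X = {00, 01, 10, 11}.
# A strict social rule encoded as a ranking, most preferred first.
ranking = [(1, 1), (0, 1), (1, 0), (0, 0)]
rank = {x: i for i, x in enumerate(ranking)}  # smaller rank = more preferred

def preferred_neighbors(x, I):
    """Phi(x, A_I): outcomes y socially preferred to x that agree with x on
    every feature outside I (the object A_I may only change features in I)."""
    n = len(x)
    return {
        y
        for y in product(range(2), repeat=n)
        if rank[y] < rank[x]                                # y > x socially
        and all(y[i] == x[i] for i in range(n) if i not in I)
    }

# The object {0} controls only the first feature:
assert preferred_neighbors((0, 0), {0}) == {(1, 0)}
# The object {0, 1} controls both features, so Phi contains every outcome
# socially preferred to (0, 0):
assert preferred_neighbors((0, 0), {0, 1}) == {(1, 0), (0, 1), (1, 1)}
```

Since the rule is strict, the best neighbor of $x$ with respect to an object is simply the most preferred element of the corresponding set $\Phi$, when that set is non-empty.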
A social outcome $y\in\Phi(x,{\mathcal{A}}_{I_h})$ is said to be a [*best neighbor*]{} of a social outcome $x$ with respect to an object ${\mathcal{A}}_{I_h} \in A$ if $$y \succ w \quad \forall w \in \Phi(x,{\mathcal{A}}_{I_h})\setminus\{y\}.$$ The set of all best neighbors of the social outcome $x$ with respect to ${\mathcal{A}}_{I_h} \in A$ is denoted by $B(x,{\mathcal{A}}_{I_h})$. Obviously, $B(x,{\mathcal{A}}_{I_h}) \subseteq \Phi(x,{\mathcal{A}}_{I_h})$ holds. Moreover, either $B(x,{\mathcal{A}}_{I_h})$ is empty, or $B(x,{\mathcal{A}}_{I_h})$ contains one social outcome only. Although this set notation may seem superfluous here, we use it to follow the convention customary in the literature; indeed, if one takes also non-strict social rules into account, the set $B(x,{\mathcal{A}}_{I_h})$ may contain more than one social outcome. The set of all best neighbors of the social outcome $x$ is denoted by $B(x,A)=\bigcup_{j=1}^k B(x,{\mathcal{A}}_{I_j})$. A [*domination path $DP(x,y,A) $ through $A$, starting from $x$ and ending in $y$*]{}, is a sequence of best neighbors with respect to objects in $A$, i.e. a sequence $$x=x_0 \prec x_1 \prec \dotsb \prec x_s=y$$ such that there exist objects ${\mathcal{A}}_{I_{h_1}}, \dotsc, {\mathcal{A}}_{I_{h_s}} \in A$ with $x_i \in B(x_{i-1},{\mathcal{A}}_{I_{h_i}})$ for all $1 \leqslant i \leqslant s$. A social outcome $y$ is said to be [*reachable from $x$ with respect to an objects scheme $A$*]{} if there exists a domination path $DP(x,y,A)$. A social outcome $x$ is said to be a [*local optimum for $A$*]{} if $\Phi(x,A)$ is empty.

#### Agenda

Let $A=\{{\mathcal{A}}_{I_1},\dotsc,{\mathcal{A}}_{I_k}\}$ be an objects scheme. An [*agenda $\alpha$ of $A$*]{} is an ordered $t$-tuple of indices $(h_1,\dotsc,h_t)$ with $t\geq k$ such that $\{h_1,\dotsc, h_t\} = \{1,\dotsc, k\}$. An agenda $\alpha$ states the order in which the objects ${\mathcal{A}}_{I_i}$ are decided upon.
In the model of [@MarengoSette] the agenda is repeated over and over again until either a local optimum or a domination cycle is reached. The ordered $t$-tuple of objects $({\mathcal{A}}_{I_{h_1}},\dotsc,{\mathcal{A}}_{I_{h_t}})$ is denoted by $A_{\alpha}$. The set of all possible agendas of $A$ is denoted by $\Lambda(A)$. Let $\alpha = (h_1,\dotsc,h_t)$ be an agenda. A domination path $$x_0 \prec x_1 \prec \dotsb \prec x_s$$ is said to be [*ordered along $\alpha$*]{} if $$x_i \in B(x_{i-1},{\mathcal{A}}_{I_{h_{q+1}}})$$ where $q$ is the remainder of the division of $i-1$ by $t$. Such a domination path will be denoted by $DP(x_0,x_s,A_{\alpha})$. A domination path is said to be [*maximal*]{} if it ends in either a local optimum or a limit domination cycle. More precisely, either $x_s$ is a local optimum or $x_{s-t}$ belongs to $B(x_s,{\mathcal{A}}_{I_{h_{q+1}}})$, where $q$ is the remainder of the division of $s$ by $t$. Note that in the first case we do not require that $x_{s-1}$ is different from $x_s$, so there is no control on the number of times that $x_s$ appears at the end of the domination path. Also in the second case, there is no control on the number of times that the domination cycle $$x_{s-t} \prec \dotsb \prec x_s$$ appears at the end of the domination path. In the first case, we will say that the domination path [*ends up in $x_s$*]{}.

#### Basin of attraction

The [*basin of attraction $\Psi (x,A)$ of a social outcome $x$ with respect to an objects scheme $A$*]{} is the set of the social outcomes $y$ such that there exists a maximal domination path $DP(y,x,A)$ that ends up in $x$.

\[rem:basin\_local\] Note that $\Psi (x,A)$ is empty if and only if $x$ is not a local optimum for $A$.

The [*ordered basin of attraction $\Psi(x,A_{\alpha})$ of $x$ with respect to an agenda $\alpha$ of $A$*]{} is the set of the social outcomes $y$ such that there exists a maximal domination path $DP(y,x,A_{\alpha})$ that ends up in $x$.
Clearly, we have $$\Psi (x,A)= \bigcup_{\alpha \in \Lambda(A)} \Psi (x,A_{\alpha}).$$

#### Global optima

A social outcome $z \in X$ is said to be a [*global optimum for an agenda $\alpha$*]{} if $\Psi(z,A_{\alpha})=X$ holds. It is said to be a [*global optimum for the objects scheme $A$*]{} if and only if $\Psi(z,A_{\alpha})=X$ holds for all agendas $\alpha \in \Lambda(A)$, i.e. it is a global optimum for all the agendas of $A$. Local and global optima strictly depend on the choice of the objects scheme $A$. In [@MarengoSette] the authors prove that object construction power is, in some sense, stronger than agenda power, i.e. they prove $$\Psi (z,A) \neq \emptyset \quad \Longleftrightarrow \quad \Psi (z,A_{\alpha}) \neq \emptyset\ \text{for all}\ \alpha \in \Lambda(A).$$

#### Separating hyperplanes and distance between social outcomes

Let $x$ and $y$ be two social outcomes. They are said to be [*separated by a hyperplane $H \in {\mathcal{A}}_{n,m}$*]{} if $H$ separates the chambers $C_x$ and $C_y$. In this case, the notation $x \mid H \mid y$ will be used. Moreover, $x$ and $y$ are said to be [*prominently separated*]{} if there exist two hyperplanes $H_{i_1,j_1}, H_{i_2,j_2} \in {\mathcal{A}}_{n,m}$ with $i_1 \neq i_2$ (i.e. non-parallel) such that $x \mid H_{i_1,j_1} \mid y$ and $x \mid H_{i_2,j_2} \mid y$ hold. We will say that $x$ and $y$ are [*separated by the feature*]{} $f$ if the value of the feature $f$ of $y$ differs from that of the feature $f$ of $x$. The set of the features that separate $x$ and $y$ is denoted by $\overline{{\mathcal{H}}}_{x,y}$. The [*distance*]{} $d(x,y)$ between $x$ and $y$ is the number of hyperplanes of ${\mathcal{A}}_{n,m}$ that separate $x$ and $y$. The [*prominent distance*]{} $d_p(x,y)$ is the number of features that separate $x$ and $y$, i.e. $\#\overline{{\mathcal{H}}}_{x,y}$. Note that $d_p(x,y)$ equals the minimum number of hyperplanes that prominently separate $x$ and $y$.
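In the coordinates of Section \[sect:model\], the outcome $x=v_1\dotsm v_n$ lies in the chamber $v_i-1<\lambda_i<v_i$, so the hyperplanes $H_{i,j}$ separating $x$ and $y=w_1\dotsm w_n$ on feature $i$ are exactly those with $j$ between the two values: there are $|v_i-w_i|$ of them. Hence $d(x,y)=\sum_i|v_i-w_i|$, while $d_p(x,y)$ is the Hamming distance between the two tuples. A minimal sketch (our own, with illustrative function names):

```python
def distance(x, y):
    """d(x, y): number of hyperplanes H_{i,j} of A_{n,m} separating x and y.
    On feature i the separating hyperplanes lambda_i = j are those with j
    lying between the two values, so there are |x_i - y_i| of them."""
    return sum(abs(a - b) for a, b in zip(x, y))

def prominent_distance(x, y):
    """d_p(x, y): number of features on which x and y differ."""
    return sum(a != b for a, b in zip(x, y))

# Example with m = (4, 2): x = 30 and y = 01 in the v_1...v_n notation.
assert distance((3, 0), (0, 1)) == 4
assert prominent_distance((3, 0), (0, 1)) == 2
# d_p = 1 with d > 1: all separating hyperplanes are parallel.
assert distance((3, 0), (0, 0)) == 3
assert prominent_distance((3, 0), (0, 0)) == 1
```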
Recall that, by definition, if $H_{i,\bar{\jmath}}$ belongs to the object ${\mathcal{A}}_I$ for some $\bar{\jmath}$, then $H_{i,j}$ belongs to ${\mathcal{A}}_I$ for all $0 \leqslant j < m_i-1$. For this reason, we consider the subarrangement $${\mathcal{H}}_{x,y}=\left\{H_{i,j}\in{\mathcal{A}}_{n,m} \mid i\in\overline{{\mathcal{H}}}_{x,y},\ 0 \leqslant j < m_i-1\right\}$$ of ${\mathcal{A}}_{n,m}$. Note that, if we have $d_p(x,y)=1$ and $d(x,y)>1$, all the hyperplanes in ${\mathcal{H}}_{x,y}$ are parallel. The sets ${\mathcal{H}}_{x,y}$ and $\overline{{\mathcal{H}}}_{x,y}$ are closely interconnected. For instance, we will use the fact that ${\mathcal{H}}_{x,y}$ is contained in ${\mathcal{H}}_{z,w}$ if and only if $\overline{{\mathcal{H}}}_{x,y}$ is contained in $\overline{{\mathcal{H}}}_{z,w}$. In [@MarengoSette] the authors prove the following result.

\[principio\] Let $z$ be a social outcome. Then, there exists an objects scheme $A_z$ for which $z$ is a local optimum if and only if the inequality $d_p(w,z)>1$ holds for any social outcome $w$ with $w\succ z$.

The previous theorem also explains the choice of the name “local optimum”. Namely, a social outcome $z$ is a local optimum for an objects scheme $A$ if and only if any social outcome $x$ such that $d_p(x,z)=1$ belongs to $\Psi(z,A)$. A social outcome $z$ is said to be [*free*]{} if and only if the inequality $d_p(w,z)>1$ holds for any social outcome $w$ with $w\succ z$. Thus, by means of Theorem \[principio\], we have that $z$ is a local optimum for an objects scheme $A_z$ if and only if $z$ is free. An interesting question, pointed out in [@MarengoSette], is to understand when a local optimum $z$ for an objects scheme $A$ is a global optimum, i.e. when there exists an agenda $\alpha$ of $A$ such that the basin of attraction $\Psi(z,A_{\alpha})$ is the whole $X$ and when this is true for all agendas $\alpha \in \Lambda(A)$. In [@MarengoSette] the authors prove the following.
\[pglob1\] Let $z$ be a free social outcome. Then, there exists an objects scheme $A_z$ such that $\Phi(z,A_z)= \emptyset$ and $\Phi(x,A_z) \neq \emptyset$ for all free social outcomes $x$ if and only if the condition $$\label{eq:cond_hyper} \exists y \succ x\ \text{such that}\ {\mathcal{H}}_{w,z} \nsubseteq {\mathcal{H}}_{x,y} \quad \forall w \succ z$$ holds for all free $x$.

The above equivalent conditions on the free social outcome $z$ are necessary for $z$ to be a global optimum.

#### Universal basin of attraction and u-local optima

Let $\Pi({\mathcal{A}}_{n,m})$ be the set of all possible objects schemes in ${\mathcal{A}}_{n,m}$. The [*universal basin of attraction*]{} of a social outcome $z \in X$ is the set $$\Psi(z) = \bigcup_{A \in \Pi({\mathcal{A}}_{n,m})} \Psi(z,A),$$ i.e. the set of all the social outcomes $x$ such that there exists an objects scheme through which there is a domination path starting from $x$ and ending up in $z$. By virtue of Remark \[rem:basin\_local\] and Theorem \[principio\], the universal basin of attraction of the social outcome $z$ is non-empty if and only if $z$ is [*free*]{}. A social outcome $z$ is said to be a [*u-local optimum*]{} if its universal basin of attraction $\Psi(z)$ is the whole set of social outcomes ${X}$.

\[rem:glob\_uloc\_loc\] A global optimum is necessarily a u-local optimum, and a u-local optimum is necessarily a local optimum for at least one objects scheme.

Let $x$ and $z$ be social outcomes. In [@MarengoSette], when $z$ is free, the authors consider the set $$G_x^z=\{y \succ x \mid {\mathcal{H}}_{w,z} \nsubseteq {\mathcal{H}}_{x,y} \quad \forall w \succ z \mbox { and } B(x, {\mathcal{H}}_{x,y}) \neq \emptyset \}$$ and prove that if $x$ is in the universal basin of attraction of $z$ then $G_x^z \neq \emptyset$. For the sake of completeness, we will define $G_x^z$ to be $\emptyset$ if $z$ is not free. Suppose $z$ is free.
The set $G^z_x$ is non-empty if and only if there exists an objects scheme ${A}_z$ such that $\Phi(z,{A}_z)=\emptyset$ and $\Phi(x,{A}_z)\neq\emptyset$ hold; i.e. if and only if $x$ satisfies Condition \[eq:cond\_hyper\] of Theorem \[pglob1\]. Suppose now that $x$ is a social outcome such that $G_x^z$ is non-empty (so $z$ is free). If $B(x,{\mathcal{H}}_{x,y})$ is non-empty, its cardinality is one. The only element of $B(x,{\mathcal{H}}_{x,y})$ will be denoted by $b_{x,y}$. In [@MarengoSette] the authors consider the set $$BG^z_x=\{b_{x,y} \mid y \in G_x^z\} \subseteq G_x^z,$$ the sets $$\begin{aligned} E_0^z &=\{z\}, \\ E_1^z &=\{x \in X \setminus \{z\} \mid z \in BG_x^z\}, \\ E_2^z &=\{ x \in X \setminus \cup_{i=0}^1 E_i^z \mid E_1^z \cap BG_x^z \neq\emptyset\}, \\ & \quad\quad\quad\quad\quad\quad\quad\vdots \\ E_h^z &=\{ x \in X \setminus \cup_{i=0}^{h-1}E_i^z \mid E_{h-1}^z \cap BG_x^z \neq \emptyset\}, \\ E_{h+1}^z &=\{ x \in X \setminus \cup_{i=0}^{h}E_i^z \mid E_{h}^z \cap BG_x^z \neq \emptyset\} =\emptyset\end{aligned}$$ (where $h$ is the smallest integer such that $E_{h+1}^z$ is empty), and the set $$E^z=\bigcup_{i=1}^h E_i^z.$$ For the sake of completeness, we define all these sets to be empty if $z$ is not free. They prove the following theorem.

\[teo:universal\_basin\] Let $x$ and $z$ be two social outcomes. Then $x$ is in the universal basin of attraction $\Psi(z)$ if and only if $x$ belongs to $E^z$, i.e. $$\Psi(z)=E^z.$$

Let $z$ be a social outcome. The [*deepness*]{} of a social outcome $x$ with respect to $z$ is

- $d$ if $x$ belongs to $E_d^z$,

- $\infty$ if $x$ does not belong to $\Psi(z)$.

Note that this definition makes sense because the $E_*^z$’s form a partition of the universal basin of attraction of $z$.

The deepness of a social outcome $x$ with respect to $z$ is the minimum of the lengths of all maximal domination paths $DP(x,z,A_z)$, among all objects schemes $A_z$ such that $\Phi(z,A_z)$ is empty.
Let $d$ be the deepness of the social outcome $x$ with respect to $z$ and let $h$ be the minimum of the lengths of all maximal domination paths $DP(x,z,A_z)$, among all objects schemes $A_z$ such that $\Phi(z,A_z)$ is empty. If $d$ is $\infty$, by virtue of Theorem \[teo:universal\_basin\], there is no maximal domination path $DP(x,z,A_z)$, where $A_z$ is an objects scheme such that $\Phi(z,A_z)$ is empty, and hence $h$ is $\infty$. If $d$ is not $\infty$, we can construct a maximal domination path $$x = x_d \prec x_{d-1} \prec \dotsb \prec x_1 \prec x_0 = z$$ such that $x_j$ belongs to $E^z_j\cap BG^z_{x_{j+1}}$ for $j=0,\dotsc,d-1$ and hence we have $h\leq d$. Let $DP(x,z,A_z)$ be a maximal domination path $$x = x_h \prec x_{h-1} \prec \dotsb \prec x_1 \prec x_0 = z$$ of length $h$. If ${{\cal A}}_{I_j}$ is the object of $A_z$ such that $x_{j-1}$ belongs to $B(x_j,{{\cal A}}_{I_j})$, we have ${{\cal H}_{x_j,x_{j-1}}}\subseteq{{\cal A}}_{I_j}$ and $x_{j-1}\in E^z_{j-1} \cap BG^z_{x_j}$. Thus, $x$ belongs to $E^z$ and hence to some $E^z_k$ with $k\leq h$. Since the deepness of $x$ is $d$, we have $k=d$, and then $$d=k\leq h\leq d.$$ The proof is complete.

The [*u-deepness*]{} of a social outcome $z$ is

- the maximum integer $h$ such that $E^z_h$ is not empty,

- $-\infty$ if all $E^z_h$’s are empty.

Note that the u-deepness of a social outcome $z$ is $-\infty$ if and only if $z$ is not free.

Theoretical results {#sec:results}
===================

In this section we will give results that tie the model described in [@MarengoSette] to tournament theory. From now on, we will denote by ${{\cal T}}_i$ the irreducible components of the graph ${\mathcal{Y}}_{\succ}$, with ${{{\cal T}}_{\mathrm{MAX}}}$ being the maximal component.

\[prop:univ\_basin\_irred\_comp\] If a social outcome $x \in {{\cal T}}_i$ is in the universal basin of attraction $\Psi(z)$ of a social outcome $z \in {{\cal T}}_j$, then $i \leqslant j$ holds.
Since $x$ belongs to $\Psi(z)$, there exists an objects scheme $A$, an agenda $\alpha$ and a maximal domination path $DP(x,z,A_{\alpha})$, $$x = x_0 \prec x_1 \prec \dotsb \prec x_s = z,$$ ending up in $z$. Two cases may occur: either $x \succ z$ or $x \prec z$. In the former case, there exists a domination cycle $\gamma$ that contains $x$ and $z$, i.e. we have $i=j$. In the latter case, we have $i \leqslant j$. This concludes the proof. \[cor2\] Each u-local optimum belongs to ${{{\cal T}}_{\mathrm{MAX}}}$. The converse of the above corollary is not true. For instance, the social rule whose graph is shown in Figure \[fig:maxcomp\_no\_globalopt\] has only one irreducible component and no u-local optimum. $\xymatrix{ {*=<12pt>[o][F-]{0}} {\ar @{-} [r] |-(0.57){\SelectTips{cm}{}\object@{>}}} & {*=<12pt>[o][F-]{1}} {\ar @{-} [r] |-(0.57){\SelectTips{cm}{}\object@{>}}} & {*=<12pt>[o][F-]{2}} {\ar @{-} @/_13pt/ [ll] |-(0.57){\SelectTips{cm}{}\object@{>}}} }$ Moreover, for a social outcome $z$ the property of being a local optimum for an objects scheme (and agenda) and the property of belonging to ${{{\cal T}}_{\mathrm{MAX}}}$ are not related to each other. The social rule whose graph is shown in Figure \[fig:maxcomp\_no\_globalopt\] has no local optimum, while that shown in Figure \[fig:localopt\_no\_maxcomp\], for the objects scheme $\{\{H_1\},\{H_2\}\}$, has a local optimum, $00$, which is not in ${{{\cal T}}_{\mathrm{MAX}}}=\{11\}$. 
$\xymatrix@R=35pt@C=35pt{ {*=<14pt>[o][F-]{10}} {\ar @{-} [rd] |-(0.4){\SelectTips{cm}{}\object@{>}}} & {*=<14pt>[o][F-]{11}} {\ar @{-} [d] |-(0.57){\SelectTips{cm}{}\object@{>}}} {\ar @{-} [l] |-(0.57){\SelectTips{cm}{}\object@{>}}} {\ar @{-} [ld] |-(0.7){\SelectTips{cm}{}\object@{>}}} \\ {*=<14pt>[o][F-]{00}} {\ar @{-} [r] |-(0.57){\SelectTips{cm}{}\object@{>}}} {\ar @{-} [u] |-(0.57){\SelectTips{cm}{}\object@{>}}} & {*=<14pt>[o][F-]{01}} }$ A social outcome $z \in X$ is a local optimum for all objects schemes if and only if $z$ is the only element in ${{{\cal T}}_{\mathrm{MAX}}}$. If $z \in X$ is a local optimum for all objects schemes, in particular it is a local optimum for $A=\{{\mathcal{A}}_{n,m}\}$. Then, we have $B(z,{\mathcal{A}}_{n,m})=\emptyset$, i.e. we have $z \succ x$ for all $x \in X\setminus\{z\}$ and hence $z$ is the only social outcome in ${{{\cal T}}_{\mathrm{MAX}}}$. The converse is obvious. Let $x \in {{\cal T}}_i$ be a social outcome. We say that $x$ is [*lifting with respect to an objects scheme $A$*]{} if there is an object ${\mathcal{A}} \in A$ such that the best neighbor $y \in B(x,{\mathcal{A}})$ belongs to a component ${{\cal T}}_j$ such that $j >i$. By definition, in ${{{\cal T}}_{\mathrm{MAX}}}$ there are no lifting social outcomes. Indeed, social outcomes that are lifting with respect to the objects scheme $A$ arise when an arc in a domination path through $A$ has its endpoints in two different irreducible components. In the following theorem we will give an equivalent condition for a social outcome $x \in X$ to be lifting. A social outcome $x$ in an irreducible component ${{\cal T}}_i$ is lifting for at least one objects scheme $A$ if and only if there exists a social outcome $y \in {{\cal T}}_j$ with $j>i$ such that the following condition holds: $${\mathcal{H}}_{w,y} \nsubseteq {\mathcal{H}}_{x,y} \quad \forall w \in X\ \text{such that}\ w \succ y.$$ Let $x \in {{\cal T}}_i$ be a lifting social outcome for an objects scheme $A$.
Then, there exists $y \in {{\cal T}}_j$ with $j>i$ such that $B(x,{\mathcal{A}})=\{y\}$ for an object ${\mathcal{A}} \in A$. By construction, we have ${\mathcal{H}}_{x,y} \subseteq {\mathcal{A}}$. Suppose by contradiction that there exists a social outcome $w \succ y$ such that ${\mathcal{H}}_{w,y} \subseteq {\mathcal{H}}_{x,y}$. We have ${\mathcal{H}}_{w,y} \subseteq {\mathcal{A}}$, so $y$ cannot belong to $B(x, {\mathcal{A}})$, a contradiction. Conversely, let $y \in {{\cal T}}_j$, with $j>i$, be a social outcome such that ${\mathcal{H}}_{w,y} \nsubseteq {\mathcal{H}}_{x,y}$ for all $w \succ y$. Then, we have $y \succ x$: indeed, if $x \succ y$ held, the choice $w=x$ would violate the condition above. Moreover, for each social outcome $w \succ y$ we have $w({\mathcal{H}}_{x,y}^c) \neq y({\mathcal{H}}_{x,y}^c)$, i.e. $w$ is a neighbor neither of $x$ nor of $y$ with respect to ${\mathcal{H}}_{x,y}$. Therefore, we obtain $B(x, {\mathcal{H}}_{x,y})=\{y\}$ and hence the claim follows. For the remainder of this section, we fix a social outcome $z$, which will be a candidate for being an u-local optimum. We give the following necessary conditions on the irreducible components ${{\cal T}}_i$ of the graph ${\mathcal{Y}}_{\prec}$ in order for $z$ to be an u-local optimum. \[prop1\] If $z$ is an u-local optimum, the following statements hold. (i) \[item:prop1\_exist\] For each social outcome $x\in{{\cal T}}_i$, with ${{\cal T}}_i\neq{{{\cal T}}_{\mathrm{MAX}}}$, there exist an objects scheme ${A}_x$ such that $\Phi(z,A_x)$ is empty and a domination path $DP(x,y,A_x)$ to a social outcome $y\in{{\cal T}}_i$ lifting with respect to $A_x$. (ii) \[item:prop1\_lifting\] For each social outcome $x\in{{\cal T}}_i$, with ${{\cal T}}_i\neq{{{\cal T}}_{\mathrm{MAX}}}$, every domination path $DP(x,z,A)$ through an objects scheme $A$ such that $\Phi(z,A)$ is empty contains a social outcome $y\in{{\cal T}}_i$ lifting with respect to $A$.
(iii) \[item:prop1\_contain\] Each ${{\cal T}}_i$ different from ${{{\cal T}}_{\mathrm{MAX}}}$ contains a lifting social outcome with respect to an objects scheme $A$ such that $\Phi(z,A)$ is empty. Let us prove Point (\[item:prop1\_exist\]). Let $x$ be a social outcome belonging to ${{\cal T}}_i$, with ${{\cal T}}_i\neq{{{\cal T}}_{\mathrm{MAX}}}$. Since $x$ belongs to $\Psi(z)$, there exist an objects scheme ${A}$ such that $\Phi(z,{A})$ is empty and a maximal domination path $DP(x,z,A)$, $$x = x_0 \prec x_1 \prec \dotsb \prec x_s = z,$$ ending up in $z$. For $j=0,\dotsc,s$, let us define the integer $i_j$ such that $x_j$ belongs to ${{\cal T}}_{i_j}$. These integers are ordered nondecreasingly, since every arc between two distinct irreducible components points from the higher component to the lower one; moreover, $i_0$ differs from $i_s$. Therefore, there exists a maximal index $\bar{\jmath}$, different from $s$, such that $i_{\bar{\jmath}}=i_0$ and $i_{\bar{\jmath}+1}\neq i_0$ hold. The social outcome $y=x_{\bar{\jmath}}$ belongs to ${{\cal T}}_i$ and is lifting with respect to ${A}$, so the domination path $$x = x_0 \prec x_1 \prec \dotsb \prec x_{\bar{\jmath}} = y,$$ is the path we are looking for. The proof of Point (\[item:prop1\_lifting\]) is very similar to that of Point (\[item:prop1\_exist\]), so we leave it to the reader. Point (\[item:prop1\_contain\]) is a direct consequence of Point (\[item:prop1\_exist\]). \[prop:comp\_lifting\] Suppose that there is an irreducible component ${{\cal T}}_i\neq{{{\cal T}}_{\mathrm{MAX}}}$ such that for each $x\in{{\cal T}}_i$ we have $G^z_x\subseteq{{\cal T}}_i$ (or equivalently $BG_x^z\subseteq{{\cal T}}_i$). Then $z$ is not an u-local optimum. Suppose by contradiction that $z\in{X}$ is an u-local optimum. Let $x$ be a social outcome that belongs to ${{\cal T}}_i$. By means of Theorem \[teo:universal\_basin\], we obtain an objects scheme $A$ such that $\Phi(z,A)$ is empty and a domination path $DP(x,z,A)$, $$x = x_0 \prec x_1 \prec \dotsb \prec x_s \prec x_{s+1} = z,$$ where $x_j$ belongs to $E^z$ for each $j=0,\dotsc,s+1$, i.e.
$x_{j+1}$ belongs to $BG_{x_j}^z$ for each $j=0,\dotsc,s$. By virtue of Proposition \[prop1\]-(\[item:prop1\_lifting\]), there exists $\bar{\jmath}\in\{0,\dotsc,s\}$ such that $x_{\bar{\jmath}}$ belongs to ${{\cal T}}_i$ and is lifting with respect to $A$, a contradiction to the hypothesis. We will denote by $${\cal S}^z_i=\{x\in{{\cal T}}_i\mid G^z_x\subseteq{{\cal T}}_i\}$$ the set of the social outcomes of the irreducible component ${{\cal T}}_i$ that are not lifting with respect to any objects scheme ${A}$ such that $\Phi(z,A)$ is empty. Therefore, the proposition above can be restated as follows. If $z$ is an u-local optimum, then ${\cal S}^z_i\neq{{\cal T}}_i$ holds for all ${{\cal T}}_i\neq{{{\cal T}}_{\mathrm{MAX}}}$. For each irreducible component ${{\cal T}}_i$, we will now construct a particular sub-graph of ${{\cal T}}_i$. It will give information on the possible domination paths through an objects scheme, starting from a social outcome of ${{\cal T}}_i$ and ending up in $z$. The nodes of this graph are the social outcomes in ${{\cal T}}_i$; if $x$ and $y$ are social outcomes, there is an arc from $x$ to $y$ if $y\in BG_x^z$. Note that lifting social outcomes are maximal elements of this graph. Suppose $z$ is an u-local optimum. Then the following two conditions are satisfied: - for each irreducible component ${{\cal T}}_i\neq{{{\cal T}}_{\mathrm{MAX}}}$, each maximal element of the graph constructed above (considered as a social outcome) is lifting with respect to an objects scheme ${A}$ such that $\Phi(z,A)$ is empty; - for ${{{\cal T}}_{\mathrm{MAX}}}$, the u-local optimum $z$ is the only maximal element of the graph constructed above. Let $y\in{{\cal T}}_i\neq{{{\cal T}}_{\mathrm{MAX}}}$ be a maximal element of the graph constructed above. By virtue of Proposition \[prop1\]-(\[item:prop1\_exist\]) there exists a domination path $$y=y_0 \prec y_1 \prec \dotsb \prec y_d = y'$$ from $y$ to a lifting social outcome $y'\in{{\cal T}}_i$. 
Since $y$ is a maximal element, the set $BG_y^z \cap {{\cal T}}_i$ is empty and hence $d$ is zero. Therefore, $y'$ equals $y$ and hence $y$ is lifting. Similarly, let $y\in{{{\cal T}}_{\mathrm{MAX}}}$ be a maximal element of the graph constructed above. The set $BG_y^z \cap {{{\cal T}}_{\mathrm{MAX}}}$ is empty and hence the whole $BG_y^z$ is empty. Since $y$ belongs to $\Psi(z)$, we obtain that $y$ equals $z$. This concludes the proof. The score of a local optimum is at least $\sum_{j=1}^n (m_j-1)$. By virtue of Theorem \[principio\] each local optimum $z$ must dominate the $\sum_{j=1}^n (m_j-1)$ social outcomes $w$ with $d_p(w,z)=1$. The bound in the proposition above seems to be quite weak, mainly because in the classical social choice framework the score of an optimum is $M-1$. However, the bound above is attained. Namely, let $\succ$ be any social rule such that $z\succ x$ for each social outcome $x$ with $d_p(x,z)=1$, and $z\prec x$ for each social outcome $x$ with $d_p(x,z)>1$. For $\succ$ the social outcome $z$ is free (and hence is a local optimum for an objects scheme $A_z$) and has score $\sum_{j=1}^n (m_j-1)$. Moreover, there can also be global optima with score $\sum_{j=1}^n (m_j-1)$. This can easily be obtained by suitably choosing the arcs of the social rule $\succ$ that are not fixed above, so we leave it to the reader. Probability {#sec:probability} =========== As above, let $X$ be the set of possible social outcomes given by a bundle of features $F=\{f_1,\dotsc ,f_n\}$ such that $f_i$ belongs to $\{0,1,2,\dotsc, m_i-1\}$ for $i=1,\dotsc,n$. Throughout this section, we will suppose (without loss of generality) that the $m_i$’s are ordered nonincreasingly: $$m_1 \geqslant m_2 \geqslant \dotsb \geqslant m_n.$$ \[teo:max\_free\] In the hypothesis above, any given social rule $\succ$ on $X$ has at most $$\prod_{i=2}^{n} m_i$$ local optima, and this bound is attained. The proof of the bound is by induction on the number $n$ of features.
If $n$ is $1$, there is at most one local optimum (exactly one when the social rule is transitive). Suppose now that the statement is true for $n$ and that $\succ$ is defined on social outcomes with $n+1$ features. If $j$ belongs to $\{0,1,2,\dotsc, m_{n+1}-1\}$ for the $(n+1)$-th feature, we define the subspace $V_n^j=\{(y_1,\ldots,y_{n+1})\in {\mathbb{R}}^{n+1} \mid y_{n+1}=j-\frac12\}$ of ${\mathbb{R}}^{n+1}$ having dimension $n$. Let $X_n^j$ be the set of all the social outcomes in $X$ whose corresponding chambers intersect $V_n^j$. Then, for any $j\in\{0,1,2,\dotsc, m_{n+1}-1\}$, $X_n^j$ is given by $n$ features taking $m_1 \geqslant \dotsb \geqslant m_n$ values. By induction, $X_n^j$ has at most $\prod_{i=2}^n m_i$ local optima for any $j\in\{0,1,2,\dotsc, m_{n+1}-1\}$. Moreover, by definition of local optimum, if $x \in X$ is a local optimum for the social rule $\succ$, it is also a local optimum for the social rule $\succ^j$ induced by $\succ$ on $X_n^j$. Therefore, the number of local optima in $X$ is at most the sum over $j\in\{0,1,2,\dotsc, m_{n+1}-1\}$ of the maximum number of local optima of each $X_n^j$, i.e. $\prod_{i=2}^{n+1}m_i$. This concludes the proof of the bound. In order to prove that the bound is attained, we will prove the following slightly stronger statement. <span style="font-variant:small-caps;">Assertion.</span> In the hypothesis above, there exist at least $m_n$ social rules with exactly $\prod_{i=2}^{n}m_i$ local optima and such that for any two of them the sets of local optima are disjoint. The proof of the assertion is by induction on the number $n$ of features. If $n$ is $1$, any transitive social rule has one local optimum (the global one). Moreover, for any social outcome $z$ there is a transitive social rule with $z$ as local optimum. Since there are $m_1$ social outcomes, we obtain the claim. Suppose now that the statement is true for $n$ and let ${X}$ be a set of social outcomes with $n+1$ features.
As above, we define the subspace $V_n^j$ and the set $X_n^j$, for $j=0,\dotsc, m_{n+1}-1$. By induction, on each $X_n^j$ we can define a social rule $\succ^j$ with exactly $\prod_{i=2}^{n}m_i$ local optima and such that the local optima in two different $X_n^j$ are separated by at least two features (one of which is $f_{n+1}$), because $m_{n+1} \leqslant m_n$ holds. More precisely, if $v_1\dotsm v_n j \in X_n^j$ is a local optimum for $\succ^j$ and $v'_1\dotsm v'_n j' \in X_n^{j'}$ is a local optimum for $\succ^{j'}$ with $j\neq j'$, then there exists a feature $f_k$ different from $f_{n+1}$ such that $v_k$ differs from $v'_k$. Therefore, there exists a social rule $\succ$ on ${X}$ that satisfies the following properties: - $\succ$ equals $\succ^j$ on $X_n^j$, - if $x \in X_n^j$ is a local optimum for $\succ^j$ then $x\succ y$ for all $y \in X \setminus X_n^j$ such that $d_p(x,y)=1$. This social rule has $\prod_{i=2}^{n+1}m_i$ free social outcomes, which are local optima by virtue of Theorem \[principio\]; therefore, it is one of the social rules we are looking for. The others can be obtained by shifting the pairing of $X_n^*$’s and $\succ^*$’s. More precisely, for $l=0,\dotsc,m_{n+1}-1$, the $l$-th social rule $\succ_l$ is defined by choosing on $X_n^j$ the social rule $\succ^h$, where $h$ is the remainder of the division of $j+l$ by $m_{n+1}$, and by repeating the procedure above. These $m_{n+1}$ social rules on ${X}$ have $\prod_{i=2}^{n+1}m_i$ local optima each. Moreover, if $v_1\dotsm v_n j \in X_n^j$ is a local optimum for $\succ_l$ and $v'_1\dotsm v'_n j' \in X_n^{j'}$ is a local optimum for $\succ_{l'}$ with $l\neq l'$, then by construction either $j$ differs from $j'$ or $v_1\dotsm v_n$ differs from $v'_1\dotsm v'_n$. This concludes the proof of the assertion, and hence the proof of the theorem.
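The bound of Theorem \[teo:max\_free\] can also be checked by exhaustive enumeration in a small case. The sketch below (our own Python illustration, not code from the paper) uses the characterization employed in the proof above: a social outcome is free, and hence a local optimum for some objects scheme, exactly when it dominates every outcome differing from it in one feature (Theorem \[principio\]). For $(m_1,m_2)=(3,2)$ it enumerates all $2^{15}$ social rules and confirms that the maximum number of local optima is $m_2=2$, and that this maximum is attained.

```python
from itertools import combinations, product

m = (3, 2)                                   # m_1 >= m_2, so the bound is m_2 = 2
X = list(product(range(m[0]), range(m[1])))  # the M = 6 social outcomes
pairs = list(combinations(X, 2))             # the 15 arcs of a social rule

# Arcs that must point away from z for z to be free: those towards the
# outcomes differing from z in exactly one feature.
dist1 = {z: [p for p in pairs
             if z in p and sum(a != b for a, b in zip(*p)) == 1]
         for z in X}

best = 0
for bits in product((0, 1), repeat=len(pairs)):     # all 2^15 social rules
    rule = {p: p[b] for p, b in zip(pairs, bits)}   # winner of each pair
    free = sum(all(rule[p] == z for p in dist1[z]) for z in X)
    best = max(best, free)

assert best == 2    # at most prod_{i>=2} m_i = 2 local optima, and it is attained
```

The same loop with $m=(2,2)$ runs over the $64$ rules counted in the next subsection.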
#### Social rules with a fixed number of free social outcomes in the two-feature case We compute the number of social rules with two features and a fixed number of free social outcomes. Note that, by virtue of Theorem \[teo:max\_free\], there are at most $m_2$ free social outcomes. Let us call $e_k$ the number of social rules with $k$ free social outcomes. We will count the graphs corresponding to the social rules. Call $V_1$ (resp. $V_2$) the set of values of the first (resp. second) feature of the $k$ free social outcomes. Since two free social outcomes are separated by both features, we have $\#V_1=\#V_2=k$. There are $\binom{m_1}k$ (resp. $\binom{m_2}k$) possibilities for choosing $V_1$ (resp. $V_2$). Moreover, in $V_1\times V_2$, the $k$ free social outcomes can be chosen in $k!$ ways. Suppose now that the position of the $k$ free social outcomes is fixed. For each $k=0,1,\dotsc,m_2$, we compute an integer $a_k$ which is related to (but different from) $e_k$, because we allow some repetitions in the counting process. Since the $k$ social outcomes are free, each of them dominates all the social outcomes that are separated from it by one feature. Therefore, $k\left(m_1+m_2-2\right)$ arcs are fixed. If $k$ equals $m_2$, the other $\binom{m_1m_2}2 - m_2\left(m_1+m_2-2\right)$ arcs are unrestricted and hence we obtain $$e_{m_2} = \binom{m_1}{m_2} \binom{m_2}{m_2} m_2! 2^{\binom{m_1m_2}2 - m_2\left(m_1+m_2-2\right)}$$ graphs. For general $k$, the other $\binom{m_1m_2}2 - k\left(m_1+m_2-2\right)$ arcs are not unrestricted, because there should not be any other free social outcome. However, if we leave them unrestricted, we obtain $$a_k = \binom{m_1}k \binom{m_2}k k! 2^{\binom{m_1m_2}2 - k\left(m_1+m_2-2\right)}$$ graphs. 
In this process we count each graph with $k+l$ free social outcomes $\binom{k+l}{k}$ times and hence we obtain the system of linear equations $$\left\{ \begin{array}{l} \sum_{k=0}^{m_2}\binom{k}{0}e_k = a_0 \\ \sum_{k=1}^{m_2}\binom{k}{1}e_k = a_1 \\ \quad\quad\quad\vdots \\ e_{m_2-1} + m_2e_{m_2} = a_{m_2-1} \\ e_{m_2}=a_{m_2}. \end{array} \right.$$ By (partially) solving this system, we obtain the recursive formula $$e_k = a_k - \sum_{l=k+1}^{m_2}\binom{l}{k}e_l$$ for computing the number of social rules with two features and $k$ free social outcomes. An explicit formula can be given. If $S$ is a subset of $\{k,k+1,\dotsc,i\}$, we denote by $\mathop{\mathrm{Prod}}(S)$ the product $$\prod_{j=1}^{\#S-1} \binom{s_{j+1}}{s_j},$$ where the $s_*$’s are the elements of $S$ ordered increasingly ($s_1<s_2<\dotsb<s_{\#S}$). The number of social rules with two features and $k$ free social outcomes is $$e_k = \sum_{i=k}^{m_2} \left( \sum_{\substack{S\subseteq\{k,k+1,\dotsc,i\}\\ k,i\in S}} (-1)^{\#S+1} \mathop{\mathrm{Prod}}(S) \right) a_i.$$ The proof of this formula, by means of a recursion from $m_2$ to zero, is straightforward, so we leave it to the reader. With some effort one may carry out a similar argument to compute the number of social rules with three features and a fixed number of free social outcomes, but a general formula seems to be infeasible with this technique. An interesting issue is to study the probability $P_{(m_1,\dotsc,m_n)}(k)$ that a social rule has $k$ free social outcomes. For the social rules with two features, this probability is $$P_{(m_1,m_2)}(k) = \frac{e_k}{2^{\binom{m_1m_2}2}}.$$ In Table \[table:prob\_fixed\_outcomes\] we have computed it for small values of $m_1$ and $m_2$.

------------- ------- ------------- ------------- -------------
 free social   (2,2)      (3,3)         (5,5)       (10,10)
 outcomes
 0             .125    .5063476563   .9053598846   .9996185892
 1             .75     .4262695313   .0916594645   .0003813519
 2             .125    .0659179688   .0029453066   .0000000589
 3             .       .0014648438   .0000352051   $<10^{-10}$
 4             .       .             .0000001392   $<10^{-10}$
 5             .       .             .0000000001   $<10^{-10}$
 6             .       .             .             $<10^{-10}$
 7             .       .             .             $<10^{-10}$
 8             .       .             .             $<10^{-10}$
 9             .       .             .             $<10^{-10}$
 10            .       .             .             $<10^{-10}$
------------- ------- ------------- ------------- -------------

: The probability that a social rule with two features has a fixed number of free social outcomes.[]{data-label="table:prob_fixed_outcomes"}

#### Decidability and manipulability in the new framework

In the classical social choice framework a given social outcome $z$ is an optimum if and only if it dominates all the other social outcomes. Therefore, the probability $P(z)$ that a given social outcome $z$ is an optimum for a social rule on $M$ social outcomes is given by the quotient between the number of graphs with $M$ nodes in which all $M-1$ arcs at $z$ leave $z$ (as many as the graphs with $M-1$ nodes) and the number of all graphs with $M$ nodes, i.e. $$P(z)=\frac{2^{\binom{M-1}{2}}}{2^{\binom{M}{2}}} = \frac{1}{2^{M-1}}.$$ In Marengo and the second author’s model, global optima play the role of optima in the classical framework, but a local optimum can also be an optimum if the agents vote starting from a particular social outcome. The probability $P(z)$ for a given social outcome $z$ to be a local optimum is given by the quotient between the number of the graphs with $M$ nodes and with $\sum_{i=1}^{n}m_i -n $ fixed arcs, and the number of all the graphs with $M$ nodes, i.e. $$P(z)=\frac{2^{\binom{M}{2}-(\sum_{i=1}^{n}m_i-n)}}{2^{\binom{M}{2}}} = \frac{1}{2^{\sum_{i=1}^{n}m_i -n}}=\frac{2^n}{2^{\sum_{i=1}^{n}m_i}}.$$ It is clear that, if $n$ is greater than $1$, the probability for $z$ to be a local optimum is far greater than that to be an optimum in the classical framework.
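Returning to the counting of free social outcomes: the recursive inversion runs in a few lines. The sketch below (our own Python illustration, not code from the paper) computes the $a_k$'s and the $e_k$'s and reproduces the $(2,2)$ and $(3,3)$ columns of Table \[table:prob\_fixed\_outcomes\].

```python
from math import comb, factorial

def free_outcome_counts(m1, m2):
    """Number e_k of social rules with two features (m1 >= m2 values) and
    exactly k free social outcomes, obtained from the a_k's through the
    recursive inversion e_k = a_k - sum_{l>k} C(l, k) e_l."""
    arcs = comb(m1 * m2, 2)                    # orientable arcs of a rule
    a = [comb(m1, k) * comb(m2, k) * factorial(k)
         * 2 ** (arcs - k * (m1 + m2 - 2)) for k in range(m2 + 1)]
    e = [0] * (m2 + 1)
    for k in range(m2, -1, -1):
        e[k] = a[k] - sum(comb(l, k) * e[l] for l in range(k + 1, m2 + 1))
    return e

e = free_outcome_counts(2, 2)
assert e == [8, 48, 8]                 # probabilities .125, .75, .125
assert sum(e) == 2 ** comb(4, 2)       # all 64 social rules accounted for

# Column (3,3) of the table, up to rounding:
probs = [x / 2 ** comb(9, 2) for x in free_outcome_counts(3, 3)]
table = [0.5063476563, 0.4262695313, 0.0659179688, 0.0014648438]
assert all(abs(p - t) < 1e-9 for p, t in zip(probs, table))
```

The $(2,2)$ counts $e_0,e_1,e_2 = 8, 48, 8$ out of $2^6 = 64$ rules give exactly the first column of the table.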
Therefore, we define a function $F: \mathbb N^3 \longrightarrow \mathbb Q$ of $n$, $M=\prod_{i=1}^n m_i$ and $\sigma=\sum_{i=1}^n m_i$, defined to be the quotient between the probability of a social outcome to be a local optimum in the new model and that to be an optimum in the classical framework, $$F(n,M,\sigma)=\frac{2^n}{2^{\sum_{i=1}^{n}m_i}}2^{M-1}=2^{n+M-(\sigma+1)}.$$ The inequality $$F(n,M,\sigma) \geqslant 1$$ holds. However, the inequality becomes strict, $$F(n,M,\sigma) > 1,$$ if and only if $n$ is greater than $1$. The study of the function $F$ seems to be very important in the social choice context. Indeed, it gives an idea about the decidability and the manipulability of choice in the new model with respect to the old one. We note that, for example, if $m_i$ is $2$ for $i=1,\dotsc,n$, there is a high probability for a social outcome to be a local optimum, while this probability strongly decreases when the values of the $m_i$’s increase. However, the value of $F$ is far greater than $1$ even if the $m_i$’s are greater than $2$. We think that this function $F$ is a measure of the power of this new approach in social decision theory. The algorithm {#sec:algorithm} ============= We will describe here an algorithm to compute the universal basin of attraction of a social outcome $z$ of a social rule $\prec$. It also finds the sets $E^z_i$ defined at the end of Section \[sect:model\]. Therefore it also obtains the deepness of each social outcome with respect to $z$ and the u-deepness of $z$. The algorithm <span style="font-variant:small-caps;">ComputeUniversalBasin</span> works as follows. The pseudocode is shown in Algorithm \[alg:ComputeUniversalBasin\].

*Input:* a social rule $\succ$ and a social outcome $z$.
*Output:* the non-void $E^z_i$’s and hence the universal basin of attraction $\Psi(z)=\bigcup_i E^z_i$.

\[line:initialise\_start\] Initialize ${X}{\mathrel{\mathop:}=}$ the set of all social outcomes
Initialize $E^z_i {\mathrel{\mathop:}=}\emptyset$ \[line:initialise\_end\]
\[line:compute\_irred\] Compute the irreducible components ${{\cal T}}_j$
\[line:after\_irred\] Let $h {\mathrel{\mathop:}=}$ the integer such that $z\in{{\cal T}}_h$
\[line:remove\_irred\] $X {\mathrel{\mathop:}=}X\setminus\bigcup_{j>h}{{\cal T}}_j$
\[line:is\_local\_opt\] **if** ${\overline{{\cal H}}_{w,z}}$ is made up of only one feature for some $w\succ z$ **then**
\[line:empty\_basin\] **return** $\emptyset$
\[line:n1\_start\] $E^z_0 {\mathrel{\mathop:}=}\{z\}$; ${X}{\mathrel{\mathop:}=}{X}\setminus\{z\}$
\[line:Bz\] Compute $B(z) {\mathrel{\mathop:}=}\{y\in{X}| y\prec z,\ {\overline{{\cal H}}_{z,y}}\not\supseteq{\overline{{\cal H}}_{w,z}}\ \forall w\succ z\}$
$E^z_1 {\mathrel{\mathop:}=}B(z)$ \[line:n1\_end\]
${X}{\mathrel{\mathop:}=}{X}\setminus B(z)$
Call <span style="font-variant:small-caps;">Recursion</span>($1$) \[line:first\_part\_end\]

\[line:recursion\] **where** <span style="font-variant:small-caps;">Recursion</span> **is the following function**

*Input:* an integer $i$.
*Output:* the non-void $E^z_i$’s and hence the universal basin of attraction $\Psi(z)=\bigcup_i E^z_i$.

\[line:moving\_new\_elm\_start\] **for each** $y_i\in E^z_i$ **do**
\[line:Byi\] Compute $B(y_i) {\mathrel{\mathop:}=}\{y_{i+1}\in{X}| y_{i+1}\prec y_i,\ {\overline{{\cal H}}_{y_i,y_{i+1}}}\not\supseteq{\overline{{\cal H}}_{w,y_i}}\ \forall w\succ y_i,\ {\overline{{\cal H}}_{y_i,y_{i+1}}}\not\supseteq{\overline{{\cal H}}_{w,z}}\ \forall w\succ z\}$
\[line:Ei1\_Byi\] $E^z_{i+1} {\mathrel{\mathop:}=}E^z_{i+1}\cup B(y_i)$
\[line:X\_Byi\] ${X}{\mathrel{\mathop:}=}{X}\setminus B(y_i)$
\[line:moving\_new\_elm\_end\] **end for**
\[line:another\_step\] **if** $E^z_{i+1}$ is empty **then**
\[line:rec\_stop\] **return** $E^z_j$ for all $j=0,\dotsc,i$
\[line:rec\_recall\] **else** call <span style="font-variant:small-caps;">Recursion</span>($i+1$)

1. Consider the set ${X}$ of all social outcomes. Start with the empty sets $E^z_i$ for $i\in\mathbb{N}$ (we will need only a finite number of them).
Eventually, the universal basin of attraction $\Psi(z)$ will be $\bigcup_i E^z_i$. 2. \[step:remove\_irred\] Compute the irreducible components ${{\cal T}}_*$ of the graph corresponding to the social rule $\prec$. Let $h$ be the integer such that $z\in{{\cal T}}_h$. Remove from $X$ all the social outcomes that are in ${{\cal T}}_j$ for all $j>h$. 3. If ${\overline{{\cal H}}_{w,z}}$ is made up of only one feature for some $w\succ z$, then $z$ is not a local optimum and hence $E^z_i$ is empty for all $i\in\mathbb{N}$: go to Step \[item:final\_step\]. Otherwise, $z$ is a local optimum: hence add $z$ to $E^z_0$ and remove it from $X$. 4. Find all the social outcomes $y\in{X}$ such that $$\begin{gathered} y\prec z,\\ {\overline{{\cal H}}_{z,y}}\not\supseteq{\overline{{\cal H}}_{w,z}}\ \mathrm{for\ all}\ w\succ z.\end{gathered}$$ Add these $y$’s to $E^z_1$ and remove them from ${X}$. Go to Step \[step:iter\] with $i=1$. 5. \[step:iter\] For each $y_i\in E^z_i$ do the following steps. - Find all the social outcomes $y_{i+1}\in{X}$ such that $$\begin{gathered} y_{i+1}\prec y_i,\\ {\overline{{\cal H}}_{y_i,y_{i+1}}}\not\supseteq{\overline{{\cal H}}_{w,y_i}}\ \mathrm{for\ all}\ w\succ y_i,\\ {\overline{{\cal H}}_{y_i,y_{i+1}}}\not\supseteq{\overline{{\cal H}}_{w,z}}\ \mathrm{for\ all}\ w\succ z.\end{gathered}$$ - Add these $y_{i+1}$’s to $E^z_{i+1}$ and remove them from ${X}$. - If $E^z_{i+1}$ is empty, go to Step \[item:final\_step\]; otherwise, repeat Step \[step:iter\] with $i$ incremented by $1$. 6. \[item:final\_step\] The universal basin of attraction $\Psi(z)$ is the union of the $E^z_i$’s (only a finite number of them being non-empty). If the social rule $\prec$ is defined on $M$ social outcomes and a social outcome $z$ is given, the algorithm <span style="font-variant:small-caps;">ComputeUniversalBasin</span> computes the universal basin of attraction of $z$ in $O(M^3\log M)$ time. We start by proving that the algorithm comes to an end. 
The algorithm is tail-recursive, hence we need to prove that the recursive function <span style="font-variant:small-caps;">Recursion</span> (Line \[line:recursion\]) does not give rise to an infinite loop. Each time <span style="font-variant:small-caps;">Recursion</span> is called, it may move elements (those belonging to $B(y_i)$ for each $y_i\in E^z_i$) from ${X}$ to $E^z_{i+1}$ (Lines \[line:moving\_new\_elm\_start\]-\[line:moving\_new\_elm\_end\]), and then it either stops <span style="font-variant:small-caps;">ComputeUniversalBasin</span> (Line \[line:rec\_stop\]) or calls itself with $i$ incremented by $1$ (Line \[line:rec\_recall\]). Since ${X}$ is finite, there is a minimal $\bar{\imath}\in\mathbb{N}$ such that all $B(y_{\bar{\imath}})$’s (and hence $E^z_{\bar{\imath}+1}$) are empty. When <span style="font-variant:small-caps;">Recursion</span>($\bar{\imath}$) is called, it moves no element from ${X}$ to $E^z_{\bar{\imath}+1}$, and then it stops <span style="font-variant:small-caps;">ComputeUniversalBasin</span> (Line \[line:rec\_stop\]). We now prove that the algorithm is correct. By virtue of Theorem \[teo:universal\_basin\], we have that the universal basin of attraction $\Psi(z)$ is $\bigcup_i E^z_i$. By means of Proposition \[prop:univ\_basin\_irred\_comp\], we know that all the social outcomes that are in ${{\cal T}}_j$ for all $j>h$ cannot be in the universal basin of attraction of $z$, and hence they can be removed from the set ${X}$ of social outcomes that may be in the universal basin of attraction (Lines \[line:compute\_irred\]-\[line:remove\_irred\]). If $z$ is not a local optimum, all $E^z_i$’s are empty; in this case the condition in Line \[line:is\_local\_opt\] is true (in virtue of Theorem \[principio\]) and hence the output is the empty set (Line \[line:empty\_basin\]). 
Otherwise, if $z$ is a local optimum, $E^z_0$ is $\{z\}$ and $E^z_1$ is not empty (see the end of Section \[sect:model\]); in this case $E^z_0$ and $E^z_1$ are computed, and their elements are removed from ${X}$ (Lines \[line:n1\_start\]-\[line:n1\_end\]). Then <span style="font-variant:small-caps;">Recursion</span> is called and $E^z_2$ is computed by means of the conditions of the end of Section \[sect:model\] (Lines \[line:moving\_new\_elm\_start\]-\[line:moving\_new\_elm\_end\]). If $E^z_2$ is empty (Line \[line:another\_step\]), the universal basin of attraction has been computed (see the end of Section \[sect:model\]) and the output is $E^z_0$, $E^z_1$ and $E^z_2$ (Line \[line:rec\_stop\]). Otherwise, <span style="font-variant:small-caps;">Recursion</span> is called again and $E^z_3$ is computed. An easy recursive argument now concludes the proof of the correctness of the algorithm. Finally, we show that our algorithm has run-time $O(M^3\log M)$. First of all, we note that if $x$ and $y$ are social outcomes, ${\overline{{\cal H}}_{x,y}}$ has at most $O(\log M)$ elements and hence it can be computed in $O(\log M)$ time. Moreover, if $\bar{x}$ and $\bar{y}$ are other social outcomes, the test ${\overline{{\cal H}}_{x,y}}\not\supseteq{\overline{{\cal H}}_{\bar{x},\bar{y}}}$ can also be performed in $O(\log M)$ time. We denote by $M_i$ the cardinality of the set $E^z_i$ for $i=0,\dotsc,M$. The initializations in Lines \[line:initialise\_start\]-\[line:initialise\_end\] are done in $O(M)$ time. The computation of the irreducible components of Line \[line:compute\_irred\] is done in $O(M^2)$ time (see Section \[sec:graphs\_tournaments\]). Lines \[line:after\_irred\] and \[line:remove\_irred\] are executed in $O(M)$ time. The condition in Line \[line:is\_local\_opt\] can be checked in $O(M\log M)$ time. Line \[line:Bz\] is executed in $O(M^2\log M)$ time and hence the same holds for Lines \[line:n1\_start\]-\[line:n1\_end\].
We will now take the call of <span style="font-variant:small-caps;">Recursion</span>($i$) into account. Line \[line:Byi\] is executed in $O(M^2\log M)$ time, while Lines \[line:Ei1\_Byi\] and \[line:X\_Byi\] are executed in $O(M)$ time. These three lines are executed for all $y_i\in E^z_i$, i.e. $M_i$ times. The condition in Line \[line:another\_step\] is checked in $O(1)$ time, and Line \[line:rec\_stop\] is executed in $O(M)$ time. Therefore, the call of <span style="font-variant:small-caps;">Recursion</span>($i$) has run-time $O(M_iM^2\log M)$. Summing the run-time $O(M^2\log M)$ of Lines \[line:initialise\_start\]-\[line:first\_part\_end\] and the run-times of all calls of <span style="font-variant:small-caps;">Recursion</span>, and recalling that $M_0=1$, we obtain $$M^2\log M + \sum_{i=1}^M M_iM^2\log M = \left(\sum_{i=0}^M M_i\right)M^2\log M \sim M^3\log M.$$ This concludes the proof. The computation of the irreducible components (Step \[step:remove\_irred\]) is not strictly necessary, but it can make the computation faster if there are many social outcomes in the irreducible components that dominate $z$. In Step \[step:iter\] the social outcomes with deepness $i+1$ with respect to $z$ are found. Therefore, the number of calls of the recursive function <span style="font-variant:small-caps;">Recursion</span> is the u-deepness of $z$. A simpler (and faster) variant of the algorithm described above can easily be constructed to check whether a social outcome is in the universal basin of attraction of another one. #### FOSoR The first author has used Algorithm \[alg:ComputeUniversalBasin\] to write the computer program [FOSoR]{} [@FOSoR].
It reads a social rule and can - compute universal basins of attraction, - check whether a social outcome is a local (or an u-local) optimum, - check whether a social outcome is in the universal basin of attraction of another one, - check whether there is a local (or an u-local) optimum, - find the number of local (or u-local) optima, - find an objects scheme (if there is any) through which there is a maximal domination path starting from a social outcome and ending up in another one, - find the deepnesses and the u-deepnesses. Numerical examples {#sec:numerical} ================== We give here some numerical results on the numbers of local and u-local optima of social rules. In order to compute these results the first author has written the computer program [FOSoRStat]{} [@FOSoRStat], which is based on Algorithm \[alg:ComputeUniversalBasin\]. It reads the number of values of each feature and the number of random social rules to check. It works as follows: it repeatedly creates a random social rule and computes its number of local (and u-local) optima; it then computes the percentages and collects the results. We have shown the results for local (resp. u-local) optima in the case when each feature can assume two values in Table \[table:statistics\_local\_2\] (resp. Table \[table:statistics\_ulocal\_2\]). This case is interesting because it represents binary choices (i.e. yes/no, true/false, for/against features). We have shown the results for local (resp. u-local) optima in some other cases in Table \[table:statistics\_local\_other\] (resp. Table \[table:statistics\_ulocal\_other\]). Note that the relative frequencies in the cases with two features are consistent with the probabilities (computed in Section \[sec:probability\]) that a social rule with two features has a fixed number of free social outcomes, see Table \[table:prob\_fixed\_outcomes\].
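Algorithm \[alg:ComputeUniversalBasin\], on which both programs are based, fits in a few lines of Python. The sketch below is our own minimal rendering of Steps 1-6, not the FOSoR source: it reads ${\overline{{\cal H}}_{x,y}}$ as the set of features on which $x$ and $y$ differ (an assumption about the notation) and omits the optional irreducible-components speed-up of Step \[step:remove\_irred\].

```python
def universal_basin_levels(X, z, beats):
    """Compute the non-void sets E^z_i of Steps 1-6; their union is the
    universal basin of attraction Psi(z).  beats(x, y) is True when x
    dominates y; H(x, y) is the set of features where x and y differ."""
    H = lambda x, y: frozenset(j for j, (a, b) in enumerate(zip(x, y)) if a != b)
    outcomes = list(X)
    doms = {x: [w for w in outcomes if beats(w, x)] for x in outcomes}

    # Step 3: z is a local optimum iff no dominator of z is at distance one
    if any(len(H(w, z)) == 1 for w in doms[z]):
        return []
    remaining = set(outcomes) - {z}
    # Step 4: E^z_1 collects the admissible best neighbours of z
    e1 = {y for y in remaining
          if beats(z, y) and not any(H(w, z) <= H(z, y) for w in doms[z])}
    levels = [{z}, e1]
    remaining -= e1
    # Step 5: extend E^z_i to E^z_{i+1} until nothing new is reached
    while levels[-1]:
        nxt = set()
        for yi in levels[-1]:
            for y in remaining:
                if (beats(yi, y)
                        and not any(H(w, yi) <= H(yi, y) for w in doms[yi])
                        and not any(H(w, z) <= H(yi, y) for w in doms[z])):
                    nxt.add(y)
        remaining -= nxt
        levels.append(nxt)
    return [lv for lv in levels if lv]   # Step 6: Psi(z) = union of the E^z_i

# The rule of Figure [fig:localopt_no_maxcomp]: 00 is a local optimum, but
# its universal basin of attraction does not contain 11.
wins = {((1, 1), (0, 0)), ((1, 1), (0, 1)), ((1, 1), (1, 0)),
        ((0, 0), (0, 1)), ((0, 0), (1, 0)), ((1, 0), (0, 1))}
levels = universal_basin_levels([(0, 0), (0, 1), (1, 0), (1, 1)],
                                (0, 0), lambda x, y: (x, y) in wins)
basin = set().union(*levels)
assert basin == {(0, 0), (0, 1), (1, 0)}
```

Since $11\notin\Psi(00)$, the outcome $00$ is not an u-local optimum, in accordance with the fact that it does not belong to ${{{\cal T}}_{\mathrm{MAX}}}=\{11\}$.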
  -------------- ---- --------- --------- --------- --------- --------- --------- ---------
  local optima   1    2         3         4         5         6         7         8
  0              .    .125298   .234797   .296109   .328291   .346168   .359183   .363905
  1              1    .749722   .544492   .451488   .410650   .390143   .380017   .375250
  2              .    .124980   .206551   .210998   .201968   .194522   .188194   .185849
  3              .    .         .013679   .038372   .050962   .056757   .058031   .058879
  4              .    .         .000481   .002934   .007427   .010837   .012366   .013346
  5              .    .         .         .000097   .000657   .001444   .001964   .002385
  6              .    .         .         .000002   .000043   .000120   .000220   .000341
  7              .    .         .         .         .000002   .000009   .000024   .000041
  8              .    .         .         .         .         .         .000001   .000003
  9              .    .         .         .         .         .         .         .000001
  -------------- ---- --------- --------- --------- --------- --------- --------- ---------

  : Relative frequencies of social rules with a fixed number of local optima (over $10^6$ social rules): cases of two values for each feature.[]{data-label="table:statistics_local_2"}

  -------------- ---- --------- --------- --------- --------- --------- --------- ---
  u-local optima 1    2         3         4         5         6         7         8
  0              .    .270704   .353896   .377606   .377254   .374337   .369871
  1              1    .716457   .608455   .551384   .511074   .472559   .438681
  2              .    .012335   .036169   .066967   .101588   .133080   .157869
  3              .    .000504   .001460   .003919   .009610   .018503   .029815
  4              .    .         .000020   .000123   .000456   .001444   .003468
  5              .    .         .         .000001   .000016   .000074   .000279
  6              .    .         .         .         .000002   .000003   .000017
  -------------- ---- --------- --------- --------- --------- --------- --------- ---

  : Relative frequencies of social rules with a fixed number of u-local optima (over $10^6$ social rules): cases of two values for each feature.[]{data-label="table:statistics_ulocal_2"}

  -------------- ---------- ---------- ----------- ------------ ----------
  local optima   (3,3)      (3,3,3)    (3,3,3,3)   (5,5)        (10,10)
  0              .5065899   .6392066   .7246560    .905331876   .9996083
  1              .4260296   .3042338   .2376727    .091717916   .0003917
  2              .0659261   .0522738   .0345455    .002915423   .
  3              .0014544   .0041184   .0029567    .000034649   .
  4              .          .0001637   .0001618    .000000136   .
  5              .          .0000037   .0000071    .            .
  6              .          .          .0000002    .            .
  repetitions    $10^7$     $10^7$     $10^7$      $10^9$       $10^7$
  -------------- ---------- ---------- ----------- ------------ ----------

  : Relative frequencies of social rules with a fixed number of local optima (the number of repetitions is indicated in the last line): other cases.[]{data-label="table:statistics_local_other"}

  -------------- ---------- ---------- ----------- ------------ ----------
  u-local optima (3,3)      (3,3,3)    (3,3,3,3)   (5,5)        (10,10)
  0              .8020871   .9699638   .9966669    .999923702   1
  1              .1979129   .0300356   .0033330    .000076298   .
  2              .          .0000006   .           .            .
  repetitions    $10^7$     $10^7$     $10^7$      $10^9$       $10^7$
  -------------- ---------- ---------- ----------- ------------ ----------

  : Relative frequencies of social rules with a fixed number of u-local optima (the number of repetitions is indicated in the last line): other cases.[]{data-label="table:statistics_ulocal_other"}

Finally, we compare Marengo and the second author’s model with the classical one. Note that in the classical model there can be only one optimum and that the probability $P(M)$ that a social rule with $M$ social outcomes has an optimum equals $M$ times the probability that a given social outcome is an optimum, i.e. $\frac{M}{2^{M-1}}$. In Table \[table:statistics\_old\_model\] we have computed this probability for small values of $M$.

  ----- -------- ----- --------- ----- --------- ----- ---------
  $M$   $P(M)$   $M$   $P(M)$    $M$   $P(M)$    $M$   $P(M)$
  2     1        6     .1875     10    .019531   14    .001709
  3     .75      7     .109375   11    .010742   15    .000915
  4     .5       8     .0625     12    .005859   16    .000488
  5     .3125    9     .035156   13    .003174   17    .000259
  ----- -------- ----- --------- ----- --------- ----- ---------

  : The probability that a social rule has an optimum in the classical model.[]{data-label="table:statistics_old_model"}

An example
==========

In this section we will describe in detail an example in which all kinds of optima appear. It is small enough that we can deal with it by hand. The number of features is three, each assuming two values.
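The closed form $P(M) = \frac{M}{2^{M-1}}$ can be checked by simulation. The sketch below (Python; assuming a random social rule induces a uniformly random tournament on the $M$ social outcomes) estimates the probability that some outcome beats all the others:

```python
import itertools
import random

def has_optimum(n_outcomes, rng):
    # Draw a random tournament: orient each of the M(M-1)/2 pairs uniformly.
    wins = [0] * n_outcomes
    for a, b in itertools.combinations(range(n_outcomes), 2):
        if rng.random() < 0.5:
            wins[a] += 1
        else:
            wins[b] += 1
    # An optimum beats all of the other M-1 social outcomes.
    return any(w == n_outcomes - 1 for w in wins)

def estimate_p(n_outcomes, trials=20_000, seed=0):
    rng = random.Random(seed)
    return sum(has_optimum(n_outcomes, rng) for _ in range(trials)) / trials

# Compare the estimate with the exact value P(M) = M / 2**(M - 1).
for m in (2, 5, 9):
    print(m, m / 2 ** (m - 1), estimate_p(m))
```

For $M=2$ the estimate is exactly $1$, since one of the two outcomes always beats the other.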
The set $X$ is made up of eight social outcomes: $v_1v_2v_3$ with $v_*=0,1$. The social rule is any $\succ$ with $$\begin{aligned} & 000 \succ 100,\quad 000 \succ 010,\quad 000 \succ 001,\quad 000 \succ 101,\quad 000 \succ 011, \\ & 110 \succ 000,\quad 110 \succ 100,\quad 110 \succ 010,\quad 110 \succ 101,\quad 110 \succ 011, \\ & 101 \succ 100,\quad 101 \succ 001,\quad 101 \succ 111, \\ & 011 \succ 010,\quad 011 \succ 001,\quad 011 \succ 101,\quad 011 \succ 111, \\ & 111 \succ 110,\end{aligned}$$ where the ten preferences that are not defined are arbitrary. The preferences are shown in Figure \[fig:example\], where we have disposed the social outcomes as the vertices of a cube. $\xymatrix{ & & {*=<16pt>[o][F-]{011}} \ar @{} [l]_<<*+<3pt>{\txt{\begin{small}$g$\end{small}}} {\ar @{-} [ddd] |-(0.57){\SelectTips{cm}{}\object@{>}}} {\ar @{-} [lld] |-(0.57){\SelectTips{cm}{}\object@{>}}} {\ar @{-} [rd] |-(0.57){\SelectTips{cm}{}\object@{>}}} {\ar @{-} [rrr] |-(0.57){\SelectTips{cm}{}\object@{>}}} & & & {*=<16pt>[o][F-]{111}} {\ar @{-} [ddd] |-(0.57){\SelectTips{cm}{}\object@{>}}} \\ {*=<16pt>[o][F-]{001}} & & & {*=<16pt>[o][F-]{101}} \ar @{} [u]_<<<*-<2pt>{\txt{\begin{small}$l$\end{small}}} {\ar @{-} [ddd] |-(0.57){\SelectTips{cm}{}\object@{>}}} {\ar @{-} [lll] |-(.46){\SelectTips{cm}{}\object@{>}}} {\ar @{-} [rru] |-(0.57){\SelectTips{cm}{}\object@{>}}} & & \\ & & & & & \\ & & {*=<16pt>[o][F-]{010}} & & & {*=<16pt>[o][F-]{110}} {\ar @{-} [llllld] |-(0.57){\SelectTips{cm}{}\object@{>}}} {\ar @{-} [lld] |-(0.57){\SelectTips{cm}{}\object@{>}}} {\ar @{-} [lll] |-(0.57){\SelectTips{cm}{}\object@{>}}} {\ar @{-} [lluu] |-(0.57){\SelectTips{cm}{}\object@{>}}} {\ar @{-} @/_24pt/ [llluuu] |-(.5){\SelectTips{cm}{}\object@{>}}} \\ {*=<16pt>[o][F-]{000}} \ar @{} [u]^<*+<12pt>{\txt{\begin{small}$u$\end{small}}} {\ar @{-} [rrr] |-(0.57){\SelectTips{cm}{}\object@{>}}} {\ar @{-} [rru] |-(0.57){\SelectTips{cm}{}\object@{>}}} {\ar @{-} [uuu] |-(0.57){\SelectTips{cm}{}\object@{>}}} {\ar 
@{-} [rrruuu] |-(.5){\SelectTips{cm}{}\object@{>}}} {\ar @{-} [rruuuu] |-(0.57){\SelectTips{cm}{}\object@{>}}} & & & {*=<16pt>[o][F-]{100}} & & }$ We will show that the social outcome $g=011$ is a global optimum, that the social outcome $u=000$ is an u-local optimum but not a global optimum, and that the social outcome $l=101$ is a local optimum but not an u-local optimum. The proof that $g$ is a global optimum for the objects scheme $A_g=\{\{H_{1,0},H_{2,0}\},\{H_{3,0}\}\}$ is straightforward, so we leave it to the reader. In order to prove that $u$ is an u-local optimum, we note that we have $$\Psi (u,(\{H_{2,0},H_{3,0}\},\{H_{1,0}\},\{H_{3,0}\})) = {X}\setminus\{l\}$$ and $$l \in \Psi (u,(\{H_{1,0},H_{3,0}\},\{H_{2,0}\})).$$ Therefore, $\Psi(u)$ is the whole ${X}$, and $u$ is an u-local optimum. We will now prove that $u$ is not a global optimum. Suppose by way of contradiction that $u$ is a global optimum for an agenda $\alpha$ of an objects scheme ${A}$, i.e. $\Psi(u,({{\cal A}}_1,\dotsc,{{\cal A}}_k))=X$ (where ${{\cal A}}_i={{\cal A}}_j$ is allowed). Since $u$ is a local optimum for ${A}$ (see Remark \[rem:basin\_local\]), the objects $\{H_{1,0},H_{2,0}\}$ and $\{H_{1,0},H_{2,0},H_{3,0}\}$ cannot belong to ${A}$. Since $l$ (resp. $g$) belongs to $\Psi(u,({{\cal A}}_1,\dotsc,{{\cal A}}_k))$, the object $\{H_{1,0},H_{3,0}\}$ (resp. $\{H_{2,0},H_{3,0}\}$) belongs to ${A}$ and hence $\{H_{1,0},H_{3,0}\}={{\cal A}}_i$ and $\{H_{2,0},H_{3,0}\}={{\cal A}}_j$ for some $i$ and $j$ in $\{1,\dotsc,k\}$. Since $l$ (resp. $g$) belongs to $\Psi(u,({{\cal A}}_1,\dotsc,{{\cal A}}_k))$, we have $i<j$ (resp. $j<i$). This is a contradiction and hence $u$ is not a global optimum. Since $l$ is free, it is a local optimum for some objects scheme. We will now prove that $l$ is not an u-local optimum (then, by virtue of Remark \[rem:glob\_uloc\_loc\], it is not a global optimum, either). 
If ${A}$ is an objects scheme such that $\Phi(l,A)$ is empty, the objects $\{H_{1,0},H_{2,0}\}$, $\{H_{1,0},H_{3,0}\}$, $\{H_{2,0},H_{3,0}\}$ and $\{H_{1,0},H_{2,0},H_{3,0}\}$ cannot belong to ${A}$. Therefore, ${A}$ is $\{\{H_{1,0}\},\{H_{2,0}\},\{H_{3,0}\}\}$ and hence $\Psi(l)$ equals $\Psi(l,\{\{H_{1,0}\},\{H_{2,0}\},\{H_{3,0}\}\})$. Since $u$ does not belong to the last-mentioned basin of attraction, the social outcome $l$ is not an u-local optimum. The social rule $\succ$ is the smallest one that has a global optimum, an u-local optimum and a local optimum. Indeed, a social rule with less than eight nodes can have at most two features; if there is only one feature, there is at most one local optimum (which is actually a global optimum); if there are two features assuming $m_1$ and $m_2$ values (the smaller of which is $2$), by virtue of Theorem \[teo:max\_free\] there are at most $\min\{m_1,m_2\}=2$ local optima. [99]{} <span style="font-variant:small-caps;">G. Amendola</span>, [*`FOSoR`*]{},\ http://www.dm.unipi.it/$\sim$amendola/files/software/fosor/. <span style="font-variant:small-caps;">G. Amendola</span>, [*`FOSoRStat`*]{},\ http://www.dm.unipi.it/$\sim$amendola/files/software/fosorstat/. <span style="font-variant:small-caps;">K. Arrow</span>, “Social Choice and Individual Values,” Wiley, New York, 1951. <span style="font-variant:small-caps;">Y.M. Baryshnikov</span>, *Topological and discrete social choice: in a search of a theory*, Social Choice and Welfare [**14**]{} (1997), 199–209. <span style="font-variant:small-caps;">N. Bourbaki</span>, “Groupes et algèbres de Lie, Chap. IV-VI,” Masson-Dunod, Paris, 1968. <span style="font-variant:small-caps;">G. Chartrand – L. Lesniak</span> “Graphs & digraphs,” Fourth edition. Chapman & Hall/CRC, Boca Raton, FL, 2005. <span style="font-variant:small-caps;">G. Chichilnisky</span>, *Social choice and topology of spaces of preferences*, Advances in Mathematics [**37**]{} (1980), 165–176. 
<span style="font-variant:small-caps;">G. Chichilnisky</span>, “Social Choice and Game Theory: Recent Results with a Topological Approach,” Social Choice and Welfare, P.K. Pattanaik and M. Salles, North Holland , Amsterdam, 1983, 79–102. <span style="font-variant:small-caps;">G. Chichilnisky</span>, *Action of Symmetry Groups*, Social Choice and Welfare [**13**]{} (1996), 357–364. <span style="font-variant:small-caps;">M. J. A. N. de Caritat (marquis de Condorcet)</span>, “Essai sur l’Application de l’Analyse aux Probabilités de Decision Rendue à la Pluralité des Voix,” Imprimerie Royale, Paris, 1785. <span style="font-variant:small-caps;">B. Eckmann</span>, *Räume mit Mittelbildungen*, Comment. Math. Helv. [**28**]{} (1954), 329–340. <span style="font-variant:small-caps;">B. Eckmann – T. Ganea – P. J. Hilton</span>, “Generalized means, Studies in Mathematical Analysis,” Stanford University Press, 1962. <span style="font-variant:small-caps;">W. Kocay – D. L. Kreher</span>, “Graphs, algorithms, and optimization,” Discrete Mathematics and its Applications (Boca Raton), Chapman & Hall/CRC, Boca Raton, FL, 2005. <span style="font-variant:small-caps;">H. G. Landau</span>, *On dominance relations and the structure of animal societies. I. Effect of inherent characteristics*, Bull. Math. Biophys. [**13**]{} (1951), 1–19. <span style="font-variant:small-caps;">H. G. Landau</span>, *On dominance relations and the structure of animal societies. II. Some effects of possible social factors*, Bull. Math. Biophys. [**13**]{} (1951), 245–262. <span style="font-variant:small-caps;">H. G. Landau</span>, *On dominance relations and the structure of animal societies. III. The condition for a score structure*, Bull. Math. Biophys. [**15**]{} (1953), 143–148. <span style="font-variant:small-caps;">L. Marengo – S. Settepanella</span>, *Social choice among complex objects*, LEM, Working Paper Series, WP 2010/02 (2010). <span style="font-variant:small-caps;">R. 
McKelvey</span>, *General conditions for global intransitivities in formal voting models*, Econometrica [**47**]{} (1979), 1086–1112. <span style="font-variant:small-caps;">J. W. Moon</span>, “Topics on tournaments,” Holt, Rinehart and Winston, New York-Montreal, Que.-London, 1968. <span style="font-variant:small-caps;">P. Orlik and M. Terao</span>, “Arrangements of Hyperplanes,” Springer-Verlag, Berlin, 1992. <span style="font-variant:small-caps;">D. Saari</span>, “Geometry of Voting,” Springer-Verlag, New York, 1994. <span style="font-variant:small-caps;">D. Saari</span>, *Complexity and the geometry of voting*, Math. Comp. Modelling [**48**]{} (2008), 1335–1356. <span style="font-variant:small-caps;">D. Saari</span>, *Mathematical structure of voting paradoxes 1: Pairwise vote*, Economic Theory [**15**]{} (2000), 1–53. <span style="font-variant:small-caps;">D. Saari</span>, *Mathematical structure of voting paradoxes 2: Positional voting*, Economic Theory [**15**]{} (2000), 55–101. <span style="font-variant:small-caps;">M. Salvetti</span>, *Topology of the complement of real hyperplanes in ${\mathbb{C}}^N$*, Inventiones Mathematicae [**88**]{} (1987), 603–618. <span style="font-variant:small-caps;">H. A. Simon</span>, “The Sciences of the Artificial,” MIT Press, Cambridge, MA, 2nd ed., 1982. <span style="font-variant:small-caps;">P. F. Stadler</span>, *Fitness Landscapes*, Biological Evolution and Statistical Physics, Springer-Verlag, Berlin, (2002), 187-207. <span style="font-variant:small-caps;">H. Terao</span>, *Chambers of Arrangements of Hyperplanes and Arrow’s Impossibility Theorem*, Advances in Mathematics [**214**]{} (2007), 366–378. <span style="font-variant:small-caps;">S. Weinberger</span>, *On the topological social choice model*, Journal of Economic Theory [**115**]{} (2004), 377–384. “Modularity: understanding the development and evolution of natural complex systems,” MIT press, Cambridge, MA, 2005. 
[^1]: Department of Mathematics and Applications, University of Milano-Bicocca (Type A Research Fellowship, formerly “E. De Giorgi” grant from the Department of Mathematics of the University of Salento), gennaro.amendola@unimib.it [^2]: LEM, Scuola Superiore Sant’Anna, s.settepanella@sssup.it [^3]: Cited by B. Eckmann.
--- abstract: '$ \textsf{Yarel} $ is a core reversible programming language that implements a class of permutations, defined recursively, which are primitive recursive complete. The current release of its syntax and operational semantics, implemented by compiling to Java, is `0.1.0`, according to [Semantic Versioning 2.0.0](https://semver.org/#semantic-versioning-200). $ \textsf{Yarel} $ comes with [an IDE](https://yarel.di.unito.it), developed as an [Eclipse](https://www.eclipse.org/) plug-in by means of [Xtext](https://www.eclipse.org/Xtext/).' author: - Claudio Grandi - Dariush Moshiri - Luca Roversi bibliography: - 'bibliography.bib' title: Introducing Yet Another REversible Language ---
--- abstract: 'Neural controllable text generation is an important area gaining attention due to its plethora of applications. In this work, we provide a new schema of the pipeline of the generation process by classifying it into five modules. We present an overview of the various techniques used to modulate each of these five modules to provide control of attributes in the generation process. We also provide an analysis of the advantages and disadvantages of these techniques and open paths to develop new architectures based on the combination of the modules described in this paper.' author: - | Shrimai Prabhumoye, Alan W Black, Ruslan Salakhutdinov\ School of Computer Science\ Carnegie Mellon University\ Pittsburgh, PA, USA\ `sprabhum, awb, rsalakhu@cs.cmu.edu`\ bibliography: - 'acl2020.bib' title: Exploring Controllable Text Generation Techniques ---

Introduction
============

Controllable text generation is the task of generating realistic sentences whose attributes can be controlled. The attributes to control can range from stylistic ones, such as politeness, sentiment, or formality; demographic attributes of the person writing the text, such as gender or age; content, such as the information, keywords, or entities to be generated; to the ordering of information and events, as in plot summaries. Controlling various attributes of text generation has manifold applications. For instance, in the dialogue response generation task, work has been done on controlling persona [@zhang2018personalizing; @li2016persona], controlling various aspects of the response such as politeness [@niu:2018], formality, and authority, grounding the responses in an external source of information [@zhou2018dataset; @dinan2018wizard; @ghazvininejad2018knowledge], and controlling the topic sequence [@tang2019target; @prabhumoye2020i].
Another application is story generation, where one can control the ending [@peng2018towards], the persona [@chandu2019my], the plot [@yao2019plan], and the topic sequence [@huang2019hierarchically]. Controllable text generation is also used to modulate the formality and politeness of emails [@madaan2020politeness]. Report generation can be controlled to pull disparate source documents, which may share a set of sources, into a coherent unified whole, as in Wikipedia article generation [@liu2018generating; @prabhumoye-etal-2019-towards].

![image](bg-control-modules.png)

Although there is a large body of prior work in controllable text generation, there is no unifying theme. Each work addresses a specific task in a specific context. In this paper we outline a new schema which connects prior work and provides an insight into various aspects of controllable text generation. The schema contains five modules that cover the overall generation pipeline and provide an understanding of the effect of each component on the generation process. Prior work has focused on specific parts of the schema that we outline here, and we provide insights into their similarities. We provide an overview of these modules and also present an exploration of the various techniques used to control and update each of these modules. Most controllable text generation tasks can be framed as conditional language generation tasks. They have an input or a [*source*]{} sequence ${\mathbf{U}}$ and an output or a [*target*]{} sequence ${\mathbf{Y}}$ to be generated. In this case, we model the probability of the [*target*]{} sequence conditioned on the [*source*]{} sequence, given by $P({\mathbf{Y}} | {\mathbf{U}}) = \prod_{t=1}^T P({\mathbf{y}}_t |{\mathbf{U}}, {\mathbf{y}}_{<t})$. The generation of the target tokens of the sequence ${\mathbf{Y}}$ unfolds as a time series where each token ${\mathbf{y}}_t$ is generated at time step $t$.
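The factorization and the resulting left-to-right generation loop can be sketched as follows (Python; the per-step distribution, the toy vocabulary, and all scores are made-up stand-ins for a neural model conditioned on ${\mathbf{U}}$ and the prefix):

```python
import math

VOCAB = ["a", "b", "c"]  # toy vocabulary

def step_distribution(source, prefix):
    # Stand-in for P(y_t | U, y_<t): in a real model this is a softmax over
    # the vocabulary produced by a network conditioned on source and prefix.
    scores = [1.0 + 0.1 * len(prefix) * i for i in range(len(VOCAB))]
    z = sum(scores)
    return [sc / z for sc in scores]

def sequence_log_prob(source, target):
    # log P(Y | U) = sum over t of log P(y_t | U, y_<t)
    logp = 0.0
    for t, token in enumerate(target):
        probs = step_distribution(source, target[:t])
        logp += math.log(probs[VOCAB.index(token)])
    return logp

def greedy_decode(source, max_len=3):
    # Autoregressive generation: each predicted token is fed back as input.
    prefix = []
    for _ in range(max_len):
        probs = step_distribution(source, prefix)
        prefix.append(VOCAB[max(range(len(VOCAB)), key=probs.__getitem__)])
    return prefix

print(sequence_log_prob("src", ["a", "b"]))
print(greedy_decode("src"))
```

The control techniques surveyed below intervene at different points of exactly this loop.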
At a given time step $t$, a generative model takes in the previous hidden state ${\mathbf{h}}_{t-1}$ and the input ${\mathbf{x}}_t$ at the current time step. It performs a set of operations denoted by $\boldsymbol{G}$ to produce the output ${\mathbf{o}}_t$, which is used to predict the token ${\mathbf{\hat{x}}}_t$. The ground truth token to be generated is denoted by ${\mathbf{y}}_t$. As shown in Figure \[fig:bg-overview\], we have identified the following five modules for controlling the generation process: (1) the [**External Input**]{} module is responsible for the initialization ${\mathbf{h}}_0$ of the generation process; (2) the [**Sequential Input**]{} module is the input ${\mathbf{x}}_t$ at each time step of the generation; (3) the [**Generator Operations**]{} module performs consistent operations or calculations on all the inputs at each time step; (4) the [**Output**]{} module is the output ${\mathbf{o}}_t$, which is further projected onto the vocabulary space to predict the token ${\mathbf{\hat{x}}}_t$ at each time step; (5) the [**Training Objective**]{} module takes care of the loss functions used for training the generator. This schema provides an insight into the contributions of the various modules to controllable text generation. The main advantage of this schema is that it can be used with any algorithmic paradigm: sequence-to-sequence, probabilistic models, adversarial methods, reinforcement learning, etc. The schema can also be used with non-autoregressive algorithms which may generate text using graphical structures like trees [@welleck2019non; @guo2019non]. In this paper, we focus on how this schema can be used to describe controllable text generation, focusing particularly on the use of autoregressive models. This work paves the way for designing new architectures based on our schema. This can be done by identifying promising techniques for each module and then combining them.
Our schema can also potentially be used for applying these techniques to new tasks of a similar nature. It also provides easy access to appropriate comparisons with existing techniques for those new architectures. The prior work on unifying text generation models has mostly focused on building efficient tool-kits and modular views of generation. For instance, [@reiter2000buildNLG] details seven conceptually distinct sub-tasks that describe the generation process. These sub-tasks can be modelled separately or in some cases they may interleave. In [@reiter2000buildNLG], these seven sub-tasks are primarily characterized as content or structure tasks. Note that this work is not specific to neural text generation. Our work focuses specifically on controlling attributes in the neural text generation process. We don’t divide the generation pipeline into several sub-tasks but we divide the neural text generation process into modules, all of which are required for generation. In [@hu2019texar], the focus is on building a toolkit for various text generation tasks based on the three properties of versatility, modularity and extensibility. This work lists a few model architectures and learning paradigms for various text generation tasks. In our work, we focus only on the generation process of controllable text generation tasks. We specifically detail the inputs, outputs and operations of the generation process. We do not provide any specific examples of architectures but provide an overview of the basic underlying modules which can be used with any learning paradigm. @xie2017neural provides a practical guide to the neural generation process, describing it in terms of initialization, optimization, regularization and decoding strategies. Our work, on the other hand, does not delve into the implementation details of the generation pipeline but provides an overall schema for understanding the various components involved.
In the remainder of the paper, we denote the representation of the control attribute by ${\mathbf{s}}$ and the representation of the input or [*source*]{} sentence returned by the encoder as ${\mathbf{h}}_e$. In what follows, we first describe the possible ways of controlling attributes by modulating the [*external input*]{} in [§\[sec:bg-dec-init\]]{}, the [*sequential input*]{} in [§\[sec:bg-dec-inp\]]{}, the [*generator operations*]{} in [§\[sec:bg-gen\]]{}, the [*output*]{} in [§\[sec:bg-out\]]{} and the [*training objective*]{} in [§\[sec:bg-train-obj\]]{}. At the end of each section, we provide an analysis of each of the techniques described and how they fit together.

External Input {#sec:bg-dec-init}
==============

In this section we discuss the different techniques which can be used to control the generation process by updating the initialization of the generator ${\mathbf{h}}_0$. In the standard generation process, ${\mathbf{h}}_0$ is equal to ${\mathbf{h}}_e$. This is marked as module (1) in Figure \[fig:bg-overview\].

Arithmetic or Linear Transform {#sec:bg-dec-init-elem}
------------------------------

One of the easiest ways to control the generation is to concatenate a control vector ${\mathbf{s}}$ to the output of the encoder ${\mathbf{h}}_e$. The external input of the decoder ${\mathbf{h}}_0$ will be $[{\mathbf{h}}_e; {\mathbf{s}}]$, where $[a;b]$ denotes concatenation. Here, the control vector ${\mathbf{s}}$ provides the generator with a strong signal to guide the generation process. @fu:2017 use this technique to control the style representation for their generator. The encoder builds a representation that is devoid of style and only retains content. The control vector for style is then concatenated to the encoder representation to initialize the decoder.
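A minimal sketch of this concatenation (Python with NumPy; the dimensions and the random vectors are made-up stand-ins for a trained encoder output and a learned attribute embedding):

```python
import numpy as np

rng = np.random.default_rng(0)

hidden_dim, control_dim = 8, 4
h_e = rng.normal(size=hidden_dim)  # encoder representation of the source
s = rng.normal(size=control_dim)   # control attribute embedding (e.g. style)

# External input to the decoder: h_0 = [h_e; s]
h_0 = np.concatenate([h_e, s])
print(h_0.shape)
```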
This technique is commonly used in [@ghazvininejad2018knowledge; @zhou2018dataset; @dinan2018wizard] to concatenate information from external sources to the dialogue context to generate dialogue responses. @chandu2019my concatenate a personality representation ${\mathcal{P}}$ derived from a separate corpus to generate visual stories. They also experiment with a simple arithmetic operation on ${\mathbf{h}}_e$ given by ${\mathbf{h}}_0 = {\mathbf{h}}_e - {\mathcal{S}} + {\mathcal{P}}$ to get the initialization of the generator (here ${\mathcal{S}}$ denotes the average representation of the story). They observed that while the concatenation technique is better at preserving the meaning of the generated story, the arithmetic operation provides a better signal of the personality for the generation process. @hoang2016incorporating use both the concatenation technique and a linear transform of ${\mathbf{s}}$ to obtain ${\mathbf{h}}_0$ for the language modelling task. The control vectors in this case represent metadata such as keywords, topics, etc. In the case of the linear transform, ${\mathbf{h}}_0 = {\mathtt{tanh}}({\mathbf{W}}_1 {\mathbf{h}}_e + {\mathbf{W}}_2 {\mathbf{s}} + {\mathbf{b}})$. The paper also explores adding the control vector to the encoder representation (${\mathbf{h}}_0 = {\mathbf{h}}_e + {\mathbf{s}}$). In the case of addition, the resulting ${\mathbf{h}}_0$ is an averaged representation of the input representation ${\mathbf{h}}_e$ and ${\mathbf{s}}$. Information could be lost in this case as control is not explicit. In the case of concatenation, if the size of the control vector ${\mathbf{s}}$ is too small compared to the context vector ${\mathbf{h}}_e$, then ${\mathbf{s}}$ is overshadowed by ${\mathbf{h}}_e$ and the generator may not be able to pay attention to ${\mathbf{s}}$. Hence it is important to choose comparable dimensions for these two vectors. But this increases the size of the model considerably and could be quite costly.
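The addition, concatenation, and linear-transform variants discussed above can be sketched side by side (NumPy; random weights stand in for learned parameters and the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
d_h, d_s = 8, 8
h_e = rng.normal(size=d_h)  # encoder representation
s = rng.normal(size=d_s)    # control vector

# (1) Addition: h_0 blends h_e and s, so the control signal is implicit.
h0_add = h_e + s

# (2) Concatenation: control is explicit, but the state dimension grows.
h0_cat = np.concatenate([h_e, s])

# (3) Linear transform: learned weights mix the two vectors while keeping
#     the state dimension fixed; here the weights are random stand-ins.
W1 = rng.normal(size=(d_h, d_h))
W2 = rng.normal(size=(d_h, d_s))
b = np.zeros(d_h)
h0_lin = np.tanh(W1 @ h_e + W2 @ s + b)

print(h0_add.shape, h0_cat.shape, h0_lin.shape)
```

The shapes make the trade-off visible: addition and the linear transform keep ${\mathbf{h}}_0$ at the original size, while concatenation doubles it here.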
The linear transform avoids these issues and performs better than the other two techniques in @hoang2016incorporating.

Stochastic Changes {#bg:sec-stochastic}
------------------

@kingma2013auto introduce the variational auto-encoder (VAE), where a continuous latent variable ${\mathbf{z}}$ is drawn stochastically from a Gaussian distribution. The initialization of the generator ${\mathbf{h}}_0$ is based on this drawn latent variable. @bowman-etal-2016-generating use this concept for generating sentences from this continuous latent representation. This process of changing the encoder state ${\mathbf{h}}_e$ can only be used with the Kullback-Leibler (KL) Divergence training objective described in [§\[bg:sec-loss-kl\]]{}. In [@wang-etal-2019-topic], a VAE is used to guide the generation process with the topics of a document. A Gaussian mixture model is used to incorporate topics into the latent variables. In [@Xu2020unsupervisedVAE], a VAE is used to control the sentiment attribute in the style transfer task by constraining the posterior mean to a learned probability simplex. Such a design of controllable text generation works when the control attributes can be represented as latent variables, for example style, topics, or strategies. This design is difficult to apply to content-grounded text generation tasks where specific information, keywords or entities have to guide the generation process.

Decompose {#bg:sec-decompose}
---------

You can decompose the encoder representation ${\mathbf{h}}_e$ into multiple subspaces, each of which signifies a different attribute you would like to control. @liu2018learning split the encoder representation ${\mathbf{h}}_e$ into two components, one which represents the structure in the document and the other the semantic information. This formulation was used by [@balachandran2020strucsum] for controlling structure in abstractive summarization. This work performs the split with respect to the dimensions of ${\mathbf{h}}_e$.
The method forces the first $n$ dimensions of ${\mathbf{h}}_e$ to capture meaning and the remaining dimensions to capture structure. @balachandran2020strucsum also show a quantitative and qualitative analysis of the types of document structures learnt by this technique. @romanov-etal-2019-adversarial decompose the encoder representation ${\mathbf{h}}_e$ into a form vector ${\mathbf{f}}$ and a meaning vector ${\mathbf{m}}$. During the training phase, a [*discriminator*]{} enforces that ${\mathbf{m}}$ does not carry any information about the form using an adversarial loss, and a [*motivator*]{} is used for a motivational loss that encourages ${\mathbf{f}}$ to carry the information about the form. The generation process can then be guided to adhere to the desired target form. As opposed to splitting ${\mathbf{h}}_e$ with respect to dimensions, this work learns subspaces ${\mathbf{W_m}}$ and ${\mathbf{W_f}}$ given by ${\mathbf{m}} = {\mathtt{tanh}}({\mathbf{W}}_m{\mathbf{h}}_e + {\mathbf{b}}_m)$ and ${\mathbf{f}} = {\mathtt{tanh}}({\mathbf{W}}_f{\mathbf{h}}_e + {\mathbf{b}}_f)$ respectively. When ${\mathbf{h}}_e$ is projected on ${\mathbf{W}}_m$, we get the meaning vector ${\mathbf{m}}$, and similarly when it is projected on ${\mathbf{W}}_f$ we get the form vector ${\mathbf{f}}$. This work shows qualitatively how ${\mathbf{m}}$ and ${\mathbf{f}}$ are learnt in the subspaces using t-SNE plots. It also shows quantitatively the use of ${\mathbf{m}}$ and ${\mathbf{f}}$ in downstream paraphrase detection tasks. This is an excellent method for building interpretable representations of control attributes. However, the effectiveness of this technique has not yet been proven in the style transfer task or the abstractive summarization task. In both of the above-mentioned works, the models learn interpretable representations of control attributes but were not able to beat state-of-the-art methods in their respective tasks.
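The projection-based decomposition can be sketched as follows (NumPy; the random matrices stand in for the learned projections ${\mathbf{W}}_m$ and ${\mathbf{W}}_f$, and the dimension-wise split is included for contrast):

```python
import numpy as np

rng = np.random.default_rng(2)
d_h, d_m, d_f = 8, 5, 3
h_e = rng.normal(size=d_h)  # encoder representation

# Learned projections: m = tanh(W_m h_e + b_m) and f = tanh(W_f h_e + b_f).
W_m, b_m = rng.normal(size=(d_m, d_h)), np.zeros(d_m)
W_f, b_f = rng.normal(size=(d_f, d_h)), np.zeros(d_f)
m = np.tanh(W_m @ h_e + b_m)  # meaning vector
f = np.tanh(W_f @ h_e + b_f)  # form vector

# For contrast, the dimension-wise split: the first n dimensions are forced
# to capture one attribute and the remaining ones the other.
n = d_h // 2
meaning_part, structure_part = h_e[:n], h_e[n:]

print(m.shape, f.shape, meaning_part.shape, structure_part.shape)
```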
It is also worth noting that learning good decomposed vectors is especially hard when no supervision is provided on what the decomposed components are supposed to learn. This technique works well when the representation space of the input ${\mathbf{x}}$ can be decomposed into subspaces which can represent the control attributes. This means that the input ${\mathbf{x}}$ needs to contain a signal of the control attributes. It is unlikely to work when the control attributes need to be provided externally. For example, in the case of the content-grounded generation tasks described in [@prabhumoye-etal-2019-towards; @dinan2018wizard; @zhou2018dataset], the input may not necessarily contain the content that needs to be generated. A separate input of the content to be generated is provided in these cases.

External Feedback {#bg:sec-inp-ext}
-----------------

A regularizer is often used to control the external input ${\mathbf{h}}_0$ to the generator. In many cases, an adversarial loss to manipulate the latent space is used as an external feedback mechanism. This essentially controls the latent space of the encoder, which is eventually provided as an initialization to the generator. In [@FuTan], a multi-layer perceptron (MLP) is used for predicting the style labels from ${\mathbf{h}}_0$. Similarly, an adversarial loss is also used in [@wang2019controllable] to control the latent representation ${\mathbf{h}}_0$ for style attributes. In [@romanov-etal-2019-adversarial], an adversarial loss is used to ensure that the meaning representation ${\mathbf{m}}$ does not carry any style signals. The adversarial loss is obtained by training a discriminator which takes as input a representation ${\mathbf{m}}$ and predicts whether it carries the target style signal. Similarly, this work also employs a motivator loss, the opposite of the adversarial loss, to ensure that the style representation ${\mathbf{f}}$ actually does carry the stylistic information.
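The adversarial feedback loop can be sketched with a logistic-regression discriminator standing in for the MLP style classifier (NumPy; the single gradient-ascent step on the discriminator loss is a simplified stand-in for gradient reversal during joint training):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 8
m = rng.normal(size=d)  # meaning representation produced by the encoder
w = rng.normal(size=d)  # discriminator weights (style classifier)
style_label = 1.0       # true style of the input sentence

# Discriminator: predicts the style from m with a sigmoid.
p = np.clip(1.0 / (1.0 + np.exp(-(w @ m))), 1e-12, 1 - 1e-12)
disc_loss = -(style_label * np.log(p) + (1.0 - style_label) * np.log(1.0 - p))

# Gradient of the discriminator loss with respect to m.
grad_m = (p - style_label) * w

# Adversarial signal: the encoder moves m *up* the discriminator loss so
# that the style becomes harder to predict from the meaning vector.
lr = 0.1
m_adv = m + lr * grad_m
p_adv = 1.0 / (1.0 + np.exp(-(w @ m_adv)))
print(disc_loss, -np.log(p_adv))  # the loss increases after the update
```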
@john-etal-2019-disentangled use multiple losses to control the style and content information represented in ${\mathbf{h}}_0$. The discriminator which provides the external feedback has to be trained jointly with the generator. This technique can be useful together with the decompose technique to ensure that the decomposed sub-spaces represent the desired control attributes.

Sequential Input {#sec:bg-dec-inp}
================

In this section we discuss the different techniques which can be used to manipulate the sequential input ${\mathbf{x}}_t$ to the decoder at each time step. Here, ${\mathbf{x}}_t$ denotes the word embedding of the token at time step $t$. This is marked as position (2) in Figure \[fig:bg-overview\].

Arithmetic or Linear Transform {#arithmetic-or-linear-transform}
------------------------------

Similar to changing the initialization, we can change the input to the decoder by concatenating the information at each time step with some additional control vector ${\mathbf{s}}$. Typically, the teacher forcing method [@williams1989learning] is used to train the generator. At time step $t$, the generator takes as input the word embedding ${\mathbf{x}}_t$ of the word predicted at step $t-1$ and predicts the word ${\mathbf{y}}_t$ to be generated at the current time step. Note that ${\mathbf{x}}_t = {\mathbf{y}}_{t-1}$. The input ${\mathbf{x}}_t$ can be concatenated with ${\mathbf{s}}$ at each time step to control the generation process. Hence, ${\mathbf{\tilde{x}}}_t = [{\mathbf{x}}_t; {\mathbf{s}}]$. @noraset2017definition use this technique in the task of definition modeling. They concatenate the word embedding vector ${\mathbf{s}}$ of the word to be defined at each time step of the definition generation process. Unfortunately, for this task, this technique has not proved to be effective compared to other techniques of controlling the generation.
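Teacher forcing with a control vector appended at each step, i.e. ${\mathbf{\tilde{x}}}_t = [{\mathbf{x}}_t; {\mathbf{s}}]$, can be sketched as (NumPy; random vectors stand in for word embeddings):

```python
import numpy as np

rng = np.random.default_rng(4)
d_x, d_s, seq_len = 6, 3, 5
s = rng.normal(size=d_s)  # control vector, fixed across time steps
# Embeddings of the ground-truth tokens y_0 ... y_{T-1} (random stand-ins).
gold = [rng.normal(size=d_x) for _ in range(seq_len)]

inputs = []
for t in range(1, seq_len):
    x_t = gold[t - 1]                   # teacher forcing: x_t = y_{t-1}
    x_tilde = np.concatenate([x_t, s])  # controlled input at step t
    inputs.append(x_tilde)

print(len(inputs), inputs[0].shape)
```

Unlike the external input techniques, the same control signal ${\mathbf{s}}$ is visible to the generator at every step rather than only at initialization.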
@zhou2018dataset concatenate the hidden representation of the external source of information ${\mathbf{s}}$ to each time step of dialogue response generation. Similarly, @prabhumoye-etal-2019-towards also concatenate the hidden representation of the external source of information $s$ to each time step of the Wikipedia update generation process. In this work as well, the results of this technique were not as impressive as simply concatenating the control context to the input of the encoder. @harrison2019maximizing concatenate a side constraint $s$ which represents style and personality into the generation process. For this task of generating language from meaning representations with stylistic variation, this method performed better than conditioning the encoder with the side constraint in terms of the BLEU metric. @chandu2019my also concatenate the personality representation ${\mathcal{P}}$ at each time step of the story generation process. This is used to control the personality of the visual stories. In addition to concatenation, this work proposes to modify the sequential input as ${\mathbf{\tilde{x}}}_t = {\mathbf{x}}_t - {\mathcal{S}} + {\mathcal{P}}$ (here ${\mathcal{S}}$ denotes the average representation of the story and ${\mathcal{P}}$ denotes the representation of the personality). The latter technique is better at generating personality-conditioned stories than the concatenation technique. Neither of these techniques proves to be conclusively better than making similar changes to the external input module ([§\[sec:bg-dec-init-elem\]]{}). Note that in this technique, changes are made directly to the input of the generator and not to the context, which is the case with external input. Also, most of the prior work has focused on recurrent neural networks and their variants for making such changes. It would be interesting to see such changes made to transformers [@vaswani2017attention].
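The arithmetic variant ${\mathbf{\tilde{x}}}_t = {\mathbf{x}}_t - {\mathcal{S}} + {\mathcal{P}}$ above can be sketched in a few lines. This is a toy illustration with made-up vectors; the function name is hypothetical, and ${\mathcal{S}}$ and ${\mathcal{P}}$ stand in for the story-average and persona representations:

```python
import numpy as np

def personality_shift(x_t, story_avg, persona):
    """Sketch of x̃_t = x_t - S + P: subtract the average story
    representation and add the persona representation, shifting the
    token embedding toward the persona's region of the embedding space.
    """
    return x_t - story_avg + persona

x_t = np.array([1.0, 2.0, 3.0])        # token embedding
S_avg = np.array([1.0, 1.0, 1.0])      # average story representation
P = np.array([0.0, 0.5, 0.0])          # persona representation
assert np.allclose(personality_shift(x_t, S_avg, P), [0.0, 1.5, 2.0])
```

Unlike concatenation, this keeps the input dimension fixed, so the same decoder weights can be reused across personas; the trade-off is that all three vectors must live in the same space.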
Generator Operations {#sec:bg-gen} ==================== This module takes in the external input ${\mathbf{h}}_0$ and the sequential input ${\mathbf{x}}_t$ at time step $t$ and performs computations to return an output ${\mathbf{o}}_t$. The same set of computations ($\boldsymbol{G}$) is performed at each time step. Different sets of operations can be performed to compute ${\mathbf{o}}_t$, as listed below. One can also change the operations based on the control vector ${\mathbf{s}}$ to compute ${\mathbf{o}}_t$. This is shown as position (3) in Figure \[fig:bg-overview\]. Recurrent Neural Networks {#sec:bg-gen-rnn} ------------------------- Recurrent Neural Networks (RNNs) are designed to model sequential information. RNNs perform the same operations for every element of a sequence, with the output depending on previous computations. This recurrence serves as a form of memory. It allows contextual information to flow through the network so that relevant outputs from previous time steps can be applied to network operations at the current time step. Theoretically, RNNs can make use of information in arbitrarily long sequences, but empirically, they are limited to looking back only a few steps. Long Short-Term Memory (LSTM) units [@hochreiter1997long] are a type of RNN that has an additional ‘memory cell’ beyond the standard units of basic RNNs. The memory cell can maintain information in memory for long periods of time. A set of gates is used to control when information enters the memory, when it is output, and when it is forgotten. This architecture lets LSTMs learn longer-term dependencies and mitigates the vanishing gradient problem of basic RNNs. Gated Recurrent Units (GRUs) [@cho2014learning] are similar to LSTMs, but use a simplified structure designed to adaptively capture dependencies of different time scales. They also use a set of gates to control the flow of information, but they do not use separate memory cells and they use fewer gates.
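To make the gating above concrete, here is one standard LSTM step in plain NumPy. This is a generic textbook sketch, not tied to any cited system; the stacked-parameter layout (`W`, `U`, `b` holding all four transforms) is an assumption of this example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One standard LSTM step. W (4h x d), U (4h x h), b (4h,) stack
    the input, forget, output, and candidate transforms. The gates
    decide what enters the memory cell c_t, what is forgotten, and
    what is exposed as the hidden state h_t.
    """
    z = W @ x_t + U @ h_prev + b
    h = len(h_prev)
    i_t = sigmoid(z[0:h])             # input gate
    f_t = sigmoid(z[h:2*h])           # forget gate
    o_t = sigmoid(z[2*h:3*h])         # output gate
    c_tilde = np.tanh(z[3*h:4*h])     # candidate cell value
    c_t = f_t * c_prev + i_t * c_tilde
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t

d, h = 5, 4
rng = np.random.default_rng(0)
W, U, b = rng.normal(size=(4*h, d)), rng.normal(size=(4*h, h)), np.zeros(4*h)
h_t, c_t = lstm_step(rng.normal(size=d), np.zeros(h), np.zeros(h), W, U, b)
assert h_t.shape == (h,) and c_t.shape == (h,)
```

The controllable variants discussed next modify exactly these gate computations, either by factoring the input-to-gate matrices or by adding extra gated terms.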
The computations of the RNN or its variants can be modified to account for the control attribute. Additional gates can be added, or the control attribute can be provided as an additional input to the standard gates of RNNs. @gan2017stylenet propose a variant of the LSTM model, named factored LSTM, which controls the style representation in the image captioning task. The parameters of the LSTM module which are responsible for transforming the input ${\mathbf{x}}_t$ are factored into three components ${\mathbf{U}}$, ${\mathbf{S}}$ and ${\mathbf{V}}$. The operations of the input (${\mathbf{i}}_t$), forget (${\mathbf{f}}_t$) and output gate (${\mathbf{o}}_t$) are given by: $$\begin{aligned} {\mathbf{i}}_t &=& {\mathtt{sigmoid}}({\mathbf{U}}_{ix} {\mathbf{S}}_{ix} {\mathbf{V}}_{ix} {\mathbf{x}}_t + {\mathbf{W}}_{ih} {\mathbf{h}}_{t-1}) \\ {\mathbf{f}}_t &=& {\mathtt{sigmoid}}({\mathbf{U}}_{fx} {\mathbf{S}}_{fx} {\mathbf{V}}_{fx} {\mathbf{x}}_t + {\mathbf{W}}_{fh} {\mathbf{h}}_{t-1}) \\ {\mathbf{o}}_t &=& {\mathtt{sigmoid}}({\mathbf{U}}_{ox} {\mathbf{S}}_{ox} {\mathbf{V}}_{ox} {\mathbf{x}}_t + {\mathbf{W}}_{oh} {\mathbf{h}}_{t-1}) \\ {\mathbf{\tilde{c}}}_t &=& {\mathtt{tanh}}({\mathbf{U}}_{cx} {\mathbf{S}}_{cx} {\mathbf{V}}_{cx} {\mathbf{x}}_t + {\mathbf{W}}_{ch} {\mathbf{h}}_{t-1})\end{aligned}$$ In particular, the matrix set $\{{\mathbf{S}}\}$ is specific to each style in the task and is responsible for capturing the underlying style features in the data. In [@kiddon2016globally], the GRU unit is modified to accommodate extra inputs: the goal ${\mathbf{g}}$ and the agenda items $E^{new}_t$ in the recipe generation task.
The operation of the new component ${\mathbf{\tilde{h}}}_t$ is given by: $$\begin{aligned} {\mathbf{\tilde{h}}}_t = {\mathtt{tanh}}({\mathbf{W}}_h {\mathbf{x}}_t + {\mathbf{r}}_t \odot {\mathbf{U}}_h {\mathbf{h}}_{t-1} + {\mathbf{s}}_t \odot {\mathbf{Y}}{\mathbf{g}} + \\ {\mathbf{q}}_t \odot ({\mathbf{1}}^{T}_{L} {\mathbf{Z}} {\mathbf{E}}^{new}_{t})^{T})\end{aligned}$$ where ${\mathbf{s}}_t$ is a goal select gate and ${\mathbf{q}}_t$ is an item select gate. With this modification, the generation process is controlled for the items to be generated in the recipe and for the goal. @sem_cond_lstm adapt the LSTM to control the dialogue act information in the generation process. The operation to compute the cell value ${\mathbf{c}}_t$ is given by: $${\mathbf{c}}_t = {\mathbf{f}}_t \odot {\mathbf{c}}_{t-1} + {\mathbf{i}}_t \odot {\mathbf{\tilde{c}}}_t + {\mathtt{tanh}}({\mathbf{W}}_d {\mathbf{d}}_t)$$ The dialogue act representation ${\mathbf{d}}_t$ is built using another LSTM cell. RNNs, LSTMs and GRUs are commonly used to model controllable text generation tasks [@prabhumoye-etal-2019-towards; @rao2018dear; @see2017get; @zhou2018dataset; @fu:2017]. Most of these variants still have trouble remembering long sequences and are hence commonly used with an attention mechanism ([§\[sec:bg-out-att\]]{}) on the source sequence. Transformer ----------- The Transformer was proposed by [@vaswani2017attention] and relies on an attention mechanism to draw global dependencies between input and output. The Transformer uses stacked self-attention and point-wise, fully connected layers for both the encoder and decoder. The encoder stacks $N$ identical layers, each of which has two sub-layers. The first sub-layer is a multi-head self-attention mechanism ([§\[sec:bg-out-att\]]{}), and the second sub-layer is a position-wise fully connected feed-forward network. Residual connections are used around each of the sub-layers, followed by layer normalization.
The decoder has an additional third sub-layer, which performs multi-head attention over the output of the encoder stack. Since the attention mechanism is at the core of this generator, the decoder can attend over all positions of the input sequence. Computations over a sequence can be parallelized in this case, making the Transformer faster. The modifications made to the computing units of RNNs mentioned in [§\[sec:bg-gen-rnn\]]{}, which use parameters specific to control attributes such as style, dialogue act, etc., have not been explored with the Transformer architecture. Pre-trained models ------------------ Recently, pre-trained conditional language models such as GPT [@radford2018improving], GPT2 [@radford2019language], and XLNet [@yang2019xlnet] have been used for text generation. Several works have fine-tuned these pre-trained models for downstream controllable text generation tasks [@sudhakar2019transforming; @dinan2018wizard; @urbanek2019learning]. The language modeling aspects of generation, like fluency and grammaticality, are already learnt when pre-trained models are used. These models are hard to fine-tune for sequence-to-sequence tasks such as machine translation, abstractive summarization, etc. BART [@lewis2019bart] is a denoising autoencoder built with a sequence-to-sequence model and is particularly effective when fine-tuned for text generation. Alternatively, T5 [@raffel2019exploring] treats every NLP problem as a “text-to-text” problem, i.e., taking text as input and producing new text as output. Hence, it can be adapted to controllable text generation tasks. @dathathri2019plug propose a Plug and Play Language Model (PPLM) for controllable language generation. It combines a pretrained LM with one or more simple attribute classifiers that guide text generation without any further training of the LM. This is similar to the classifier feedback technique described in [§\[bg:sec-loss-class\]]{}.
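The plug-and-play idea of steering a frozen LM with an external attribute classifier can be illustrated crudely. Note this is only in the spirit of PPLM: the real method perturbs the LM's hidden activations with classifier gradients, whereas this sketch merely reweights the next-token distribution (all names and numbers are invented for illustration):

```python
import numpy as np

def guided_next_token(lm_probs, attr_scores, alpha=1.0):
    """Reweight a frozen LM's next-token distribution by per-token
    attribute scores from a separate classifier, then renormalize.
    alpha trades fluency (small) against attribute strength (large).
    """
    w = lm_probs * np.exp(alpha * attr_scores)
    return w / w.sum()

lm = np.array([0.7, 0.2, 0.1])     # frozen LM's next-token probabilities
attr = np.array([0.0, 2.0, 0.0])   # classifier favors token 1
g = guided_next_token(lm, attr)
assert np.isclose(g.sum(), 1.0) and g[1] > lm[1]
```

The key property shared with PPLM is that the LM's own parameters are never updated; all control comes from the external classifier at decoding time.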
Some of the other techniques described in this paper, such as stochastic changes [§\[bg:sec-stochastic\]]{}, external feedback [§\[bg:sec-inp-ext\]]{} and [§\[bg:sec-out-ext\]]{}, decompose [§\[bg:sec-decompose\]]{}, etc., would be hard to incorporate into pre-trained language models without modifying the model architecture or fine-tuning, which entails the significant cost of retraining. Output {#sec:bg-out} ====== In the standard generation process, ${\mathbf{o}}_t$ is the output of the generator module which is projected to the vocabulary space to predict the token ${\mathbf{\hat{x}}}_t$. Here, we discuss the various techniques used to modulate the sequential output ${\mathbf{o}}_t$ at each time step $t$, before projecting it to the vocabulary space. This is marked as position (4) in Figure \[fig:bg-overview\]. Attention {#sec:bg-out-att} --------- Attention is the most popular way of guiding the generation process. It is typically used to guide the generation process to focus on the source sequence [@bahdanau2015neural]. The attention module takes as input the current hidden state ${\mathbf{h}}_t$ of the generator at each time step $t$. The aim of this module is to determine a context vector ${\mathbf{c}}_t$ that captures relevant source-side information to help predict the token ${\mathbf{\hat{x}}}_t$. In the case of [*global attention*]{}, all the hidden states of the encoder are considered to calculate the context vector ${\mathbf{c}}_t$ [@luong2015effective]. This has the downside of expensive computation, especially for longer source sequences like documents. To overcome this challenge, [*local attention*]{} chooses to focus only on a small subset of the source positions per target word. In this case, ${\mathbf{c}}_t$ is calculated over a window of size $D$ of the source hidden states.
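Global attention with a dot-product score can be sketched in a few lines. This is a minimal illustration assuming the simplest score function; real systems use learned (general or concat) scores and masking:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def global_attention(h_t, enc_states):
    """Score every encoder hidden state against the decoder state h_t,
    softmax the scores into weights a_t, and return the context vector
    c_t as the weighted sum. Local attention would restrict the sum to
    a window of size D around a predicted source position.
    """
    scores = enc_states @ h_t        # one score per source position
    a_t = softmax(scores)            # attention weights
    c_t = a_t @ enc_states           # context vector
    return c_t, a_t

enc = np.random.default_rng(1).normal(size=(5, 8))  # 5 source states
c_t, a_t = global_attention(enc[2], enc)
assert np.isclose(a_t.sum(), 1.0) and c_t.shape == (8,)
```

Because `a_t` is a distribution over source positions, controllable variants can bias it toward attribute-bearing positions before the weighted sum, which is the entry point the persona-reweighting work below exploits.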
@vaswani2017attention view attention as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. This work proposes the simultaneous use of [*scaled dot-product*]{} attention, which helps in parallelizing computation, and [*multi-headed*]{} attention, which allows the model to jointly attend to information from different representation subspaces at different positions. @sudhakar2019transforming use self-attention to control for style by simply adding a special target style token to the source sequence. @dinan2018wizard also use transformers to attend over information from an external document for guided dialogue response generation. [@zhang2018personalizing] use the encoded representation of personas to compute the attention weights ${\mathbf{a}}_t$ at a given time step of the decoder. The attention is re-weighted according to the persona of the response to be generated in dialogue. So far, work has not been done to modulate the attention weights to control for attributes like style, topic or content. External Feedback {#bg:sec-out-ext} ----------------- The output latent space of the generator can be controlled by external feedback. Similar to changing the external input ${\mathbf{h}}_0$, the output latent space can also be changed using an adversarial loss. In [@logeswaran2018content], an adversarial loss is used which encourages the generation of realistic and attribute-compatible sentences. The adversarial loss tries to match the distribution of sentence and attribute vector pairs $({\mathbf{x}}, {\mathbf{s}})$, where the sentence can be either a real or a generated sentence.
@gong-etal-2019-reinforcement also control the output latent space by providing different types of rewards, such as a style reward, a semantic reward and a fluency reward, in a reinforcement learning setup. The discriminator used to obtain the adversarial loss has to be jointly trained with the generator. Arithmetic or Linear Transform {#arithmetic-or-linear-transform-1} ------------------------------ @hoang2016incorporating demonstrate three simple ways of changing the output ${\mathbf{o}}_t$ of an RNN to control for meta information like topic, keywords etc. They show that one can add the control vector ${\mathbf{s}}$ to ${\mathbf{o}}_t$; the modified output ${\mathbf{\tilde{o}}}_t$ is then ${\mathbf{\tilde{o}}}_t = {\mathbf{o}}_t + {\mathbf{s}}$. Similarly, one can create ${\mathbf{\tilde{o}}}_t$ by concatenating ${\mathbf{s}}$ to ${\mathbf{o}}_t$ (${\mathbf{\tilde{o}}}_t = [{\mathbf{o}}_t; {\mathbf{s}}]$). One can also build ${\mathbf{\tilde{o}}}_t$ using a perceptron layer dependent on ${\mathbf{s}}$ and ${\mathbf{o}}_t$. In this case, ${\mathbf{\tilde{o}}}_t$ is given by ${\mathbf{\tilde{o}}}_t = {\mathtt{tanh}}({\mathbf{W}}_o {\mathbf{o}}_t + {\mathbf{W}}_s {\mathbf{s}} + {\mathbf{b}}_o)$. In each of the three cases, the modified output ${\mathbf{\tilde{o}}}_t$ is then projected to the vocabulary space to predict the token ${\mathbf{\hat{x}}}_t$. Training Objective {#sec:bg-train-obj} ================== In this section we describe various methods used to control the generation using objective functions. The output ${\mathbf{o}}_t$ at each time step $t$ of the generation process is projected to the vocabulary space using a linear transform (${\mathbf{\hat{o}}}_t = {\mathbf{W}}_o {\mathbf{o}}_t + {\mathbf{b}}$). A token ${\mathbf{\hat{x}}}_t$ is predicted from the vocabulary by passing ${\mathbf{\hat{o}}}_t$ through a softmax function and taking the max value.
The predicted token ${\mathbf{\hat{x}}}_t$ is compared with the reference token ${\mathbf{y}}_t$ using a loss function. This loss function can be tweaked to ensure that the generated text carries the desired control attributes. General Loss Objective ---------------------- Here, we describe the loss objectives commonly used in natural language generation tasks. These loss objectives do not try to control for any attribute. Instead, they try to ensure fluent, grammatical and diverse generations. #### Cross Entropy Loss: This is the basic loss used to compare the generated tokens with the reference tokens and is used in all text generation processes. At each time step $t$, the generator has to predict a token from the vocabulary. Hence, generation can be seen as a classification problem with the number of classes equal to the vocabulary size $M$. The categorical cross entropy loss is given by: $$- \Sigma^{M}_{c=1} {\mathbf{y}}_{t, c} {\mathtt{log}} (p_{t, c})$$ where $p_{t, c}$ is the probability of the token $c$ at time step $t$. Note that $p_t = {\mathtt{softmax}} ({\mathbf{\tilde{o}}}_t)$ is the probability distribution over the vocabulary. #### Unlikelihood loss: This maintains a set of negative candidates based on repeating tokens or n-grams and frequent tokens [@Welleck2020Neural]. This set is updated at each time step as tokens are generated. The objective works at both the token and sequence level and tries to minimize repetition in generations. It is used at train time in combination with the maximum likelihood objective and can be used for any task. #### Diversity-Promoting objective: This is used to generate a varied set of sentences given similar inputs. Particularly, @li:2015 use Maximum Mutual Information (MMI) as an objective function for the dialogue response generation task. Most generation systems use a maximum likelihood objective, but this objective additionally tries to reduce the proportion of generic responses.
It is given by: $${\mathbf{\hat{T}}} = {\mathtt{argmax}}_{T} \{ {\mathtt{log}} p({\mathbf{T}} | {\mathbf{S}}) - \lambda {\mathtt{log}} p({\mathbf{T}})\}$$ where ${\mathbf{\hat{T}}}$ is the generated target sequence, ${\mathbf{T}}$ is the reference target sequence and ${\mathbf{S}}$ is the source sequence. The second term penalizes the generation of high-frequency or generic target sequences. Note that this objective is only used during inference, and the generators are trained using the cross entropy loss. @zhang2018personalizing also use a diversity-encouraging objective for dialogue response generation. They train a discriminator to calculate the similarity between the source ${\mathbf{S}}$ and target ${\mathbf{T}}$ ($D_{\psi}({\mathbf{T}}, {\mathbf{S}})$), as well as between the source ${\mathbf{S}}$ and the generated target ${\mathbf{\hat{T}}}$ ($D_{\psi}({\mathbf{\hat{T}}}, {\mathbf{S}})$). They then try to minimize the difference between $D_{\psi}({\mathbf{T}}, {\mathbf{S}})$ and $D_{\psi}({\mathbf{\hat{T}}}, {\mathbf{S}})$. Apart from these, many other objectives rely on post-hoc decoding strategies such as stochastic decoding, which includes top-$k$ sampling [@fan:2018], nucleus sampling [@Holtzman2020The], or beam search variants [@paulus2018a; @kulikov-etal-2019-importance; @vijayakumar2018diverse; @holtzman2018learning]. KL Divergence {#bg:sec-loss-kl} ------------- The Kullback-Leibler (KL) divergence quantifies how much one probability distribution differs from another. The KL divergence between two distributions ${\mathbf{{\mathcal{Q}}}}$ and ${\mathbf{{\mathcal{P}}}}$ is often stated using the following notation: $${\mathtt{KL}} ({\mathbf{{\mathcal{P}}}} \parallel {\mathbf{{\mathcal{Q}}}})$$ where the operator “$\parallel$” indicates [*divergence*]{}, here ${\mathbf{{\mathcal{P}}}}$’s divergence from ${\mathbf{{\mathcal{Q}}}}$.
Note that the KL divergence is not symmetric, i.e., ${\mathtt{KL}} ({\mathbf{{\mathcal{P}}}} \parallel {\mathbf{{\mathcal{Q}}}}) \neq {\mathtt{KL}} ({\mathbf{{\mathcal{Q}}}} \parallel {\mathbf{{\mathcal{P}}}})$. KL divergence can be used to minimize the information loss while approximating a distribution. In text generation, the KL divergence term appears in the evidence lower bound (ELBO), which is used to approximately maximize the marginal likelihood of the data $p({\mathbf{x}})$ and thereby helps produce better generations. This objective is used in variational autoencoders and their variants, in combination with the sampling techniques described in [§\[bg:sec-stochastic\]]{}. It fits the controllable text generation paradigm because it allows one to approximate the posterior distribution of the control variables in the latent ${\mathbf{z}}$-space. Classifier Loss {#bg:sec-loss-class} --------------- This loss is specifically used to ensure that the generated tokens ${\mathbf{\hat{x}}}$ comply with the control attributes ${\mathbf{s}}$. The difference between this loss and the external feedback loss used for the [*external input*]{} and [*output*]{} modules is that this loss operates at the token level, whereas the external feedback loss operates on the latent hidden representations. In the style transfer task, this loss is used to guide the generation process to output tokens of the target style. Some works [@prabhumoye:2018; @sudhakar2019transforming; @hu2017toward] use this loss to discriminate between all the styles in their task (in a one-versus-all fashion). This design suffers from low accuracy when the number of styles increases. To counter this problem, the loss can be set up to determine whether the generated sentence ${\mathbf{\hat{x}}}$ belongs to style ${\mathbf{s_1}}$ or not, with a separate loss term computed in the same way for each style [@chandu2019my].
This design requires an increasing number of loss terms as the number of styles grows. A third way to formulate this loss term is to discriminate between a sentence ${\mathbf{x}}$ from the data which belongs to style ${\mathbf{s_1}}$ and a generated sentence ${\mathbf{\hat{x}}}$ which belongs to the same style ${\mathbf{s_1}}$ [@yang2018unsupervised]. Again, one needs as many loss terms as there are styles in this case. All of these works use the cross entropy loss function. @hu2019makes use a classifier-based loss in the visual storytelling task. The classifier is a pre-trained language model [@devlin2019bert] used to measure the coherence between generated sentences of the story. Particularly, the classifier takes as input two sentences at a time, ${\mathbf{\hat{x}}}_1$ and ${\mathbf{\hat{x}}}_2$, and outputs a binary label indicating whether ${\mathbf{\hat{x}}}_2$ follows ${\mathbf{\hat{x}}}_1$. In this case, the control variable is the coherence of the story, which is used to guide the generator to produce consistent sentences. Task Specific Loss ------------------ Depending on the end task and the attribute to be controlled, one can design different loss objectives to ensure that generations abide by the target attributes. #### Strategy Loss: @Zhou2020Augmenting use a dialogue-strategy-based objective to generate responses for negotiation tasks. This task has ground truth strategies that lead to better negotiations. The loss captures the probability of a particular strategy occurring for the next utterance given the dialogue history. It guides the generator to align the responses with particular strategies. #### Coverage Loss: Generating repeated words or phrases is a common problem for text generation systems, and this becomes especially pronounced for multi-sentence text generation tasks such as abstractive document summarization.
@see2017get introduce a [*coverage loss*]{} which penalizes repeatedly attending to the same locations of the source document. #### Structure loss: @li-etal-2018-improving-neural introduce two new loss objectives, [*structural compression*]{} and [*structural coverage*]{}, based on sentence-level attention. These objectives are specially designed for the task of abstractive document summarization. [*Structural compression*]{} generates a sentence by compressing several specific source sentences, and [*structural coverage*]{} covers more of the salient information of the original document. These objectives leverage document structure in summarization and explore the effectiveness of capturing structural properties of a document by regularizing the generative model to produce more informative and concise summaries. Conclusion and Future Work ========================== In this paper we propose a new schema to organize the prior work in controllable text generation. The schema contains five modules, each of which plays an important role in the generation process. We detail the various techniques used to modulate each of the five modules to perform controllable text generation. We also provide a theoretical understanding and a qualitative analysis of these techniques. This understanding paves the way for new architectures based on combinations of these modules. Future work will focus on an empirical comparison of these techniques to gain insight into their usefulness and strengths. Acknowledgments {#acknowledgments .unnumbered} =============== This work was supported in part by ONR Grant N000141812861, NSF IIS1763562, and Apple. We would also like to acknowledge NVIDIA’s GPU support.
--- abstract: 'The observed sequence variation at a locus informs about the evolutionary history of the sample and past population size dynamics. The standard Kingman coalescent model on genealogies – timed trees that represent the ancestry of the sample – is used in a generative model of molecular sequence variation to infer evolutionary parameters. However, the state space of Kingman’s genealogies grows superexponentially with sample size $n$, making inference computationally unfeasible already for small $n$. We introduce a new coalescent model called the Tajima heterochronous $n$-coalescent with a substantially smaller cardinality of the genealogical space. This process makes it possible to analyze samples collected at different times, a situation that in applications is both met (*e.g.* ancient DNA and RNA from rapidly evolving pathogens like viruses) and statistically desirable (variance reduction and parameter identifiability). We propose an algorithm to calculate the likelihood efficiently and present a Bayesian nonparametric procedure to infer the population size trajectory. We provide a new MCMC sampler to explore the space of Tajima’s genealogies and model parameters. We compare our procedure with state-of-the-art methodologies in simulations and applications. We use our method to re-examine the scientific question of how Beringian bison went extinct by analyzing modern and ancient molecular sequences of bison in North America, and to reconstruct the population size trajectory of SARS-CoV-2 from viral sequences collected in France and Germany.' author: - | Lorenzo Cappello$^1$[^1], Amandine Véber$^{2*}$, Julia A. Palacios$^{1,3*}$\ $^1$Department of Statistics, Stanford University\ $^2$CMAP, CNRS, École Polytechnique, I.P.
Paris\ $^3$Department of Biomedical Data Science, Stanford Medicine title: '**The Tajima heterochronous $n$-coalescent: inference from heterochronously sampled molecular data**' --- [*Keywords:*]{} Bayesian nonparametric, Kingman $n$-coalescent, multi-resolution, ancient DNA, Gaussian process. Introduction {#sec:intro} ============ Statistical inference of evolutionary parameters from a sample of $n$ DNA sequences accounts for the dependence among samples and models observed variation through two stochastic processes: an ancestral process of the sample represented by a genealogy [**g**]{}, which depends on the *effective population size trajectory* $(N_e(t))_{t\geq 0}$, and a mutation process with a given set of parameters $\mu$ that, conditionally on [**g**]{}, models the phenomena that have given rise to the sequences. However, state-of-the-art methodologies are not scalable to the amount of data available because the latent space of genealogies is high-dimensional. In this paper, we tackle the problem of scalability from a modeling perspective: we propose a new ancestral process for heterochronous data that dramatically reduces the state space of genealogies. We complement this model with a new algorithm for fast likelihood calculations. Inference for $(N_e(t))_{t\geq 0}$ has important applications in many fields, such as genetics, anthropology, and public health. In the absence of natural selection, the effective population size can be used to approximate the census population size. While census population size estimates can be difficult to obtain due to high costs and challenging sampling designs, we can reconstruct past population sizes from observed signatures of genetic diversity in a sample of the population. For example, one can estimate the population size of a virus from genetic samples in a situation where census counts are believed to be inaccurate, as is common during an epidemic.
Inferring population size dynamics – timing of population events, growth and decline rates – rather than estimating census counts, may be of scientific interest. For example, [@sha04] reconstructed bison population dynamics, providing new insights into the extinction of Beringian bison. In this paper, we include two studies supporting the motivations highlighted above: in one study we analyze viral samples of SARS-CoV-2, the virus responsible for the coronavirus disease, and in a second study, we analyze ancient samples of bison in North America (dataset described by [@fro17]). Both Bayesian and frequentist methods rely on the marginal likelihood computed by integrating over the space $\mathcal{G}_n$ of tree topologies with $n$ leaves, and over the space of branch lengths. That is: $$\label{marg} P(\textbf{Y}\mid (N_e(t))_{t\geq 0}, \mu)= \int_{\mathbf{g} \in \mathcal{G}_n\times \mathbb{R}_+^{n-1}} P(\textbf{Y}\mid \mathbf{g}, (N_e(t))_{t\geq 0}, \mu) \,{{\rm d}}\pi (\mathbf{g}\mid (N_e(t))_{t\geq 0}),$$ where the $n-1$ random variables with values in $\mathbb{R}_+$ are the times between consecutive coalescence events in the genealogy, and $\pi(\cdot \mid(N_e(t))_{t\geq 0})$ denotes the probability distribution on $\mathcal{G}_n\times \mathbb{R}_+^{n-1}$ implied by the ancestral process as a function of the past population size trajectory $(N_e(t))_{t\geq 0}$. The genealogy [**g**]{} is an auxiliary variable introduced to compute $P(\textbf{Y}\mid \mathbf{g}, N_e(t), \mu)$ because direct calculation of the marginal $P(\textbf{Y}| N_e(t), \mu)$ is intractable. The prevailing consensus in the literature is to compute this integral with $\pi$ defined as the Kingman-coalescent prior law on leaf-labeled genealogies (formally introduced in [@kingn82; @king82]). However, the cardinality of $\mathcal{G}_n$ grows superexponentially with $n$ ($|\mathcal{G}_n|=n!(n-1)!/2^{n-1}$), creating a computational bottleneck in the calculation of this integral.
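A short computation makes the growth of $|\mathcal{G}_n|$ concrete, alongside the Euler zigzag numbers that count the coarser Tajima topologies (this is an illustrative sketch; the boustrophedon recursion is a standard way to compute OEIS A000111):

```python
from math import factorial

def kingman_count(n):
    """Number of ranked labeled tree topologies with n leaves:
    n! (n-1)! / 2^(n-1)."""
    return factorial(n) * factorial(n - 1) // 2 ** (n - 1)

def zigzag(k):
    """k-th Euler zigzag number (OEIS A000111), computed via the
    boustrophedon (Seidel) triangle. The Tajima topology count for
    n leaves is zigzag(n - 1)."""
    center = [1]
    for _ in range(k):
        row = [0]
        for x in reversed(center):
            row.append(row[-1] + x)
        center = row
    return center[-1]

assert [kingman_count(n) for n in (2, 3, 4, 5)] == [1, 3, 18, 180]
assert [zigzag(k) for k in range(6)] == [1, 1, 1, 2, 5, 16]
# For n = 10 the gap is already enormous:
assert kingman_count(10) == 2_571_912_000 and zigzag(9) == 7936
```

Already at $n=10$ the labeled (Kingman) space has about $2.6\times10^{9}$ topologies against $7936$ unlabeled (Tajima) ones, which is the reduction motivating the approach taken in this paper.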
![image](genealogy_intro-eps-converted-to){width=".5\textwidth"} An alternative to the Kingman $n$-coalescent is to use a lower resolution coalescent process, known as the Tajima $n$-coalescent [@taj83; @sai15; @pal19]. While the state space of the Kingman $n$-coalescent is in bijection with the set of timed and *labeled* binary trees with $n$ leaves, the state space of the Tajima $n$-coalescent is in bijection with the set of timed and *unlabeled* binary trees with $n$ leaves. The cardinality of the space of timed and unlabeled binary tree topologies with $n$ leaves is given by the $(n-1)$-th Euler zigzag number [@di13], which behaves like $2 (2/\pi)^{n} \, (n-1)!$ as $n$ increases. While the cardinality of the space of Tajima trees still grows superexponentially in $n$, its rate of growth is drastically smaller than that of the space of Kingman trees. [@pal19] show that when all sequences are sampled at the same time, employing the Tajima coalescent allows fast inference of $(N_e(t))_{t\geq 0}$. The main focus of this work is to develop a scalable model for sequences observed at different time points, like those at the tips of the genealogy in Figure \[fig:coalescent\]. The need to model heterochronous data is motivated by both applications and statistical reasons. In applications, viral samples (HIV, influenza) are routinely collected serially. Ancient DNA studies are another very active area of research in which the sampling design is intrinsically sequential. At least two statistical reasons motivate this model. First, it usually leads to a decrease in the variance of the estimate [@rod99]. In coalescent-based inference, the smaller the number of extant lineages in a given time interval, the greater the variance of our estimate of the interval length, and consequently, the greater the variance of any estimate that depends on that length, such as $(N_e(t))_{t\geq 0}$.
By including heterochronous samples, we can increase the number of active lineages in the past, and thus obtain better estimates. Second, [@par19] show that including heterochronous samples is necessary (in some cases) to make the parameters describing $(N_e(t))_{t\geq 0}$ identifiable. The Tajima heterochronous $n$-coalescent fundamentally differs from the Tajima *isochronous* $n$-coalescent in that sequences sampled at different times are not exchangeable. The Tajima isochronous $n$-coalescent distinguishes between singleton and *vintaged* lineages, where a singleton lineage refers to a lineage that subtends a leaf in [**g**]{}, and a vintaged lineage refers to a lineage that subtends an internal node in [**g**]{}. Singletons are indistinguishable, while vintages are labeled by the ranking of the coalescence event at which they were created. When dealing with heterochronous samples, singletons are instead implicitly labeled by their underlying sampling times. To account for this difference, we define a new Markov chain. The present paper contains three main contributions: first, we propose a new lower-resolution continuous-time Markov chain (CTMC), which we call the Tajima *heterochronous* $n$-coalescent, that allows us to model partially-labeled genealogies of heterochronous samples. Second, we introduce a new algorithm to compute the likelihood. The likelihood calculation in [@pal19], called BESTT, relies on a backtracking algorithm that is computationally unfeasible for sample sizes greater than $35$. Our new algorithm can accommodate up to $100$ sequences. Lastly, we introduce new MCMC proposals for efficient exploration of the space of Tajima genealogies and perform Bayesian nonparametric inference on $(N_e(t))_{t\geq 0}$. The main challenge in employing Tajima genealogies for coalescent-based inference is that the sequence data can be allocated, *i.e.* mapped, to a given genealogy in many possible ways. The allocation is necessary to compute the likelihood.
To find all possible maps, [@pal19] use a backtracking algorithm that proceeds in a bottom-up fashion: starting from the tips of the tree, the algorithm moves along the tree to the root, checking for possible allocations of subsets of the data [**Y**]{} to clades of the tree [**g**]{}. Our proposal reverses this process and proceeds in a top-to-bottom fashion, eliminating the backtracking step and reducing the computational complexity. While there is no exact analytical expression for the computational complexity of the backtracking algorithm, a loose upper bound is $\mathcal{O}(n!)$, which we bring down to $\mathcal{O}(n^2)$ with our new proposal. The lower bound is of the order of $\mathcal{O}(n)$ for both algorithms. The algorithm relies on a novel graphical representation of the data as a tree structure. We note that this tree is related to the directed acyclic graph (DAG) used in [@pal19], with some important differences. The DAG depends on [**g**]{}, whereas the tree we introduce is solely a function of the data. This implies that we need to define it only once. Also, the DAG groups sequences differently since it does not incorporate sampling time information. The rest of the paper proceeds as follows. In Section \[sec:hettaj\], we define the Tajima heterochronous $n$-coalescent. In Section \[lik\], we introduce the mutation model we shall assume, describe the data, define the likelihood, and present the new algorithm to compute it. Section \[mcmc\] describes the MCMC algorithm for posterior inference, and in Section \[sim\], we present a comprehensive simulation study outlining how the model works and comparing our method to state-of-the-art alternatives. In Section \[app\], we analyze modern and ancient bison sequences described in [@fro17]. In Section \[covid\], we apply our method to SARS-CoV-2 viral sequences collected in France and in Germany. Section \[concl\] concludes.
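As a numerical illustration of the state-space sizes discussed above, the following Python sketch (function names are ours) computes the Euler zigzag numbers via the boustrophedon transform and contrasts the count of Tajima tree shapes, $E_{n-1}$, with the number of timed labeled (Kingman) topologies, $n!\,(n-1)!/2^{n-1}$.

```python
from math import factorial, pi

def zigzag_numbers(n_max):
    """Euler zigzag numbers E_0, ..., E_{n_max} (OEIS A000111), computed
    with the boustrophedon transform."""
    numbers, row = [1], [1]
    for _ in range(n_max):
        new = [0]
        for x in reversed(row):
            new.append(new[-1] + x)  # running sums, taken in alternating direction
        row = new
        numbers.append(row[-1])
    return numbers

def n_kingman_trees(n):
    """Number of timed, labeled binary tree topologies with n leaves."""
    return factorial(n) * factorial(n - 1) // 2 ** (n - 1)

E = zigzag_numbers(20)
for n in (5, 10, 20):
    n_tajima = E[n - 1]                          # (n-1)-th zigzag number
    asym = 2 * (2 / pi) ** n * factorial(n - 1)  # asymptotic approximation
    print(n, n_tajima, round(asym), n_kingman_trees(n))
```

Already at $n=10$ the Tajima space has $7{,}936$ states while the Kingman space has over $2.5$ billion, which is the gap the inference scheme exploits.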
An open-source implementation is given in the `R` package `phylodyn`, which is available for download at `https://github.com/JuliaPalacios/phylodyn`. The Tajima heterochronous $n$-coalescent {#sec:hettaj} ======================================== The Tajima heterochronous $n$-coalescent is an inhomogeneous continuous-time Markov chain that describes the ancestral relationships of a set of $n$ individuals sampled, possibly at different times, from a very large population. The set of ancestral relationships of the sample is represented by a ranked genealogy like the one depicted in Figure \[fig:hetTaj\]. Every organism is dated and labeled according to the time at which the organism lived (if ancient, by radiocarbon date) or at which the living organism was sequenced. In this generalization of the Tajima coalescent, each pair of extant ancestral lineages merges into a single lineage at an instantaneous rate which depends on the current effective population size $N_e(t)$, and new lineages are added when one of the prescribed sampling times is reached. In this work, we do not model the stochasticity of sampling times but condition on them, treating them as fixed. ![[]{data-label="fig:hetTaj"}](markov_chain-eps-converted-to) Let us introduce some notation. Let $m$ be the number of sampling time points and $n$ be the total number of samples. Let ${\textbf{n}}{}=(n_1,\ldots,n_{m})$ denote the number of sequences collected at times ${\textbf{s}}{}=(s_{1},\ldots,s_{m})$, with $s_{1}=0$ denoting the present time, and $s_{j}>s_{j-1}$ for $j=2,\ldots,m$ (time goes from the present towards the past). We refer to the sequences counted in $n_i$ as “belonging to sampling group $s_i$”. Let ${\textbf{t}}{}=(t_{n+1}, \ldots, t_{2})$ be the vector of coalescent times with $t_{n+1}=0<t_{n}<...<t_{2}$; these are the times when two lineages have a common ancestor.
Note that the subscript in $t_{k}$ does not indicate the current number of lineages, as is often done in the coalescent literature, but rather the number of lineages that have yet to coalesce (some sequences may not have been sampled yet). We use the rank order of the coalescent events (bottom-up) to label the internal nodes of the genealogy. That is, the node corresponding to the coalescent event occurring at time $t_n$ is labeled $1$ (see $t_{10}$ in Figure \[fig:hetTaj\]), the node corresponding to the coalescence event occurring at time $t_{n-1}$ is labeled $2$, *etc*. We refer to the internal node labels as *vintages* (*i.e.*, rankings). The Tajima heterochronous $n$-coalescent is the process $({\textbf{a}}{}(t), b(t))_{t \geq 0}$ that keeps track of ${\textbf{a}}{} (t)$, a vector of length $m$ whose $j$-th position indicates the number of singletons (*i.e.*, lineages that have not been involved in a coalescence event) from sampling group $s_j$ at time $t$, and of $b (t)$, the set of vintaged lineages at time $t$. The process starts at $t=0$ in state $({\textbf{a}}{}(0)=(n_1,0,\dots,0), b(0)=\emptyset)$, jumps deterministically at every sampling time and jumps stochastically at every random coalescent time until it reaches the unique absorbing state $({\textbf{a}}{(t_2)}=(0,\ldots, 0), b(t_2)=\{n-1\})$ at time $t_2$, when all $n$ samples have a single most recent common ancestor at the root (Figure \[fig:hetTaj\]). At each sampling time $s_i$, the state of the Tajima coalescent jumps deterministically as follows: $$({\textbf{a}}{}(s_i), b(s_i))=({\textbf{a}}{}(s_i-)+ n_i\textbf{e}_i, b(s_i-)),$$ where $f(s_i-)$ denotes the left-limit of the function $f$ at $s_i$ and $\textbf{e}_i$ is the $i$-th unit vector. Let us now turn to the embedded jump chain at coalescent times. At time $t_i$, two extant lineages coalesce to create a new lineage with vintage $n+1-i$.
Four types of coalescence transitions are possible depending on which and how many sampling groups are involved: (1) two singletons of the same sampling group coalesce (up to $m$ possible moves for the chain), (2) two singletons of different sampling groups coalesce (up to $m (m-1)/2$ possible moves), (3) one singleton lineage and one vintaged lineage coalesce (up to $m$ possible moves), or (4) two vintaged lineages coalesce (only one possibility because, for vintages, the sampling information is irrelevant). Each pair coalesces with the same probability, and the transition probabilities at coalescent times are thus given by $$\begin{aligned} \label{transition} P\Big[({\textbf{a}}{(t_i)}, &b(t_i))\Big| ({\textbf{a}}{(t_i-)},b(t_i-))\Big]\\ & =\left\{ \begin{array}{ll} \frac{\mathlarger\prod_{j=1}^m \dbinom{a_j(t_i-)}{a_j(t_i-)-a_j(t_i)}}{\dbinom{\sum_{j=1}^m a_j(t_i-)+|b(t_i-)|}{2}} & \text{if} \ ({\textbf{a}}{(t_i)},b(t_i)) \prec ({\textbf{a}}{(t_i-)},b(t_i-))\\[2ex] 0 & \text{otherwise} \end{array} \right.\end{aligned}$$ where $({\textbf{a}}{(t_i)},b(t_i)) \prec ({\textbf{a}}{(t_i-)},b(t_i-))$ means that $({\textbf{a}}{(t_i)},b(t_i))$ can be obtained by merging two lineages of $({\textbf{a}}{(t_i-)},b(t_i-))$ and $|b|$ denotes the cardinality of the set $b$. Observe that the quantity $\sum_{j=1}^m a_j(t_i-)+|b(t_i-)|$ appearing in Equation \[transition\] corresponds to the total number of extant lineages just before the event at $t_i$. Furthermore, since only two lineages coalesce at time $t_i$, at most two terms in the product appearing in the numerator of Equation \[transition\] are not equal to one. Finally, if $m=1$, Equation \[transition\] degenerates into the transition probabilities of the Tajima isochronous $n$-coalescent; on the other hand, if $m=n$, the process degenerates into the Kingman heterochronous $n$-coalescent since all singletons are uniquely labeled by their sampling times.
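The transition probability in Equation \[transition\] can be evaluated directly from the per-group singleton counts and the number of vintaged lineages. The Python sketch below (function and variable names are ours) checks that a proposed merge is one of the four legal move types and returns its probability.

```python
from math import comb

def transition_prob(a_before, v_before, a_after, v_after):
    """Probability of one jump of the embedded chain at a coalescent time.
    a_* are per-group singleton counts; v_* are the numbers of vintaged
    lineages (|b|) just before and just after the event."""
    drops = [x - y for x, y in zip(a_before, a_after)]
    k = sum(drops)                      # singleton lineages consumed: 0, 1, or 2
    # a valid merge consumes k singletons and 2 - k vintages, and changes
    # the vintage count by k - 1 (a new vintage is always created)
    if any(d < 0 for d in drops) or k > 2 \
            or v_after != v_before + k - 1 or v_before < 2 - k:
        return 0.0
    total = sum(a_before) + v_before    # extant lineages just before t_i
    numerator = 1
    for a_j, d_j in zip(a_before, drops):
        numerator *= comb(a_j, d_j)
    return numerator / comb(total, 2)

# example: a(t_i-) = (3, 2) singletons and one vintage; merging two group-1
# singletons has probability C(3,2) / C(6,2) = 3/15
p = transition_prob([3, 2], 1, [1, 2], 2)
```

Summing the probability over all legal moves from a given state returns one, as required of a transition kernel.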
Figure \[fig:hetTaj\] shows a possible realization from the Tajima heterochronous $n$-coalescent. To define the distribution of the holding times, we introduce the following notation. We denote the intervals that end with a coalescent event at $t_{k}$ by $I_{0,k}$ and the intervals that end with a sampling time within the interval $(t_{k+1},t_{k})$ by $I_{i,k}$, where $i\geq 1$ is an index tracking the sampling events in $(t_{k+1},t_{k})$. More specifically, for every $k\in \{2,\ldots,n\}$, we define $$I_{0,k}=[\max\{t_{k+1},s_{j}\},t_{k}),\quad \text{ where the maximum is taken over all } s_{j}<t_{k},$$ and for every $i\geq 1$ we set $$I_{i,k}=[\max\{t_{k+1},s_{j-i}\},s_{j-i+1}) \text{ with the max taken over all } s_{j-i+1}>t_{k+1} \text{ and } s_{j}<t_{k}.$$ We also let $n_{i,k}$ denote the number of extant lineages during the time interval $I_{i,k}$. For example, in Figure \[fig:hetTaj\], in $(t_9, t_8)$ we have $I_{0,8}=[s_2,t_8)$, $I_{1,8}=[t_9,s_2)$ and no $I_{i,8}$ for $i\geq 2$. The vector of coalescent times [**t**]{} is a random vector whose density with respect to Lebesgue measure on $\mathbb{R}_+^{n-1}$ can be factorized as the product of the conditional densities of $t_{k-1}$ given $t_k$, which reads: for $k=3,...,n+1$, $$\label{prior_time} p(t_{k-1}\mid t_{k},\mathbf{s},\mathbf{n},(N_{e}(t))_{t\geq 0})=\frac{C_{0,k-1}}{N_{e}(t_{k-1})} \exp \left\lbrace - \int_{I_{0,k-1}} \frac{C_{0,k-1}}{N_{e}(t)}{{\rm d}}t-\sum^{m}_{i=1}\int_{I_{i,k-1}} \frac{C_{i,k-1}}{N_{e}(t)}{{\rm d}}t\right\rbrace,$$ where $t_{n+1}=0$ by convention, $C_{i,k}:=\binom{n_{i,k}}{2}$, and the integral over $I_{i,k-1}$ is zero if there are fewer than $i$ sampling times between $t_k$ and $t_{k-1}$. The distribution of the holding times defined above coincides with that of the heterochronous Kingman coalescent [@rod99].
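For intuition, with a constant $N_e$ the product of the conditional densities in Equation \[prior\_time\] reduces to the familiar heterochronous coalescent likelihood of the times: every inter-event interval with $k$ extant lineages contributes a survival factor $\exp\{-\binom{k}{2}\Delta/N_e\}$, and every coalescent event a factor $\binom{k}{2}/N_e$. A minimal Python sketch (function name ours, sampling and coalescent times assumed distinct):

```python
from math import comb, log

def coal_loglik_const(coal_times, samp_times, samp_sizes, Ne):
    """Log-density of the coalescent times under a constant N_e, obtained by
    multiplying the conditional densities interval by interval."""
    events = [(t, -1) for t in coal_times]                      # -1: a coalescence
    events += [(s, k) for s, k in zip(samp_times, samp_sizes)]  # +k: k new samples
    events.sort()
    ll, lineages, t_prev = 0.0, 0, 0.0
    for t, delta in events:
        rate = comb(lineages, 2) / Ne
        ll -= rate * (t - t_prev)       # survival through the interval
        if delta == -1:
            ll += log(rate)             # density of a coalescence at t
            lineages -= 1
        else:
            lineages += delta
        t_prev = t
    return ll

# two samples at time 0, coalescing at time 1 with Ne = 1: log-density is -1
ll = coal_loglik_const([1.0], [0.0], [2], 1.0)
```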
Although the heterochronous Tajima coalescent takes values in a different state space, it remains true that every pair of extant lineages coalesces at equal rate. Finally, given [**n**]{}, [**s**]{} and ${\textbf{t}}{}$, a complete realization of the Tajima heterochronous $n$-coalescent chain can be uniquely identified with a partially labeled binary ranked tree shape $g$ of $\mathbf{n}=(n_{1},\ldots,n_{m})$ samples at $(s_{1},\ldots,s_{m})$ with its $n-1$ coalescent transitions, so that $$\label{prior_rts} P(g \mid {\textbf{t}}{},{\textbf{s}}{},{\textbf{n}})=\prod_{i=2}^{n} P\Big[({\textbf{a}}{(t_i)},b(t_i))\,\Big|\, ({\textbf{a}}{(t_i-)},b(t_i-))\Big].$$ Equation \[prior\_rts\] gives the prior probability of the tree topology under the Tajima heterochronous $n$-coalescent. Putting together Equations \[prior\_rts\] and \[prior\_time\], we obtain a prior $\pi({\textbf{g}}{}\mid {\textbf{s}}{},{\textbf{n}}{}, (N_e(t))_{t\geq 0})$ $$\label{prior} \pi({\textbf{g}}{}\mid {\textbf{s}}{},{\textbf{n}}{}, (N_e(t))_{t\geq 0})=P(g \mid {\textbf{t}}{},{\textbf{s}}{},{\textbf{n}}) \prod_{k=3}^{n+1}p(t_{k-1}\mid t_{k},\mathbf{s},\mathbf{n},(N_{e}(t))_{t\geq 0}),$$ which enters the posterior distribution in Equation \[posterior\]. Data and Likelihood {#lik} =================== Infinite Sites Model and the Perfect Phylogeny {#data} ---------------------------------------------- We assume that the observed data [**Y**]{} consists of $n$ sequences at $z$ polymorphic (mutating) sites in a non-recombining contiguous segment of DNA from organisms with a low mutation rate. Under these assumptions, a widely studied mutation model is the *infinite sites model* (ISM) [@kim69; @wat75] with Poissonian mutation, which corresponds to throwing a Poisson point process of mutations on the branches of [**g**]{} at rate $\mu$ such that every mutation occurs at a different site and no mutation is hidden by a second mutation affecting the same site.
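The Poissonian ISM is straightforward to simulate: each branch receives a Poisson number of mutations proportional to its length, and every mutation opens a fresh segregating site, so no site is ever hit twice. A self-contained Python sketch (the sampler and all names are ours, used only for illustration):

```python
from math import exp
import random

def poisson_sample(lam, rng):
    """Knuth's multiplicative method for a Poisson(lam) draw
    (adequate for the small means used here)."""
    threshold, k, p = exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_ism(branch_lengths, mu, seed=1):
    """Drop mutations on branches at rate mu per unit length; under the ISM
    each mutation is assigned a brand-new site index."""
    rng = random.Random(seed)
    next_site, mutations = 0, {}
    for branch, length in branch_lengths.items():
        n_mut = poisson_sample(mu * length, rng)
        mutations[branch] = list(range(next_site, next_site + n_mut))
        next_site += n_mut
    return mutations
```

Because site indices are never reused, the resulting incidence matrix automatically satisfies the ISM assumptions.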
An important consequence of the ISM is that [**Y**]{} can be represented as an incidence matrix ${\textbf{Y}}{}_1$ and a frequency counts matrix ${\textbf{Y}}{}_2$. ${\textbf{Y}}{}_1$ is a $k \times z$ matrix with $0$-$1$ entries, where $0$ indicates the ancestral type and $1$ indicates the mutant type; $k$ is the number of unique sequences (or haplotypes) observed in the sample, and the columns correspond to polymorphic sites. ${\textbf{Y}}{}_2$ is a $k \times m$ count matrix whose $(i,j)$th entry counts how many sampled sequences of haplotype $i$ belong to sampling group $s_j$. For example, the $n = 10$ sequences defined by the realizations of the ancestral and mutation processes depicted in Figure \[fig:coalescent\] can be summarized into ${\textbf{Y}}_1$ and ${\textbf{Y}}_2$ displayed in Figure \[perfect\_phylo\](A). ![[]{data-label="perfect_phylo"}](perf_phylo2-eps-converted-to){width="100.00000%"} ${\textbf{Y}}{}_1$ and ${\textbf{Y}}_2$ can alternatively be represented graphically as an *augmented perfect phylogeny* ${\textbf{T}}$. This graphical representation of the data is exploited by our likelihood algorithm. The augmented perfect phylogeny representation is an extension of the *gene tree* or *perfect phylogeny* [@gus91; @gri94b; @pal19] representation to the heterochronous case. The standard perfect phylogeny definition leaves out the information carried by ${\textbf{Y}}_2$. To our knowledge, Gusfield’s approach has never been generalized to the heterochronous case. In the augmented perfect phylogeny ${\textbf{T}}=({\textbf{V}},{\textbf{E}})$, ${\textbf{V}}{}$ is the set of nodes of [**T**]{}, and ${\textbf{E}}{}$ is the set of weighted edges. We define [**T**]{} as follows: 1. Each haplotype labels at least one leaf in ${\textbf{T}}$. If a haplotype is observed at $d$ different sampling times, then $d$ leaves in ${\textbf{T}}$ will be labeled by the same haplotype. The pair (haplotype label, sampling group) uniquely labels each leaf node. 2.
Each of the $z$ polymorphic sites labels exactly one edge. When multiple sites label the same edge, the order of the labels along the edge is arbitrary. Some external edges (edges subtending leaves) may not be labeled, indicating that they do not carry additional mutations relative to their parent node. 3. For any pair (haplotype $h_{k}$, sampling group), the labels of the edges along the unique path from the root to the leaf $h_{k}$ specify all the sites where $h_{k}$ has the mutant type. Figure \[perfect\_phylo\](B) plots [**T**]{} corresponding to ${\textbf{Y}}_1$ and ${\textbf{Y}}_2$ displayed in Figure \[perfect\_phylo\](A). Observe that [**T**]{} includes sampling information in the leaf labels. In the example, $h_C$ labels two leaves because it is observed at times $s_1$ and $s_2$. The corresponding edges $E_3$ and $E_4$ are unlabeled, *i.e.*, no mutations are allocated to those edges because the underlying nodes carry identical sequences (same haplotype). We “augment” Gusfield’s perfect phylogeny because the sampling information is crucial in the likelihood calculation. [**T**]{} implicitly carries some quantitative information that can be quickly summarized. We denote the number of observed sequences subtended by an internal node $V$ by $|V|$. If $V$ is a leaf node, $|V|$ denotes the frequency of the haplotype $h$ observed at the corresponding sampling time $s$. Similarly, we denote the number of mutation labels assigned to an edge $E$ by $|E|$. If no mutations are assigned to $E$, then $|E|=0$. For conciseness, the edge that connects node $V_i$ to its parent node is denoted by $E_i$. See Figure \[perfect\_phylo\](C) for an example. [@gus91] gives an algorithm to construct the “traditional perfect phylogeny” [**T**]{}’ in linear time. Constructing [**T**]{} from [**T**]{}’ is straightforward since all we need is to incorporate the sampling information and add leaf nodes if a haplotype is observed at multiple sampling times.
If we drew ${\textbf{T}}'$ from the data in Figure \[perfect\_phylo\], it would not have node $V_4$, but only a single node $V_3$ labeled by haplotype $h_C$. A description of the algorithm can be found in the supplementary material. Likelihood ---------- The crucial step needed to compute the likelihood of a Tajima genealogy ${\textbf{g}}{}$ is to sum over all possible allocations of mutations to its branches. This can be efficiently done by exploiting the augmented perfect phylogeny representation of the data [**T**]{} and by first mapping nodes of [**T**]{} to subtrees of [**g**]{}. We stress that the need for an allocation step arises only when working with Tajima genealogies. In Kingman’s coalescent, tree leaves are labeled by the sequences to which they correspond, and so there is a unique possible allocation. In Tajima’s coalescent, leaves are unlabeled, creating potential symmetries in the tree, and so we have to scan all the possible ways in which the observed sequences may be allocated to [**g**]{}. ### Allocations {#sec:alloc} Let [**a**]{} denote a possible mapping of nodes of $\textbf{T}$ to subtrees of ${\textbf{g}}$. [**a**]{} is encoded as a vector of length $n-1$, where the $i$-th entry gives the node in ${\textbf{T}}$ which is mapped to the subtree with vintage $i$, ${\textbf{g}}_i$ (including the branch that subtends vintage $i$). Our algorithm first maps all *non-singleton* nodes $\mathbf{V}$ of $\mathbf{T}$ to subtrees of ${\textbf{g}}{}$, that is, only nodes such that $|V|>1$ are entries of [**a**]{}. Singleton nodes in $\mathbf{T}$ ($V\in \mathbf{V}$ such that $|V|=1$) are treated separately and are initially excluded from the allocation step. For example, Figure \[sub\_all\] shows a possible vector [**a**]{} whose entries are the non-singleton nodes $V_{0},V_{1},V_{2},$ and $V_{5}$ of ${\textbf{T}}$ of Figure \[perfect\_phylo\]. 
We note that nodes can appear more than once in [**a**]{}, meaning that they can be mapped to more than one subtree. On the other hand, a single node $V_i$ is not necessarily mapped to all the vintages, leaves and internal branches of ${\textbf{g}}_j$; different nodes may be mapped to some subtrees of ${\textbf{g}}_j$ (including external branches), leading to a situation where $V_i$ is mapped to only a subset of the vintages and branches constituting ${\textbf{g}}_j$. For example, in Figure \[sub\_all\], $V_1$ is mapped to ${\textbf{g}}_6$ and ${\textbf{g}}_3$, but $V_2$ is mapped to ${\textbf{g}}_1$, a subtree of both ${\textbf{g}}_6$ and ${\textbf{g}}_3$; hence $V_{1}$ is only mapped to the green part of ${\textbf{g}}_{6}$ and ${\textbf{g}}_{3}$ as depicted in the figure. The precise mapping of nodes in ${\textbf{T}}$ to subtrees of ${\textbf{g}}$ described below is needed to allocate mutations in [**T**]{} to branches of ${\textbf{g}}$. We will explain the allocation of mutations on ${\textbf{g}}$ for a given ${\textbf{a}}{}$ in the next subsection. ![image](subtree_allocations-eps-converted-to){width=".5\textwidth"} We now define an algorithm to efficiently find all possible mappings [**a**]{} for a given ${\textbf{g}}{}$. We encode the set of all possible [**a**]{} as an $\#{\textbf{a}}\times (n-1)$ matrix $\mathbf{A}$, where each row is a possible ${\textbf{a}}{}$ ($n-1$ columns) and the number of rows $\#{\textbf{a}}$ is equal to the number of possible allocations. To generate $\mathbf{A}$, the algorithm proceeds recursively from top to bottom in [**g**]{}, by sweeping through subtrees in [**g**]{} and matching them to nodes in $\mathbf{T}$ according to parent-offspring relationships and the number of descendants in both [**T**]{} and [**g**]{}. To be more precise, the algorithm is initialized by setting the $1 \times (n-1)$ matrix $\mathbf{A}$ to $\mathbf{A}=(V_0,\ldots,V_0)$, *i.e.*, $V_0$ is mapped to all subtrees in ${\textbf{g}}{}$.
The algorithm proceeds iteratively, adding and removing rows from $\mathbf{A}$, iterating over an index $i$ going from $n-2$ to $1$. The first step is to define $A(i)$, the set of node allocations in the $i$-th column of $\mathbf{A}$. Then for all $V \in A(i)$, the algorithm iterates through the following steps: define $T_{V}$ as the set of child nodes of $V$ that have $|{\textbf{g}}_{i}|$ descendants. If the number of child nodes of $V$ is at least $3$, $V$ is also included in $T_{V}$. If $T_V=\emptyset$, for example if $V$ is a leaf node, the algorithm does nothing. If $|T_{V}|=1$, the algorithm replaces $V$ by the element of $T_{V}$ in the set $I$ of columns of $\mathbf{A}$ corresponding to all subtrees of ${\textbf{g}}_{i}$. If $|T_{V}|>1$, the matrix $\mathbf{A}$ is augmented by stacking $|T_{V}|-1$ copies of $\mathbf{A}_{V}(,I)$, the submatrix of $\mathbf{A}$ obtained by extracting all the row vectors whose $I$-th elements are $V$. The original submatrix $\mathbf{A}_{V}(,I)$ is referred to as $\mathbf{A}^{(1)}_{V}(,I)$, and $\mathbf{A}^{(2)}_{V}(,I),\ldots,\mathbf{A}^{(|T_{V}|)}_{V}(,I)$ denote its copies. Lastly, the algorithm replaces $V$ by the first element of $T_{V}$ in $\mathbf{A}^{(1)}_{V}(,I)$, by the second element of $T_{V}$ in $\mathbf{A}^{(2)}_{V}(,I)$ and so on, until the last element of $T_{V}$ is substituted in $\mathbf{A}^{(|T_{V}|)}_{V}(,I)$. The simple rule described above is fast to compute, but it can lead to invalid allocations because nodes may be mapped a redundant number of times. For example, it is easy to see that, implementing the algorithm above, we could define an allocation [**a**]{} where node $V_2$ is allocated to all subtrees of size two; however, $V_2$ should be allocated at most once. This issue can be avoided by noting that internal nodes in [**V**]{} should appear in each [**a**]{} a number of times equal to their number of child nodes minus one, while leaf nodes, say $V' \in {\textbf{V}}{}$, should appear $|V'|-1$ times.
Hence, we complete each iteration by eliminating rows of $\mathbf{A}$ where this rule is violated. A second elimination rule is needed to account for the constraints imposed by the sampling time information: rows are eliminated when their assignments involve nodes labeled by a sampling time $s'$ “matched” to subtrees of [**g**]{} that have leaf branches terminating at a different sampling time. Algorithm \[all\] in the Appendix summarizes the above description. ![[]{data-label="allocations"}](allocations-eps-converted-to) Figure \[allocations\] gives examples of possible allocations of [**T**]{} to two different genealogies [**g**]{} and [**g**]{}’. The second genealogy [**g**]{}’ differs from [**g**]{} in that the order of the coalescent events $3$ and $6$ is inverted. [**g**]{} and [**g**]{}’ share the common allocation ${\textbf{a}}_1=(V_2,V_5,V_1,V_5,V_0,V_1,V_0,V_0,V_0)$; however, [**g**]{} has a second possible allocation ${\textbf{a}}_2=(V_5,V_2,V_5,V_1,V_1,V_0,V_0,V_0,V_0)$ that is not compatible with ${\textbf{g}}{}$’. This difference is due to the fact that $V_5$ has three descendants belonging to sampling group $s_1$, while [**g**]{} has two subtrees with $3$ leaves sampled at $s_1$, and [**g**]{}’ has only one. We note that singleton nodes also need to be allocated, both in ${\textbf{a}}_1$ and ${\textbf{a}}_2$. We will elaborate on this point in the next subsection. ### Likelihood Calculations {#fastlik} To calculate the likelihood, we assume the ISM, with mutations occurring according to a Poisson point process on the branches of [**g**]{} with rate $\mu$, the total mutation rate. To compute the likelihood we need to map mutations in [**T**]{} to branches of [**g**]{}, and this is done for each mapping ${\textbf{a}}_{i}$ of non-singleton nodes of [**T**]{} to subtrees of [**g**]{}.
For every $V$ in [**T**]{} such that $|V|>1$, we define ${\textbf{E}}_{V}$ as the set formed by the edges in ${\textbf{T}}$ that subtend singleton children of $V$ and, with the exception of $V=V_{0}$, ${\textbf{E}}_{V}$ in addition includes the edge that subtends $V$. For the example in Figure \[perfect\_phylo\](B), ${\textbf{E}}_{V_{1}}=\{E_{1},E_{3},E_{4}\}$. Let ${\textbf{V}}^*$ be the set of all $V \in {\textbf{V}}$ such that $|V|>1$, then the likelihood function is defined as $$\begin{aligned} \label{lik_2} P(\textbf{Y}\mid {\textbf{g}}, N_e,\mu)&=\sum_{i=1}^{\#{\textbf{a}}} P(\textbf{Y}, {\textbf{a}}{}_i \mid {\textbf{g}}, N_e, \mu)\nonumber\\ &= \sum_{i=1}^{\#{\textbf{a}}} \prod_{V \in {\textbf{V}}^*} P(V, {\textbf{E}}_{V}, {\textbf{a}}{}_i \mid {\textbf{g}}, N_e, \mu),\end{aligned}$$ where we recall that $\#{\textbf{a}}$ is the number of possible allocations, we have written $N_e=(N_e(t))_{t\geq 0}$, and $P(V, {\textbf{E}}_{V}, {\textbf{a}}{}_i \mid {\textbf{g}}, N_e, \mu)$ is the probability of observing the mutations of the ${\textbf{E}}_{V}$ edges along the corresponding branches of ${\textbf{g}}{}$ defined by the mapping ${\textbf{a}}_i$ as follows. If $V$ has no singleton child nodes, then ${\textbf{E}}_{V}=\{E\}$ and $$\label{intern_mut} P(V, \{E\}, {\textbf{a}}_{i} \mid {\textbf{g}}, N_e, \mu) \propto (\mu l)^{|E|} e^{-\mu \mathcal{T}},$$ where $l$ is the length of the branch in ${\textbf{g}}$ that subtends ${\textbf{g}}_{j}$, $j$ is the largest index such that ${\textbf{a}}_{i,j}=V$, and $\mathcal{T}$ denotes the length of the subtree in ${\textbf{g}}$ to which $V$ is mapped in ${\textbf{a}}_{i}$ (as described in Subsection \[sec:alloc\]). For example, considering $V_2$ in Figure \[sub\_all\], we have $\mathcal{T}_2= 2 t_n +(t_{n-2}-t_n)$ and $l=(t_{n-2}-t_n)$ is the length of the branch connecting vintage $1$ to vintage $3$. 
If node $V$ has singleton child nodes, $$\label{leaf_all} P(V, \{E, E_{ch_1}, \ldots, E_{ch_k} \}, {\textbf{a}}_{i} \mid {\textbf{g}},N_e, \mu) \propto (\mu l)^{|E|} e^{-\mu \mathcal{T}} \sum_{\textbf{R} \in \Pi({\textbf{E}}_{V})} \prod_{j=1}^k (\mu l_{R_j})^{|E_{ch_j}|},$$ where the first term on the r.h.s. is defined exactly as the quantity on the r.h.s. of Equation \[intern\_mut\], while the second term corresponds to the probability of all possible different matchings between $R_{1},\ldots,R_{k}$, the first $k$ indexes such that ${\textbf{a}}_{i,R_{j}}=V$, and $|E_{ch_1}|,|E_{ch_2}|,\ldots,|E_{ch_k}|$, the $k$ numbers of mutations observed on the edges $E_{ch_1}, \ldots, E_{ch_k}$ leading to the child nodes of $V$. In this expression, $\Pi({\textbf{E}}_{V})$ is the set of all possible such matchings **R**. Before defining $\Pi({\textbf{E}}_{V})$ more precisely, we make two observations. First, not all matchings are possible since not all leaf branches terminate at the same time (heterochronous sampling). Second, it is enough to consider the allocations that contribute to distinct likelihood values, *i.e.*, allocations for which the underlying samples are “distinguishable” in the sense that they have a different number of mutations. We define $\Pi({\textbf{E}}_{V})$ as the set of all possible “distinct matchings of numbers of observed singleton mutations to singleton branches”, that is, allocations which lead to distinct likelihood values. To construct $ \Pi({\textbf{E}}_{V})$, we first partition the singleton edges $E_{ch_1}, \ldots, E_{ch_k}$ according to the sampling times of the corresponding nodes $V_{ch_1}, \ldots, V_{ch_k}$. Let $k_i$ be the number of nodes in $\{V_{ch_1}, \ldots, V_{ch_k}\}$ with sampling time $s_i$, *i.e.*, the size of each subset of the partition. We then further partition these subsets by grouping together the edges carrying the same number of mutations (defined as $|E_{ch_1}|, \ldots, |E_{ch_k}|$).
For each given sampling time $s_j$, let $k^{(1)}_j, \ldots, k^{(m_j)}_j$ denote the cardinalities of the $m_j$ sub-subsets obtained by this procedure, so that $k_j=\sum_{h=1}^{m_j} k^{(h)}_j$. The cardinality of $\Pi({\textbf{E}}_{V})$ is then $$\label{perm} |\Pi({\textbf{E}}_{V})|=\prod_{j=1}^m \frac{k_j!}{k^{(1)}_j! \dots k^{(m_j)}_j!},$$ where the product in Equation \[perm\] is the number of permutations with repetition of the different edges that are compatible with the data in terms of sampling times and numbers of mutations carried. Note that Equation \[perm\] is not the same as Equation (6) in [@pal19] because here we account for the different sampling groups. It degenerates into Equation (6) in [@pal19] in the isochronous case. Lastly, we note that knowing a priori the full matrix $\mathbf{A}$ allows efficient computation of the likelihood via a sum-product algorithm. Indeed, for each $V\in {\textbf{V}}^*$ there may be several rows ${\textbf{a}}$ of $\mathbf{A}$ such that $P(V, {\textbf{E}}_{V}, {\textbf{a}}{} \mid {\textbf{g}}, N_e, \mu)$ is the same, due to the fact that $V$ is mapped to the same subtree in all these allocations. For such a $V$, one could compute the likelihood corresponding to these $r$ allocations ${\textbf{a}}'_1,\ldots,{\textbf{a}}'_r$ in the following way: $$\begin{aligned} \label{eq:sumprod} \sum_{i=1}^{r} & \prod_{V \in {\textbf{V}}^*} P(V, {\textbf{E}}_{V}, {\textbf{a}}'_i \mid {\textbf{g}}, N_e, \mu)\nonumber \\ & =P(V, {\textbf{E}}_{V}, {\textbf{a}}'_1 \mid {\textbf{g}}, N_e, \mu)\sum_{i=1}^{r} \prod_{V' \in {\textbf{V}}^*\setminus \{V\}} P(V', {\textbf{E}}_{V'}, {\textbf{a}}'_i \mid {\textbf{g}}, N_e, \mu).\end{aligned}$$ The exact sum-product formulation of Equation \[eq:sumprod\] is specific to the observed [**Y**]{} and $\mathbf{A}$.
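The count in Equation \[perm\] is a product of permutations-with-repetition, one factor per sampling time. A small Python helper (names ours) that computes $|\Pi({\textbf{E}}_{V})|$ from the (sampling time, mutation count) pairs of the singleton edges:

```python
from math import factorial
from collections import Counter

def n_distinct_matchings(singleton_edges):
    """Number of distinct matchings of singleton mutation counts to singleton
    branches. singleton_edges is a list of (sampling_time, n_mutations) pairs,
    one per singleton edge below the node."""
    by_time = {}
    for s, n_mut in singleton_edges:
        by_time.setdefault(s, []).append(n_mut)
    total = 1
    for muts in by_time.values():
        total *= factorial(len(muts))        # k_j! permutations within a group
        for c in Counter(muts).values():
            total //= factorial(c)           # divide out repeated mutation counts
    return total
```

For instance, three singleton edges at the same sampling time carrying $0$, $0$ and $2$ mutations, plus one edge at a later time, give $3!/(2!\,1!) \times 1 = 3$ distinct matchings.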
Bayesian Model and MCMC inference {#mcmc} ================================= In Section \[sec:hettaj\] we have introduced a new prior for genealogies and in Section \[lik\], we have explained how to compute the likelihood of heterochronous data [**Y**]{} generated by a Poisson process of mutations superimposed on this new genealogy. We finally need to specify a prior distribution on $(N_e(t))_{t\geq 0}$ to complete our Bayesian model. In this paper, we follow [@pal13], who place a Gaussian process (GP) prior on $(\log(N_e(t)))_{t\geq 0}$ (the logarithm is used to ensure that $N_e(t)\geq 0$ for all $t$). We thus have: $$\begin{aligned} \label{model} {\textbf{Y}}{}\mid {\textbf{g}}{},\mu, (N_e(t))_{t\geq 0},{\textbf{n}}{},{\textbf{s}}{} & \sim \text{Poisson process} \nonumber \\ {\textbf{g}}\mid (N_e(t))_{t\geq 0},{\textbf{s}}{}, {\textbf{n}}{} & \sim \text{Tajima heterochronous $n$-coalescent}\\ (\log( N_e(t)))_{t\geq 0} \mid \tau& \sim \text{GP} (0, C(\tau)) \nonumber\\ \tau &\sim {\text{Gamma}}(\alpha, \beta) \nonumber\end{aligned}$$ where $C(\tau)$ is the covariance function of the Gaussian process. As in [@pal13], for computational convenience we use Brownian motion with covariance elements $$\mathrm{Cov} (\log(N_e(t)), \log(N_e(t')))= \tau \min (t, t')$$ for any $t,t' >0$ as our GP prior. From Equation \[model\], the posterior distribution can be written as $$\label{posterior} \hspace{-1cm}\pi((\log(N_e(t)))_{t\geq 0}, \tau, {\textbf{g}}{}| {\textbf{Y}}{}, \mu) \propto P ({\textbf{Y}}| {\textbf{g}}{}, (\log(N_e(t)))_{t\geq 0}, \mu) \pi({\textbf{g}}{}|(\log(N_e(t)))_{t\geq 0}) \pi((\log(N_e(t)))_{t\geq 0}|\tau) \pi (\tau),$$ which we approximate via MCMC methods. Full conditionals are not available, and so we use Metropolis-within-Gibbs updates. At each MCMC iteration, we jointly update $((\log(N_e(t)))_{t\geq 0}, \tau)$ via a Split Hamiltonian Monte Carlo (HMC) [@shah14] suitably adapted to phylodynamic inference by [@lan15]; then we update the topology $g$ and ${\textbf{t}}{}$.
We propose two Metropolis steps to update $g$ and ${\textbf{t}}{}$; the two may also be combined into a single step. The transitions for $g$ and [**t**]{} are tailored to the Tajima $n$-coalescent genealogies. To update $g$, we employ the scheme in [@pal19]. To update ${\textbf{t}}{}$, we propose a new sampler, which allows us to propose branch lengths that account for the constraints imposed by the observed sampling times, an issue specific to heterochronous samples under the ISM assumption and detailed in the next subsection. Constraints imposed by the ISM hypothesis ----------------------------------------- Under the ISM hypothesis, mutations partition the observed sequences into two sets: the sequences that carry the mutations and the sequences that do not. This recursive partitioning of the sequences is graphically represented by [**T**]{}. As a consequence, not all topologies $g$ and not all vectors [**t**]{} are compatible with the data, *i.e.*, have positive posterior probability or density. The combinatorial constraints imposed by the ISM on the space of topologies are discussed in detail in [@cap19]. The constraints on [**t**]{} arise solely in the heterochronous case. First note that the definition of the Tajima heterochronous $n$-coalescent implies that there can be at most $n_1-1$ coalescence events before $s_2$, at most $n_1+n_2-1$ events before $s_3$, and so on. Moreover, if there are shared mutations between some (but not all) samples with different sampling times, the maximum number of coalescent events between the involved sampling times is further restricted. In the example of Figure \[perfect\_phylo\](A), there is a shared mutation $l_{3}$ between 3 samples with sampling time $s_{1}$ and a sample with sampling time $s_{2}>s_{1}$.
Out of the 7 samples obtained at time $s_{1}$, the 3 samples that share the $l_3$ mutation could coalesce first some time between $s_{1}$ and $s_{2}$ (at most $2$ coalescent events among the 4 sequences descending from node $V_{1}$), but they need to coalesce with the sample at time $s_2$ in node $V_1$ before they coalesce with the other 4 samples collected at time $s_{1}$ (those can coalesce at most 3 times between $s_{1}$ and $s_{2}$). Therefore, there are at most 5 coalescent events before $s_{2}$. To encode the constraints imposed by the sampling information, we define a vector **c** of length $m$, where the $i$th entry denotes the maximum number of coalescent events that can happen (strictly) before time $s_{i}$ for given [**Y**]{}, [**s**]{} and [**n**]{}. Trivially, $c_1=0$ because there are no samples before $s_1$. Let us stress that **c** does not define the number of coalescent events in a given interval (a quantity determined by [**t**]{}); rather, it is the maximum number of coalescent events that can happen before each sampling time to ensure compatibility with the data. In the example of Figure \[perfect\_phylo\](A), we have $\textbf{c}=(0,5)$. Note that $c_2$ is $5$ and not $n_1-1=6$. In the online supplementary material, we provide a greedy search algorithm to define **c**. Coalescent times updates ------------------------ Let $\Delta\mathbf{t}:=(t_{n}-t_{n+1}, \ldots, t_{2}-t_{3})$ be the vector of intercoalescence times, and $(\Delta t_i)_{i \in I}$ the subvector of elements of $\Delta\mathbf{t}$ at positions $I \subseteq\{1,\ldots,n-1\}$. The proposal is generated in three steps. First, we uniformly sample the number of intercoalescent times proposal moves – *i.e.*, the cardinality of $I$; then we uniformly choose which times to modify – *i.e.*, we define $I$; and lastly, we sample the proposals $(\Delta t_i)'_{i \in I}$.
The first two steps balance between fast exploration of the coalescent times state-space and a high acceptance probability – few changes are expected to lead to higher acceptance rates while many changes are expected to lead to faster exploration of the state space. In our implementation we limit the maximum possible number of intercoalescent times moves to a fixed number $Z \ll n-1$. Lastly, we sample new states $(\Delta t_i)'$, for $i \in I$ from a truncated normal with mean $\Delta t_i$ and standard deviation $ \sigma \Delta t_i$. The left tail is truncated by a parameter $lo_i$, and the right tail is left unbounded. Three reasons motivate this choice: it has positive support, it can be centered and scaled around the current $\Delta t_i$ using a single parameter $\sigma$, and we can set the lower bound $lo_i$ to ensure that only compatible times ${\textbf{t}}{}'$ are proposed. To set the values of $lo_i$, we rely on $\textbf{c}$, the vector that specifies the maximum number of coalescent events possible before each sampling time. We note that the elements of $\textbf{c}$ can be used to index coalescent times. In particular, $t_{n-c_i}$ denotes the time of the $(c_i+1)$th coalescent event. For example in Figure \[fig:hetTaj\], $t_{n-c_1}=t_{10}$ is the first coalescent event, and $t_{n-c_2}=t_5$ is the sixth coalescent event. Given $\textbf{c}$, $lo_i$ is set to $$\label{eq:lower_time} lo_i= \max_{j=1,\ldots,m} \{0,\{[s_j-(t_{n-c_j}-\Delta t_i)] \mathbbm{1}(i \leq c_j+1)\}\},$$ where $\mathbbm{1}(i \leq c_j+1)$ is an indicator function. Equation  ensures the proposal $t'_{n-c_j} \geq s_j$ for all $j$. Indeed, note that $t_{n-c_j}=\sum_{k=1}^{c_j+1}\Delta t_k$. Hence, if $(t_{n-c_j}-\Delta t_i)-s_j>0$ for any given $j$ such that $i \leq c_j+1$, then the proposed value of $(\Delta t_i)'$ could be zero and still $t'_{n-c_j}$ would be a compatible time. In this case, we do not need to impose any restriction on the lower bound of the truncated normal. 
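The compatibility condition (at most $c_j$ coalescent events strictly before $s_j$, i.e. the $(c_j+1)$th coalescent time at or after $s_j$) and the lower-truncated proposal can be sketched as follows. This is an illustrative Python stand-in, not the paper's implementation: the function names and toy times are our own, `truncnorm` is SciPy's, and the paper's full kernel additionally samples $|I|$ and $I$ uniformly.

```python
import numpy as np
from scipy.stats import truncnorm

def is_compatible(delta_t, c, s):
    """At most c[j] coalescent events may fall strictly before s[j]:
    the (c[j]+1)-th coalescent time must be at or after s[j]."""
    t = np.cumsum(delta_t)           # t[k] is the (k+1)-th coalescent time
    return all(t[cj] >= sj for cj, sj in zip(c, s))

def propose_delta_t(delta_t, idx, sigma, lo, rng):
    """Propose new intercoalescent times at positions idx from a normal
    centred at the current value (sd sigma*delta_t[i]), truncated below
    at lo[i] and unbounded above, so only compatible times are proposed."""
    out = np.array(delta_t, dtype=float)
    for i in idx:
        mean, sd = out[i], sigma * out[i]
        a = (lo[i] - mean) / sd      # lower bound in standard units
        out[i] = truncnorm.rvs(a, np.inf, loc=mean, scale=sd, random_state=rng)
    return out

# Toy example mirroring c = (0, 5): the first coalescent time must be
# >= s_1 and the sixth >= s_2 (times are illustrative, not from the paper).
rng = np.random.default_rng(1)
dt = [0.2, 0.1, 0.1, 0.1, 0.1, 0.3]
assert is_compatible(dt, c=[0, 5], s=[0.0, 0.5])
new = propose_delta_t(dt, idx=[1], sigma=0.02, lo=[0.0] * 6, rng=rng)
```

Setting each `lo[i]` from **c** as in the equation above guarantees that every proposed **t**′ passes the compatibility check, so no proposal is wasted on zero-posterior states.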
On the other hand, if the vector considered in has one or more positive values, the proposed value $(\Delta t_i)'$ should be large enough to ensure that for all sampling times $s_j$, there will never be more than $c_j$ coalescent events before $s_j$. In other words, we truncate the proposal distribution support to ensure the compatibility of [**t**]{}’. We discuss how to set $Z$ and $\sigma$ in Section \[sim\]. The transition density of coalescent times is given by $$\label{timeMH} k({\textbf{t}}{},{\textbf{t}}')= \frac{1}{Z} \binom{n-1}{|I|}^{-1} \prod_{i \in I}\text{Truncated} \, {\text{N}}(\Delta t_i, \sigma \, \Delta t_i, lo_i, \infty),$$ with $\text{Truncated} \, {\text{N}}(\Delta t_i, \sigma \, \Delta t_i, lo_i, \infty)$ denoting a truncated normal density function with mean $\Delta t_i$, standard deviation $\sigma \, \Delta t_i$, lower bound $lo_i$ and upper bound $\infty$. Simulations {#sim} =========== We explore the ability of our procedure to reconstruct $(N_e(t))_{t\geq 0}$ in simulation across a range of demographic scenarios which capture realistic and challenging population size trajectories encountered in applications. The code for simulations and inference is implemented in the `R` package `phylodyn`, which is available for download at `https://github.com/JuliaPalacios/phylodyn`. *Simulation setup.* Given [**n**]{}, [**s**]{}, and $(N_e(t))_{t\geq 0}$, we simulate genealogies under the Tajima heterochronous $n$-coalescent (Section \[sec:hettaj\]). Given a realized ${\textbf{g}}{}$ and fixed $\mu$, we draw $M$ mutations from a Poisson distribution with parameter $\mu L$ ($L$ is the length of the tree [**g**]{}: the sum of all branch lengths of ${\textbf{g}}{}$) and place them independently and uniformly at random along the branches of the timed genealogy. ${\textbf{Y}}_1$, ${\textbf{Y}}_2$ and [**T**]{} are then constructed as described in Section \[data\]. We simulate genealogies with three population scenarios: 1\.
A bottleneck (“bottleneck”): $$\label{bottle} N_e(t)=\begin{cases} 3 & \qquad \hbox{if } t \in [0,0.1),\\[-0.3cm] 0.1 & \qquad \hbox{if }t \in [0.1,0.3),\\[-0.3cm] 2 & \qquad \hbox{if }t \in [0.3,\infty). \end{cases}$$ 2\. An instantaneous drop (“drop”): $$\label{drop} N_e(t)=\begin{cases} 0.5 & \qquad \hbox{if }t \in [0,0.5),\\[-0.3cm] 2 & \qquad \hbox{if } t \in [0.5,\infty). \end{cases}$$ 3\. Two periods of constant population size with an exponential growth in between (“exp”): $$\label{exp} N_e(t)=\begin{cases} 10 & \qquad \hbox{if } t \in [0,0.1),\\[-0.3cm] 10\,\exp(2-20\,t) & \qquad \hbox{if }t \in [0.1,0.25),\\[-0.3cm] 0.5 & \qquad \hbox{if }t \in [0.25,\infty). \end{cases}$$ For each scenario, we generated genealogies with three numbers of leaves ($n =14,\, 35,\, 70$) and different ${\textbf{n}}, {\textbf{s}}{}$ as summarized in Table \[sim\_summary\]. The mutation parameter is varied to analyze the effect of the number of segregating sites on the quality of the estimation. We empirically assess the accuracy of our estimates with three commonly used criteria. The first one is the sum of relative errors (SRE): $$SRE=\sum^{k}_{i=1}\frac{|\widehat{N}_e(v_{i})-{N_e(v_{i})}|}{{N_e(v_{i})}},$$ where $(v_1, \ldots, v_k)$ is a regular grid of $k$ time points, $\widehat{N}_e(v_{i})$ is the posterior median of $N_e$ at time $v_{i}$ and $N_e(v_i)$ is the value of the true trajectory at time $v_i$. The second criterion is the mean relative width, defined by $$MRW=\frac 1 k \sum^{k}_{i=1}\frac{|\hat{N}_{97.5}(v_{i})-\hat{N}_{2.5}(v_{i})|}{N_e(v_{i})},$$ where $\hat{N}_{97.5}(v_{i})$ and $\hat{N}_{2.5}(v_{i})$ are respectively the $97.5\%$ and $2.5\%$ quantiles of the posterior distribution of $N_e(v_{i})$. Lastly, we consider the envelope measure defined by $$ENV= \frac 1 k \sum^{k}_{i=1}\mathbf{1}_{\{\hat{N}_{2.5}(v_{i})\leq N_e(v_{i}) \leq \hat{N}_{97.5}(v_{i})\}},$$ which measures the proportion of the curve that is covered by the 95% credible region.
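The three criteria translate directly into code. A minimal sketch (Python rather than the paper's `R`; inputs are grid-evaluated arrays of the posterior median, the 2.5% and 97.5% quantiles, and the true trajectory):

```python
import numpy as np

def sre(est_median, truth):
    """Sum of relative errors of the posterior median over the grid."""
    return np.sum(np.abs(est_median - truth) / truth)

def mrw(q_hi, q_lo, truth):
    """Mean relative width of the 95% credible region."""
    return np.mean(np.abs(q_hi - q_lo) / truth)

def env(q_hi, q_lo, truth):
    """Envelope: fraction of grid points where the truth is covered."""
    return np.mean((q_lo <= truth) & (truth <= q_hi))

# Tiny two-point example.
truth = np.array([1.0, 2.0])
median = np.array([1.1, 1.8])
lo95, hi95 = np.array([0.5, 1.9]), np.array([1.5, 2.5])
scores = (sre(median, truth), mrw(hi95, lo95, truth), env(hi95, lo95, truth))
```

Low SRE and MRW with ENV near 0.95 indicate an accurate and well-calibrated reconstruction; as noted below, ENV alone can be inflated by an overly wide credible region.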
In this simulation study we fix $k=100$, $v_1=0$ and $v_k=.8 \, t_2$. *MCMC tuning parameters.* The posterior approximation is sensitive to both the initial values of ([**g**]{}, $(N_e(t))_{t\geq 0}$) and the MCMC parameters. We initialize [**g**]{} with the serial UPGMA [@dru00]. In addition to the usual MCMC parameters such as chain length, burn-in and thinning, there are three parameters specific to our method: the HMC step size $\epsilon$, the maximum number of intercoalescent times proposals ($Z$), and the standard deviation $\sigma$ that parametrizes the transition kernel $k({\textbf{t}}{},{\textbf{t}}{'})$. While all three parameters contribute to the mixing of the Markov chain and acceptance rates, in our experience, $\epsilon$ and $\sigma$ are the most influential. In settings similar to the ones analyzed here (time scale, type of trajectory patterns, and mutation rate), parameter values $\epsilon \in [0.03, 0.09]$, $Z \in \{1,2,3\}$, and $\sigma \in [0.01,0.03]$ lead to a similar mixing of the Markov chain and accuracy (w.r.t. the metrics considered). We based these guidelines on extensive simulation studies on the $9$ datasets considered (Table \[sim\_summary\]), which we believe to be representative of a broad set of settings encountered in applications. In our simulations, we set $\epsilon=0.07$, $Z=2$, and $\sigma=0.02$ for the “bottleneck" and “drop" trajectories, and we set $\epsilon=0.08$, $Z=2$, and $\sigma=0.02$ for the “exp" trajectories. Inference is carried out with $3 \times 10^5$ iterations for $n=14$, $4 \times 10^5$ iterations for $n=35$ and $5 \times 10^5$ iterations for $n=70$. Posterior distributions are approximated after a burn-in period of $5 \times 10^4$ iterations and after thinning every $20$ iterations. *Comparison to other methods.* To our knowledge, no other method simultaneously deals with heterochronous data, assumes the ISM, samples Tajima genealogies and does Bayesian nonparametric inference.
All available methodologies rely on the Kingman coalescent coupled with finite sites mutation models, with the Jukes–Cantor model [@juk69] being the closest to the ISM. Hence, it is not fully possible to isolate the impact of using Tajima’s genealogies in lieu of Kingman’s. Nevertheless, we include some alternative estimates for completeness. We compare our results to two popular methodologies implemented in BEAST [@drummond2012bayesian]: the Bayesian Skyline (SKY) [@dru05] and the Gaussian Markov Random Field Skyride (GMRF) [@min08]. For the SKY and GMRF, we use the Jukes–Cantor mutation model [@juk69], and approximate posterior distributions with $10^7$ iterations after a burn-in period of $10^6$ iterations and after thinning every $10^3$ iterations. We also compare our results to an oracle estimator that infers $(N_e(t))_{t\geq 0}$ from the “true” [**g**]{}. The oracle estimation is obtained using the method of [@pal12] with the same Gaussian process prior on $(N_e(t))_{t\geq 0}$. Note that the goal of the comparison is not to determine whether our method is superior, but rather to see whether the performance of Tajima-based inference is in line with the results obtained through two popular Kingman-based methods in some challenging population scenarios. *Results.* The results of the nine curves estimated with our method are plotted in Figure \[fig:sim1\]. The supplementary material includes the plots for SKY and GMRF. True trajectories are depicted as dashed lines, posterior medians as black lines and $95\%$ credible regions as gray shaded areas. Table \[tab:sim1\] summarizes SRE, MRW, and ENV for the $9$ simulated data sets achieved with our method (“Tajima"), SKY, GMRF, and “Oracle". SKY estimates for “Exp" $n=14$ are not included because we could not obtain convergent runs. Accuracy increases with sample size: credible regions shrink substantially in all three scenarios. As $n$ increases, posterior medians track more closely the true trajectories.
It is well known in the literature that abrupt population size changes are the most difficult to recover. Estimates in the “drop" and “bottleneck" scenarios are less accurate for $n=14$, as exhibited by the wider credible regions. We recover the bottleneck (first row, first column panel), but we do not recover the instantaneous drop (first row, third column panel). \[tab:sim\_results\] ![[]{data-label="fig:sim1"}](figure1_simulations-eps-converted-to) Table \[tab:sim\_results\] largely confirms the analysis of Figure \[fig:sim1\]. Recall that SKY and GMRF rely on the Kingman coalescent rather than the Tajima coalescent; SKY and GMRF assume a different mutation model, whereas Oracle relies on knowing the true [**g**]{} rather than computing its posterior. First, no method unequivocally outperforms the others. The oracle is most frequently the method with the best overall performance. Surprisingly, the advantage of knowing [**g**]{} is not as big as one would expect. Both SKY and GMRF have much narrower credible regions for the bottleneck trajectory. On the other hand, Tajima has the best overall performance in the “drop" trajectory (low SRE and MRW). Note that $100\%$ ENV is not always an indicator of accuracy because it can be achieved with a very wide credible region. Lastly, the fact that no method clearly outperforms the others is consistent with theoretical expectations, as we are comparing two resolutions of the same ancestral process. Reassuringly, Tajima-based estimates are competitive with Kingman-based estimates. The current simulation study cannot single out the benefit of employing Tajima vs Kingman topologies because no available implementations rely on an identical mutation model and MCMC scheme. Such an analysis is beyond the current scope and will be the subject of future research.
North American Bison data {#app} ========================= ![image](bison_data-eps-converted-to){width=".5\textwidth"} Recent advances in molecular and sequencing technologies allow recovering genetic material from ancient specimens [@paa04]. In this section, we analyze modern and ancient bison sequences. These mammals offer a case study of a population experiencing growth followed by a decline. It was a long-standing question whether the drop was instigated by human intervention or by environmental changes. [@sha04] first reconstructed the genetic history of Beringian bison. Their estimate for the start of the decline supports the environmental hypothesis; in particular, they suggest that the decline may be due to environmental events preceding the last glacial maximum (LGM). This dataset has been the subject of extensive research in the past decade. We analyze new bison data recently described by [@fro17]. We fit our coalescent model to these sequences and estimate population size dynamics. To our knowledge, there is no phylodynamic analysis of this data set in the literature. Two motivations underlie this study: first, [@sha04] sequences include $602$ base pairs from the mitochondrial control region, while [@fro17] provide the full mitochondrial genome ($16322$ base pairs after alignment); second, we are interested in testing whether the previously published overwhelming evidence in favor of the environmentally induced population decline is confirmed by this new data. The [@fro17] data comprise $50$ sequences ($14$ modern and $36$ ancient). DNA was extracted from bison specimens from Canada (28, three locations), USA (9, two locations), Siberia (7, three locations), and unknown locations (5). It includes sequences of $37$ (extinct ancient bison), $1$ (extinct ancient bison), $11$ (modern bison), and $4$ (control group). We selected $38$ out of $50$ sequences.
We removed the control group sequences and the Siberian sequences to analyze samples from a single population ([@fro17] (Figure 1) suggested population structure). We removed one sequence because it has $3803$ ambiguities, *i.e.*, sites in a sequence that cannot be unambiguously assigned to a unique nucleotide base at positions where all the other samples have valid entries. Out of the $94$ observed polymorphic sites, we retain $91$ sites compatible with the ISM assumption. To encode the data in the $0-1$ incidence matrix representation ${\textbf{Y}}_1$, we use the root of the UPGMA tree reconstructed using the `R` function `upgma` (`phangorn`) as the ancestral state. Figure \[fig:bison\_data\] displays the perfect phylogeny [**T**]{} and the vectors [**s**]{} and [**n**]{}. For our inference procedure, we set $\epsilon=0.09$, $Z=2$, $\sigma=0.02$, and approximated the posterior distribution with $1.5 \times 10^6$ iterations after a burn-in of $8 \times 10^5$ and after thinning every $200$ iterations. As a comparison, we ran GMRF on BEAST and approximated the posterior distribution with $1 \times 10^7$ iterations after a burn-in of $1 \times 10^6$ and after thinning every $1000$ iterations. We used the default values for all GMRF hyperparameters. We initialized both methods with the same genealogy (serial UPGMA). To compute the likelihood, we used the BEAST mutation rate estimate per site per year of $2.52 \times 10^{-8}$. ![[]{data-label="fig:bison"}](bison_figure-eps-converted-to) The first panel of Figure \[fig:bison\] plots a summary of the effective population size pattern recovered by a recent analysis of the [@sha04] data by [@fau20]. While the precise timings and the details of the trajectory differ from method to method, the broad patterns are consistent. The population peak is estimated to be between 41.6 and 47.3 kya. The timing of the start of the decline is the main feature of interest.
We plot the posterior medians (black lines) of $(N_e(t))_{t\geq 0}$ along with the $95\%$ credible regions (gray area) obtained from posterior samples by sampling Tajima’s trees (“Tajima", second panel) and Kingman’s trees (“GMRF", third panel). Both our method and GMRF recover the pattern described in the first panel. We detect the population decline only up to about $60$ kya; further back in time, the median trajectory is quite flat while the credible regions are wide. This can be explained by the fact that we have no samples from $42$ kya to $128.5$ kya. On the other hand, GMRF detects the population decline more clearly. The GMRF median time estimate of the population peak is $29.6$ kya, while the median time estimate for our method is $29.7$ kya. Thus, the estimates of the main event of interest, the population decline, are practically identical. The estimates obtained by analyzing the $2017$ data differ substantially from the estimate of a population peak between 41.6 and 47.3 kya obtained by analyzing the $2004$ data. The LGM in the Northern hemisphere reached its peak between $26.5$ and $19$ kya [@cla09]. Hence, the analysis of the $2017$ data still supports the hypothesis of a decline that initiated before the LGM. However, our estimates suggest an initial decline much closer to the LGM peak than the analysis of the $2004$ data. Human arrival in North America via the Beringian bridge route is estimated to have happened around $14-16$ kya [@lla16]. Therefore, given this mismatch in timing, the human-induced decline hypothesis finds little support in our analysis of this new dataset as well. SARS-CoV-2 {#covid} ========== ![[]{data-label="fig:covid"}](covid_figure-eps-converted-to) SARS-CoV-2 is the virus causing the 2019–2020 pandemic of novel coronavirus disease, and it is of interest to explore the utility of viral molecular sequences for surveillance during the outbreak of an epidemic.
Here, we analyze $123$ whole genome sequences collected in France, and $32$ sequences collected in Germany, that were made publicly available in the GISAID EpiCov database [@shu2017gisaid]. We note that our estimates may not reflect the effective population sizes of the whole countries, but simply the local effective population trajectories of the locations in which our samples were obtained. We only analyzed high-coverage sequences with more than 25000 base pairs and performed multiple sequence alignment with MAFFT [@katoh2013mafft]. To encode the nucleotide data as binary sequences ${\textbf{Y}}_1$, we used the GenBank MN908947 [@wu20] sequence as the ancestral reference and eliminated sites that were not present in the ancestral sequence. The numbers of variable sites observed are $137$ and $45$ for France and Germany respectively. The observed patterns of mutations in both datasets are compatible with the ISM (no site was further removed). The GISAID reference numbers of the sequences included in this study and the data access acknowledgment are included in the supplementary material. We note that observed differences may be caused by sequencing errors, which are ignored in our study. The heat maps included in each panel of Figure \[fig:covid\] show the sampling frequency information. In the French dataset, $109$ out of $123$ samples were collected in March (at least one sample every day from $03/01/20$ to $03/22/20$), $9$ in February (spread over $5$ different dates), $5$ in January (spread over $3$ days, oldest sample dated $01/23/20$). In the German dataset, $25$ out of $32$ samples were collected in March (spread over $7$ different dates and $03/16/20$ last sampling day), $6$ in February (spread over $4$ dates), $1$ in January (oldest sample $01/28/20$). We include in each dataset the reference sequence.
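The binary encoding against an ancestral reference can be sketched as follows. This is a simplified illustration with toy sequences: the real pipeline also drops sites absent from the reference, handles ambiguous bases, and filters ISM-incompatible sites.

```python
import numpy as np

def encode_binary(seqs, ref):
    """Encode aligned sequences as a 0/1 incidence matrix relative to an
    ancestral reference: 1 marks a mismatch (derived allele); only
    variable (segregating) sites are kept."""
    M = np.array([[int(b != r) for b, r in zip(s, ref)] for s in seqs])
    variable = M.any(axis=0)
    return M[:, variable]

# Toy alignment: two segregating sites, at positions 0 (A/T) and 3 (T/A).
seqs = ["ACGT", "ACGA", "TCGA"]
Y1 = encode_binary(seqs, ref="ACGT")
```

Each column of the resulting matrix corresponds to one mutation under the ISM, which is what the perfect-phylogeny construction of [**T**]{} requires.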
For our inference procedure, we set $\epsilon=0.11$, $Z=2$, $\sigma=0.02$, and approximate the posterior distribution with $1.4 \times 10^6$ iterations after a burn-in of $8 \times 10^5$ and after thinning every $100$ iterations. For comparison, we ran GMRF on BEAST assuming the HKY mutation model [@hky], as proposed in previous studies [@scire2020phylodynamic], and approximate the posterior distribution with $5 \times 10^7$ iterations after a burn-in of $5 \times 10^6$ and after thinning every $1000$ iterations. We used the default values for all GMRF hyperparameters. We initialized both methods with the serial UPGMA genealogy [@dru00]. BEAST estimates a mutation rate of $5.99 \times 10^{-4}$ mutations per site per year in the French dataset, and $7.41 \times 10^{-4}$ mutations per site per year in the German dataset. Our own estimate follows the method discussed by [@rambaut2016exploring]: we regress the Hamming distance of the sequences to the ancestral reference sequence on the time difference between the sampling times and the reference sampling time. We estimated a mutation rate of $1.07 \times 10^{-3}$ mutations per site per year in the French dataset, and $8.54 \times 10^{-4}$ mutations per site per year in the German dataset. We show the estimates of effective population size with our method in the first column of Figure \[fig:covid\] and with BEAST in the second column. Results for Germany correspond to the first row and for France to the second row. Both analyses of the French dataset exhibit exponential growth from mid-December of $2019$ to the end of February (the Tajima estimate of the median population peak is 2020/02/29, the GMRF estimate is 2020/03/01). Following the exponential growth, both methods suggest a decline. Both analyses of the German dataset recover nearly constant trajectories, possibly due to sampling time concentration in mid-March and spatial sampling concentration in Duesseldorf (see online supplementary material for details). A final remark is in order.
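The root-to-tip regression used for our mutation rate estimate reduces to a one-line least-squares fit. A minimal sketch with made-up toy numbers (not the SARS-CoV-2 data); the function name is our own:

```python
import numpy as np

def root_to_tip_rate(hamming, times, genome_length):
    """Per-site, per-year mutation rate from regressing the Hamming
    distance to the ancestral reference on the sampling-time offset
    (a simplified version of the root-to-tip approach)."""
    slope, _ = np.polyfit(times, hamming, 1)   # mutations per year
    return slope / genome_length

# Toy data: 3 extra mutations accumulate per year on a 30,000-site genome,
# so the estimated rate is 3/30000 = 1e-4 per site per year.
t = np.array([0.0, 0.2, 0.5, 1.0])            # years since reference
d = np.array([0.0, 0.6, 1.5, 3.0])            # Hamming distances
rate = root_to_tip_rate(d, t, 30000)
```

The slope of the regression line is the clock rate in mutations per year; dividing by the alignment length gives the per-site rate used in the likelihood.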
Our estimates should be interpreted as estimates of genetic diversity over time and not as numbers of infections. Our model ignores recombination, population structure and selection. Viruses tend to exhibit antigenic drift and selective sweeps, and tend to cluster spatially following migration events [@ram08]. All these aspects may hinder the use of coalescent-based models to analyze viral population size dynamics. Indeed, the scientific knowledge on this virus is still limited and the validity of our model assumptions for SARS-CoV-2 is an active area of research. Discussion {#concl} ========== We have introduced a new methodology for Bayesian nonparametric inference of population size trajectories from heterochronous DNA sequences collected at a single non-recombining locus. The main focus of this work is scalability. In this respect, we developed a fast alternative to Kingman’s coalescent that can be used for nonparametric inference from serially sampled sequences. We also developed a fast algorithm to compute the likelihood of Tajima genealogies, which is in itself a relevant contribution to the literature. We applied our method to a recent data set including modern and ancient bison sequences. There has been considerable interest in determining whether the decline in the bison population was human-induced or climate-induced. Genetic evidence supported the environmental hypothesis, estimating the population peak to be approximately $45$ kya. Our analysis reconstructed a similar population size pattern. However, we estimated the peak to be about $29.5$ kya. These analyses confirm that the population decline started sometime before the LGM. We believe that this brings further genetic evidence to the environmentally induced population decline hypothesis. This paper makes important steps in the direction of more scalable coalescent-based inference. However, the Tajima heterochronous $n$-coalescent has some limitations which need to be addressed.
An obvious one is that we do not model population structure. In the ancient bison application, we removed the Beringian sequences, keeping only North-American ones, because population structure violated the assumption of the standard coalescent, and consequently of any of its “resolutions”. We have also stressed the importance of this feature in the analysis of viral data. In addition, throughout the paper, we assumed the infinite sites model of mutation. This prevented us from analyzing the original bison data of [@sha04], as well as many other data that violate the ISM assumption. Given the promise of Tajima-based inference shown here, incorporating other mutation models seems to be an interesting avenue of research. Appendix {#app:algo .unnumbered} ========= **Inputs:** [**T**]{}, [**s**]{} **Output:** $A$ 1. Initialize $A=(V_0,\ldots,V_0)$ 2. **For** $i=n-2$ to $1$ **do** 1. Define $A(i)$, the unique nodes in the $i$th column of $A$ 2. **For all** $V \in A(i)$ **do** 1. Define $T_V$, the set of (non-singleton) child nodes of $V$ having $|g_i|$ descendants 2. Include $V$ in $T_V$ if it has more than two child nodes 3. Define $I$, the set of vintages corresponding to all subtrees of $g_i$ 4. **If** $|T_V|=0$: do nothing **Else if** $|T_V|=1$: set column $A_V(\cdot,I)$ equal to $T_V$ **Else if** $|T_V|>1$: copy $A_V(\cdot,I)$ $|T_V|-1$ times, attach the copies to $A$ and set each copy equal to one element of $T_V$ 5. Eliminate rows in $A$ where $V$ appears too frequently (rule in the paper) 6. Eliminate rows not compatible with [**s**]{} and [**t**]{} 3. Return $A$. Cappello, L. & Palacios, J. A. 2020, ‘Sequential importance sampling for multi-resolution [Kingman]{}-[Tajima]{} coalescent counting’, [*Annals of Applied Statistics*]{} [**in press**]{}. Clark, P. U., Dyke, A. S., Shakun, J. D., Carlson, A. E., Clark, J., Wohlfarth, B., Mitrovica, J. X., Hostetler, S. W. & McCabe, A. M. 2009, ‘The last glacial maximum’, [ *Science*]{} [**325**]{}(5941), 710–714. Disanto, F. &
Wiehe, T. 2013, ‘Exact enumeration of cherries and pitchforks in ranked trees under the coalescent model’, [*Mathematical biosciences*]{} [**242**]{}(2), 195–200. Drummond, A. J., Rambaut, A., Shapiro, B. & Pybus, O. G. 2005, ‘Bayesian coalescent inference of past population dynamics from molecular sequences’, [*Molecular biology and evolution*]{} [**22**]{}(5), 1185–1192. Drummond, A. & Rodrigo, A. G. 2000, ‘Reconstructing genealogies of serial samples under the assumption of a molecular clock using serial-sample [UPGMA]{}’, [ *Molecular Biology and Evolution*]{} [**17**]{}(12), 1807–1815. Drummond, A., Suchard, M., Xie, D. & Rambaut, A. 2012, ‘Bayesian phylogenetics with [BEAU]{}ti and the [BEAST]{} 1.7’, [*Molecular Biology and Evolution*]{} [**29**]{}(8), 1969–1973. Faulkner, J. R., Magee, A. F., Shapiro, B. & Minin, V. N. 2020, ‘Horseshoe-based [Bayesian]{} nonparametric estimation of effective population size trajectories’, [ *Biometrics*]{} [**in press**]{}. Felsenstein, J. & Rodrigo, A. G. 1999, [C]{}oalescent approaches to [HIV]{} population genetics, [*in*]{} ‘The [E]{}volution of [HIV]{}’, [Johns Hopkins University]{} Press, pp. 233–272. Froese, D., Stiller, M., Heintzman, P. D., Reyes, A. V., Zazula, G. D., Soares, A. E., Meyer, M., Hall, E., Jensen, B. J., Arnold, L. J. et al. 2017, ‘Fossil and genomic evidence constrains the timing of bison arrival in [North]{} [America]{}’, [ *Proceedings of the National Academy of Sciences*]{} [**114**]{}(13), 3457–3462. Griffiths, R. C. & Tavare, S. 1994, ‘Sampling theory for neutral alleles in a varying environment’, [*Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences*]{} [**344**]{}(1310), 403–410. Gusfield, D. 1991, ‘Efficient algorithms for inferring evolutionary trees’, [*Networks*]{} [**21**]{}(1), 19–28. Hasegawa, M., Kishino, H. & Yano, T.
1985, ‘Dating of the human-ape splitting by a molecular clock of mitochondrial [DNA]{}’, [ *Journal of Molecular Evolution*]{} [**2**]{}, 160–164. Jukes, T. H. & Cantor, C. R. 1969, ‘Evolution of protein molecules’, [*Mammalian protein metabolism*]{} [ **3**]{}(21), 132. Katoh, K. & Standley, D. M. 2013, ‘[MAFFT]{} multiple sequence alignment software version 7: improvements in performance and usability’, [*Molecular biology and evolution*]{} [ **30**]{}(4), 772–780. Kimura, M. 1969, ‘The number of heterozygous nucleotide sites maintained in a finite population due to steady flux of mutations’, [*Genetics*]{} [**61**]{}(4), 893. Kingman, J. F. 1982[*a*]{}, ‘On the genealogy of large populations’, [*Journal of Applied Probability*]{} [ **19**]{}(A), 27–43. Kingman, J. F. C. 1982[*b*]{}, ‘The coalescent’, [*Stochastic processes and their applications*]{} [ **13**]{}(3), 235–248. Lan, S., Palacios, J. A., Karcher, M., Minin, V. N. & Shahbaba, B. 2015, ‘An efficient [Bayesian]{} inference framework for coalescent-based nonparametric phylodynamics’, [ *Bioinformatics*]{} [**31**]{}(20), 3282–3289. Llamas, B., Fehren-Schmitz, L., Valverde, G., Soubrier, J., Mallick, S., Rohland, N., Nordenfelt, S., Valdiosera, C., Richards, S. M., Rohrlach, A. et al. 2016, ‘Ancient mitochondrial [DNA]{} provides high-resolution time scale of the peopling of the [Americas]{}’, [ *Science advances*]{} [**2**]{}(4), e1501385. Minin, V. N., Bloomquist, E. W. & Suchard, M. A. 2008, ‘Smooth skyride through a rough skyline: Bayesian coalescent-based inference of population dynamics’, [*Molecular biology and evolution*]{} [**25**]{}(7), 1459–1471. P[ä]{}[ä]{}bo, S., Poinar, H., Serre, D., Jaenicke-Despr[é]{}s, V., Hebler, J., Rohland, N., Kuch, M., Krause, J., Vigilant, L. & Hofreiter, M. 2004, ‘Genetic analyses from ancient [DNA]{}’, [*Annu. Rev. Genet.*]{} [**38**]{}, 645–679. Palacios, J. A. & Minin, V. N.
2012, Integrated nested [Laplace]{} approximation for [Bayesian]{} nonparametric phylodynamics, [*in*]{} ‘Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence’, UAI’12, AUAI Press, Arlington, Virginia, United States, pp. 726–735. Palacios, J. A. & Minin, V. N. 2013, ‘Gaussian process-based [Bayesian]{} nonparametric inference of population size trajectories from gene genealogies’, [ *Biometrics*]{} [**69**]{}(1), 8–18. Palacios, J. A., V[é]{}ber, A., Cappello, L., Wang, Z., Wakeley, J. & Ramachandran, S. 2019, ‘Bayesian estimation of population size changes by sampling [Tajima]{}’s trees’, [*Genetics*]{} [**213**]{}(2), 967–986. Parag, K. V. & Pybus, O. G. 2019, ‘Robust design for coalescent model inference’, [*Systematic biology*]{} [**68**]{}(5), 730–743. Rambaut, A., Lam, T. T., Max Carvalho, L. & Pybus, O. G. 2016, ‘Exploring the temporal structure of heterochronous sequences using [TempEst]{} (formerly [Path-O-Gen]{})’, [*Virus evolution*]{} [**2**]{}(1), vew007. Rambaut, A., Pybus, O. G., Nelson, M. I., Viboud, C., Taubenberger, J. K. & Holmes, E. C. 2008, ‘The genomic and epidemiological dynamics of human influenza [A]{} virus’, [ *Nature*]{} [**453**]{}(7195), 615–619. Sainudiin, R., Stadler, T. & V[é]{}ber, A. 2015, ‘Finding the best resolution for the [Kingman]{}–[Tajima]{} coalescent: theory and applications’, [*Journal of Mathematical Biology*]{} [**70**]{}(6), 1207–1247. Scire, J., Vaughan, T. G. & Stadler, T. 2020, ‘Phylodynamic analyses based on 93 genomes’. Shahbaba, B., Lan, S., Johnson, W. O. & Neal, R. M. 2014, ‘Split [Hamiltonian]{} [Monte]{} [Carlo]{}’, [*Statistics and Computing*]{} [**24**]{}(3), 339–349. Shapiro, B., Drummond, A. J., Rambaut, A., Wilson, M. C., Matheus, P. E., Sher, A. V., Pybus, O. G., Gilbert, M. T. P., Barnes, I., Binladen, J. et al. 2004, ‘Rise and fall of the [Beringian]{} steppe bison’, [*Science*]{} [**306**]{}(5701), 1561–1565. Shu, Y. & McCauley, J.
2017, ‘GISAID: Global initiative on sharing all influenza data – from vision to reality’, Eurosurveillance 22(13).

Tajima, F. 1983, ‘Evolutionary relationship of DNA sequences in finite populations’, Genetics 105(2), 437–460.

Watterson, G. 1975, ‘On the number of segregating sites in genetical models without recombination’, Theoretical Population Biology 7(2), 256–276.

Wu, F., Zhao, S., Yu, B., Chen, Y.-M., Wang, W., Song, Z.-G., Hu, Y., Tao, Z.-W., Tian, J.-H., Pei, Y.-Y. et al. 2020, ‘A new coronavirus associated with human respiratory disease in China’, Nature 579(7798), 265–269.

[**SUPPLEMENTARY MATERIAL**]{}

Algorithm Description: Augmented Perfect Phylogeny.

:   The algorithm below uses Gusfield’s perfect phylogeny as input, duplicates nodes corresponding to haplotypes that are sampled more than once, and returns the augmented perfect phylogeny [**T**]{}.

**Inputs:** [**T**]{}’ [@gus91], [**s**]{}, ${\textbf{Y}}_2$

**Output:** [**T**]{}

1.  **For** $i=1$ to $k$ **do**

    **If** $h_{i}$ is observed at multiple sampling times (from ${\textbf{Y}}_2$) \[let w.l.o.g. $r$ be the number of sampling groups in which $h_i$ is observed, and $s_{i_1},\dots,s_{i_r}$ the corresponding sampling times\]:

    1.  Take the leaf node $V'$ in [**T**]{}’ labeled by $h_{i}$ (each haplotype labels a unique node in Gusfield’s [**T**]{}’)

    2.  **If** $|E'|=0$: make $r-1$ copies of $V'$ ($r-1$ nodes with unlabeled edges connecting them to the parent of $V'$). Then label the $r$ resulting nodes uniquely with the pairs $(h_i,s_{i_1}),\dots,(h_i,s_{i_r})$

        **Else if** $|E'|\geq 1$: create $r$ new nodes with unlabeled edges connecting them to $V'$. Then label each of these nodes uniquely with a pair $(h_i,s_{i_1}),\dots,(h_i,s_{i_r})$

    **Else** ($h_{i}$ is observed at a single sampling time, from ${\textbf{Y}}_2$):

    1.  Identify $V'$ in [**T**]{}’ labeled $h_{i}$

    2.
Label $V'$ with the pair ($h_{i}$, its corresponding sampling time)

2.  Return [**T**]{}.

Algorithm Description: Computing **c**.

:   We compute **c** through a greedy search. The idea is simple: it is not possible to build a compatible topology $g$ conditionally on an incompatible vector [**t**]{}. We initially assume that the ISM imposes no constraints on [**t**]{} and check whether a compatible topology can be built; if it can, the ISM indeed imposes no constraints, and if not, constraints must be added. We continue iteratively until we manage to sample a compatible $g$. To carry out this process, we consider one sampling group at a time. We define a vector **add** of length $m$ whose $i$th entry is the number of coalescent events that happen before $s_i$. Note that if we are interested in sampling $g$ (ignoring branch information), **add** is the only time information we need. We can sample compatible $g$’s through a simple extension of Algorithm 2 in [@cap19]; we refer to that paper for details.

**Inputs:** [**T**]{}, [**s**]{}

**Output:** **c**

1.  Initialize $\textbf{c}=(n_1-1,n_1+n_2-1,\dots,\sum_{i=1}^{m-1}n_i-1,n-1)$

2.  **For** $i=1$ to $m-1$ **do**

    (a) Set $\textbf{add}=(0,\ldots,add_i=0,\;add_{i+1}=\sum_{j=1}^{i}n_j-1,\ldots,add_{m}=\sum_{j=1}^{i}n_j-1)$

    (b) Given **add**, try to sample a compatible topology $g$

    (c) **If** $g$ is compatible: set $c_i=add_{i+1}$. **Else**: set $add_{i+1}=add_{i+1}-1,\ldots,add_{m}=add_m-1$ and return to (b)

3.  Return $\textbf{c}$.

Simulation study:

:   Plots of the estimates obtained from BEAST with the GMRF and SKY methods for the examples discussed in Section \[sim\].

![[]{data-label="fig:sim2"}](figure_simulations_appendix-eps-converted-to)

SARS-CoV-2 Molecular Data Description:

:   Data set used in the study in Section \[covid\]. We acknowledge the following laboratories for submitting sequences to GISAID (gisaid.org):

-   Charité Universitätsmedizin Berlin, Institute of Virology.
Victor M Corman, Julia Schneider, Talitha Veith, Barbara Mühlemann, Markus Antwerpen, Christian Drosten, Roman Wölfel.

-   Bundeswehr Institute of Microbiology. Mathias C Walter, Markus H Antwerpen and Roman Wölfel.

-   Center of Medical Microbiology, Virology, and Hospital Hygiene, University of Duesseldorf. Ortwin Adams, Marcel Andree, Alexander Dilthey, Torsten Feldt, Sandra Hauka, Torsten Houwaart, Björn-Erik Jensen, Detlef Kindgen-Milles, Malte Kohns Vasconcelos, Klaus Pfeffer, Tina Senff, Daniel Strelow, Jörg Timm, Andreas Walker, Tobias Wienemann.

-   CNR Virus des Infections Respiratoires - France SUD. Antonin Bal, Gregory Destras, Gwendolyne Burfin, Solenne Brun, Carine Moustaud, Raphaelle Lamy, Alexandre Gaymard, Maude Bouscambert-Duchamp, Florence Morfin-Sherpa, Martine Valette, Bruno Lina, Laurence Josset.

-   National Reference Center for Viruses of Respiratory Infections, Institut Pasteur, Paris. Mélanie Albert, Marion Barbet, Sylvie Behillil, Méline Bizard, Angela Brisebarre, Flora Donati, Fabiana Gambaro, Etienne Simon-Lorière, Vincent Enouf, Maud Vanpeene, Sylvie van der Werf, Lèa Pilorge.

-   Laboratoire Virpath, CIRI U111, UCBL1, INSERM, CNRS, ENS Lyon. Olivier Terrier, Aurélien Traversier, Julien Fouret, Yazdan Yazdanpanah, Xavier Lescure, Alexandre Gaymard, Bruno Lina, Manuel Rosa-Calatrava.

A description of all sequence sampling locations and dates follows.
| gisaid\_epi\_isl | date | country | division |
|------------------|------------|---------|----------|
| EPI\_ISL\_412912 | 2020-02-25 | Germany | Baden-Wuerttemberg |
| EPI\_ISL\_406862 | 2020-01-28 | Germany | Bavaria |
| EPI\_ISL\_414520 | 2020-03-02 | Germany | Bavaria |
| EPI\_ISL\_414521 | 2020-03-02 | Germany | Bavaria |
| EPI\_ISL\_413488 | 2020-02-28 | Germany | North Rhine Westphalia |
| EPI\_ISL\_414497 | 2020-02-25 | Germany | North Rhine Westphalia |
| EPI\_ISL\_414499 | 2020-02-26 | Germany | North Rhine Westphalia |
| EPI\_ISL\_414505 | 2020-02-27 | Germany | North Rhine Westphalia |
| EPI\_ISL\_414509 | 2020-02-28 | Germany | North Rhine Westphalia |
| EPI\_ISL\_417457 | 2020-03-10 | Germany | Duesseldorf |
| EPI\_ISL\_417458 | 2020-03-11 | Germany | Duesseldorf |
| EPI\_ISL\_417459 | 2020-03-11 | Germany | Duesseldorf |
| EPI\_ISL\_417460 | 2020-03-11 | Germany | Duesseldorf |
| EPI\_ISL\_417461 | 2020-03-11 | Germany | Duesseldorf |
| EPI\_ISL\_417462 | 2020-03-11 | Germany | Duesseldorf |
| EPI\_ISL\_417463 | 2020-03-13 | Germany | Duesseldorf |
| EPI\_ISL\_417464 | 2020-03-14 | Germany | Duesseldorf |
| EPI\_ISL\_417465 | 2020-03-14 | Germany | Duesseldorf |
| EPI\_ISL\_417466 | 2020-03-14 | Germany | Duesseldorf |
| EPI\_ISL\_417467 | 2020-03-15 | Germany | Duesseldorf |
| EPI\_ISL\_417468 | 2020-03-16 | Germany | Duesseldorf |
| EPI\_ISL\_419541 | 2020-03-14 | Germany | Duesseldorf |
| EPI\_ISL\_419542 | 2020-03-15 | Germany | Duesseldorf |
| EPI\_ISL\_419543 | 2020-03-15 | Germany | Duesseldorf |
| EPI\_ISL\_419544 | 2020-03-15 | Germany | Duesseldorf |
| EPI\_ISL\_419545 | 2020-03-15 | Germany | Duesseldorf |
| EPI\_ISL\_419546 | 2020-03-15 | Germany | Duesseldorf |
| EPI\_ISL\_419548 | 2020-03-15 | Germany | Duesseldorf |
| EPI\_ISL\_419549 | 2020-03-15 | Germany | Duesseldorf |
| EPI\_ISL\_419550 | 2020-03-16 | Germany | Duesseldorf |
| EPI\_ISL\_419551 | 2020-03-16 | Germany | Duesseldorf |
| EPI\_ISL\_419552 | 2020-03-16 | Germany | Duesseldorf |
| EPI\_ISL\_402125 | 2019-12-26 | China | Hubei |

| gisaid\_epi\_isl | date | country | division |
|------------------|------------|---------|----------|
| EPI\_ISL\_418412 | 2020-03-15 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_418413 | 2020-03-15 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_418414 | 2020-03-15 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_418416 | 2020-03-16 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_418417 | 2020-03-16 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_418418 | 2020-03-16 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_418419 | 2020-03-16 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_418420 | 2020-03-17 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_418422 | 2020-03-17 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_418423 | 2020-03-17 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_418424 | 2020-03-17 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_418425 | 2020-03-17 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_418426 | 2020-03-17 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_418427 | 2020-03-17 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_418428 | 2020-03-17 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_419168 | 2020-03-17 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_418429 | 2020-03-18 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_418430 | 2020-03-18 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_418431 | 2020-03-18 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_418432 | 2020-03-18 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_419169 | 2020-03-21 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_419170 | 2020-03-21 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_419171 | 2020-03-21 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_419172 | 2020-03-21 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_419173 | 2020-03-21 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_419174 | 2020-03-20 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_419175 | 2020-03-21 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_419176 | 2020-03-21 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_419177 | 2020-03-22 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_419178 | 2020-03-22 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_419179 | 2020-03-22 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_419180 | 2020-03-22 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_419181 | 2020-03-22 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_419182 | 2020-03-22 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_419183 | 2020-03-22 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_419184 | 2020-03-22 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_419185 | 2020-03-22 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_419186 | 2020-03-22 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_419187 | 2020-03-22 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_419188 | 2020-03-22 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_418219 | 2020-02-26 | France | Bretagne |
| EPI\_ISL\_416502 | 2020-02-26 | France | Bretagne |
| EPI\_ISL\_416503 | 2020-03-01 | France | Bretagne |
| EPI\_ISL\_416504 | 2020-03-02 | France | Bretagne |
| EPI\_ISL\_416505 | 2020-03-02 | France | Bretagne |
| EPI\_ISL\_416506 | 2020-03-03 | France | Bretagne |
| EPI\_ISL\_416507 | 2020-03-05 | France | Bretagne |
| EPI\_ISL\_416508 | 2020-03-06 | France | Bretagne |
| EPI\_ISL\_416509 | 2020-03-06 | France | Bretagne |
| EPI\_ISL\_416510 | 2020-03-06 | France | Bretagne |
| EPI\_ISL\_416511 | 2020-03-07 | France | Bretagne |
| EPI\_ISL\_416512 | 2020-03-07 | France | Bretagne |
| EPI\_ISL\_416513 | 2020-03-07 | France | Bretagne |
| EPI\_ISL\_415651 | 2020-03-05 | France | Bourgogne-France-Comté |
| EPI\_ISL\_415652 | 2020-03-05 | France | Bourgogne-France-Comté |
| EPI\_ISL\_416757 | 2020-03-07 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_417340 | 2020-03-07 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_418222 | 2020-03-04 | France | Centre-Val de Loire |
| EPI\_ISL\_416752 | 2020-03-04 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_416751 | 2020-03-05 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_414623 | 2020-02-25 | France | Grand Est |
| EPI\_ISL\_414631 | 2020-03-04 | France | Grand Est |
| EPI\_ISL\_414632 | 2020-03-04 | France | Grand Est |
| EPI\_ISL\_418218 | 2020-02-21 | France | Hauts de France |
| EPI\_ISL\_418220 | 2020-02-28 | France | Hauts de France |
| EPI\_ISL\_414626 | 2020-02-29 | France | Hauts de France |
| EPI\_ISL\_414627 | 2020-03-02 | France | Hauts de France |
| EPI\_ISL\_414630 | 2020-03-03 | France | Hauts de France |
| EPI\_ISL\_414635 | 2020-03-04 | France | Hauts de France |
| EPI\_ISL\_414637 | 2020-03-04 | France | Hauts de France |
| EPI\_ISL\_414638 | 2020-03-04 | France | Hauts de France |
| EPI\_ISL\_415649 | 2020-03-05 | France | Hauts de France |
| EPI\_ISL\_418223 | 2020-03-05 | France | Hauts de France |
| EPI\_ISL\_418224 | 2020-03-08 | France | Hauts de France |
| EPI\_ISL\_418225 | 2020-03-08 | France | Hauts de France |
| EPI\_ISL\_415654 | 2020-03-09 | France | Hauts de France |
| EPI\_ISL\_416493 | 2020-03-08 | France | Hauts de France |
| EPI\_ISL\_416495 | 2020-03-10 | France | Hauts de France |
| EPI\_ISL\_416496 | 2020-03-10 | France | Hauts de France |
| EPI\_ISL\_416497 | 2020-03-10 | France | Hauts de France |
| EPI\_ISL\_418226 | 2020-03-09 | France | Hauts de France |
| EPI\_ISL\_418227 | 2020-03-12 | France | Hauts de France |
| EPI\_ISL\_418228 | 2020-03-12 | France | Hauts de France |
| EPI\_ISL\_418231 | 2020-03-15 | France | Hauts de France |
| EPI\_ISL\_418236 | 2020-03-16 | France | Hauts de France |
| EPI\_ISL\_418237 | 2020-03-16 | France | Hauts de France |
| EPI\_ISL\_418238 | 2020-03-16 | France | Hauts de France |
| EPI\_ISL\_418239 | 2020-03-16 | France | Hauts de France |
| EPI\_ISL\_406596 | 2020-01-23 | France | Ile de France |
| EPI\_ISL\_406597 | 2020-01-23 | France | Ile de France |
| EPI\_ISL\_411219 | 2020-01-28 | France | Ile de France |
| EPI\_ISL\_408430 | 2020-01-29 | France | Ile de France |
| EPI\_ISL\_408431 | 2020-01-29 | France | Ile de France |
| EPI\_ISL\_415650 | 2020-03-02 | France | Ile de France |
| EPI\_ISL\_416498 | 2020-03-11 | France | Ile de France |
| EPI\_ISL\_416499 | 2020-03-11 | France | Ile de France |
| EPI\_ISL\_416501 | 2020-03-10 | France | Ile de France |
| EPI\_ISL\_418229 | 2020-03-12 | France | Ile de France |
| EPI\_ISL\_418230 | 2020-03-13 | France | Ile de France |
| EPI\_ISL\_418232 | 2020-03-15 | France | Ile de France |
| EPI\_ISL\_418233 | 2020-03-15 | France | Ile de France |
| EPI\_ISL\_418234 | 2020-03-14 | France | Ile de France |
| EPI\_ISL\_418235 | 2020-03-16 | France | Ile de France |
| EPI\_ISL\_418240 | 2020-03-16 | France | Ile de France |
| EPI\_ISL\_417333 | 2020-03-04 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_417334 | 2020-03-04 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_416753 | 2020-03-06 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_416754 | 2020-03-06 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_416756 | 2020-03-06 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_417337 | 2020-03-07 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_417336 | 2020-03-06 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_417339 | 2020-03-08 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_416758 | 2020-03-08 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_416747 | 2020-03-04 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_416748 | 2020-03-04 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_416750 | 2020-03-06 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_417338 | 2020-03-07 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_414624 | 2020-02-26 | France | Normandie |
| EPI\_ISL\_416494 | 2020-03-04 | France | Normandie |
| EPI\_ISL\_414625 | 2020-02-26 | France | Pays de la Loire |
| EPI\_ISL\_416745 | 2020-03-10 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_416746 | 2020-03-03 | France | Auvergne-Rhône-Alpes |
| EPI\_ISL\_416749 | 2020-03-04 | France | Auvergne-Rhône-Alpes |

[^1]: The authors gratefully acknowledge partial funding from the France-Stanford Center for Interdisciplinary Studies. JAP acknowledges support from National Institutes of Health grant R01-GM-131404 and the Alfred P. Sloan Foundation. AV acknowledges partial funding from the chaire program Mathematical Modeling and Biodiversity (Ecole polytechnique, Museum National d’Histoire Naturelle, Veolia Environment, Foundation X).
--- abstract: 'Superconductivity at about 15.6 K was achieved in Tb$_{1-x}$Ca$_x$FeAsO by partially substituting Tb$^{3+}$ with Ca$^{2+}$ in the nominal doping region $x = 0.40 \sim 0.50$. A detailed investigation was carried out on a typical sample with a doping level of $x$ = 0.44. The upper critical field of this sample was estimated to be 77 Tesla from the magnetic-field-dependent resistivity data. The domination of hole-like charge carriers in the low-temperature region was confirmed by Hall effect measurements. The comparison between the calcium-doped sample Pr$_{1-x}$Ca$_x$FeAsO (non-superconducting) and the strontium-doped sample Pr$_{1-x}$Sr$_x$FeAsO (superconducting) suggests that a larger ion radius of the doped alkaline-earth element compared with that of the rare-earth element may be a necessary requirement for achieving superconductivity in the hole-doped 1111 phase.' author: - 'Gang Mu, Bin Zeng, Peng Cheng, Xiyu Zhu, Fei Han, Bing Shen, and Hai-Hu Wen' title: 'Superconductivity at 15.6 K in Calcium-doped Tb$_{1-x}$Ca$_x$FeAsO: the structure requirement for achieving superconductivity in the hole-doped 1111 phase' ---

Introduction
============

The discovery of superconductivity in iron pnictides has generated enormous interest in the condensed matter physics community.[@Kamihara2008] To date, the iron pnictide superconductors have developed into several families with different structures, abbreviated as the 1111 phase (including the oxy-arsenides[@Kamihara2008] and fluoro-arsenides[@SrF]), the 122 phase,[@Rotter; @CWCh] the 111 phase,[@LiFeAs; @LiFeAsChu; @LiFeAsUK] the 11 phase,[@FeSe] the 42622 phase,[@42622] and so on. Each phase, with its distinct structure, appears to have a characteristic superconducting transition temperature $T_c$.
As for the 1111 phase, most of the superconductors discovered so far are electron-doped [@Pr52K; @CP; @WangC; @Mandrus; @CaoGH], while hole-doped superconductors have only been reported in the strontium-doped Ln$_{1-x}$Sr$_{x}$FeAsO (Ln = La, Pr, Nd).[@WenEPL; @LaSr2; @PrSr; @NdSr] Hole-doped superconductivity in the 1111 phase obtained by substituting other dopant ions with valence "+2", such as barium or calcium, seems quite difficult to achieve, at least in many of the rare-earth-based systems. Obviously, it is important to carry out more explorations in this direction in order to extend the family of hole-doped superconductors in the 1111 phase. It is also important to investigate the factors that govern the electronic properties (superconducting or non-superconducting) on the hole-doped side of the 1111 phase. In this paper we report a new hole-doped superconductor in the 1111 phase, calcium-doped Tb$_{1-x}$Ca$_x$FeAsO, with a maximum superconducting transition temperature of 15.6 K (95% $\rho_n$). Superconductivity appears in the nominal doping region $x = 0.40 \sim 0.50$. The physical properties of a selected sample with $x$ = 0.44 were investigated in depth. We estimated the upper critical field of this sample to be 77 Tesla based on the Werthamer-Helfand-Hohenberg (WHH) formula.[@WHH] The conducting charge carriers in this sample were characterized as hole type over a wide low-temperature region by Hall effect measurements. Meanwhile, we have also successfully synthesized calcium-doped Pr$_{1-x}$Ca$_x$FeAsO, which also displays hole-type charge carriers in the low-temperature region but does not superconduct at all. We attribute this different behavior to the sensitivity of the electronic properties to the radius of the doped ion relative to that of the rare-earth ion.

Experimental Details
====================

The Tb$_{1-x}$Ca$_x$FeAsO samples were prepared using a two-step solid state reaction method.
In the first step, TbAs and CaAs were prepared by reacting Tb flakes (purity 99.99%), Ca flakes (purity 99.9%), and As grains (purity 99.99%) at 500 $^o$C for 10 hours and then at 700 $^o$C for 16 hours, sealed in an evacuated quartz tube during the reaction. The resultant precursors were then thoroughly ground together with Fe powder (purity 99.95%) and Fe$_2$O$_3$ powder (purity 99.5%) in the stoichiometry given by the formula Tb$_{1-x}$Ca$_x$FeAsO. All weighing and mixing procedures were performed in a glove box under a protective argon atmosphere. The mixtures were then pressed into pellets and sealed in an evacuated quartz tube. The materials were heated to 1150-1170 $^o$C at a rate of 120 $^o$C/hr, held there for 40 hours, and then cooled, yielding the superconducting polycrystalline samples. The process for preparing the Pr$_{1-x}$Ca$_x$FeAsO samples is quite similar to that for Tb$_{1-x}$Ca$_x$FeAsO. The x-ray diffraction (XRD) measurements of our samples were carried out on a $Mac$-$Science$ MXP18A-HF diffractometer with Cu-K$_\alpha$ radiation. The ac susceptibility of the samples was measured on a Maglab-12T (Oxford) system with an ac field of 0.1 Oe and a frequency of 333 Hz. The resistance and Hall effect measurements were performed using a six-probe technique on a Quantum Design physical property measurement system (PPMS) with magnetic fields up to 9 T. The current direction was reversed for each measured point in order to remove the contact thermoelectric power. The temperature stabilization was better than 0.1% and the resolution of the voltmeter was better than 10 nV. ![(Color online) (a) X-ray diffraction pattern for the sample Tb$_{0.56}$Ca$_{0.44}$FeAsO. All the main peaks can be indexed to the tetragonal ZrCuSiAs-type structure. The peaks from the impurities are precisely indexed to Tb$_2$O$_3$ and FeAs.
(b) Temperature dependence of the resistivity of the Tb$_{0.56}$Ca$_{0.44}$FeAsO sample under two different fields, 0 T and 9 T. The 0 T data are shown up to 300 K. (c) The ac susceptibility data measured with $f = 333$ Hz and $H_{ac} = 0.1$ Oe.[]{data-label="fig1"}](Fig1.eps){width="9cm"}

Experimental data and discussion
================================

Sample characterization for Tb$_{0.56}$Ca$_{0.44}$FeAsO
-------------------------------------------------------

The x-ray diffraction pattern for the sample Tb$_{1-x}$Ca$_x$FeAsO with the nominal doping level $x$ = 0.44 is shown in Fig. 1(a). It is clear that all the main peaks can be indexed to the 1111 phase with the tetragonal ZrCuSiAs-type structure.[@ZrCuSiAs] The main impurity phases were identified as Tb$_2$O$_3$ and FeAs, neither of which is superconducting in the measured temperature range. Using the software Fullprof, we determined the lattice constants of this sample to be $a = 3.900$ $\AA$ and $c = 8.423$ $\AA$. Comparing with the lattice constants of the parent phase TbFeAsO ($a = 3.898$ $\AA$, $c = 8.404$ $\AA$) reported by another group,[@JieY] we find that the $a$-axis lattice constant of the present sample is slightly larger than that of the parent phase, while the expansion along the $c$-axis is more pronounced. In fact, a similar tendency has been observed in other hole-doped systems of the 1111 phase.[@LaSr2; @PrSr] This indicates that the calcium atoms enter the crystal lattice of the TbFeAsO system, since the radius of Ca$^{2+}$ is larger than that of Tb$^{3+}$ (see Fig. 7). In Fig. 1(b) we present a typical set of resistivity data for the same sample Tb$_{0.56}$Ca$_{0.44}$FeAsO under 0 T and 9 T. The 0 T data are shown up to 300 K. A clear superconducting transition can be seen in the low-temperature region. Taking a criterion of 95% $\rho_n$, the onset transition temperature is determined to be 15.6 K.
A magnetic field of 9 T suppresses the onset transition temperature by only about 1.6 K, indicating a rather high upper critical field in our sample. In the high-temperature region, the resistivity anomaly arising from the antiferromagnetic (AF) or structural transition has been suppressed, and a clear flattening feature is observed. Similar behavior has been observed in the other hole-doped 1111 systems Ln$_{1-x}$Sr$_x$FeAsO (Ln = La, Pr, Nd).[@WenEPL; @LaSr2; @PrSr; @NdSr] Figure 1(c) shows the ac susceptibility data measured with $f = 333$ Hz and $H_{ac} = 0.1$ Oe. A rough estimate from the diamagnetic signal shows that the superconducting volume fraction of the present sample exceeds 50%, confirming bulk superconductivity in our samples. The onset critical temperature from the magnetic measurements corresponds roughly to the zero-resistance temperature. ![(Color online) Temperature dependence of the resistivity of Tb$_{0.56}$Ca$_{0.44}$FeAsO near the superconducting transition under different magnetic fields. The onset transition temperature defined by 95%$\rho_n$ shifts slowly with the magnetic field. Inset: phase diagram derived from the resistive transition curves.[]{data-label="fig2"}](Fig2.eps){width="8.5cm"} ![(Color online) Hall effect measurements for the samples Tb$_{1-x}$Ca$_{x}$FeAsO. The main frame shows the field dependence of the Hall resistivity $\rho_{xy}$ at different temperatures for the sample with $x$ = 0.44. Inset: Temperature dependence of the Hall coefficient $R_H$ for two samples with $x$ = 0.44 and 0.45.[]{data-label="fig3"}](Fig3.eps){width="9cm"}

Upper critical field for Tb$_{0.56}$Ca$_{0.44}$FeAsO
----------------------------------------------------

We estimated the upper critical field of the sample Tb$_{0.56}$Ca$_{0.44}$FeAsO from the resistivity data. The temperature dependence of the resistivity under different magnetic fields is shown in the main frame of Fig. 2.
It is found that the onset transition point, which mainly reflects the upper critical field in the H$\|$ab-plane configuration, shifts to low temperatures under field more slowly than the zero-resistance point. The magnetoresistance in the normal state is found to be quite small. We take a criterion of 95%$\rho_n$ to determine the onset transition points under different fields, which are represented by the red open circles in the inset of Fig. 2. From these data we can determine the slope of $H_{c2}(T)$ near $T_c$, $dH_{c2}/dT|_{T_c} \approx -7.1$ T/K. Using the WHH formula,[@WHH] the zero-temperature upper critical field $H_{c2}(0)$ can be estimated through: $$H_{c2}(0)=-0.693T_c(\frac{dH_{c2}}{dT})|_{T_c}. \label{eq:1}$$ Taking $T_c$= 15.6 K, we get $H_{c2}(0) \approx 77$ T. Given the relatively low value of $T_c$ = 15.6 K in the present sample, this upper critical field $H_{c2}(0)$ is actually quite high. Indeed, in strontium-doped Ln$_{1-x}$Sr$_x$FeAsO (Ln = La, Pr), a rather high $H_{c2}(0)$ and a large slope $dH_{c2}(T)/dT|_{T_c}$ ($\sim4$ T/K) have been observed in comparison with the F-doped LaFeAsO sample, which was attributed to a higher quasiparticle density of states (DOS) near the Fermi level in the hole-doped samples.[@PrSr] Surprisingly, the slope $dH_{c2}(T)/dT|_{T_c}$ found here is even larger than that of the strontium-doped samples. The underlying physical mechanism of this behavior still calls for further investigation, including from the theoretical side.

Hall effect of Tb$_{1-x}$Ca$_{x}$FeAsO
--------------------------------------

It is known that the Hall effect is a useful tool for investigating the charge carriers and the band structure. For a conventional metal with Fermi-liquid behavior, the Hall coefficient is almost independent of temperature.
However, this is no longer the case for a multiband material[@HY] or a sample with non-Fermi-liquid behavior, such as the cuprate superconductors.[@Ong] To determine the type of the conducting carriers, we measured the Hall effect of the Tb$_{1-x}$Ca$_{x}$FeAsO samples. The main frame of Fig. 3 shows the magnetic field dependence of the Hall resistivity ($\rho_{xy}$) at different temperatures for the sample with $x$ = 0.44. In the experiment, $\rho_{xy}$ was taken as $\rho_{xy}$ = \[$\rho$(+H) - $\rho$(-H)\]/2 at each point to eliminate the effect of misaligned Hall electrodes. All the curves in Fig. 3 are linear in magnetic field. Moreover, $\rho_{xy}$ is positive at all temperatures below 160 K, giving a positive Hall coefficient $R_H = \rho_{xy}/H$, which indicates that hole-type charge carriers dominate the conduction below 160 K in the present sample. The temperature dependence of $R_H$ for two samples with $x$ = 0.44 and 0.45 is shown in the inset of Fig. 3. The evolution of $R_H$ with temperature is quite similar for the two samples, indicating the reliability of the Hall data. The hump feature in the low-temperature region is quite similar to that observed in the strontium-doped Ln$_{1-x}$Sr$_{x}$FeAsO (Ln = La, Pr) samples.[@WenEPL; @LaSr2; @PrSr] However, there are still some obvious differences. Firstly, the Hall coefficient $R_H$ changes sign at about 160 K, which is remarkably lower than the value observed in the strontium-doped systems ($\sim250$ K). This feature seems to be quite common in the calcium-doped 1111 phase, because the sign change of $R_H$ was also found to occur at about 160 K in Pr$_{1-x}$Ca$_{x}$FeAsO (see Fig. 5) and in Nd$_{1-x}$Ca$_{x}$FeAsO (data not shown here). Secondly, the negative $R_H$ at about 200 K has a rather large absolute value.
This feature seems to be unique to the calcium-doped superconducting samples, since it is not observed in Ln$_{1-x}$Sr$_{x}$FeAsO (Ln = La, Pr) or Pr$_{1-x}$Ca$_{x}$FeAsO. Assuming a simple two-band scenario with different types of carriers, we can express the Hall coefficient $R_H$ in the low-field limit as $$R_H=\frac{\sigma_1 \mu_1+\sigma_2 \mu_2}{(\sigma_1+\sigma_2)^2},\label{eq:2}$$ where $\sigma_i$ and $\mu_i$ are the conductivity and the mobility of the $i^{th}$ band, respectively; they are determined by the charge-carrier density and the scattering rate of each band. We attribute the strong and complicated temperature dependence of $R_H$ in the present system to the competing effects of the scattering rates and charge-carrier densities of the different bands. ![(Color online) Temperature dependence of the resistivity of two calcium-doped samples Pr$_{1-x}$Ca$_{x}$FeAsO with $x$ = 0.20 and 0.40, along with two strontium-doped samples Pr$_{1-x}$Sr$_{x}$FeAsO with $x$ = 0.05 and 0.25 for comparison. It is clear that the behavior of the calcium-doped samples lies between that of the two strontium-doped samples in the high-temperature region.[]{data-label="fig4"}](Fig4.eps){width="9cm"} ![(Color online) Hall effect measurements for the sample Pr$_{0.60}$Ca$_{0.40}$FeAsO. The main frame shows the field dependence of the Hall resistivity $\rho_{xy}$ at different temperatures. Inset: Temperature dependence of the Hall coefficient $R_H$, which is positive in the temperature region below about 160 K.[]{data-label="fig5"}](Fig5.eps){width="9cm"}

The case of calcium-doped Pr$_{1-x}$Ca$_{x}$FeAsO
-------------------------------------------------

One may be curious to know what happens if we substitute calcium into systems based on other rare-earth elements. We have in fact tried calcium-doped LaFeAsO, PrFeAsO, NdFeAsO, GdFeAsO, and so on. Here we show the results for Pr$_{1-x}$Ca$_{x}$FeAsO as an example.
No superconductivity was found in the calcium-doped Pr$_{1-x}$Ca$_{x}$FeAsO samples over a quite wide doping range ($0.10\leq x \leq 0.50$). In Fig. 4, we show the temperature dependence of the resistivity for two selected samples with $x = 0.20$ and 0.40. For comparison, we also display the resistivity data for two strontium-doped Pr$_{1-x}$Sr$_{x}$FeAsO samples with different doping levels ($x$ = 0.05 and 0.25). It is clear that the resistivity anomaly from the AF or structural transition around 160 K is suppressed gradually with increasing strontium content. On closer scrutiny, we find that the behavior of the calcium-doped samples lies between that of the two strontium-doped samples in the high-temperature region. This suggests that the charge carriers actually doped into the calcium-doped PrFeAsO correspond roughly to $0.10 \sim 0.20$ of the strontium doping in the Pr$_{1-x}$Sr$_{x}$FeAsO system, and that the AF order has been suppressed to a certain extent by the calcium doping. To further investigate the conducting properties of the Pr$_{1-x}$Ca$_{x}$FeAsO samples, we measured their Hall effect; the data for one typical sample with $x = 0.40$ are displayed in Fig. 5. Nonlinear behavior is observed in the field-dependent $\rho_{xy}$ data below 100 K. The hump feature and the positive value of $R_H$ can be seen below about 160 K, quite similar to what is observed in Tb$_{1-x}$Ca$_{x}$FeAsO (see Fig. 3). This strongly indicates that hole-type charge carriers have been introduced into the system and dominate the conduction in the low-temperature region.
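The two-band expression for $R_H$ (Eq. 2) can be illustrated with a minimal numerical sketch. The band parameters below are hypothetical, chosen only to show how a hole band dominating at low temperature and a more mobile electron band taking over at high temperature can produce a sign change of $R_H$ like the one observed near 160 K:

```python
def hall_coefficient(sigma_1, mu_1, sigma_2, mu_2):
    """Low-field two-band Hall coefficient, Eq. (2):
    R_H = (sigma_1*mu_1 + sigma_2*mu_2) / (sigma_1 + sigma_2)**2.
    Electron-like bands enter with a negative mobility."""
    return (sigma_1 * mu_1 + sigma_2 * mu_2) / (sigma_1 + sigma_2) ** 2

# Hypothetical parameters (arbitrary units), not fitted to the data.
# Low temperature: the hole band (band 1) dominates the conduction.
r_low = hall_coefficient(sigma_1=5.0, mu_1=0.02, sigma_2=1.0, mu_2=-0.05)
# High temperature: the electron band (band 2) carries most of the current.
r_high = hall_coefficient(sigma_1=1.0, mu_1=0.02, sigma_2=4.0, mu_2=-0.05)

print(r_low > 0, r_high < 0)  # prints "True True"
```

In this toy picture the sign of $R_H$ is set by whichever band has the larger $\sigma\mu$ product, which is how the competing scattering rates and carrier densities of the two bands can move the sign-change temperature.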
| sample | $a$ ($\AA$) | $c$ ($\AA$) | Fe-As-Fe ($^o$) | d$_{PrOPr}$ ($\AA$) | d$_{AsFeAs}$ ($\AA$) | d$_{inter}$ ($\AA$) |
|--------|-------------|-------------|-----------------|---------------------|----------------------|---------------------|
| PrFeAsO | 3.985 | 8.595 | 111.968 | 2.405 | 2.690 | 1.750 |
| Pr$_{0.87}$Ca$_{0.13}$FeAsO | 3.987 | 8.628 | 112.429 | 2.396 | 2.668 | 1.782 |
| Pr$_{0.84}$Sr$_{0.16}$FeAsO | 3.985 | 8.622 | 112.413 | 2.387 | 2.666 | 1.785 |

\[tab:table1\]

In order to identify the factors that prevent Pr$_{1-x}$Ca$_{x}$FeAsO from superconducting even though hole-type charge carriers have been doped into the system, we analyzed the structural details of one calcium-doped Pr$_{1-x}$Ca$_x$FeAsO sample with nominal $x$ = 0.4. Selected Rietveld refinement results are listed in Table I, and the refinement pattern is shown in Fig. 6. Only small amounts of Pr$_2$O$_3$ and FeAs are seen as impurities. From the refinement, we find that the actual doping concentration of the sample Pr$_{1-x}$Ca$_x$FeAsO with nominal $x$ = 0.4 is only about 0.13, quite consistent with the argument derived from the resistivity data (see Fig. 4). For comparison with the strontium-doped PrFeAsO system, where superconductivity has been obtained, we also list in Table I the structural parameters of the parent phase PrFeAsO and of strontium-doped Pr$_{1-x}$Sr$_x$FeAsO with $x$ = 0.16 (the value from refinement).[@Jeitschko; @Ju] Here we define d$_{PrOPr}$ and d$_{AsFeAs}$ as the vertical distance between the Pr atoms residing at the top and bottom of the PrO layer and that between the As atoms in the FeAs layer, respectively, and d$_{inter}$ as the interlayer spacing between the Pr-O-Pr block and the As-Fe-As block. It is clear that the lattice constant along the $a$-axis remains nearly unchanged while that along the $c$-axis expands clearly when calcium is doped into PrFeAsO.
Both $d_{PrOPr}$ and $d_{AsFeAs}$ shrink slightly while $d_{inter}$ expands distinctly at the actual calcium doping of 0.13, which accounts for the expansion of the lattice along the $c$-axis. Surprisingly, we find that calcium doping affects the crystal structure in a manner rather similar to strontium doping, as seen by comparing the parameters of Pr$_{0.87}$Ca$_{0.13}$FeAsO and Pr$_{0.84}$Sr$_{0.16}$FeAsO in Table I, even though the radius of Ca$^{2+}$ is smaller than that of Pr$^{3+}$ while the radius of Sr$^{2+}$ is larger (see Fig. 7). We note that the sample Pr$_{0.84}$Sr$_{0.16}$FeAsO still does not superconduct; superconductivity was achieved only at even higher doping levels, as reported in Ref. \[25\]. The fact that the radius of Ca$^{2+}$ is smaller than that of Pr$^{3+}$, however, seems to prevent doping more calcium into PrFeAsO and consequently prevents superconductivity in the Pr$_{1-x}$Ca$_{x}$FeAsO system, because the effect of the doped calcium is to expand the lattice along the $c$-axis. This argument is reinforced by the fact that only 13% of calcium can be doped into the system even though the nominal doping concentration is 40%. ![(Color online) The observed (red crosses) and calculated (green solid line) x-ray powder diffraction patterns of Pr$_{0.6}$Ca$_{0.4}$FeAsO. The three rows of vertical bars show the calculated positions of the Bragg reflections for Pr$_2$O$_3$ (blue), FeAs (red) and Pr$_{0.6}$Ca$_{0.4}$FeAsO (black), respectively. The magenta solid line at the bottom of the figure indicates the difference between observation and calculation.[]{data-label="fig6"}](Fig6.eps){width="9cm"} ![(Color online) Ionic radii of selected rare-earth ions. The blue dotted and red dashed lines mark the values for Sr$^{2+}$ and Ca$^{2+}$, respectively.[]{data-label="fig7"}](Fig7.eps){width="9cm"} To test this supposition, we examined other calcium-doped systems.
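The radius comparison underlying this supposition can be encoded as a quick numerical check. The radii below are approximate Shannon values for 8-fold coordination; they are our assumed inputs, not values quoted in the text:

```python
# Empirical rule discussed in the text: hole-doped RE(1-x)AE(x)FeAsO tends to
# superconduct only when the rare-earth (RE) ion is smaller than the
# alkaline-earth (AE) dopant ion. Radii (Angstrom) are approximate Shannon
# values for 8-fold coordination -- an assumption, not data from the paper.
RADIUS = {"La3+": 1.16, "Pr3+": 1.13, "Nd3+": 1.11, "Sm3+": 1.08,
          "Gd3+": 1.05, "Tb3+": 1.04, "Ca2+": 1.12, "Sr2+": 1.26}

def may_superconduct(rare_earth, alkaline_earth):
    """True when the RE ion is smaller than the AE dopant ion."""
    return RADIUS[rare_earth] < RADIUS[alkaline_earth]

# Consistent with the observations reported in this paper:
assert may_superconduct("Tb3+", "Ca2+")      # bulk SC at Tc = 15.6 K
assert not may_superconduct("Pr3+", "Ca2+")  # no SC found
assert not may_superconduct("La3+", "Ca2+")  # hard to dope, no SC
assert may_superconduct("Pr3+", "Sr2+")      # Sr doping does yield SC
```

With these radii the rule reproduces the Tb/Ca, Pr/Ca, La/Ca and Pr/Sr results; borderline cases such as Sm/Ca (weak, occasional signals) sit near the crossover.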
In La$_{1-x}$Ca$_{x}$FeAsO, we found that it is quite difficult to dope charge carriers (and probably also Ca itself) into the system and the Hall coefficient remains negative; behavior very similar to that of Pr$_{1-x}$Ca$_{x}$FeAsO was observed in Nd$_{1-x}$Ca$_{x}$FeAsO. Only a very small superconducting signal is occasionally observed in Sm$_{1-x}$Ca$_{x}$FeAsO. Zero resistance can be obtained easily in Gd$_{1-x}$Ca$_{x}$FeAsO, but the diamagnetic signal is smaller than that of the Tb$_{1-x}$Ca$_{x}$FeAsO system. The 1111 phase cannot be obtained for the heavier rare earths (Dy, Ho, etc.) at ambient pressure. Summarizing these observations, we argue that the relationship between the ionic radii of the rare-earth element and the alkaline-earth element may play a key role in achieving superconductivity in the hole-doped 1111 phase (see Fig. 7). That is, superconductivity emerges only when the ionic radius of the rare-earth element is smaller than that of the alkaline-earth element. At present, we can safely say that it is rather difficult, if not impossible, to obtain superconductivity when the ionic radius of the rare-earth element is the larger one. Moreover, superconductivity seems to favor a larger difference between the two radii, within the tolerance of the crystal lattice. These arguments are quite consistent with those of the previous paragraph. This places a real restriction on the search for new superconductors on the hole-doped side of the 1111 phase. Concluding remarks ================== In summary, bulk superconductivity was achieved by substituting Tb$^{3+}$ with Ca$^{2+}$ in the TbFeAsO system. The maximum superconducting transition temperature $T_c$ = 15.6 K is found around the nominal doping level $x$ = 0.40$\sim$0.50. The positive Hall coefficient $R_H$ over a wide low-temperature range suggests that hole-type charge carriers dominate the conduction in this system.
Surprisingly, the slope of the upper critical field versus temperature near $T_c$ in calcium-doped Tb$_{1-x}$Ca$_{x}$FeAsO is found to be much steeper than in the electron-doped and strontium-doped systems. Moreover, we have investigated the structural and conducting properties of other calcium-doped systems (taking Pr$_{1-x}$Ca$_{x}$FeAsO as an example). We found that the relationship between the ionic radii of the rare-earth and alkaline-earth elements may play a key role in achieving superconductivity in the hole-doped 1111 phase. We acknowledge the help with the XRD experiments from L. H. Yang and H. Chen. This work is supported by the Natural Science Foundation of China, the Ministry of Science and Technology of China (973 project: 2006CB01000, 2006CB921802), and the Knowledge Innovation Project of the Chinese Academy of Sciences (ITSNEM). [00]{}
Y. Kamihara, T. Watanabe, M. Hirano, and H. Hosono, J. Am. Chem. Soc. **130**, 3296 (2008).
X. Zhu, F. Han, P. Cheng, G. Mu, B. Shen, and H. H. Wen, Europhys. Lett. **85**, 17011 (2009).
M. Rotter, M. Tegel, and D. Johrendt, Phys. Rev. Lett. **101**, 107006 (2008).
K. Sasmal, B. Lv, B. Lorenz, A. M. Guloy, F. Chen, Y. Y. Xue, and C. W. Chu, Phys. Rev. Lett. **101**, 107007 (2008).
X. C. Wang, Q. Q. Liu, Y. X. Lv, W. B. Gao, L. X. Yang, R. C. Yu, F. Y. Li, and C. Q. Jin, arXiv:0806.4688.
J. H. Tapp, Z. Tang, B. Lv, K. Sasmal, B. Lorenz, P. C. W. Chu, and A. M. Guloy, Phys. Rev. B **78**, 060505(R) (2008).
M. J. Pitcher, D. R. Parker, P. Adamson, S. J. C. Herkelrath, A. T. Boothroyd, and S. J. Clarke, Chem. Commun. (Cambridge) **2008**, 5918.
F. C. Hsu, J. Y. Luo, K. W. Yeh, T. K. Chen, T. W. Huang, P. M. Wu, Y. C. Lee, Y. L. Huang, Y. Y. Chu, D. C. Yan, and M. K. Wu, Proc. Natl. Acad. Sci. USA **105**, 14262 (2008).
X. Zhu, F. Han, G. Mu, P. Cheng, B. Shen, B. Zeng, and H. H. Wen, Phys. Rev. B **79**, 220512(R) (2009).
Z. Ren, J. Yang, W. Lu, W. Yi, G. C. Che, X. L. Dong, L. L. Sun, and Z. X. Zhao, Materials Research Innovations **12**, 105 (2008).
P. Cheng, L. Fang, H. Yang, X. Zhu, G. Mu, H. Luo, Z. Wang, and H. H. Wen, Science in China G **51**(6), 719-722 (2008).
C. Wang, L. Li, S. Chi, Z. Zhu, Z. Ren, Y. Li, Y. Wang, X. Lin, Y. Luo, S. Jiang, X. Xu, G. Cao, and Z. Xu, Europhys. Lett. **83**, 67006 (2008).
A. S. Sefat, A. Huq, M. A. McGuire, R. Jin, B. C. Sales, D. Mandrus, L. M. D. Cranswick, P. W. Stephens, and K. H. Stone, Phys. Rev. B **78**, 104505 (2008).
G. Cao, C. Wang, Z. Zhu, S. Jiang, Y. Luo, S. Chi, Z. Ren, Q. Tao, Y. Wang, and Z. Xu, Phys. Rev. B **79**, 054521 (2009).
H. H. Wen, G. Mu, L. Fang, H. Yang, and X. Y. Zhu, Europhys. Lett. **82**, 17009 (2008).
G. Mu, L. Fang, H. Yang, X. Zhu, P. Cheng, and H. H. Wen, J. Phys. Soc. Jpn. Suppl. **77**, 15-18 (2008).
G. Mu, B. Zeng, X. Zhu, F. Han, P. Cheng, B. Shen, and H. H. Wen, Phys. Rev. B **79**, 104501 (2009).
K. Kasperkiewicz, J. G. Bos, A. N. Fitch, K. Prassides, and S. Margadonna, Chem. Commun. (Cambridge) **2009**, 707.
N. R. Werthamer, E. Helfand, and P. C. Hohenberg, Phys. Rev. **147**, 295 (1966).
V. Johnson and W. Jeitschko, J. Solid State Chem. **11**, 161 (1974).
J. Yang, X. L. Shen, W. Lu, W. Yi, Z. C. Li, Z. A. Ren, G. C. Che, X. L. Dong, L. L. Sun, F. Zhou, and Z. X. Zhao, New J. Phys. **11**, 025005 (2009).
H. Yang, Y. Liu, C. G. Zhuang, J. R. Shi, Y. G. Yao, S. Massidda, M. Monni, Y. Jia, X. X. Xi, Q. Li, Z. K. Liu, Q. R. Feng, and H. H. Wen, Phys. Rev. Lett. **101**, 067001 (2008).
T. R. Chien, D. A. Brawner, Z. Z. Wang, and N. P. Ong, Phys. Rev. B **43**, 6242-6245 (1991).
P. Quebe, L. J. Terbüchte, and W. Jeitschko, New J. Phys. **11**, 083003 (2009).
J. Ju, Z. Li, G. Mu, H. H. Wen, K. Sato, M. Watahiki, G. Li, and K. Tanigaki, New J. Phys. **11**, 083003 (2009).
--- abstract: 'In this note, we show that the normalized Hochschild co–chains of an associative algebra with a non–degenerate, symmetric, invariant inner product are an algebra over a chain model of the framed little discs operad which is given by cacti. In particular, in this sense they are a BV algebra up to homotopy and the Hochschild cohomology of such an algebra is a BV algebra whose induced bracket coincides with Gerstenhaber’s bracket. To show this, we use a cellular chain model for the framed little disc operad in terms of normalized cacti. This model is given by tensoring our chain model for the little discs operad in terms of spineless cacti with natural chain models for $(S^1)^{\times n}$ adapted to cacti.' address: 'University of Connecticut, Department of Mathematics, Storrs, CT 06269' author: - 'Ralph M. Kaufmann' title: 'A proof of a cyclic version of Deligne’s conjecture via Cacti' --- Introduction {#introduction .unnumbered} ============ In this note, we expand our chain model of the little discs operad, which we gave in terms of spineless cacti, to a chain model for the framed little discs operad in terms of normalized cacti. Extending the philosophy of [@del], we then show that the chain model for the framed little discs operad naturally acts on the normalized Hochschild cochains of a unital associative algebra with a non–degenerate, symmetric, invariant bi–linear pairing. In fact, as in [@del], this operation can again be seen as a discretization of the calculations for the relations of a BV algebra up to homotopy on the chains of the operad $\Arc$ of [@KLP]. In [@cact] it is proven that the operad of framed little discs is equivalent to the operad of cacti. Moreover, we gave a description of cacti in terms of a bi–crossed product of spineless cacti and an operad built on the monoid $S^1$, which we showed to be homotopy equivalent to the semi–direct product of these operads [@cact].
Furthermore, we gave a chain model for spineless cacti in terms of normalized spineless cacti, which we showed gives a natural solution to Deligne’s conjecture [@del]. Using the description in terms of the bi–crossed and semi–direct products, we obtain a chain model for the operad of framed little discs by tensoring the chains of normalized spineless cacti with the chains for the operad built on the monoid $S^1$. In order to prove the necessary relations on the chain level, one can translate them from the corresponding relations in the $\Arc$ operad using the method described in [@cact; @KLP]. As it turns out, in order to translate the relations and thus establish the homotopy BV structure on the chain level, one needs a refinement of the cell decomposition on the semi-direct product to accommodate all the operations which were used in the $\Arc$ operad picture. This refinement uses cell decompositions on the $S^1$ factors, which are induced by regarding each factor as the lobe it represents. This leads to a combinatorial description in terms of planar planted black and white (b/w) bipartite trees with additional data called spines. In the language of cacti [@cact], the additional data keeps track of the positions of the local zeros. On these trees, there are linear orders at each vertex, which may differ from the linear order induced by the planar planted structure. This forces us to look at non–rooted trees or, equivalently, to invert the orientation of edges. According to the general calculus for “correlation functions” defined by trees, to achieve such an inversion one needs a non–degenerate pairing which is symmetric and invariant. This is the assumption we have to make on our algebra. With this assumption, we can rewrite the action of the cellular chains as “operadic correlation functions” for decorated trees. In this description the action of the chains of the framed little discs operad becomes apparent.
The results and techniques we present below can also be employed in other situations, which we comment on at the end of the paper. Notably, one can use them to obtain an action of the cells of a ribbon graph cell decomposition of moduli space on cyclic complexes. This should ultimately lead to string-topology-like operations of the cells of the moduli space of decorated bordered surfaces on the free loop space of a compact manifold, extending the operations of the string PROP or dioperad. The basic constructions for this are announced below. Acknowledgments {#acknowledgments .unnumbered} =============== We would like to thank Alain Connes for an enlightening discussion and Jim Stasheff for his valuable comments. We also thank the Max–Planck–Institute for Mathematics in Bonn for providing the atmosphere and stimulus to conceptualize and complete this paper. Background ========== Graphs {#Graphs} ------ In this section, we formally introduce the graphs and the operations on graphs which we will use in our analysis of cacti. This is the approach given in Appendix B of [@cact], in which cacti are characterized as a certain type of ribbon graph. Namely, a cactus is a marked treelike ribbon graph with a metric. ### Graphs {#graphs} A graph $\Gamma$ is a tuple $(V_{\Gamma},F_{\Gamma}, \imath_{\Gamma}: F_{\Gamma}\rightarrow F_{\Gamma},\del_{\Gamma}:F_{\Gamma} \rightarrow V_{\Gamma})$ where $\imath_{\Gamma}$ is an involution $\imath_{\Gamma}^2=id$ without fixed points. We call $V_{\Gamma}$ the vertices of $\Gamma$ and $F_{\Gamma}$ the flags of $\Gamma$. The edges $E_{\Gamma}$ of $\Gamma$ are the orbits of the flags under the involution $\imath_{\Gamma}$. A directed edge is an edge together with an order of the two flags which define it. In case there is no risk of confusion, we will drop the subscripts $\Gamma$. Notice that $f\mapsto (f,\imath(f))$ gives a bijection between flags and directed edges.
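A minimal sketch of the graph formalism just introduced (names and representation are ours): a graph is stored as its flags together with the fixed-point-free involution and the boundary map, and edges are recovered as involution orbits.

```python
# A graph as (V, F, involution inv, boundary map bdry), following the
# definition above: inv is fixed-point free with inv*inv = id, and edges
# are the orbits {f, inv(f)} of flags. Names are our choice.
class Graph:
    def __init__(self, vertices, flags, inv, bdry):
        self.V = set(vertices)
        self.F = set(flags)
        self.inv = dict(inv)    # flag -> flag
        self.bdry = dict(bdry)  # flag -> vertex
        for f in self.F:
            assert self.inv[f] != f, "involution must be fixed-point free"
            assert self.inv[self.inv[f]] == f, "must be an involution"

    def edges(self):
        """Unordered orbits {f, inv(f)}."""
        return {frozenset({f, self.inv[f]}) for f in self.F}

    def flags_at(self, v):
        """F_v = bdry^{-1}(v); its size is the valence of v."""
        return {f for f in self.F if self.bdry[f] == v}

    def valence(self, v):
        return len(self.flags_at(v))

# A single edge between two vertices a, b: flags 0 and 1.
g = Graph({"a", "b"}, {0, 1}, {0: 1, 1: 0}, {0: "a", 1: "b"})
assert g.edges() == {frozenset({0, 1})}
assert g.valence("a") == 1
```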
We also call $F_v(\Gamma):=\del^{-1}(v)\subset F_{\Gamma}$ the set of flags of the vertex $v$ and call $|F_v({\Gamma})|$ the valence of $v$ and denote it by $\val(v)$. We also let $E(v)=\{\{f,\imath(f)\}|f\in F_{v}\}$ and call these edges the edges incident to $v$. The geometric realization of a graph is given by considering each flag as a half-edge and gluing the half-edges together using the involution $\imath$. This yields a one-dimensional CW complex whose realization we call the realization of the graph. ### Trees A graph is connected if its realization is. A graph is a tree if it is connected and its realization is contractible. A rooted tree is a pair $(\t,v_0)$ where $\t$ is a tree and $v_0\in V_{\t}$ is a distinguished vertex. In a rooted tree there is a natural orientation for edges, in which the edge points toward the root. That is we say $(f,\imath (f))$ is naturally oriented if $\del(\imath(f))$ is on the unique shortest path from $\del(f)$ to the root. This means that the set $E(v)$ splits up into incoming and outgoing edges. Given a vertex $v$, we let $|v|$ be the number of incoming edges and call it the arity of $v$. A vertex $v$ is called a leaf if $|v|=0$. Notice that the root is the only vertex for which $|v_0|=\val(v_0)$. For all other vertices $v\neq v_0$ one has $|v|=\val(v)-1$. A bi-colored or black and white (b/w) tree is a tree $\t$ together with a map $\color:V\rightarrow \mathbb{Z}/2\mathbb{Z}$. Such a tree is called bipartite if for all $f\in F_{\t}:\color(\del(f))+\color(\del(\imath(f)))=1$, that is edges are only between black and white vertices. We call the set $V_w:=\color^{-1}(1)$ the white vertices. If $(f,\imath (f))$ is a naturally oriented edge, we call the edge white if $\del(\imath(f))\in V_w$ and denote the set of white edges by $E_w$. Likewise we call $V_b:=\color^{-1}(0)$ the black vertices and let $E_b$ be the set of black edges, where a naturally oriented edge $(f,\imath (f))$ is called black if $\del(\imath(f))\in V_b$. 
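The bipartite condition above can be checked mechanically; a minimal sketch with a data layout of our choosing (black vertices colored 0, white vertices colored 1):

```python
# Check the b/w bipartite condition from the definition above: for every
# flag f, color(bdry(f)) + color(bdry(inv(f))) = 1 mod 2, i.e. each edge
# joins a black vertex (0) to a white vertex (1).
def is_bipartite(flags, inv, bdry, color):
    return all((color[bdry[f]] + color[bdry[inv[f]]]) % 2 == 1 for f in flags)

# A 3-vertex path white - black - white: edges {0,1} and {2,3}.
flags = [0, 1, 2, 3]
inv = {0: 1, 1: 0, 2: 3, 3: 2}
bdry = {0: "w1", 1: "b", 2: "b", 3: "w2"}
assert is_bipartite(flags, inv, bdry, {"w1": 1, "b": 0, "w2": 1})
assert not is_bipartite(flags, inv, bdry, {"w1": 1, "b": 1, "w2": 1})
```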
The black leaves in a rooted black and white tree are called tails. The edges incident to the tails are called tail edges and are denoted $E_{tail}$. For tails, we will only consider those flags of the tail edges which are not incident to the tail vertices and call them $F_{tail}$. ### Planar trees and Ribbon graphs A ribbon graph is a connected graph whose vertices are of valence at least two together with a cyclic order of the set of flags of the vertex $v$ for every vertex $v$. A graph with a cyclic order of the flags at each vertex gives rise to bijections $N_v:F_v\rightarrow F_v$ where $N_v(f)$ is the next flag in the cyclic order. Since $F=\amalg F_v$ one obtains a map $N:F\rightarrow F$. The orbits of the map $N \circ \imath$ are called the cycles or the boundaries of the graph. These sets have the induced cyclic order. Notice that each boundary can be seen as a cyclic sequence of directed edges. The directions are as follows. Start with any flag $f$ in the orbit. In the geometric realization go along this half-edge starting from the vertex $\del(f)$, continue along the second half-edge $\imath(f)$ until you reach the vertex $\del(\imath(f))$ then continue starting along the flag $N(\imath(f))$ and repeat. A tree with a cyclic order of the flags at each vertex is called planar. A planar tree has only one cycle $c_0$. Planar planted trees -------------------- A planted planar tree is a rooted planar tree $(\t,v_0)$ together with a linear order of the set of flags at $v_0$. Such a tree has a linear order of all flags as follows: Let $f$ be the smallest element of $\del^{-1}(v_0)$, then every flag appears in $c_0$ and defining the flag $f$ to be the smallest gives a linear order on the set of all flags. This linear order induces a linear order on all oriented edges and on all un-oriented edges, by restricting to the edges in the orientation opposite the natural orientation i.e. pointing away from the root. 
We denote the latter by $\prec$ and its restriction to $E(v)$ or $F(v)$ by $\prec_v$. We will equivalently consider planar planted trees as defined above or as rooted planar trees whose root vertex has valence one. The bijection in one direction is given by adding a new root vertex and one new edge such that the induced linear structure on the old root is the given one. This tree is called the realization of the planar planted tree. In the other direction the bijection is simply given by contracting the unique edge incident to the root, but retaining the linear order. In the realization of a planar planted tree, we call the unique edge incident to the (new) root $v_{root}$ the root edge and denote it by $e_{root}$ and set $f_{root}$ to be the flag of the root edge which is not incident to the root. Also $E_{root}=\{e_{root}\}, F_{root}=\{f_{root}\}$. An angle at a vertex $v$ in a planar tree is a pair of two flags incident to $v$ of which one is the immediate successor of the other in the cyclic order of $F_v$. There is a bijection between angles, flags and edges by associating to an angle its bigger flag and to the latter the unique edge defined by it. The genus of a ribbon graph and its surface ------------------------------------------- The genus $g(\Gamma)$ of a ribbon graph $\Gamma$ is given by $2-2g(\Gamma)=|V_\Gamma|-|E_{\Gamma}|+\#cycles$. The surface $\Sigma(\Gamma)$ of a ribbon graph $\Gamma$ is the surface obtained from the realization of $\Gamma$ by thickening the edges to ribbons. I.e., replace each 0-simplex $v$ by a closed oriented disc $D(v)$ and each 1-simplex $e$ by $e\times I$ oriented in the standard fashion. Now glue the boundaries of $e\times I$ to the appropriate discs in their cyclic order according to the orientations. Notice that the genus of $\Sigma(\Gamma)$ is $g(\Gamma)$ and that $\Gamma$ is naturally embedded as the spine of this surface.
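The boundary cycles and the genus formula can be computed directly from the combinatorial data; a sketch (encoding ours) in which each vertex is given by the list of its flags in cyclic order:

```python
# Boundary cycles of a ribbon graph are the orbits of N o inv, where N(f)
# is the next flag in the cyclic order at the vertex of f; the genus then
# follows from 2 - 2g = |V| - |E| + #cycles.
def boundary_cycles(cyclic_orders, inv):
    nxt = {}  # N: flag -> next flag at the same vertex
    for order in cyclic_orders:
        for k, f in enumerate(order):
            nxt[f] = order[(k + 1) % len(order)]
    seen, cycles = set(), []
    for f in nxt:
        if f in seen:
            continue
        cycle, cur = [], f
        while cur not in seen:
            seen.add(cur)
            cycle.append(cur)
            cur = nxt[inv[cur]]  # follow N o inv
        cycles.append(cycle)
    return cycles

def genus(cyclic_orders, inv):
    n_flags = sum(len(o) for o in cyclic_orders)
    V, E = len(cyclic_orders), n_flags // 2
    return (2 - (V - E + len(boundary_cycles(cyclic_orders, inv)))) // 2

# Figure-eight: one vertex, two loops interleaved as 0,2,1,3.
inv = {0: 1, 1: 0, 2: 3, 3: 2}
assert len(boundary_cycles([[0, 2, 1, 3]], inv)) == 1
assert genus([[0, 2, 1, 3]], inv) == 1  # the standard genus-1 ribbon graph
```

The non-interleaved order $[0,1,2,3]$ at the same vertex yields three boundary cycles and genus 0, matching the planar wedge of two circles.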
### Treelike and marked ribbon graphs A ribbon graph together with a distinguished cycle $c_0$ is called [*treelike*]{} if - the graph is of genus $0$ and - for all cycles $c_i\neq c_0$: if $f\in c_i$ then $\imath(f)\in c_0$. In other words each edge is traversed by the cycle $c_0$. Therefore there is a cyclic order on all (non-directed) edges, namely the cyclic order of $c_0$. A [*marked ribbon graph*]{} is a ribbon graph together with a map $\mk:\{cycles\} \rightarrow F_{\Gamma}$ satisfying the conditions - For every cycle $c$ the directed edge $\mk(c)$ belongs to the cycle. - All vertices of valence two are in the image of $\mk$, that is $\forall v,\val(v)=2$ implies $v\in Im(\del\circ\mk)$. Notice that on a marked treelike ribbon graph there is a linear order on each of the cycles $c_i$. This order is defined by upgrading the cyclic order to the linear order $\prec_i$ in which $\mk(c_i)$ is the smallest element. ### Dual b/w tree of a marked ribbon graph Given a marked treelike ribbon graph $\G$, we define its dual tree to be the colored graph whose black vertices are given by $V_{\G}$ and whose set of white vertices is the set of cycles $c_i$ of $\G$. The set of flags at $c_i$ are the flags $f$ with $f\in c_i$ and the set of flags at $v$ are the flags $\{f:f \in c_0, \del(f)=v\}$. The involution is given by $\imath_{\t}(f)=N(f)$ if $f\in c_0$ and $\imath_{\t}(f)=N^{-1}(f)$ else. This graph is a tree and is b/w and bipartite by construction. It is also planar, since the $c_i$ and the sets $F(v)$ have a cyclic order and therefore also $F_v\cap c_0$. It is furthermore rooted by declaring $\del(\mk(c_0))$ to be the root vertex and declaring $\mk(c_0)$ to be the smallest element makes it into a planted tree. An equivalent definition is given by defining that there is an edge between a pair of a black and a white vertex if and only if the vertex corresponding to $b$ is on the boundary of the cycle $c_i$, i.e. $v\in \del(c_i):= \{\del(f):f\in c_i\}$. 
### Spineless marked ribbon graphs {#spinlessgraph} A marked treelike ribbon graph is called [*spineless*]{} if - There is at most one vertex of valence $2$. If there is such a vertex $v_0$ then $\del(mk(c_0))=v_{0}$. - The induced linear orders on the $c_i$ are compatible with that of $c_0$, i.e. $f\prec_i f'$ if and only if $\imath(f')\prec_0 \imath(f)$. ### Graphs with a metric A metric $w_{\Gamma}$ for a graph is a map $E_{\Gamma}\rightarrow \mathbb{R}_{>0}$. The (global) re-scaling of a metric $w$ by $\lambda$ is the metric $\lambda w$ defined by $(\lambda w)(e)=\lambda\, w(e)$. The length of a cycle $c$ is the sum of the lengths of its edges $length(c)=\sum_{f\in c} w(\{f,\imath(f)\})$. A metric for a treelike ribbon graph is called normalized if the length of each non-distinguished cycle is $1$. ### Marked ribbon graphs with metric and maps of circles. For a marked ribbon graph with a metric, let $c_i$ be its cycles, let $|c_i|$ be their image in the realization and let $r_i$ be the length of $c_i$. Then there are natural maps $\phi_i:S^1\rightarrow |c_i|$ which map $S^1$ onto the cycle by starting at the vertex $v_i:=\del(\mk(c_i))$ and going around the cycle mapping each point $\theta\in S^1$ to the point at distance $\frac{\theta}{2\pi}r_i$ from $v_i$ along the cycle $c_i$. ### Contracting edges The contraction $(\bar V_{\Gamma}, \bar F_{\Gamma},\bar \imath,\bar \del)$ of a graph $(V_{\Gamma},F_{\Gamma},\imath,\del)$ with respect to an edge $e=\{f,\imath(f)\}$ is defined as follows. Let $\sim$ be the equivalence relation induced by $\del(f)\sim\del(\imath(f))$. Then let $\bar V_{\Gamma}:=V_{\Gamma}/\sim$, $\bar F_{\Gamma}=F_{\Gamma}\setminus\{f,\imath(f)\}$ and $\bar \imath: \bar F_{\Gamma}\rightarrow \bar F_{\Gamma}, \bar\del: \bar F_{\Gamma}\rightarrow \bar V_{\Gamma}$ be the induced maps.
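The contraction just defined can be sketched as follows (representation ours; the edge is assumed not to be a loop, so exactly two distinct vertices are identified):

```python
# Contraction of an edge e = {f, inv(f)}, following the definition above:
# identify bdry(f) with bdry(inv(f)), drop the two flags of e, and
# restrict the involution and boundary map to the remaining flags.
def contract(vertices, inv, bdry, f):
    g = inv[f]
    merged = (bdry[f], bdry[g])          # the identified vertex, as a pair
    relabel = {v: merged if v in merged else v for v in vertices}
    new_inv = {a: b for a, b in inv.items() if a not in (f, g)}
    new_bdry = {a: relabel[bdry[a]] for a in new_inv}
    return set(relabel.values()), new_inv, new_bdry

# Path a - b - c with edges {0,1} and {2,3}; contract the first edge.
V = {"a", "b", "c"}
inv = {0: 1, 1: 0, 2: 3, 3: 2}
bdry = {0: "a", 1: "b", 2: "b", 3: "c"}
V2, inv2, bdry2 = contract(V, inv, bdry, 0)
assert len(V2) == 2            # a and b have been identified
assert set(inv2) == {2, 3}     # the flags of the contracted edge are gone
assert bdry2[2] == ("a", "b")  # flag 2 now ends at the merged vertex
```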
For a marked ribbon graph, we define the marking of $(\bar V_{\Gamma}, \bar F_{\Gamma},\bar \imath,\bar \del)$ to be $\overline{\mk}(\bar c)=\overline{\mk(c)}$ if $\mk(c)\notin\{f,\imath(f)\}$ and $\overline{\mk}(\bar c)=\overline{N\circ \imath(\mk (c))}$ if $\mk(c)\in\{f,\imath(f)\}$, viz. the image of the next flag in the cycle. ### Labelling graphs By a labelling of the edges of a graph $\Gamma$ by a set $S$, we simply mean a map $E_{\Gamma}\rightarrow S$. A labelling of a ribbon graph $\Gamma$ by a set $S$ is a map $\lab:\{$cycles of $\Gamma\}\rightarrow S$; we will write $c_i:=\lab^{-1}(i)$. By a labelling of a black and white tree by a set $S$ we mean a map $\lab:E_w\rightarrow S$. Again we will write $v_i:=\lab^{-1}(i)$. ### Planar planted bipartite labelled trees with white leaves We set $\wlbptree(n)$ to be the set of planar planted bipartite trees which are labelled from $\{1,\dots,n\}$ with white leaves only. To avoid cluttered notation, we also denote the respective free Abelian group and the $k$-vector space with basis $\wlbptree(n)$ by the same name and let $\wlbptree$ be their union and direct sum, respectively. Cacti ----- A cactus with $n$ lobes is a $\{0,1,\dots,n\}$-labelled marked treelike ribbon graph with a metric; the set $\Cacti(n)$ is the set of these graphs. $\Cact(n)\subset \Cacti(n)$ is the subset of spineless graphs, called the spineless cacti or alternatively cacti without spines. $\Cacti^1(n)\subset \Cacti(n)$ is the subset of normalized graphs, called normalized cacti, and finally $\Cact^1(n)=\Cact(n)\cap\Cacti^1(n)$ is the set of normalized spineless cacti. ### Cactus terminology The edges of a cactus are traditionally called arcs or segments and the cycles of a cactus are traditionally called lobes. The vertices are sometimes called the marked or special points. Furthermore the distinguished cycle $c_0$ is called the outside circle or the perimeter and the vertex $\del(\mk(c_0))$ is called the global zero.
The vertices $\del(\mk(c_i)),i\neq 0$ are called the local zeros. In pictures these are represented by lines rather than fat dots. \[setrem\] It is clear that as sets $\Cacti(n)=\Cact(n)\times (S^1)^{\times n}$ and $\cact(n)= \cact^1(n)\times \mathbb{R}_{>0}^{\times n}$. For the first statement one notices that for each lobe $v_i$ there is a unique lowest intersection point $b$, which is the vertex of the outgoing edge of $v_i$. Thus there is a canonical map $\phi'_i:S^1\rightarrow |c_i|$ which starts at $b$ and goes around the cycle opposite its natural orientation. So to each cycle we associate $(\phi'_i)^{-1}(\del(\mk(c_i)))$, that is, the coordinate of the spine as measured by $\phi'_i$. This gives the projection onto the factors $(S^1)^{\times n}$. The projection onto the first factor is given by forgetting the spines, i.e. contracting the edges $\mk(c_i)$ if $\val(\del(\mk(c_i)))=2$ and changing the marking to the unique marking which makes the graph spineless. For the second statement the first projection is given by homogeneously scaling the weights of the edges of each non-marked cycle so that their lengths are one. The projection to the factors of $\mathbb{R}_{>0}$ is given by associating to each lobe its length. In both cases the inverse map is clear. The topological type of a spineless cactus in $\cact^1(n)$ is defined to be its dual b/w tree $\t \in \wlbptree(n)$. \[arctoedge\] Notice that the arcs of a cactus correspond to the set $E_{arcs}=E(\t)\setminus (\{e_{root}\})$. This bijection can be defined as follows. To a given $e\in E_{arcs}, e=\{w,b\}$ with $b$ black and $w$ white, we associate the unique arc between the points corresponding to the black vertices $b$ and $b-$, where $b-$ is the black vertex immediately preceding $b$ in the cyclic order of $v$. In other words, if $e=\{f,\imath(f)\}$ with $f\in F_v$ and $f-$ is the flag immediately preceding $f$ in the cyclic order at $v$, then $b-=\del(\imath(f-))$.
Notice that $f-=f$ if and only if $|v|=0$. \[typelemma\] A spineless cactus is uniquely determined by its topological type and the lengths of the segments. The CW complex of normalized spineless cacti -------------------------------------------- We recall from [@del] the CW complexes $\CWcact(n)$. For more details and pictures the reader is referred to [@del; @cact]. \[lengthrem\] For a normalized spineless cactus the lengths of the arcs have to sum up to the radius of the lobe, and the number of arcs on a given lobe represented by a white vertex $v$ is $\val(v)=|v|+1$. Hence the lengths of the arcs lying on the lobe represented by a vertex $v$ are in 1-1 correspondence with points of the simplex $|\Delta^{|v|}|$. The coordinates of $|\Delta^{|v|}|$ naturally correspond, on the one hand, to the arcs of the lobe represented by $v$ and, on the other hand, to the edges incident to $v$ in the dual b/w graph. ### The tree differential in the spineless case {#diffdef} Let $\t\in \wlbptree$. We set $E_{angle}=E(\t)\setminus (E_{leaf}(\t)\cup \{e_{root}\})$ and we denote by $\num_E:E_{angle} \rightarrow \{1,\dots,N\}$ the bijection which is induced by the linear order $\prec^{(\t,p)}$. Let $\t\in \wlbptree$, $e\in E_{angle}$, $e=\{w,b\}$, with $w\in V_w$ and $b\in V_b$. Let $e-=\{w,b-\}$ be the edge preceding $e$ in the cyclic order $\prec^{\t}_w$ at $w$. Then $\del_e(\tau)$ is defined to be the planar tree obtained by collapsing the angle between the edge $e$ and its predecessor in the cyclic order of $w$ by identifying $b$ with $b-$ and $e$ with $e-$. Formally, $w=\whitevert(e), e-=\prec^{\t}_w(e),\{b-\}= \del(e-)\cap V_b(\t)$, $V_{\del_e(\tau)}=V(\t)/(b\sim b-)$, $E_{\del_e(\tau)}=E_{\tau}/(e\sim e-)$. The linear order of $\del_e(\t)$ is given by keeping the linear order at all vertices which are not equal to $\bar b$, where $\bar b$ is the image of $b$ and $b-$.
For $\bar b$ the order is given by extending the linear order $(\In(\bar b), \prec_{\bar b}^{\del_e(\t)}) =(\In(b-)\amalg\In(b), \prec^{\t}_{b-}\amalg \prec^{\t}_{b}) $ (the usual order on the union of totally ordered sets) to $E(\bar b)$ by declaring the image of $e$ and $e-$ to be the minimal element. We define the operator $\del$ on the space $\wlbptree$ to be given by the following formula: $\del(\t) := \sum_{e\in E_{angle}} (-1)^{\num_E(e)-1} \del_e (\tau) $. ### The Cell Complex We define $\wlbptree(n)^k$ to be the elements of $\wlbptree(n)$ with $|E_w|=k$. For $\t \in \wlbptree$ we define $\D(\t):=\times_{v \in V_w(\tau)}\D^{|v|}$. We define $C(\t)=|\D(\t)|$. Notice that $\dim(C(\t))=|E_w(\t)|$. Given $\D(\t)$ and a vertex $x$ of any of the constituting simplices of $\D(\t)$, we define the $x$-th face of $C(\t)$ to be the subset of $|\D(\t)|$ whose points have the $x$-th coordinate equal to zero. We let $\CWcact(n)$ be the CW complex whose $k$-cells are indexed by $\t \in \wlbptree(n)^k$ with the cell $C(\t)=|\D(\t)|$ and the attaching maps $e_{\t}$ defined as follows. We identify the $x$-th face of $C(\t)$ with $C(\t')$ where $\t'=\del_x(\t)$. This corresponds to contracting an edge of the cactus if its weight goes to zero (see Remark \[arctoedge\]), so that $\Delta(\del \t)$ is identified with $\del (\Delta(\t))$. We define the topology of $\cact^1(n)$ to be that induced by the bijection with $\CWcact(n)$. Via Remark \[setrem\] this gives a topology to the spaces $\Cact(n),\cacti(n)$ and $\cacti^1(n)$. The (quasi)-operad structure ---------------------------- ### The operad of cacti The gluing maps for cacti $$\circ_i:\cacti(n)\otimes \cacti(m)\rightarrow \cacti(n+m-1)$$ are defined on elements $(c,c')\mapsto c\circ_i c'$ as follows: - Scaling the weight function $w'$ of $c'$ by the factor $\frac{r_i}{R}$, where $r_i$ is the length of the cycle $c_i$ of the cactus $c$ and $R$ is the length of the cycle $c_0$ of $c'$.
- Identifying the realization of the cycle $c_0$ of $c'$ with the cycle $c_i$ of $c$ via the maps $\phi_0(c')$ and $\phi_i(c)$, with the orientation on the second $S^1$ reversed, as usual. These maps together with the $\Sn$ action permuting the labels turn the collection $\{\cacti(n)\}$ into an operad $\cacti$. The collection $\{\cact(n)\}$ forms the suboperad $\cact$. ### The quasi-operad of normalized cacti We recall from [@cact] that a quasi-operad is the generalization of a (pseudo)-operad in which the axiom of associativity is omitted and the others are kept. The gluing maps for normalized cacti $$\circ_i:\cacti^1(n)\otimes \cacti^1(m)\rightarrow \cacti^1(n+m-1)$$ are defined on elements $(c,c') \mapsto c\circ_i c'$ simply by identifying the realization of the cycle $c_0$ of $c'$ with the cycle $c_i$ of $c$ via the maps $\phi_0(c')$ and $\phi_i(c)$ again with the orientation on the second $S^1$ reversed. These maps together with the $\Sn$ action permuting the labels turn the collection $\{\cacti^1(n)\}$ into a homotopy associative quasi-operad $\cacti^1$. The collection $\{\cact^1(n)\}$ forms a homotopy associative quasi-suboperad $\cact^1$ of $\cacti^1$ [@cact]. Relations among cacti --------------------- \[cactthm\] [@cact] Normalized cacti are homotopy equivalent through quasi-operads to the cacti. The same holds for the (quasi)-suboperads of normalized spineless cacti and spineless cacti. [@cact] Normalized cacti are quasi-isomorphic as quasi-operads to cacti and normalized spineless cacti are quasi-isomorphic as quasi-operads to spineless cacti. In particular in both cases the homology quasi-operads are operads and are isomorphic as operads. ### Remarks on the bi-crossed product In this section we recall the construction of the bi-crossed product as it was given in [@cact] to which we refer the reader for more details. First notice that there is an action of $S^1$ on $\Cact(n)$ given by rotating the base point [*clockwise*]{} (i.e. 
in the orientation opposite the usual one of $c_0$) around the perimeter. We denote this action by $$\rho^{S^1}: S^1 \times \Cact(n) \rightarrow \Cact(n)$$ With this action we can define the twisted gluing $$\begin{aligned} \label{circtheta} \circ_i^{S^1}:\Cact(n) \times S^1(n) \times \Cact(m) &\rightarrow& \Cact(n+m-1)\nn\\ (C,\theta,C')&\mapsto& C \circ_i \rho^{S^1}(\theta_i,C') =: C \circ_i^{\theta_i}C'\end{aligned}$$ Given a cactus without spines $C\in \Cact(n)$, the orientation reversed perimeter (i.e., going around the outer circle [*clockwise*]{}, i.e., reversing the orientation of the source of $\phi_0$) gives a map $\Delta_C: S^1 \rightarrow (S^1)^n$. As one goes around the perimeter, the map goes around each circle once, and thus the map $\Delta_C$ is homotopic to the diagonal: $ \Delta_C (S^1) \sim \Delta(S^1)$. We can use the map $\Delta_C$ to define an action of $S^1$ on $(S^1)^{\times n}$: $$\rho^C: S^1 \times(S^1)^{\times n}\stackrel{\Delta_C} {\rightarrow} (S^1)^{\times n} \times (S^1)^{\times n} \stackrel{\mu^n}{\rightarrow}(S^1)^{\times n}$$ Here $\mu^n$ is the diagonal multiplication in $(S^1)^{\times n}$, and $\bar \circ_i$ below is the operation which forgets the $i$-th factor and shuffles the last $m$ factors to the $i$-th, …, $(i+m-1)$st places.
Set $$\begin{gathered} \label{perturbdef} \circ_i^C:(S^1)^{\times n} \times (S^1)^{\times m} \stackrel{(id \times \pi_i)(\Delta) \times id} {\longrightarrow} (S^1)^{\times n} \times S^1\times (S^1)^{\times m}\\ \stackrel{id \times \rho^C}{\longrightarrow} (S^1)^{\times n} \times (S^1)^{\times m} \stackrel{\bar\circ_i}{\longrightarrow}(S^1)^{\times n+m-1}\end{gathered}$$ These maps are to be understood as perturbations of the usual maps $$\begin{gathered} \circ_i:(S^1)^{\times n} \times (S^1)^{\times m} \stackrel{(id \times \pi_i)(\Delta) \times id} {\longrightarrow} (S^1)^{\times n} \times S^1\times (S^1)^{\times m}\\ \stackrel{id \times \rho}{\longrightarrow} (S^1)^{\times n} \times (S^1)^{\times m} \stackrel{\bar\circ_i}{\longrightarrow}(S^1)^{\times n+m-1}\end{gathered}$$ where now $\rho$ is the diagonal action of $S^1$ on $(S^1)^{\times n}$. The maps $\circ_i$ and the permutation action on the factors give the collection $\{\mathcal{S}^1(n)\}=(S^1)^{\times n}$ the structure of an operad. In fact this is exactly the usual construction of an operad built on a monoid. \[cactbicross\] [@cact] The operad of cacti is the bi–crossed product of the operad $\cact$ of spineless cacti with the operad $\mathcal {S}^1$ based on $S^1$. Furthermore this bi–crossed product is homotopic to the semi–direct product of the operad of cacti without spines with the circle group $S^1$. 
$$\cacti \cong \cact \bowtie {\mathcal S}^1 \simeq \cact \rtimes {\mathcal S}^1$$ The multiplication in the bi-crossed product is given by $$(C,\theta) \circ_i (C',\theta') = (C\circ_i^{\theta_i} C', \theta\circ_{i}^{C'}\theta')$$ The multiplication in the semi-direct product is given by $$(C,\theta) \circ_i (C',\theta') = (C\circ_i^{\theta_i} C', \theta\circ_{i}\theta')$$ Also, normalized cacti are homotopy equivalent to cacti, which are homotopy equivalent to the bi-crossed product of normalized cacti with $\mathcal{S}^1$ and to the semi-direct product with $\mathcal{S}^1$, where all equivalences are as quasi-operads $$\cacti^1 \sim \cacti \cong \cact \bowtie {\mathcal S}^1 \sim\cact^1 \bowtie {\mathcal S}^1\sim \cact^1 \rtimes {\mathcal S}^1$$ The proof of the first statement is given by verifying that the two operad structures coincide. For the second statement one notices that the homotopy diagonal is homotopy equivalent to the usual one and that one can find homotopies to the diagonal which depend continuously on the cactus. The third statement follows from contracting the factors $\mathbb{R}^n_{>0}$ and using Theorem \[cactthm\]. The homology operad of $\cacti$ is the semi-direct product of the homology of $\cact$ and the homology of the operad $\mathcal{S}^1$ built on the monoid $S^1$. Relation to (framed) little discs --------------------------------- [@cact] The operad $\cact$ is equivalent to the little discs operad and the operad $\cacti$ is equivalent to the framed little discs operad. The latter result was first stated by Voronov in [@Vor].
The $k$-cells are indexed by trees with $k-i$ white edges and $i$ vertices marked by $1$. Moreover cellular chains are a chain model for the framed little discs operad and form an operad. This operad is isomorphic to the semi–direct product of the chain model of the little discs operad given by $CC_*(\cact)$ of [@del] and the cellular chains of the operad built on the monoid $S^1$. For the CW decomposition we note that as spaces $\cacti^1(n)= \cact^1(n) \times (S^1)^{\times n}$, see Remark \[setrem\]. Now viewing $S^1=[0,1]/0\sim1$ as a 1-cell together with the 0-cell given by $0\in S^1$, the first part of the proposition follows immediately, by viewing the decoration by 1 as indicating the presence of the 1-cell of $S^1$ for that labelled component in the product of cells. To show that the cellular chains indeed form an operad, we use the fact that the bi–crossed product is homotopy equivalent to the semi–direct product in such a way that the action of a cell $S^1$ in the bi–crossed product is homotopic to the diagonal action. This is just the observation that the diagonal and the diagonal defined by a cactus are homotopic. Since a semi-direct product of a monoid with an operad is an operad, the statement follows. Alternatively one could just remark that there is also an obvious functorial map induced by the diagonal for these cells. The chains are a chain model for the framed little discs operad since $\cacti^1(n)$ and $\cacti(n)$ are homotopy equivalent and the latter is equivalent to the framed little discs operad. Although the above chain model is the one that one would expect to use for framed little discs, it does not have enough cells for our purposes. In order to translate the proofs in the arc complex given in [@KLP] into statements about the Hochschild complex, we will need a slightly finer cell structure than the one above. After having used the larger structure one can reduce to the cell model with fewer cells, as the two models are obviously equivalent.
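The operad built on the monoid $S^1$ that enters the semi–direct product above admits a very concrete description: its $n$-th space is $(S^1)^{\times n}$, and $\circ_i$ multiplies the $i$-th coordinate of the first tuple into every coordinate of the second. As a quick illustration (our own sketch, not code from the paper), the same construction works for any monoid; below, $S^1$ is replaced by the toy monoid $\mathbb{Z}/12$ under addition and the operad axioms are checked by brute force.

```python
# Sketch of the "operad built on a monoid" construction: the n-th space is
# M^n and circ_i multiplies the i-th entry into each entry of the second
# tuple.  M = Z/12 under addition is a toy stand-in for the monoid S^1.
M = 12

def comp(theta, i, theta_p):
    """circ_i : M^n x M^m -> M^(n+m-1), with a 1-based slot index i."""
    assert 1 <= i <= len(theta)
    middle = tuple((theta[i - 1] + x) % M for x in theta_p)
    return theta[:i - 1] + middle + theta[i:]
```

The two assertions below are exactly the "vertical" and "horizontal" associativity axioms for the $\circ_i$; since only the monoid structure is used, the same check goes through verbatim for $S^1 = \mathbb{R}/\mathbb{Z}$.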
A spine decoration $\sdec$ for a planted planar bi–partite tree is a $\Zz$ decoration together with the marking of one angle at each vertex labelled by one and a flag at each vertex labelled by zero. We denote the set of such trees which are $n$-labelled by $\swlbptree(n)$ and again use this notation for the free Abelian group and the $k$ vector space generated by these sets. We let $\swlbptree$ be their union, respectively their direct sum. In pictures we show the angle marking as a line emanating from the vertex which lies between the marked edges, and an edge marking by a line through the respective edge. For an example see Figure \[cactexamples\] VI. We sometimes omit the edge marking if the marked edge is the outgoing edge, e.g. in Figure \[bvtopartcact\]. A realization $\hat \t$ of a planar planted bi–partite tree $\t$ with a spine decoration is a realization of $\t$ as a planar planted tree (the root is fixed to be black) together with one additional edge inserted into each marked angle connecting to a new vertex. We call the set of these edges spine edges and denote them by $E_{spine}$. Likewise set $V_{spine}$ to be the set of new vertices, called the spine vertices, which are defined to be black. The spine edges are then white edges. As for tails, we will only consider the flags of $E_{spine}$ which are not incident to the spine vertices. We call the set of these flags $F_{spine}$. Notice that this tree is the dual tree of a cactus with an explicit marking of the flags $\mk(c_i)$. Given a cactus, we call its dual tree with explicit markings its topological type. If $\t$ has tails, we split the set of tails of the realization into spines and free tails, the latter being the images of the original tails: $E_{tails}(\hat\t)=E_{ftails}(\hat \t)\amalg E_{spine}(\hat \t)$, and likewise for the respective flags. A spine decoration induces a new linear order on the flags incident to the white vertices of its realization.
This order $\prec'_v$ is given by the cyclic order at $v$ and declaring the smallest element to be the spine flag in case $\Zdec(v)=1$ and the marked flag in case $\Zdec(v)=0$. This gives a canonical identification $\Fnum:F_v \rightarrow \{0,\dots, |v|\}$. \[secondcells\] The spaces $\cacti^1(n)$ of the quasi–operad of normalized cacti $\cacti^1$ have CW–decompositions $\CWtwo(n)$ whose cells are indexed by spine decorated planar planted bi–partite trees $(\t,\sdec)\in \swlbptree$ corresponding to the topological type of the cacti. The $k$-cells are indexed by $n$-labelled trees with $k-i$ white edges and $i$ markings by $1$. Moreover cellular chains of the complex above are a chain model for the framed little discs operad and form an operad. The decomposition is almost as in the preceding proposition, except that in the product $\cact^1(n)\times (S^1)^{\times n}$ we decompose each factor $S^1$ as indicated by the lobe it represents. I.e., for the $S^1$ associated to the $n$–th lobe we choose the 0–cells to correspond to the marked points and the 1–cells to correspond to the arcs, with gluing given by attaching the 1–cells to the 0–cells representing the endpoints of the arcs. (E.g. 4 0-cells and 4 1-cells for the lobe 1 in Figure \[cactexamples\] VIa). In terms of trees, the arcs correspond to the angles and thus we take a marking of an arc to be the inclusion of the corresponding 1-cell in the tensor product of the cell complexes. Likewise the edges correspond to the marked points and we take a marking of an edge to be the inclusion of the corresponding 0-cell in the tensor product of the cell complexes. For the operadic properties, we remark that moving the spine along an arc and then gluing, which is what is parameterized by marking an angle on the lobe $i$ of $c$ when calculating $c\circ_ic'$, has the effect of moving the base point of $c'$ along a complete sequence of arcs until it coincides with a marked point in the composition of the two cacti.
This is one side of the bi-crossed product. The effect on the local zeros of $c'$ of the movement of the base point is to move them corresponding to structure maps of the bi-crossed product above. The local zeros thus move through a full arc if the global zero passes through the arc on which they lie. Therefore the $\circ_i$ product of two cells results in sums of cells. Marking an arc of $c'$ obviously gives rise to a sum of cells. Alternatively, one can again just remark that there is a functorial map for the diagonal for this cell model, since there is such a map on the first factor by [@del] and its existence is obvious on the second factor. The associativity follows from the associativity of cacti. Let $C(\t)$, $\t\in \swlbptree(n)$ be the cells in the CW-complex and $\dot C(\t)$ their interior. Then $P(\t)=\dot C(\t)\times \mathbb{R}_{>0}^n, \t\in\swlbptree$ give a pseudo-cell decomposition $Cacti(n)=\amalg_{\t} P(\t)$. It is easy to see that $Im(P(\t)\circ_i P(\t'))=\amalg_k P(\t_k)$ for some $\t_k$ and $\circ_i$ is a bijection onto its image. Let $\circ_i^{comb}$ be the quasi-operad structure pulled back from $\CWtwo$ to $\swlbptree$ and $\circ_i^{+}$ be the operad structure pulled back from the pseudo-cell decomposition of $\Cacti$ to $\swlbptree$. Then these two operad structures coincide over $\mathbb{Z}/2\mathbb{Z}$ thus yielding associativity up to signs. The signs are just given by shuffles, c.f.§\[signsection\], and are associative as well. Pulling back the operadic compositions, the differential and the grading yields a dg-operad structure on $\swlbptree$ which is isomorphic to that of $\CCCacti:=\bigoplus_n CC_*(\CWtwo(n))$. The operation is briefly as follows: given two trees $\t,\t'\in\swlbptree$ the product is $\t\circ^{comb}_i\t'=\sum \pm \t_k$ where the $ \t_k$ are the trees obtained by the following procedure. 
Delete $v_i$ to obtain an ordered collection of trees $(\t^c_l,\prec'_v)$, then graft these trees to $\t'$ keeping their order, by first identifying the spine edge or marked edge of $v_i$ with the root edge of $\t'$ and then grafting the rest of the branches to $\t'$ so that their original order is compatible with that of $\t'$. Lastly contract the image of the root edge of $\t'$ and declare the image of the root of $\t$ to be the new root. The sign is as explained in \[signsection\]. Due to the isomorphism between $\CCCacti$ and $\swlbptree$ we will drop the superscript $comb$. The GBV structure ----------------- The picture for the GBV structure is essentially that of [@KLP] and goes back to [@CS1]. It appears here in another guise, however, since we are now dealing with cells in $\CCCacti$. First notice that there is a product on the chain level induced by the spineless cactus given by the rooted tree $\t^b_2$ depicted in Figure \[cactexamples\]. Explicitly: $a\cdot b\mapsto \gamma(\t^b_2; a,b)$ where $\gamma$ is the usual operadic composition. This product gives $\CCCacti$ the structure of an associative algebra with unit. Moreover the product is commutative up to homotopy. The homotopy is given by the usual operation which is induced by $\gamma(\t_1;a,b)$. This also induces a bracket which is Gerstenhaber up to homotopy. This can be seen by translating the statements from [@KLP; @del], but it also follows from the BV description of the bracket below (Figure \[bvcactfig\]). To give the BV structure, let $O'$ be the tree with one white vertex, no additional black edges, no free tails and a spine.
Notice that the operation $\delta$ induced by $a\mapsto \gamma( O',a)$ on $\CCCacti$ breaks up on products of chains as follows, see Figure \[bvtopartcact\] $$\begin{aligned} \label{threedel} \delta(ab) &\sim& \vardel(a,b) + (-1)^{|a||b|}\vardel(b,a)\nn\\ \delta(abc) &\sim& \vardel(a,b,c)+(-1)^{|a|(|b|+|c|)}\vardel(b,c,a)\nn\\ &&+(-1)^{|c|(|a|+|b|)}\vardel(c,a,b)\end{aligned}$$ $$\begin{aligned} \delta(a_1 a_2\cdots a_n)&\sim& \sum_{i=0}^{n-1} (-1)^{\s(c^i,a)} \delta(a_{c^i(1)}, \dots, a_{c^i(n)})\end{aligned}$$ where $c$ is the cyclic permutation and $\s(c^i,a)$ is the sign of the cyclic permutation $c^i$ applied to the graded elements $a_i$. \[partbv\] $$\vardel(a,b,c) \sim (-1)^{(|a|+1)|b|} b \vardel(a,c) +\vardel(a,b)c -\vardel(a)bc$$ [**Proof.**]{} The proof is contained in Figure \[cactpartbvfig\]. \[GBVprop\] The chains $\CCCacti$ are a GBV algebra up to homotopy. The BV structure follows from Lemma \[partbv\] via the calculation: $$\begin{aligned} \delta(abc) &\sim& \vardel(a,b,c)+(-1)^{|a|(|b|+|c|)}\vardel(b,c,a) +(-1)^{|c|(|a|+|b|)}\vardel(c,a,b)\nn\\ &\sim&(-1)^{(|a|+1)|b|} b \vardel(a,c) +\vardel(a,b)c -\vardel(a)bc + (-1)^{|a|} a \vardel(b,c)\nn\end{aligned}$$ $$\begin{aligned} &&+(-1)^{|a||b|}\vardel(b,a)c -(-1)^{|a|}a\vardel(b)c +(-1)^{(|a|+|b|)|c|} a \vardel(b,c)\nn\\ &&+(-1)^{|b|(|a|+1)+|a||c|}b\vardel(c,a)c - (-1)^{|a|+|b|} ab\vardel(c)\nn\\ &\sim&\delta(ab)c+(-1)^{|a|} a\delta(bc) + (-1)^{(|a|+1)|b|} b\delta (ac) -\delta(a)bc\nn\\ &&-(-1)^{|a|} a\delta(b)c-(-1)^{|a|+|b|}ab\vardel(c)\end{aligned}$$ Figure \[bvcactfig\] contains the homotopy relating the BV operator to the bracket. The action ========== Assumption ---------- Now we fix $A$ to be a finite dimensional associative algebra with unit $1$ together with an inner product $\eta: A\otimes A\rightarrow k$ which is non-degenerate and both i) invariant: $\eta(ab,c)=\eta(a,bc)$, and ii) symmetric: $\eta(a,b)=\eta(b,a)$. Such an algebra is called a Frobenius algebra.
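As a concrete sanity check (our own toy computation, not taken from the paper), the matrix algebra $A = M_2(k)$ with $\eta(a,b) = \mathrm{tr}(ab)$ satisfies all three requirements: symmetry and invariance follow from the trace property, and non-degeneracy from the invertibility of the Gram matrix $\eta_{ij}$ on the elementary-matrix basis.

```python
# Verify that A = M_2(Q) with eta(a, b) = tr(ab) is a Frobenius algebra.
from fractions import Fraction
from itertools import product

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def eta(a, b):
    ab = mul(a, b)
    return ab[0][0] + ab[1][1]          # tr(ab)

def elementary(i, j):
    m = [[Fraction(0)] * 2 for _ in range(2)]
    m[i][j] = Fraction(1)
    return m

basis = [elementary(i, j) for i, j in product(range(2), repeat=2)]

# Gram matrix eta_ij = eta(e_i, e_j); for the trace form it pairs e_ij
# with e_ji, so it is a permutation matrix and hence invertible.
gram = [[eta(x, y) for y in basis] for x in basis]

def det(m):
    # Laplace expansion along the first row; fine for a 4x4 matrix
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))
```

Symmetry is $\mathrm{tr}(ab)=\mathrm{tr}(ba)$, invariance is associativity inside the trace, and non-degeneracy is $\det(\eta_{ij})\neq 0$; all three are checked exhaustively on the basis.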
We will use $CH$ to stand for Hochschild cochains $CH^n(A,A):= Hom(A^{\otimes n},A)$. Actually, it would be enough to have a non-degenerate inner product $\eta$ on $A\simeq CH^0(A,A)$ for which i) holds on $HH^0(A,A)$, that is, up to homotopy for $A$. The condition ii) will then hold automatically up to homotopy, since $CH^0(A,A)$ is commutative up to homotopy [@G]. If one furthermore wishes to relax the other conditions “up to homotopy”, one can require that $\eta$ be non-degenerate only on $HH^0(A,A)$ and that $HH^0(A,A)$ be finite dimensional. In this case, the operadic operations defined below will give operations $f:A^{\otimes n}\rightarrow HH^0(A,A)$ and will thus give actions only up to homotopy. This is enough to get the BV structure on $CH^*(A,A)$, but not quite enough to lift the action to the chain level. We are currently working on such a construction in formal geometry and refer the reader to [@stringarc]. Notation -------- Let $(e_i)$ be a basis for $A$ and let $C:= e_i \eta^{ij} \otimes e_j$ be the Casimir element, i.e. $\eta^{ij}$ is the inverse to $\eta_{ij}=\eta(e_i,e_j)$. With the help of the non–degenerate bilinear form, we identify $$\label{identify} CH^n(A,A)= Hom(A^{\otimes n},A) \cong A\otimes A^{* \otimes n} \cong A^{* \otimes n+1}$$ We would like to stress the order of the tensor products we choose. This is the order from right to left, which works in such a way that one does not need to permute tensor factors in order to contract. If $f \in Hom(A^{\otimes n},A)$, we denote by $\tilde f$ its image in $A^{* \otimes n+1}$; explicitly, $\tilde f(a_0, \dots ,a_n)=\eta(a_0,f(a_1,\dots,a_n))$. With the help of (\[identify\]) we can pull back Connes’ operators $b$ and $B$ (see e.g. [@Loday]) on the spaces $A^{\otimes n}$ to their duals and to $Hom(A^{\otimes n},A)$.
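A small numerical illustration (ours, not from the paper) of the defining property of the Casimir element: contracting $C = e_i \eta^{ij} \otimes e_j$ against $\eta(a,-)$ in the first factor recovers $a$, which is what makes the identification (\[identify\]) contraction-friendly. Here $A = k[\epsilon]/(\epsilon^2)$, with $\eta(a,b)$ the $\epsilon$-coefficient of $ab$.

```python
# Copairing property of the Casimir element for A = k[eps]/(eps^2);
# elements (x0, x1) stand for x0 + x1*eps.
from fractions import Fraction as F

basis = [(F(1), F(0)), (F(0), F(1))]     # the basis elements 1 and eps

def mul(a, b):
    return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])

def eta(a, b):
    return mul(a, b)[1]                  # eps-coefficient of ab

# Gram matrix eta_ij and its inverse eta^ij (here [[0,1],[1,0]] happens to
# be self-inverse, but we invert it generically)
gram = [[eta(x, y) for y in basis] for x in basis]
det = gram[0][0] * gram[1][1] - gram[0][1] * gram[1][0]
inv = [[gram[1][1] / det, -gram[0][1] / det],
       [-gram[1][0] / det, gram[0][0] / det]]

# Casimir element C = e_i eta^ij (x) e_j, stored as triples
# (left factor, coefficient, right factor)
casimir = [(basis[i], inv[i][j], basis[j])
           for i in range(2) for j in range(2) if inv[i][j] != 0]

def contract(a):
    """(eta(a, -) (x) id)(C); the copairing property says this returns a."""
    out = (F(0), F(0))
    for left, coeff, right in casimir:
        s = eta(a, left) * coeff
        out = (out[0] + s * right[0], out[1] + s * right[1])
    return out
```

The same computation works for any Frobenius algebra once the Gram matrix is inverted; the dual-numbers case just keeps the inversion to a $2\times 2$ matrix.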
Also let $t:A^{\otimes n}\rightarrow A^{\otimes n}$ be the operator given by performing a cyclic permutation $(a_1,\dots,a_n) \mapsto (-1)^{n-1}(a_n,a_1,\dots, a_{n-1})$ and $N:=1+t+ \cdots +t^{n-1}:A^{\otimes n}\rightarrow A^{\otimes n}$. It is easy to check that the operator induced by $b$ is exactly the Hochschild differential; we will denote this operator by $\del$. We write $\Delta$ for the operator induced by $B$. It follows that $\Delta^2=0$ and $\Delta\del+\del\Delta=0$. Assumption {#assumption} ---------- To make the formulas simpler we will restrict to normalized Hochschild cochains $\overline {CH}^n(A,A)$, which are the $f\in CH^n(A,A)$ which vanish when evaluated on any tensor containing $1\in A$ as a tensor factor (see e.g. [@Loday]). On the normalized chains the operator $\Delta$ is explicitly defined as follows: for $f\in \overline {CH}^n(A,A)$ $$\eta(a_0,(\Delta f)(a_1,\dots, a_{n-1})):=\eta(1,f\circ N(a_0,\dots, a_{n-1}))$$ Correlators from decorated trees -------------------------------- We will use the notation of tensor products indexed by arbitrary sets, see e.g. [@Deligne]. For a linearly ordered set $I$ we denote by $\bigcup_I a_i$ the product of the $a_i$ in the order dictated by $I$. Let $\t$ be the realization of a spine decorated planted planar b/w tree, $v \in V_w$, and $f\in \overline {CH}^{|v|}(A,A)$. We define $Y(v,f):A^{F_v(\t)}\rightarrow k$ by $$Y(v, f) (\bigotimes_{i\in F_v(\t)} a_i):=\eta(a_{\Fnum^{-1}(0)}, f(a_{\Fnum^{-1}(1)}\otimes \dots \otimes a_{\Fnum^{-1}(|v|)}))$$ Set $V_{b-int}:=V_b(\t)\setminus (V_{tail}\cup \{v_{root}\} \cup V_{spine})$. For $v\in V_{b-int}$ we define $Y(v): A^{F_v(\t)}\rightarrow k$ by $$Y(v)(\bigotimes_{i\in F_v(\t)} a_i)=\eta(1, \bigcup_{i\in F_{v}} a_i)$$ \[treeactionone\] Let $\t$ be the realization of a planar planted b/w tree with $n$ free tails and $k$ labels and $f_i \in \overline{CH}^{n_i}(A,A)$.
For such a tree there is a canonical identification $\{v_{root}\} \cup V_{ftail} \rightarrow \{0,1,\dots,|V_{ftail}|\}$ which is given by sending $v_{root}$ to $0$ and enumerating the tails in the linear order induced by the planted planar tree. Set $E_{int}(\t):= E(\t)\setminus (E_{tail}\cup E_{root} \cup E_{spine})$ and for $(a_0,\dots,a_n)\in A^{\otimes (\{v_{root}\}\cup V_{ftail})}$ set $$\begin{gathered} Y(\t)(f_1,\dots,f_k)(a_0,\dots,a_n) := \\ \left(\bigotimes_{v\in V_w(\t)} Y(v,f_{\lab(v)})\bigotimes_{v\in V_{b-int}} Y_v\right)\left( (\bigotimes_{i\in F_{ftail}(\t)\cup \{F_{root}\}}a_i)(\bigotimes_{j\in F_{spine}} 1) \otimes C^{\otimes E_{int}(\t)}\right)\end{gathered}$$ In other words, decorate the root flag by $a_0$, the free tail flags by $a_1,\dots,a_n$, the spines by $1$ and the edges by $C$ and then contract tensors according to the decoration at the white vertices while using the product at the black vertices. \[degreecount\] We extend the definition above by $$Y(\t)(f_1,\dots,f_k)(a_0,\dots,a_n)=0 \text{ if } |v_{\lab^{-1}(i)}| \neq n_i=:|f_i|$$ The foliage operator -------------------- Let $F$ be the foliage operator of [@del] applied to trees. This means that $F(\t)$ is the formal sum over all trees obtained from $\t$ by gluing an arbitrary number of free tails to the white vertices. The extra edges are called free tail edges $E_{ftail}$ and the extra vertices $V_{ftail}$ are defined to be black and are called free tail vertices. Using the trees defined in Figure \[cactexamples\] this corresponds to the formal sum $F(\t):= \sum_n l_n \circ_v \t$ where the operadic composition is the one for b/w trees which are not necessarily bi-partite (see [@del]). In our current setup we should first form $\tilde F(\t):=\sum_n \t_n \circ_v \t$ and then delete the images of all leaf edges together with their white vertices of the $\t_n$ to obtain $F(\t)$. 
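Before turning to signs, it may help to see the operator $\Delta$ of §\[assumption\] in the smallest possible case. The following sketch (our own illustration; the choice $A = k[\epsilon]/(\epsilon^2)$ and the tabular encoding of cochains are ours) stores a normalized cochain as its table of values on basis tuples, computes $\Delta f$ from $\eta(a_0,(\Delta f)(a_1,\dots,a_{n-1})) = \eta(1, f\circ N(a_0,\dots,a_{n-1}))$, and confirms $\Delta^2 = 0$.

```python
# Delta on normalized cochains of A = k[eps]/(eps^2).  A cochain in CH^n is
# a dict {basis index tuple: (x0, x1)} with index 0 for the unit and 1 for
# eps; missing keys are read as zero.  Normalized cochains vanish whenever
# a slot carries the unit, i.e. whenever the index tuple contains a 0.
from itertools import product

def mul(a, b):
    # (x0 + x1*eps)(y0 + y1*eps)
    return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])

def eta(a, b):
    return mul(a, b)[1]          # eps-coefficient of the product

def apply_N(f, idx):
    """eta(1, f o N) on a basis tuple; t carries the sign (-1)^(m-1)."""
    m, total = len(idx), 0
    for k in range(m):
        rot = idx[-k:] + idx[:-k] if k else idx
        total += (-1) ** (k * (m - 1)) * eta((1, 0), f.get(rot, (0, 0)))
    return total

def delta(f, n):
    """Delta: CH^n -> CH^(n-1), recovered from its eta-pairings."""
    g = {}
    for args in product((0, 1), repeat=n - 1):
        # eta(e_i, Delta f(args)) for e_0 = 1 and e_1 = eps
        pair = [apply_N(f, (i,) + args) for i in (0, 1)]
        # unpair through the inverse Gram matrix [[0, 1], [1, 0]]
        val = (pair[1], pair[0])
        if val != (0, 0):
            g[args] = val
    return g
```

For odd arity the signed cyclic sum adds up $n$ equal contributions, while for even arity it cancels in pairs; both behaviors are visible in the assertions below.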
Signs {#signsection} ----- The best way to fix signs of course is to work with tensors indexed by edges like in [@del; @KS]. For this one fixes a free object $L$ (free $\mathbb{Z}$-module or $k$-vector space) generated by one element of degree $\pm 1$ and calculates signs using $L^{\otimes E_w(\t)}$ before applying the foliage operator while using $L^{\otimes E_{weight}}$ after applying the foliage operator, where $E_{weight}=E_w\cup E_{root}\cup E_{ftail}\cup E_{spine}$. Explicitly, we fix the signs to be given as follows. For any tree $\t'$ in the linear combination above, we take the sign of $\t'$ to be the sign of the permutation which permutes the set $E_{weight}$ in the order induced by $\prec$ to the order where at each vertex one first has the root if applicable, then all non–tail edges, then all the free tails, and if there is a spine edge, the spine. The explicit signs above coincide with usual signs [@Loday] for the operations and the operators $b$ and $B$ and also coincide with the signs of [@G] for the $\circ_i$ and hence for the brace operations. The signs for the operations corresponding to operations on the Hochschild side are fixed by declaring the symbols “,” and “{” to have degree one. \[treeaction\] For $\t\in \swlbptree$ let $\hat\t$ be its realization. We define the operation of $\t$ on $\overline {CH}(A,A)$ by $$\eta(a_0,{\t(f_1,\dots, f_n)}(a_1,\dots, a_N)):=Y(F(\hat\t))(f_1,\dots, f_n)(a_0,\dots,a_N)$$ Notice that due to the Definition \[degreecount\] the right hand side is finite. Examples {#example} -------- We will first regard the tree $O'$ with one white vertex, no additional black edges, no free tails and a spine, see Figure \[cactexamples\]. For a function $f\in \overline {CH}^n$ we obtain: $$\begin{gathered} Y(F(O'))(f)(a_0,\dots,a_{n-1})= \eta(1,f(a_0,\dots a_{n-1})+(-1)^{n-1}f(a_{n-1}, a_0, \dots, a_{n-2}) +\dots)\\ =\eta(a_0,\Delta(f)(a_1,\dots,a_{n-1})) \end{gathered}$$ Let $\t'_{n,i}$ be the tree of Figure \[cactexamples\]. 
Then the operation corresponds to $$Y(F(\t'_{n,i}))(f;g_1,\dots,g_n)(a_0,\dots,a_{N})=\\ \eta(1,f\{' g_{i+1}, \dots, g_n, g_1, \dots, g_i\}(a_{(2)}, a_0,a_{(1)}))$$ where $N=|f|+\sum|g_i|-n-1$ and we used the shorthand notation $$\begin{gathered} f\{' g_{j+1}, \dots, g_n, g_1, \dots, g_j\}(a_{(2)}, a_0,a_{(1)}) = \sum \pm f(a_{k+1},\dots, a_{i_{j+1}-1},\\ g_{j+1}(a_{i_{j+1}}, \dots, a_{i_{j+1}+|g_{j+1}|}), \dots, a_{i_n-1}, g_n(a_{i_n}, \dots, a_{i_n+|g_n|}), \dots , a_N, a_0, \\a_1, \dots, a_{i_1-1},g_1(a_{i_1}, \dots, a_{i_1+|g_1|}), \dots, a_{i_j-1}, g_j(a_{i_j}, \dots, a_{i_j+|g_j|}), \dots , a_k)\end{gathered}$$ where the sum runs over $1 \leq i_1 \leq \dots \leq i_j \leq \dots \leq k \leq \dots \leq i_{j+1} \leq \dots \leq i_{n} \leq N:$ $i_l + |g_l|\leq i_{l+1},i_j + |g_j|\leq k $ and the signs are as explained above. \[mainthm\] The Hochschild cochains of a finite-dimensional associative algebra with a non–degenerate, symmetric, invariant bilinear form are an algebra over the chains of the framed little discs operad. This operation is compatible with the differentials. We will use the cellular chains $\CCCacti$ as a model for the chains of the framed little discs operad. It is clear that \[treeaction\] defines an action. On the Hochschild side, the $\circ_i$ operations are substitutions of the type $f_i=\psi(g_1,\dots,g_n)$. For $\CCCacti$ the $\t \circ_i \t'$ operations are the pull-back via the foliage operator of all possible substitutions of elements of $F(\t')$ into the position $i$ of $F(\t)$, for $\t,\t' \in \CCCacti$. The action $Y$ then projects onto the substitution $f_i=\psi(g_1,\dots,g_n)$, so that the action is operadic. Explicitly, the substitution $t \circ^s_i t'$ for planted planar bi-partite trees with a decoration $\sdec$ and additional free tails is given as follows: Say the number of tails of $t'$ coincides with $|F(v_i)|$.
In this case replace the vertex $v_i$ of $t$, its edges and the black vertices corresponding to the edges with the tree $t'$, matching the flags of $v_i$ with the tails of $t'$ by first matching the root edge with the marked flag of $v_i$ and then using the linear order. Lastly contract the image of the root flag. Otherwise set $t \circ^s_i t'=0$. With this definition it is easy to see that $F(\t \circ_i \t')=F(\t) \circ^s_i F(\t')$. The compatibility of the Hochschild differential with the differential of the cell complex follows from the relevant statements for $\t_n$ and $\t_n^b$, which are a straightforward but lengthy calculation (see e.g. [@del; @G]), together with the calculations of §\[example\] above, which are easily modified to show that $(\del O')(f)=\Delta(\del(f))$ and that $\del((\t'_{n,i})(f,g_1, \dots, g_n))= (\del\t'_{n,i})(f,g_1, \dots, g_n) \pm (\t'_{n,i})(\del f,g_1, \dots, g_n) + \sum_i\pm (\t'_{n,i})(f,g_1, \dots, \del(g_i), \dots, g_n)$ via an even more lengthy but still straightforward calculation. This then verifies the claim in view of the compatibility of the differentials and the respective operad structures. Alternatively, in view of the operation of the foliage operator, the compatibilities follow from a straightforward translation of trees with tails into operations on the Hochschild complex. The compatibility of the differential then follows from the almost identical definition of the differential for trees with tails of [@del] and that in the Hochschild complex as $\del(f)=f\circ \cup - (-1)^{|f|}\cup \circ f$. The normalized Hochschild cochains of an algebra as above are a BV algebra up to homotopy. This could of course have been checked directly without recourse to the operation of a chain model, but we do not know of any source for this result. It also seems to be difficult to guess the right homotopies, as Gerstenhaber did in the non-cyclic case [@G].
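Two of the facts invoked in the proof, namely that the Hochschild differential squares to zero and that it is a derivation for the cup product with the sign $(-1)^{|f|}$ (as in $\del(f)=f\circ\cup-(-1)^{|f|}\cup\circ f$), can be machine-checked on a small example. The following is our own toy computation (not from the paper), with $A = M_2(k)$ in its elementary-matrix basis.

```python
# Hochschild differential and cup product on CH^*(A, A) for A = M_2(k).
# A cochain in CH^m is a dict sending m-tuples of basis labels to elements
# of A (dicts {label: coeff}); missing keys are read as zero.
from itertools import product

# elementary-matrix basis: e_ij e_kl = [j == k] e_il
LABELS = [(i, j) for i in range(2) for j in range(2)]

def mul(a, b):
    out = {}
    for (i, j), x in a.items():
        for (k, l), y in b.items():
            if j == k:
                out[(i, l)] = out.get((i, l), 0) + x * y
    return {key: v for key, v in out.items() if v}

def add(a, b, sign=1):
    out = dict(a)
    for key, v in b.items():
        out[key] = out.get(key, 0) + sign * v
    return {key: v for key, v in out.items() if v}

def ev(f, args):
    """Multilinear evaluation of a cochain table f on elements of A."""
    out = {}
    for idx, val in f.items():
        coeff = 1
        for slot, lab in enumerate(idx):
            coeff *= args[slot].get(lab, 0)
        if coeff:
            out = add(out, {key: coeff * v for key, v in val.items()})
    return out

def d(f, m):
    """Hochschild differential CH^m -> CH^(m+1), tabulated on basis tuples."""
    g = {}
    for idx in product(LABELS, repeat=m + 1):
        args = [{lab: 1} for lab in idx]
        val = mul(args[0], ev(f, args[1:]))
        for i in range(1, m + 1):
            merged = args[:i - 1] + [mul(args[i - 1], args[i])] + args[i + 1:]
            val = add(val, ev(f, merged), (-1) ** i)
        val = add(val, mul(ev(f, args[:-1]), args[-1]), (-1) ** (m + 1))
        if val:
            g[idx] = val
    return g

def cup(f, g):
    """(f cup g)(a_1..a_{m+n}) = f(a_1..a_m) g(a_{m+1}..a_{m+n})."""
    out = {}
    for i1, v1 in f.items():
        for i2, v2 in g.items():
            v = mul(v1, v2)
            if v:
                out[i1 + i2] = v
    return out

def cadd(F, G, sign=1):
    out = dict(F)
    for key, v in G.items():
        out[key] = add(out.get(key, {}), v, sign)
    return {key: v for key, v in out.items() if v}
```

With these tables, $d^2 = 0$ and the graded Leibniz rule $d(f\cup g) = df\cup g + (-1)^{|f|} f\cup dg$ can be verified exhaustively for small arities.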
The content of the next corollary was expected [@Connes], but we again could not find a source for it. The Hochschild cohomology of an algebra as above is a BV algebra, such that the induced bracket is the Gerstenhaber bracket. Lastly, since our second version of cellular chains of Proposition \[secondcells\] is a subdivision of the cell decomposition of Proposition \[firstcells\], we can also use the latter cell decomposition. The normalized Hochschild cochains of an algebra as above are an algebra over the semi–direct product of a chain model of the little discs operad and a chain model for the operad $\mathcal{S}^1$ built on the monoid $S^1$. The operation of the little discs operad by braces, viz. the original Deligne conjecture as discussed in [@del] for Frobenius algebras, corresponds to the decorations in which $\Zdec \equiv 0$ and the decorated edge is always the outgoing edge. In Theorem \[mainthm\] we can relax the conditions and implications as explained in §\[assumption\]. Variations and relation to string topology ========================================== In terms of the setup of operadic correlation functions which we presented above, it is possible to analyze several generalizations. First, one can generalize from trees to more general graphs. This description then yields an action of the pseudo-cells of moduli spaces of curves or bordered surfaces [@ribbon]. One can also consider different types of chains, such as Hochschild chains or cyclic (co)–chains. The latter also work well with omitting the markings on the trees or with regarding unmarked graphs [@ribbon]. In [@KLP] we gave a map called loop which maps the so–called $\Arc$ operad to ribbon graphs with marked points on the cycles of the graph.
In the case of no punctures the analysis of this map in terms of Strebel differentials yields another proof of Penner’s theorem [@P] on the homotopy equivalence of the suboperad of quasi–filling arcs and the moduli space of decorated bordered surfaces [@ribbon]. This in turn gives a cell decomposition of the aforementioned moduli space. Moreover the correspondence induces an operadic structure on ribbon graphs by pulling back the gluings from the $\Arc$ operad. Using the operadic correlation functions it is straightforward to obtain an action of the cells on a cyclic complex. In a similar spirit, an action of the framed little discs on a cyclic complex given by the $Tot$ of a special type of cyclic cosimplicial complex has been announced in [@MS]. Constructing the action in terms of our correlation functions should then allow us to construct an operation of the cells of moduli space on such a complex. Moreover a further decoration of the cells by $\mathbb{Z}/2\mathbb{Z}$ produces an operad which acts on the cyclic complex of such an algebra and is compatible with the differential [@ribbon]. The $A_{\infty}$–versions of these statements could be deduced from a conjectural “blow–up” of the cacti operads which is presented in [@woods]. Here the cells are given by products of associahedra and cyclohedra and are indexed by trees of the type appearing in [@KS]. Finally, using the cyclic description of the free loop space or the iterated integral representation of [@Merk] together with the results mentioned above, we expect to be able to obtain an action of the (decorated) pseudo-cells of moduli space on the free loop space of a compact manifold which extends the operation of the string PROP [@CS1; @CS2], thus completing a further step of the string topology program [@stringarc]. [99]{} A. Connes. Private communication. P. Deligne. [*Catégories tannakiennes.*]{} The Grothendieck Festschrift, Vol. II, 111–195. M. Chas and D. Sullivan.
[*String Topology.*]{} Preprint math.GT/9911159. M. Chas and D. Sullivan. [*Closed string operators in topology leading to Lie bialgebras and higher string algebra.*]{} In: The legacy of Niels Henrik Abel, 771–784, Springer, Berlin, 2004. M. Gerstenhaber. [*The cohomology structure of an associative ring.*]{} Ann. of Math. 78 (1963), 267–288. R. M. Kaufmann. [*On several varieties of cacti and their relations.*]{} Algebraic & Geometric Topology 5 (2005), 237–300. R. M. Kaufmann. [*On Spineless Cacti, Deligne’s Conjecture and Connes–Kreimer’s Hopf Algebra.*]{} In: N. Tongring and R. C. Penner, “Woods Hole Mathematics. Perspectives in Mathematics and Physics”, Series on Knots and Everything, Vol. 34, World Scientific, 2004. R. M. Kaufmann. [*Operads, Moduli of Surfaces and Quantum Algebras.*]{} In: “Wood’s Hole Mathematical Meetings”, World Scientific. To appear. R. M. Kaufmann. [*Arcs, Ribbons, Moduli spaces and operations on Hochschild.*]{} In preparation. R. M. Kaufmann. [*String topology operations of moduli space via the arc operad.*]{} In preparation. R. M. Kaufmann, M. Livernet and R. C. Penner. [*Arc Operads and Arc Algebras.*]{} Geometry and Topology 7 (2003), 511–568. M. Kontsevich and Y. Soibelman. [*Deformations of algebras over operads and Deligne’s conjecture.*]{} Conférence Moshé Flato 1999, Vol. I (Dijon), 255–307, Math. Phys. Stud., 21, Kluwer Acad. Publ., Dordrecht, 2000. J.-L. Loday. [*Cyclic homology.*]{} Appendix E by Mar[í]{}a O. Ronco. Second edition. Chapter 13 by the author in collaboration with Teimuraz Pirashvili. Grundlehren der Mathematischen Wissenschaften, 301. Springer-Verlag, Berlin, 1998. J. E. McClure and J. H. Smith. [*Operads and cosimplicial objects: an introduction.*]{} In: Axiomatic, enriched and motivic homotopy theory, 133–171, NATO Sci. Ser. II Math. Phys. Chem., 131, Kluwer Acad. Publ., Dordrecht, 2004. S. A. Merkulov. [*De Rham model for string topology.*]{} Int. Math. Res. Not. 2004, no. 55, 2955–2981. R. C. Penner.
[*Decorated Teichmüller theory of bordered surfaces.*]{} Comm. Anal. Geom. 12 (2004), no. 4, 793–820. A. A. Voronov. [*Notes on universal algebra.*]{} Preprint math.QA/0111009.
--- abstract: 'By gauging an Abelian electromagnetic (em) solution through a non-Abelian transformation, and in accordance with a theorem proved long ago, we construct a simple class of colliding Einstein-Yang-Mills (EYM) plane waves. The solution is isometric to the Wu-Yang charged Kerr-Newman (KN) black hole and shares many of the properties satisfied by colliding Einstein-Maxwell (EM) plane waves. In the linear polarization limit with unit degenerate charge it reduces to the Bell-Szekeres (BS) solution for colliding em shock waves.' author: - Ozay Gurtug - Mustafa Halilsoy title: 'Restricted Class of Colliding Einstein-Yang-Mills Plane Waves' --- Introduction ============ The 1970s and, partly, the 80s were the years in which the subject of colliding waves in general relativity attracted much interest. All known results were later compiled into a catalogue of solutions [@JB]. In later decades the interest in the subject continued with less momentum, concentrating more on sophisticated fields such as dilaton, axion and torsion coupled to gravity and electromagnetic (em) fields. The formation of a Cauchy horizon/singularity, and the conditions under which the horizon remains stable, dominated most discussions to date. To our knowledge the discussion has not yet been conclusive to the satisfaction of all. Our aim in this paper is not to contribute in this particular direction, but rather to point out a restricted class of colliding Yang-Mills (YM) waves which behaves almost em-like. The motivating factor to consider such a problem anew relies on the recent collisions at the TeV scale at the Large Hadron Collider (LHC), which maintain the prime agenda at CERN. As the protons (anti-protons) are boosted to almost the speed of light, they behave more wave-like than particle-like, and their collisions are reminiscent of waves colliding in general relativity. Since the inner/color structure of hadrons has constituent YM fields, the collision of such waves deserves further investigation.
Within this context, although there is a large collection of Einstein-Maxwell (EM) solutions available in the literature [@JB; @MH], the extension of the problem to Einstein-Yang-Mills (EYM) remained ever open. We aim to contribute in this regard, at least partly, for the gauge group $SO(3)$ and pave the way for further solutions underlying various gauge groups. Our starting point is a theorem proved long ago by P. Yasskin [@PY] in connection with Yang-Mills (YM) fields in a curved spacetime. The method in Yasskin’s theorem is to start from an Abelian $U(1)$ em solution and map it through a non-Abelian transformation to the YM problem. In this process, naturally, the degenerate YM charges are defined from the original em charge. By employing the Wu-Yang ansatz [@WY] for the YM fields in the trapped region of the Kerr-Newman (KN) black hole, we construct a solution that describes colliding EYM plane waves. In this process, the rotation of the black hole transforms into the cross polarization of the colliding waves. In the linear polarization limit we recover the colliding em wave solution due to Bell and Szekeres (BS) [@BS]. More generally, any colliding EM metric/field can be shown to represent at the same time colliding EYM waves, in a restricted sense, provided the YM field is defined according to Yasskin’s theorem. This guarantees that the incoming YM fields are em-like plane waves, so that they do not create extra currents. It can be anticipated that non-planar waves will induce their own sources through self-interaction, a case that we avoid in the present study. Although YM fields are known to be a subject in the realm of quantum chromodynamics, our treatment here is entirely classical. Stated otherwise, we adopt the viewpoint that anything that interacts with classical gravity must itself behave classically. The organization of the paper is as follows. In Section II, we introduce YM fields in the KN black hole geometry.
Colliding EYM plane waves follow in Section III. Section IV concentrates on the linear polarization of the waves. We complete the paper with the conclusion in Section V. KN Black Hole and YM Fields. ============================ The KN black hole solution is given by the line element $$ds^{2}=\frac{U^{2}}{\rho ^{2}}\left( dt-\overline{a}\sin ^{2}\theta d\varphi \right) ^{2}-\frac{\sin ^{2}\theta }{\rho ^{2}}\left[ Fd\varphi -\overline{a}dt\right] ^{2}-\frac{\rho ^{2}}{U^{2}}dr^{2}-\rho ^{2}d\theta ^{2},$$ where $$U^{2}=r^{2}-2mr+\overline{a}^{2}+Q^{2},\text{ \ \ \ \ \ \ }\rho ^{2}=r^{2}+\overline{a}^{2}\cos ^{2}\theta ,\text{ \ \ \ \ \ }F=r^{2}+\overline{a}^{2},$$ in which $\overline{a}$ is the rotation parameter and $Q$ is the em charge. By invoking Yasskin’s theorem [@PY], we extend this solution to represent a particular YM field as follows. A suitable YM gauge potential 1-form $A^{i}=A_{\mu }^{i}dx^{\mu }$ $\left( i=1,2,3\right) $ is chosen as $$A^{i}=\frac{1}{\rho ^{2}}Q^{i}\cos \theta \left[ \left( r^{2}+\overline{a}^{2}\right) d\varphi -\overline{a}dt\right] .$$ Here the gauge charge $Q^{i}$ satisfies the constraint $$\gamma _{ij}Q^{i}Q^{j}=Q^{2},$$ with the invariant group metric $\gamma _{ij}=\delta _{ij}$, and $Q$ is the charge of the KN black hole. The YM field 2-form is defined by $F^{i}=\frac{1}{2}F_{\mu \nu }^{i}dx^{\mu }\wedge dx^{\nu },$ where $\wedge $ stands for the wedge product and $$F_{\mu \nu }^{i}=\partial _{\mu }A_{\nu }^{i}-\partial _{\nu }A_{\mu }^{i}+\frac{1}{2Q}\epsilon _{jk}^{i}A_{\mu }^{j}A_{\nu }^{k},$$ in which $\epsilon _{jk}^{i}$ is the structure constant for the group.
In the $SO(3)$ basis, the generators $T^{i}$ $\left( i=1,2,3\right) $ are given by $$T^{1}=\left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0\end{array}\right) ,\text{ \ \ \ }T^{2}=\left( \begin{array}{ccc} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0\end{array}\right) ,\text{ \ \ \ }T^{3}=\left( \begin{array}{ccc} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0\end{array}\right) ,$$ and the YM potential 1-form has the representation $$A=A_{\mu }^{i}T^{i}dx^{\mu }.$$ For the particular choice ($Q^{3}=Q\neq 0=Q^{1}=Q^{2}$) we have $$\begin{aligned} A_{\varphi }^{3} &=&\frac{Q\cos \theta }{\rho ^{2}}\left( r^{2}+\overline{a}^{2}\right) , \\ A_{t}^{3} &=&-\frac{\overline{a}Q\cos \theta }{\rho ^{2}}, \notag\end{aligned}$$ which has both electric and magnetic components. If $\overline{a}=0,$ i.e. for the Reissner-Nordström black hole, we have a pure magnetic field. Now we apply a gauge transformation on $A_{\mu }$ through $$A_{\mu }\rightarrow \widetilde{A}_{\mu }=GA_{\mu }G^{-1}-Q\left( \partial _{\mu }G\right) G^{-1},$$ where the gauge transformation matrix $G$ is $$G=\left( \begin{array}{ccc} \sin \varphi & \cos \varphi \cos \theta & \cos \varphi \sin \theta \\ -\cos \varphi & \sin \varphi \cos \theta & \sin \varphi \sin \theta \\ 0 & -\sin \theta & \cos \theta\end{array}\right) .$$ We obtain the components of the new gauge potential 1-forms (after suppressing the tilde over $A_{\mu }$) as $$\begin{aligned} A^{1} &=&\frac{Q}{2\rho ^{2}}\sin 2\theta \cos \varphi \left[ \left( r^{2}+\overline{a}^{2}\right) d\varphi -\overline{a}dt\right] +Q\sin \varphi d\theta , \\ A^{2} &=&\frac{Q}{2\rho ^{2}}\sin 2\theta \sin \varphi \left[ \left( r^{2}+\overline{a}^{2}\right) d\varphi -\overline{a}dt\right] -Q\cos \varphi d\theta , \notag \\ A^{3} &=&-\frac{Q}{\rho ^{2}}(r^{2}\sin ^{2}\theta d\varphi +\overline{a}\cos ^{2}\theta dt).
\notag\end{aligned}$$ In the next section we shall transform these potentials and the metric (1) into the colliding wave spacetime and interpret it as a solution to the colliding EYM problem. Colliding EYM Plane Waves. ========================== The metric function $U^{2}=r^{2}-2mr+\overline{a}^{2}+Q^{2}$ in Eq.(2) has two roots of $U^{2}=0$: $r=r_{+}$ (the outer horizon) and $r=r_{-}$ (the inner horizon). By a particular choice of parameters it is possible to make $r_{+}=r_{-}$, which is called the extremal case; however, we shall consider here only the case $r_{+}>r_{-}\neq 0.$ It can easily be seen that for $r_{-}<r<r_{+}$, $U^{2}<0,$ which makes the spacetime admit two spacelike Killing vectors apt for colliding waves. For simplicity we choose the mass $m=1$ and apply the following transformation $$\begin{aligned} r &=&1+\alpha \tau ,\text{ \ \ \ }\sigma =\cos \theta ,\text{ \ \ \ }t=\alpha x,\text{ \ \ \ \ }y=\varphi , \\ &&\left( \alpha =\sqrt{1-\overline{a}^{2}-Q^{2}}=\frac{1}{p}\right) \notag\end{aligned}$$ to the line element (1). After an overall scaling of the metric we obtain $$ds^{2}=X\left( \frac{d\tau ^{2}}{\Delta }-\frac{d\sigma ^{2}}{\delta }\right) -X^{-1}\left( Rdx^{2}+Edy^{2}-2Gdxdy\right) ,$$ where $$\begin{aligned} X &=&\left( p+\tau \right) ^{2}+a_{0}^{2}\sigma ^{2},\text{ \ \ \ \ \ }R=\Delta +a_{0}^{2}\delta ,\text{\ \ \ \ \ \ } \\ E &=&\Delta A^{2}+\delta B^{2},\text{ \ \ \ \ \ \ }G=\Delta A+a_{0}\delta B, \notag \\ A &=&a_{0}\delta ,\text{ \ \ \ \ \ \ }B=\left( p+\tau \right) ^{2}+a_{0}^{2}, \notag \\ \Delta &=&1-\tau ^{2},\text{ \ \ \ \ \ \ }\delta =1-\sigma ^{2}, \notag\end{aligned}$$ in which $$\tau =\sin \left( au\theta \left( u\right) +bv\theta \left( v\right) \right) ,\text{ \ \ \ \ \ }\sigma =\sin \left( au\theta \left( u\right) -bv\theta \left( v\right) \right) ,$$ are the new coordinates in terms of the null coordinates $\left( u,v\right) $, and $\left( a,b\right) $ are arbitrary constants.
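With $\alpha =\sqrt{1-\overline{a}^{2}-Q^{2}}=1/p$ as above, the abbreviations $a_{0}=\overline{a}p$ and $q=Qp$ used just below obey $p^{2}-a_{0}^{2}-q^{2}=1$ identically. A quick symbolic check (ours, not part of the original text):

```python
import sympy as sp

# abar is the rotation parameter, Q the em charge; assume 1 - abar^2 - Q^2 > 0.
abar, Q = sp.symbols('abar Q', positive=True)

alpha = sp.sqrt(1 - abar**2 - Q**2)   # alpha = sqrt(1 - abar^2 - Q^2) = 1/p
p = 1 / alpha
a0 = abar * p                          # a_0 = abar * p
q = Q * p                              # q = Q * p

# The constraint p^2 - a0^2 - q^2 = 1 holds identically:
assert sp.simplify(p**2 - a0**2 - q**2) == 1
```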
We note also that $a_{0}=\overline{a}p.$ By introducing another constant $q=Qp$, we can check that the constraint condition $$p^{2}-a_{0}^{2}-q^{2}=1,$$ holds. The YM potential 1-forms are now $$\begin{aligned} A^{1} &=&\frac{Q\sigma \sqrt{\delta }}{X}\cos y\left( Bdy-a_{0}dx\right) -Q\sin y\frac{d\sigma }{\sqrt{\delta }}, \\ A^{2} &=&\frac{Q\sigma \sqrt{\delta }}{X}\sin y\left( Bdy-a_{0}dx\right) +Q\cos y\frac{d\sigma }{\sqrt{\delta }}, \notag \\ A^{3} &=&\frac{Q\sigma ^{2}}{X}\left( Bdy-a_{0}dx\right) -dy. \notag\end{aligned}$$ We note that in the null coordinates $\left( u,v\right) $ we have $$\begin{aligned} d\sigma &=&-\sqrt{\delta }\left( a\theta \left( u\right) du-b\theta \left( v\right) dv\right) , \\ d\tau &=&\sqrt{\Delta }\left( a\theta \left( u\right) du+b\theta \left( v\right) dv\right) , \notag\end{aligned}$$ in which $\theta \left( u\right) $ and $\theta \left( v\right) $ are the unit step functions introduced as a requirement of the collision problem. Let us note that the insertion of the step functions must be checked critically to ensure the absence of any redundant current sheets. In most cases such an insertion fails to work, but here, due to the em analogy, it does work. The YM field 2-forms $F^{i}$ are given by
$$F^{i}=Q\sqrt{\delta }\left( \cos y,\sin y,\frac{\sigma }{\sqrt{\delta }}\right) F,$$ where the 2-form $F$ is $$F=\frac{1}{X^{2}}\left[ \left( \left( p+\tau \right) ^{2}-a_{0}^{2}\sigma ^{2}\right) d\sigma -2\sigma \left( p+\tau \right) d\tau \right] \wedge \left( Bdy-a_{0}dx\right) ,$$ while its dual takes the form $$^{\ast }F=\frac{1}{X^{2}}\left[ \left( \left( p+\tau \right) ^{2}-a_{0}^{2}\sigma ^{2}\right) d\tau \wedge \left( dx-a_{0}\delta dy\right) -2a_{0}\sigma \left( p+\tau \right) d\sigma \wedge \left( Bdy-a_{0}dx\right) \right] .$$ Without much difficulty (the calculation is more transparent in the $\left( \tau ,\sigma ,x,y\right) $ coordinates than in $\left( u,v,x,y\right) $), one can show that the integrability equations $$dF+A\wedge F=0,$$ and the YM equations $$d^{\ast }F+A\wedge ^{\ast }F=0,$$ are all satisfied. This solves the problem of colliding EYM plane waves, where the YM field is obtained from Yasskin’s theorem while the metric is the inner-horizon region of the KN geometry, explored first within the context of EM by Chandrasekhar and Xanthopoulos [@CX2]. For a detailed analysis of this spacetime we refer to [@CX2]. The Linear Polarization Limit. ============================== In this section we consider the metric obtained in the previous section and set $a_{0}=0$ to make the waves linearly polarized.
In the null coordinates, after an overall scaling, the line element takes the form $$ds^{2}=\Sigma ^{2}\left( 2dudv-\delta dy^{2}\right) -\frac{\Delta }{\Sigma ^{2}}dx^{2},$$ where $$\Sigma =1+\alpha \tau ,\ \ \ \ \Delta =1-\tau ^{2},\ \ \ \ \ \delta =1-\sigma ^{2},\ \ \ \ \ \alpha =\sqrt{1-Q^{2}},$$ while the $SO(3)$ valued gauge potential 1-forms are $$\begin{aligned} A^{1} &=&Q\left[ \left( \sigma \sqrt{\delta }\cos y\right) dy-\left( \sin y\right) \left( a\theta \left( u\right) du-b\theta \left( v\right) dv\right) \right] , \\ A^{2} &=&Q\left[ \left( \sigma \sqrt{\delta }\sin y\right) dy+\left( \cos y\right) \left( a\theta \left( u\right) du-b\theta \left( v\right) dv\right) \right] , \notag \\ A^{3} &=&-Q\delta dy. \notag\end{aligned}$$ The YM field 2-forms $F^{i}$ and $^{\ast }F^{i}$ can be expressed as $$\begin{aligned} F^{i} &=&Q\delta \left( \cos y,\sin y,\frac{\sigma }{\sqrt{\delta }}\right) \left[ a\theta \left( u\right) du-b\theta \left( v\right) dv\right] \wedge dy, \\ ^{\ast }F^{i} &=&\frac{Q\sqrt{\Delta \delta }}{\left( \Sigma \right) ^{2}}\left( \cos y,\sin y,\frac{\sigma }{\sqrt{\delta }}\right) \left[ a\theta \left( u\right) du+b\theta \left( v\right) dv\right] \wedge dx. \notag\end{aligned}$$ In the null tetrad basis 1-forms $\left( l,n,m,\overline{m}\right) $ of Newman and Penrose $$\begin{aligned} l &=&\Sigma du, \\ n &=&\Sigma dv, \notag \\ m+\overline{m} &=&\sqrt{2}\Sigma \sqrt{\delta }dy, \notag \\ m-\overline{m} &=&\sqrt{2}i\frac{\sqrt{\Delta }}{\Sigma }dx, \notag\end{aligned}$$ the energy-momentum tensor $T_{\mu \nu }$ becomes $$4\pi T_{\mu \nu }=\frac{Q^{2}}{\Sigma ^{4}}\left[ a^{2}\theta \left( u\right) l_{\mu }l_{\nu }+b^{2}\theta \left( v\right) n_{\mu }n_{\nu }+ab\theta \left( u\right) \theta \left( v\right) \left( m_{\mu }m_{\nu }+\overline{m}_{\mu }\overline{m}_{\nu }\right) \right] .$$ Prior to the collision, the incoming coupled EYM plane waves are obtained by setting $v<0$ $(u<0)$ in Eq.(24).
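As a consistency check on the metric and the Newman-Penrose tetrad just introduced, the normalizations $l\cdot n=1$ and $m\cdot \overline{m}=-1$ (with all other contractions vanishing) can be verified symbolically. The following sympy sketch is ours, treating $\Sigma ,\Delta ,\delta $ as positive symbols in the coordinate ordering $(u,v,x,y)$:

```python
import sympy as sp

# Sigma, Delta, delta treated as positive symbols (they are positive functions
# inside the interaction region).
Sig, Del, dlt = sp.symbols('Sigma Delta delta', positive=True)
I = sp.I

# Metric: ds^2 = Sigma^2 (2 du dv - delta dy^2) - (Delta/Sigma^2) dx^2,
# coordinate ordering (u, v, x, y).
g = sp.Matrix([[0,      Sig**2, 0,           0],
               [Sig**2, 0,      0,           0],
               [0,      0,      -Del/Sig**2, 0],
               [0,      0,      0,           -Sig**2*dlt]])
ginv = g.inv()

# Newman-Penrose covectors: l = Sigma du, n = Sigma dv,
# m = (1/sqrt(2)) [ i sqrt(Delta)/Sigma dx + Sigma sqrt(delta) dy ].
l    = sp.Matrix([Sig, 0, 0, 0])
n    = sp.Matrix([0, Sig, 0, 0])
m    = sp.Matrix([0, 0, I*sp.sqrt(Del)/(sp.sqrt(2)*Sig), Sig*sp.sqrt(dlt)/sp.sqrt(2)])
mbar = m.conjugate()

dot = lambda a, b: sp.simplify((a.T * ginv * b)[0, 0])

# Tetrad normalization: l.n = 1, m.mbar = -1, everything else zero.
assert dot(l, n) == 1 and dot(m, mbar) == -1
assert dot(l, l) == 0 and dot(n, n) == 0 and dot(m, m) == 0 and dot(l, m) == 0
```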
For $u<0$ and $v<0$ we have a flat space in which the YM field vanishes. For $u>0$ and $v>0$, the Weyl scalars $\Psi _{2},$ $\Psi _{0}$ and $\Psi _{4}$ are all regular, as can be checked from $$\begin{aligned} \Psi _{2} &=&\frac{\alpha \left( \alpha +\tau \right) }{\Sigma ^{4}}ab\theta \left( u\right) \theta \left( v\right) , \\ \Psi _{4} &=&-\frac{a\delta \left( u\right) \left[ \alpha +\sin \left( bv\theta \left( v\right) \right) \right] }{\cos \left( bv\theta \left( v\right) \right) \left[ 1+\alpha \sin \left( bv\theta \left( v\right) \right) \right] }+3a^{2}\theta \left( u\right) \frac{\alpha \left( \alpha +\tau \right) }{\Sigma ^{4}}, \notag \\ \Psi _{0} &=&-\frac{b\delta \left( v\right) \left[ \alpha +\sin \left( au\theta \left( u\right) \right) \right] }{\cos \left( au\theta \left( u\right) \right) \left[ 1+\alpha \sin \left( au\theta \left( u\right) \right) \right] }+3b^{2}\theta \left( v\right) \frac{\alpha \left( \alpha +\tau \right) }{\Sigma ^{4}}, \notag\end{aligned}$$ satisfying the type-D condition $\Psi _{0}\Psi _{4}=9\Psi _{2}^{2}.$ The invariants $I=2\left( \Psi _{0}\Psi _{4}+3\Psi _{2}^{2}\right) $ and $J=6\Psi _{2}\left( \Psi _{0}\Psi _{4}-\Psi _{2}^{2}\right) $ are both regular, as they should be, inside the collision region. On the boundaries $u=0,bv=\pi /2$ $(v=0,au=\pi /2)$, however, there are null singularities, which are also present in the problem of colliding em shock waves [@BS]. The pathology involved (if any) on the hypersurfaces $\tau =1$ and $\sigma =\pm 1$ has been discussed extensively in the literature of colliding waves (see Ref. [@JB] and references therein). An analytic extension beyond the horizon $(\tau =1)$, albeit a non-unique process, reveals geodesic completeness and other issues [@GH]. The metric (23) has well-known limits. For $Q=0$ $\left( \alpha =1\right) $ the YM field vanishes and one recovers the colliding gravitational waves locally isometric to the Schwarzschild interior [@JB].
For $Q=1$ $\left( \alpha =0\right) $ we have the case of colliding em waves [@BS]. This implies that the Bell-Szekeres metric can be interpreted at the same time to represent colliding YM plane waves for a unit charge $Q=1$. The Wu-Yang ansatz solution [@WY] for the static YM fields has the gauge potential $$A^{i}=-\frac{Q}{r^{2}}\epsilon _{jk}^{i}x^{k}dx^{j}$$ where $x^{i}$ stands for the Cartesian coordinates and $Q=Q^{3}\neq 0$ is the only non-zero gauge charge. By the substitution $x^{1}=r\sin \theta \cos \varphi ,$ $x^{2}=r\sin \theta \sin \varphi $ and $x^{3}=r\cos \theta ,$ followed by $\cos \theta =\sin \left( au\theta \left( u\right) -bv\theta \left( v\right) \right) $ and $y=\varphi $, one obtains the potential 1-forms (Eq.(25)). The YM potentials in Eq.(25) thus correspond to the curved-space generalization of the Wu-Yang ansatz solution [@WY]. Conclusion. =========== Customarily, YM fields arise as part of a quantum theory inside nuclei. They carry their own gauge charge and self-interact. These properties are fundamentally different from those of em waves, which correspond to the classical equivalent of photons. In this paper we treat YM waves along with gravitational waves as classical and consider their collision problem in a restricted form. This is not to be compared with a Feynman diagram of quantum chromodynamics. The highly non-linear YM waves act in this simple picture in parallel with gravitational waves to distort spacetime upon interaction. The example which we present here is a case that forms a Cauchy horizon (of Reissner-Nordström and KN types) instead of a singularity. Yasskin’s theorem and the Wu-Yang ansatz aided in obtaining this simple class. Different combinations/collisions which employ the full non-linearity and non-Abelian character may change this picture completely. This, however, remains as challenging as ever. [9]{} J.B. Griffiths. *Colliding Plane Waves in General Relativity*, Oxford University Press, Oxford (1991). M.
Halilsoy, *J. Math. Phys.* (NY) **31**, 2694-2698 (1990). P.B. Yasskin, *Phys. Rev. D* **12**, 2212-2217 (1975). T.T. Wu and C.N. Yang, in *Properties of Matter Under Unusual Conditions*, edited by H. Mark and S. Fernbach (Interscience, New York, 1969). P. Bell and P. Szekeres, *Gen. Rel. and Grav.* **5**, 275 (1974). S. Chandrasekhar and B.C. Xanthopoulos, *Proc. R. Soc. London* **A414**, 1 (1987). O. Gurtug and M. Halilsoy, *Int. Journal Mod. Phys. A* **24**, 3171 (2009).
--- abstract: 'We start from a parity-breaking MCS QED$_{3}$ model with spontaneous breaking of the gauge symmetry as a framework for the evaluation of the electron-electron interaction potential and for the attainment of numerical values for the $e^{-}e^{-}$ bound state. Three expressions ($V_{\text{eff}_{\uparrow \uparrow }},V_{\text{eff}_{\uparrow \downarrow }},V_{\text{eff}_{\downarrow \downarrow }}$) are obtained according to the polarization state of the scattered electrons. At an energy scale compatible with Condensed Matter electronic excitations, these three potentials become degenerate. The resulting potential is implemented in the Schrödinger equation and the variational method is applied to work out the electronic binding energy. The resulting binding energies in the scale of $10-100$ $meV$ and a correlation length in the scale of $10-30$ Å are possible indications that the MCS-QED$_{3}$ model adopted may be suitable to address an eventual case of $e^{-}e^{-}$ pairing in the presence of parity-symmetry breakdown. The data analyzed here suggest an energy scale of $10$-$100$ $meV$ to fix the breaking of the $U(1)$-symmetry.' address: | $^{a}$[*Grupo de Física Teórica José Leite Lopes*]{}\ Petrópolis - RJ - Brazil\ $^{b}$[*Centro Brasileiro de Pesquisas Físicas (CBPF)*]{},\ Coordenação de Teoria de Campos e Partículas (CCP),\ Rua Dr. Xavier Sigaud, 150 - Rio de Janeiro - RJ 22290-180 - Brazil.\ $^{c}$[*Universidade Federal do Maranhão (UFMA)*]{},\ Departamento de Física, Campus Universitário do Bacanga,\ São Luiz - MA, 65085-580 - Brazil. author: - 'H. Belich$^{a,b}$, O.M. Del Cima$^{a}$, M.M. Ferreira Jr.$^{a,c}$ and J.A.
Helayël-Neto$^{a,b}$[^1]' title: 'Electron-Electron Bound States in Maxwell-Chern-Simons-Proca QED$_{3}$' --- Introduction ============ The advent of high-T$_{c}$ superconductivity [@Bednorz], in 1986, brought about great excitement in both the theoretical and experimental panorama, drawing attention to the issue of the formation of Cooper pairs in planar systems. In the late 1990s, there arose a field-theoretical approach to address the mechanism of electronic pairing: the evaluation of the electron-electron Möller scattering as a tool for the attainment of the $e^{-}e^{-}$ interaction potential in the nonrelativistic approximation. This line of action searches for an attractive potential able to induce the formation of correlated electron-electron pairs (the charge carriers of the high-T$_{c}$ superconductors). The present work shall follow this general procedure. By direct application of Gauss's law in $(1+2)$ dimensions for the massless gauge field, the Coulombian interaction takes on the form of a confining potential $\left( \ln r\right) $. The Kato condition [@Chadan] establishes the finiteness of the number of bound states in $D=1+2$ associated with a certain potential $V$, and can be used as a criterion for determining the confining or condensing character of the potential. The fact that the logarithmic potential is confining (according to the Kato criterion) indicates that it does not lead to bound states, making clear the need for a finite-range, screened interaction. The Chern-Simons (CS) term [@DJ] is then introduced as the generator of a (topological) mass for the photon, implying an intense screening of the Coulombian interaction. The Maxwell-Chern-Simons (MCS) model, a particular case of Planar Quantum Electrodynamics - QED$_{3}$, then arose as a theoretical framework able to provide an attractive but non-confining electron-electron interaction.
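The logarithmic form of the planar Coulomb potential quoted above can be checked directly: away from the origin, $\ln r$ solves the two-dimensional Laplace equation, which is the defining property of the Green function of the planar Gauss law. A small symbolic sketch (ours, for illustration):

```python
import sympy as sp

# Symbolic check: away from the origin, ln(r) is harmonic in two dimensions,
# i.e. it solves the 2D Laplace equation. This is the planar "Coulomb"
# potential obtained from Gauss's law; it grows without bound (confining).
x, y = sp.symbols('x y', real=True)
V = sp.log(sp.sqrt(x**2 + y**2))

laplacian = sp.diff(V, x, 2) + sp.diff(V, y, 2)
assert sp.simplify(laplacian) == 0
```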
This model was then used by several authors [@Kogan], [@Girotti], [@Dobroliubov], [@Groshev] as a basic tool for the evaluation of the Möller scattering amplitude at tree level, whose Fourier transform (in the Born approximation) yields the $e^{-}e^{-}$ interaction potential. In a general way, these works have led to the same result: the electron-electron potential comes out attractive whenever the topological mass $\left( \vartheta \right) $ exceeds the electron mass $\left( m_{e}\right) $. Georgelin and Wallet [@Georgelin] started from two MCS-QED$_{3}$ Lagrangians, the first (second) with the gauge field nonminimally coupled to fermions (bosons), in such a way as to introduce the anomalous magnetic moment of the electron into the problem. Working in the perturbative regime ($1/k\ll 1$), these authors found an attractive potential for fermions $\left( V_{\psi \psi }<0\right) $ and also for scalar bosons $\left( V_{\varphi \varphi }<0\right) $ in the nonrelativistic approximation. The presence of the nonminimal coupling seems to be the key factor for the attainment of an attractive potential between charges of the same sign. In this case, the potential remains negative even in the limit of a small topological mass $\left( \vartheta \ll m_{e}\right) $, under a suitable choice of parameters. The nonrenormalizability of this model (due to the nonminimal coupling), however, restricts the validity of their results to tree-level calculations. All the MCS models, except the one exposed in Ref. [@Georgelin], failed from the perspective of yielding a realistic electron-electron condensation in the domain of a Condensed Matter system, due to the condition $\vartheta >m_{e}$ necessary for making the $e^{-}e^{-}$ pairing feasible. It is unlikely that a physical excitation with so large an energy exists in a real solid-state system (superconductors are usually characterized by excitations in the $meV$ scale).
We will see that the introduction of the Higgs mechanism in the context of the MCS electrodynamics brings out a negative contribution to the scattering potential that allows a globally attractive potential regardless of the condition $\vartheta >m_{e}$. In our work, we shall rely on a version of planar QED for which the photonic excitations appear as a by-product of a spontaneous symmetry breaking (SSB) realized on the MCS Lagrangian. The consideration of a Higgs sector (a complex scalar field endowed with a self-interaction potential so as to induce an SSB) in the context of the MCS model provides a new mass term for the topological gauge field: the well-known Proca term $\left( m^{2}A_{\mu }A^{\mu }\right) $. In this way, once the spontaneous breaking of the local U(1) symmetry has taken place, a neutral massive Higgs scalar remains and the gauge field becomes a Maxwell-Chern-Simons-Proca vector field, a clear reference to its two mass components (the topological and the Proca one). The physical mass of such a photon, which may assume two different values, will be written in terms of these two mass parameters, as explicitly given by the expressions read off from the poles of the gauge-field propagator (see Section III). For our purposes, one can assert that the enhancement of complexity determined by the coexistence of a topological and a Proca term in the gauge sector is compensated by the attainment of a gauge propagator with two massive poles (standing for the photon mass). Operationally, from the perspective of a tree-level field-theory investigation, the determination of the gauge propagator and the Feynman rules enable us to derive the interaction potential between two elementary particles as mediated by this gauge field.
This paper, therefore, adopts as a starting point an MCS-Proca Lagrangian with the clear purpose of performing a usual field-theory derivation (in the non-relativistic limit) of an interaction potential, which is then applied to obtain bound states (in a typical quantum-mechanical procedure). Last but not least, the fact that the photon becomes massive is a piece of microscopic information that renders feasible the observation of the Meissner effect in such a system, which opens the applicability of this kind of model to an eventual superconducting planar system endowed with parity breaking [@Parity-breaking]. This theoretical possibility, however, is outside the scope of this work. In a recent paper [@Int-Journal], we derived an interaction potential associated with the scattering of two identically polarized electrons in the framework of a Maxwell-Chern-Simons QED$_{3}$ with spontaneous breaking of the local U(1) symmetry. Our result revealed the interesting possibility of an attractive electron-electron interaction whenever the contribution stemming from the Higgs sector overcomes the repulsive contribution from the gauge sector, which can be achieved by an appropriate fitting of the free parameters. In the present work, we generalize the results attained in Refs. [@Int-Journal], [@Tese], contemplating the existence of two fermionic families $\left( \psi _{+},\psi _{-}\right) $ and performing the numerical evaluation of the $e^{-}e^{-}$ binding energies. The procedure accomplished here is analogous to the one in Refs.
[@Int-Journal], [@Tese]: starting from a QED$_{3}$ Lagrangian (now built up with two spinor polarizations, $\psi _{+},\psi _{-}$) with SSB, one evaluates the Möller scattering amplitudes (in the nonrelativistic approximation), having the Higgs and the massive photon as mediators, and the corresponding interaction potential, which now emerges in three different expressions, $V_{_{\uparrow \uparrow }},V_{_{\uparrow \downarrow }},V_{_{\downarrow \downarrow }}$ (depending on the spin polarization of the scattered electrons). The same theoretical possibility of attractiveness, pointed out in Ref. [@Int-Journal], is now manifested by these three potentials. A numerical procedure (the variational method) is then implemented in order to work out the binding energy of the Cooper pairs. Having in mind the nonrelativistic approximation, a reduced potential is implemented into the Schrödinger equation, whose numerical solution provides the data contained in Tables \[table1\], \[table2\], \[table3\]. The achievement of binding energies in the $meV$ scale and correlation lengths in the $10-30$ Å scale is an indication that the adopted MCS-QED$_{3}$ model may be suitable for addressing an eventual electronic pairing in a system endowed with parity breaking.
This paper is outlined as follows: in Section II, we present the QED$_{3}$ Lagrangian and its general features, and realize the spontaneous breaking of the local U(1) symmetry that generates the Higgs boson and the Maxwell-Chern-Simons-Proca photon; in Section III, one evaluates the amplitudes for the Möller scattering; their Fourier transforms will provide the $e^{-}e^{-}$ interaction potentials $V_{_{\uparrow \uparrow }},V_{_{\uparrow \downarrow }},V_{_{\downarrow \downarrow }}$ (despite the complex form of these potentials, they maintain the theoretical possibility of being attractive); in Section IV, one performs an analysis in order to obtain the $e^{-}e^{-}$ binding energies by means of the numerical solution of the Schrödinger equation (by the variational method), whose results are displayed in Tables \[table1\], \[table2\], \[table3\]. In Section V, we present our General Conclusions. The MCS QED$_{3}$ with Spontaneous Symmetry Breaking and two Spinor Polarizations ================================================================================= The action for a QED$_{3}$ model built up with two polarization fermionic fields ($\psi _{+},\psi _{-}$), a gauge field $\left( A_{\mu }\right) $ and a complex scalar field $\left( \varphi \right) $, mutually coupled, and endowed with spontaneous breaking of a local U(1) symmetry [@N.Cimento], [@Int-Journal], reads as $$\begin{aligned} S_{QED-MCS} & = \int d^{3}x\{-\frac{1}{4}F^{\mu \nu }F_{\mu \nu }+i\overline{\psi }_{+}\gamma ^{\mu }D_{\mu }\psi _{+}+i\overline{\psi }_{-}\gamma ^{\mu }D_{\mu }\psi _{-}+{\frac12}\theta \epsilon ^{\mu v\alpha }A_{\mu }\partial _{v}A_{\alpha }-m_{e}(\overline{\psi }_{+}\psi _{+}-\overline{\psi }_{-}\psi _{-})+ \nonumber \\ & - y(\overline{\psi }_{+}\psi _{+}-\overline{\psi }_{-}\psi _{-})\varphi ^{\ast }\varphi +D^{\mu }\varphi ^{\ast }D_{\mu }\varphi -V(\varphi ^{\ast }\varphi )\}, \label{actionMCS}\end{aligned}$$ where $V(\varphi ^{\ast }\varphi )$ represents the sixth-power self-interaction
potential, $$V(\varphi ^{\ast }\varphi )=\mu ^{2}\varphi ^{\ast }\varphi +\frac{\zeta }{2}(\varphi ^{\ast }\varphi )^{2}+\frac{\lambda }{3}(\varphi ^{\ast }\varphi )^{3},$$ which is responsible for the SSB; it is the most general one renormalizable in $1+2$ dimensions [@Delcima]. The mass dimensions of the parameters $\mu ,\zeta ,\lambda $ and $y$ are respectively: 1, 1, 0 and 0. For the present purpose, we are interested only in a stable vacuum, a restriction satisfied by imposing some conditions on the potential parameters: $\lambda >0,\zeta <0$ and $\mu ^{2}\leq \frac{3\zeta ^{2}}{16\lambda }.$ The covariant derivatives are defined as $D_{\mu }\psi _{\pm }=(\partial _{\mu }+ie_{3}A_{\mu })\psi _{\pm }$ and $D_{\mu }\varphi =(\partial _{\mu }+ie_{3}A_{\mu })\varphi ,$ where $e_{3}$ is the coupling constant of the local $U(1)$ gauge symmetry, here with dimension of (mass)$^{1/2}$, a particularity that will be further explored in the numerical-analysis section. In $\left( 1+2\right) $ dimensions, a fermionic field has its spin polarization fixed by the sign of its mass [@Binegar]; however, the action (\[actionMCS\]) manifestly contains two spinor fields of opposite polarization. In this sense, it is necessary to stress that we have two positive-energy spinors (two spinor families), both solutions of the Dirac equation, each one with one polarization state according to the sign of the mass parameter, instead of the same spinor with two possibilities of spin polarization. Considering $\langle \varphi \rangle =v,$ the vacuum expectation value for the scalar field product $\varphi ^{\ast }\varphi $ is given by $$\langle \varphi ^{\ast }\varphi \rangle =v^{2}=-\zeta /\left( 2\lambda \right) +\left[ \left( \zeta /\left( 2\lambda \right) \right) ^{2}-\mu ^{2}/\lambda \right] ^{1/2},$$ while the condition for the minimum reads as $\mu ^{2}+\zeta v^{2}+\lambda v^{4}=0$.
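The quoted $v^{2}$ is indeed an extremum of the sixth-power potential. A short symbolic check of ours (not part of the original text), writing $w$ for $\varphi ^{\ast }\varphi $ and computing $dV/dw$ at $w=v^{2}$:

```python
import sympy as sp

# mu2 denotes mu^2; in the text zeta < 0 and lam > 0, but the identity below
# holds for generic real parameters.
mu2, zeta, lam = sp.symbols('mu2 zeta lam', real=True)
w = sp.Symbol('w', positive=True)          # w stands for phi* phi

V = mu2*w + sp.Rational(1, 2)*zeta*w**2 + sp.Rational(1, 3)*lam*w**3

# The quoted vacuum expectation value of phi* phi:
v2 = -zeta/(2*lam) + sp.sqrt((zeta/(2*lam))**2 - mu2/lam)

# Extremum condition dV/dw = mu^2 + zeta*w + lam*w^2 = 0 at w = v^2:
assert sp.expand(sp.diff(V, w).subs(w, v2)) == 0
```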
After the spontaneous symmetry breaking, the complex scalar field can be parametrized by $\varphi =v+H+i\theta $, where $H$ represents the Higgs scalar field and $\theta $ the would-be Goldstone boson; the SSB becomes manifest once this parametrization is inserted into the action (\[actionMCS\]). Thereafter, in order to preserve the manifest renormalizability of the model, one adopts the 't Hooft gauge by adding the gauge-fixing term $\left( S_{R_{\xi }}^{gt}=\int d^{3}x[-\frac{1}{2\xi }(\partial ^{\mu }A_{\mu }-\sqrt{2}\xi M_{A}\theta )^{2}]\right) $ to the broken action; finally, by retaining only the bilinear and the Yukawa interaction terms, one has $$\begin{aligned} {S}_{{\rm QED}}^{{\rm SSB}}& =\int d^{3}x\biggl\{-\frac{1}{4}F^{\mu \nu }F_{\mu \nu }+\frac{1}{2}M_{A}^{2}A^{\mu }A_{\mu }-\frac{1}{2\xi }(\partial ^{\mu }A_{\mu })^{2}+\overline{\psi }_{+}(i{\rlap{\hbox{$\mskip 1 mu /$}}\partial }-m_{eff})\psi _{+}+\overline{\psi }_{-}(i{\rlap{\hbox{$\mskip 1 mu /$}}\partial }+m_{eff})\psi _{-}+{\frac12}\theta \epsilon ^{\mu \nu \alpha }A_{\mu }\partial _{\nu }A_{\alpha }+ \nonumber \\ & +\partial ^{\mu }H\partial _{\mu }H-M_{H}^{2}H^{2}+\partial ^{\mu }\theta \partial _{\mu }\theta -M_{\theta }^{2}\theta ^{2}-2yv(\overline{\psi }_{+}\psi _{+}-\overline{\psi }_{-}\psi _{-})H-e_{3}\left( \overline{\psi }_{+}{\rlap{\hbox{$\mskip 1 mu /$}}A}\psi _{+}+\overline{\psi }_{-}{\rlap{\hbox{$\mskip 1 mu /$}}A}\psi _{-}\right) \biggr\}, \label{actionMCS3}\end{aligned}$$ whose mass parameters, $$M_{A}^{2}=2v^{2}e_{3}^{2},\text{ \ \ \ \ }m_{eff}=m_{e}+yv^{2},\ \ \ M_{{\small H}}^{2}=2v^{2}(\zeta +2\lambda v^{2}),\text{ \ }M_{\theta }^{2}=\xi M_{A}^{2},$$ are entirely or partially dependent on the SSB mechanism. The Proca mass, $M_{A}^{2}$, represents the mass acquired by the photon through the Higgs mechanism, while the Higgs mass, $M_{H}^{2}$, is the one associated with the real scalar field.
The Higgs mechanism also corrects the electron mass, resulting in an effective electronic mass, $m_{eff}$. On the other hand, the would-be Goldstone mode, endowed with mass $M_{\theta }^{2}$, does not represent a physical excitation, since $\xi $ is just an unphysical (dimensionless) gauge-fixing parameter. At this point, it is instructive to highlight the presence of two photon mass terms in Eq. (\[actionMCS3\]): the Proca one and the topological one. The physical mass of the gauge field will emerge as a function of these two mass parameters, as shown in the next Section.

The Electron-Electron Scattering Potential in the nonrelativistic Limit
=======================================================================

In the low-energy limit (Born approximation), the two-particle interaction potential is given by the Fourier transform of the two-particle scattering amplitude [@Sakurai]. It is important to stress that, in the case of the nonrelativistic Möller scattering, one should consider only the t-channel (direct scattering) [@Sakurai], even for indistinguishable electrons, since in this limit they recover the classical notion of trajectory. The Möller scattering will be mediated by two particles: the Higgs scalar and the massive gauge field.
From the action (\[actionMCS3\]), one reads off the propagators associated with the Higgs scalar and the Maxwell-Chern-Simons-Proca field: $$\begin{aligned} \langle H(k)H(-k)\rangle &=&\frac{i}{2}\frac{1}{k^{2}-M_{H}^{2}};\text{ \ \ \ }\langle A_{\mu }(k)A_{\nu }(-k)\rangle =-i\biggl\{\frac{k^{2}-M_{A}^{2}}{(k^{2}-M_{A}^{2})^{2}-k^{2}\theta ^{2}}\biggl(\eta _{\mu \nu }-\frac{k_{\mu }k_{\nu }}{k^{2}}\biggr)+ \nonumber \\ &&+\frac{\xi }{(k^{2}-\xi M_{A}^{2})}\frac{k_{\mu }k_{\nu }}{k^{2}}+\frac{\theta }{(k^{2}-M_{A}^{2})^{2}-k^{2}\theta ^{2}}i\epsilon _{\mu \alpha \nu }k^{\alpha }\biggr\}.\end{aligned}$$ The photon propagator can be split into the following form, $$\langle A_{\mu }A_{\nu }\rangle =-i\left[ \frac{C_{+}}{k^{2}-M_{+}^{2}}+\frac{C_{-}}{k^{2}-M_{-}^{2}}\right] (\eta _{\mu \nu }-\frac{k_{\mu }k_{\nu }}{k^{2}})+\frac{-i\xi k_{\mu }k_{\nu }}{k^{2}(k^{2}-\xi M_{A}^{2})}+i\left[ \frac{C}{k^{2}-M_{+}^{2}}-\frac{C}{k^{2}-M_{-}^{2}}\right] \epsilon _{\mu \alpha \nu }k^{\alpha },$$ with the positive-definite constants $C_{+},C_{-},C$ and the squared-mass poles $M_{+}^{2}$ and $M_{-}^{2}$ given by: $$C_{\pm }=\frac{1}{2}\left[ 1\pm \frac{\theta }{\sqrt{4M_{A}^{2}+\theta ^{2}}}\right] ;\text{ \ \ }C=\frac{1}{\sqrt{4M_{A}^{2}+\theta ^{2}}};\text{ \ }M_{\pm }^{2}=\frac{1}{2}\left[ (2M_{A}^{2}+\theta ^{2})\pm |\theta |\sqrt{4M_{A}^{2}+\theta ^{2}}\right] .$$ Here, $C_{\pm }$ and $C$ are constants with mass dimension $0$ and $-1$, respectively, whereas $M_{\pm }^{2}$ represent the two possible physical masses of the photon (around which the photonic excitations occur). Consequently, these two masses, rather than $M_{A}^{2}$ and $\theta ^{2}$, will be the relevant ones in the forthcoming evaluation of the interaction potential.
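As a consistency check on this splitting, the residues and pole masses quoted above obey the sum rules $C_{+}+C_{-}=1$, $M_{+}^{2}+M_{-}^{2}=2M_{A}^{2}+\theta ^{2}$ and $M_{+}^{2}M_{-}^{2}=M_{A}^{4}$, which follow directly from the expressions given. A short numerical sketch (the values of $M_{A}$ and $\theta $ are arbitrary illustrations):

```python
import math

def pole_data(MA, theta):
    """Residues C_plus, C_minus, C and squared pole masses of the
    Maxwell-Chern-Simons-Proca propagator, from the Proca mass MA
    and the Chern-Simons parameter theta."""
    root = math.sqrt(4.0 * MA ** 2 + theta ** 2)
    C_plus = 0.5 * (1.0 + theta / root)
    C_minus = 0.5 * (1.0 - theta / root)
    C = 1.0 / root
    M2_plus = 0.5 * ((2.0 * MA ** 2 + theta ** 2) + abs(theta) * root)
    M2_minus = 0.5 * ((2.0 * MA ** 2 + theta ** 2) - abs(theta) * root)
    return C_plus, C_minus, C, M2_plus, M2_minus

# Illustrative values in arbitrary units.
MA, theta = 1.3, 0.7
Cp, Cm, C, M2p, M2m = pole_data(MA, theta)
```

Note that $M_{-}^{2}\geq 0$ always, so both poles are physical, and $M_{+}^{2}M_{-}^{2}=M_{A}^{4}$ shows how the topological and Proca masses combine.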
From the action (\[actionMCS3\]), it is easy to read off the vertex Feynman rules: $V_{\psi _{\pm }H\psi _{\pm }}=\pm 2ivy;V_{\psi A\psi }=ie_{3}\gamma ^{\mu }.$ Since in the low-energy limit only the t-channel must be considered, the full scattering amplitudes take the form: $$\begin{aligned} -i{\cal M}_{\pm H\pm } &=&\overline{u}_{\pm }(p_{1})(\pm 2ivy)u_{\pm }(p_{1}^{^{\prime }})\left[ \langle H(k)H(-k)\rangle \right] \overline{u}_{\pm }(p_{2})(\pm 2ivy)u_{\pm }(p_{2}^{^{\prime }}), \\ -i{\cal M}_{\pm H\mp } &=&\overline{u}_{\pm }(p_{1})(\pm 2ivy)u_{\pm }(p_{1}^{^{\prime }})\left[ \langle H(k)H(-k)\rangle \right] \overline{u}_{\mp }(p_{2})(\mp 2ivy)u_{\mp }(p_{2}^{^{\prime }}), \\ -i{\cal M}_{\pm A\pm } &=&\overline{u}_{\pm }(p_{1})(ie_{3}\gamma ^{\mu })u_{\pm }(p_{1}^{^{\prime }})\left[ \langle A_{\mu }(k)A_{\nu }(-k)\rangle \right] \overline{u}_{\pm }(p_{2})(ie_{3}\gamma ^{\nu })u_{\pm }(p_{2}^{^{\prime }}), \\ -i{\cal M}_{\pm A\mp } &=&\overline{u}_{\pm }(p_{1})(ie_{3}\gamma ^{\mu })u_{\pm }(p_{1}^{^{\prime }})\left[ \langle A_{\mu }(k)A_{\nu }(-k)\rangle \right] \overline{u}_{\mp }(p_{2})(ie_{3}\gamma ^{\nu })u_{\mp }(p_{2}^{^{\prime }}).\end{aligned}$$ The first two expressions represent the scattering amplitudes mediated by the Higgs particle for equal and opposite electron polarizations, while in the last two the mediator is the massive Chern-Simons-Proca photon. The spinors $u_{+}(p),$ $u_{-}(p)$ stand for the positive-energy solutions of the Dirac equation, satisfying the normalization conditions $\overline{u}_{\pm }(p)u_{\pm }(p)=\pm 1.$ Working in the center-of-mass frame, the momenta of the interacting particles and the momentum transfer take a simpler form, useful for writing the spinors $u_{+}(p),$ $u_{-}(p)$, as shown in the Appendix.
With these definitions, one works out the fermionic current elements, also displayed in the Appendix, so that the tree-level scattering amplitudes (in the low-momentum approximation) associated with the Higgs and the gauge particle become: $${\cal M}_{Higgs}=-2v^{2}y^{2}\biggl(\frac{1}{\overrightarrow{k}^{2}+M_{{\small H}}^{2}}\biggr), \label{MHiggs}$$ $${\cal M}_{\uparrow A\uparrow }={\cal M}_{1}+{\cal M}_{2}+{\cal M}_{3},\text{\ \ }{\cal M}_{\downarrow A\downarrow }={\cal M}_{1}-{\cal M}_{2}+{\cal M}_{3},\text{ \ \ \ }{\cal M}_{\uparrow A\downarrow }={\cal M}_{\downarrow A\uparrow }={\cal M}_{1}+{\cal M}_{3},$$ with: $${\cal M}_{1}=e_{3}^{2}\left[ \frac{C_{+}}{\overrightarrow{k}^{2}+M_{+}^{2}}+\frac{C_{-}}{\overrightarrow{k}^{2}+M_{-}^{2}}\right] ,\text{ }{\cal M}_{2}=\frac{e_{3}^{2}\overrightarrow{k}^{2}}{m_{\text{{\small eff}}}}\left[ \frac{C}{\overrightarrow{k}^{2}+M_{+}^{2}}-\frac{C}{\overrightarrow{k}^{2}+M_{-}^{2}}\right] ,\text{ }{\cal M}_{3}=\frac{-i\sin \phi }{(1-\cos \phi )}{\cal M}_{2},$$ where $\overrightarrow{k}^{2}=2p^{2}(1-\cos \phi )$ was used. Furthermore, it is clear that the Higgs amplitude is independent of the electron polarization, while the gauge amplitude splits into three different expressions, depending on the polarization of the scattered electrons. The terms ${\cal M}_{1}$, ${\cal M}_{2}$ correspond to the real part of the Möller scattering amplitude, while ${\cal M}_{3}$ describes the Aharonov-Bohm amplitude for fermions [@Kogan],[@Dobroliubov],[@Georgelin].
The interaction potentials are obtained through the Fourier transform of the scattering amplitude (within the Born approximation): $V(\overrightarrow{r})=\int \frac{d^{2}k}{(2\pi )^{2}}{\cal M}e^{i\overrightarrow{k}.\overrightarrow{r}}.$ In this approximation, Eq. (\[MHiggs\]) yields an attractive Higgs potential, $$V_{Higgs}(r)=-\frac{1}{2\pi }2v^{2}y^{2}K_{0}(M_{{\small H}}r),$$ while in the gauge sector there appear three different potentials (depending on the polarization state): $$\text{\ }V_{gauge\text{ }\uparrow \uparrow }(r)=V_{1}(r)+V_{2}(r)+V_{3}(r),\text{ \ }V_{gauge\text{ }\uparrow \downarrow }(r)=V_{1}(r)+V_{3}(r),\text{ \ }V_{gauge\text{ }\downarrow \downarrow }(r)=V_{1}(r)-V_{2}(r)+V_{3}(r),$$ $V_{1}(r),$ $V_{2}(r),$ $V_{3}(r)$ being respectively the Fourier transforms of the amplitudes ${\cal M}_{1},{\cal M}_{2},{\cal M}_{3}$, given explicitly by: $$\begin{aligned} V_{1}(r)& =\frac{e_{3}^{2}}{2\pi }\biggl[C_{+}K_{0}(M_{+}r)+C_{-}K_{0}(M_{-}r)\biggr],\text{ } \\ V_{2}(r)& =-\frac{e_{3}^{2}}{2\pi }\frac{C}{m_{\text{{\tiny eff}}}}\biggl[M_{+}^{2}K_{0}(M_{+}r)-M_{-}^{2}K_{0}(M_{-}r)\biggr], \\ V_{3}(r)& =2\frac{e_{3}^{2}}{2\pi }\frac{Cl}{m_{\text{{\tiny eff}}}r}\biggl [M_{+}K_{1}(M_{+}r)-M_{-}K_{1}(M_{-}r)\biggr].\end{aligned}$$ The complete potentials are obtained by joining the Higgs and gauge contributions, $V(r)=V_{Higgs}+V_{gauge}$: $$\begin{aligned} V(r)_{\uparrow \uparrow }& =-\frac{1}{2\pi }2v^{2}y^{2}K_{0}(M_{H}r)+\frac{e_{3}^{2}}{2\pi }\text{ }\biggl\{(C_{+}-\frac{C}{m_{\text{{\tiny eff}}}}M_{+}^{2})K_{0}(M_{+}r)+(C_{-}+\frac{C}{m_{\text{{\tiny eff}}}}M_{-}^{2})K_{0}(M_{-}r)+ \nonumber \\ & +2\frac{Cl}{m_{\text{{\tiny eff}}}r}(M_{+}K_{1}(M_{+}r)-M_{-}K_{1}(M_{-}r))\biggr\}, \label{V1}\end{aligned}$$ $$\begin{aligned} V(r)_{\uparrow \downarrow }& =-\frac{1}{2\pi }2v^{2}y^{2}K_{0}(M_{H}r)+\frac{e_{3}^{2}}{2\pi }\text{ }\biggl\{C_{+}K_{0}(M_{+}r)+C_{-}K_{0}(M_{-}r)+2\frac{Cl}{m_{\text{{\tiny eff}}}r}[M_{+}K_{1}(M_{+}r)+ \nonumber
\\ & -M_{-}K_{1}(M_{-}r)]\biggr\}, \label{V2}\end{aligned}$$ $$\begin{aligned} V(r)_{\downarrow \downarrow }& =-\frac{1}{2\pi }2v^{2}y^{2}K_{0}(M_{H}r)+\frac{e_{3}^{2}}{2\pi }\text{ }\biggl\{(C_{+}+\frac{C}{m_{\text{{\tiny eff}}}}M_{+}^{2})K_{0}(M_{+}r)+(C_{-}-\frac{C}{m_{\text{{\tiny eff}}}}M_{-}^{2})K_{0}(M_{-}r) \nonumber \\ & +2\frac{Cl}{m_{\text{{\tiny eff}}}r}(M_{+}K_{1}(M_{+}r)-M_{-}K_{1}(M_{-}r))\biggr\}. \label{V3}\end{aligned}$$ Here, $K_{0}(x)$ and $K_{1}(x)$ are the modified Bessel functions and $l$ is the angular momentum. The last three equations represent the tree-level potentials evaluated in the Born approximation. It is now convenient to establish the limits of validity of the potentials (\[V1\]), (\[V2\]), (\[V3\]). They have been derived in the low-energy limit; consequently, they are valid in the perturbative regime, where the loop corrections are negligible compared with the semi-classical terms. For a typical MCS model, the perturbative limit is given by $\frac{e^{2}}{\theta }\ll 1$; in the case of the present model, however, there are four dimensionless parameters: ${e_{3}^{2}}/m$, ${e_{3}^{2}}/{M_{H}}$, ${e_{3}^{2}}/{M_{+}}$, ${e_{3}^{2}}/{M_{-}}$. According to the discussion in Ref. [@Int-Journal], the perturbative regime holds whenever ${e_{3}^{2}}/{M_{+}}\ll 1$ and $y\ll 1$ (the first condition obviously implies ${e}_{3}^{{2}}/m\ll 1$). A remarkable point concerns the attainment of three different potentials: $V(r)_{\uparrow \uparrow },V(r)_{\uparrow \downarrow },V(r)_{\downarrow \downarrow }$. Our results put in explicit evidence the dependence of the potential on the spin state. Were parity preserved, this would not be the case; however, by virtue of the explicit breaking of parity induced by the Chern-Simons term, expressions (\[V1\]), (\[V2\]), (\[V3\]) differ from one another, as can be understood on the basis of parity-transformation arguments.
Another signal of parity breaking is the linear dependence of $V$ on $l$: $l\rightarrow -l$ is not a symmetry of the potential. Although the gauge invariance is broken by the appearance of a Proca mass during the SSB, one expects the interaction potential associated with the system to preserve the characteristics of the original Lagrangian (before the SSB). This fact leads us to study a way to ensure the gauge invariance of the effective interaction potential. The analysis of the Galilean limit of field theories in (1+2) dimensions, carried out by Hagen [@Hagen], has shown that the 2-body scattering problem, as mediated by a gauge particle, must lead to an effective potential that preserves the structure of a perfect square, $(l-\alpha ^{2})^{2}$, which can be identified with the Aharonov-Bohm scattering potential. The quartic order term $\left( \alpha ^{4}\right) $ is related to the presence of 2-photon diagrams induced by the seagull vertex $\left( \varphi ^{\ast }\varphi A_{\mu }A^{\mu }\right) $, and is thus associated with the gauge invariance of the resulting potential. In this way, the potential structure $(l-\alpha ^{2})^{2}$ must also be pursued in more complex electron-electron scattering scenarios, in order to ensure gauge invariance. Actually, this is just the signal of a more general result: electron-electron scatterings, no matter how complex the interactions, must exhibit the combination $(l-\alpha ^{2})^{2}$ for the sake of gauge invariance of the final result. This kind of procedure is found in Ref. [@Dobroliubov], where a nonrelativistic interaction potential was derived in the context of a MCS-QED$_{3}$ (without scalar sector), in the perturbative regime, $1/k\ll 1,$ with $k$ being the statistical parameter (in our present case, $k\equiv 4\pi \theta /e_{3}^{2}$).
In this reference, in order to ensure the gauge invariance in the low-energy approximation, one takes into account the two-photon diagrams, which amounts to adding to the tree-level potential the quartic order term $\left\{ \frac{e^{2}}{2\pi \theta }[1-\theta rK_{1}(\theta r)]\right\} ^{2}$, yielding the following gauge-invariant effective potential [@Kogan],[@Dobroliubov]: $$V_{{\rm MCS}}(r)=\frac{e^{2}}{2\pi }\left[ 1-\frac{\theta }{m_{e}}\right] K_{0}(\theta r)+\frac{1}{m_{e}r^{2}}\left\{ l-\frac{e^{2}}{2\pi \theta }[1-\theta rK_{1}(\theta r)]\right\} ^{2}~. \label{Vmcs}$$ In the expression above, the first term corresponds to the electromagnetic potential, whereas the last one incorporates the centrifugal barrier $\left( l^{2}/m_{e}r^{2}\right) ,$ the Aharonov-Bohm term and the 2-photon exchange term. One observes that this procedure becomes necessary when the model is analyzed or defined outside the perturbative limit. In Ref. [@Georgelin], for instance, an evaluation of the scattering potential is accomplished, in the Born approximation, whose final result is not supplemented by the term $\left\{ \frac{e^{2}}{2\pi \theta }[1-\theta rK_{1}(\theta r)]\right\} ^{2}$, under the justification that the derivation has been done in the perturbative regime $\left( 1/k\ll 1\right) .$ In such a regime, the 2-photon term becomes negligible (for it is proportional to $1/k^{2})$ and is unable to jeopardize the gauge invariance of the model. In a scenario where one searches for applications to Condensed Matter Physics, one must require $\theta \ll m_{e}$, and the scattering potential given by Eq. (\[Vmcs\]) then comes out positive. This prevents a possible application of this kind of model to superconductivity, where the characteristic energies are of order $meV$.
Since the effective electron mass ($m_{\text{{\tiny eff}}}=m_{e}+yv^{2})$ is $\sim 10^{5}eV,$ an energy scale much greater than that of the condensed-matter interactions $\left( meV\right) $, one must impose the following condition on the physical excitations of the model: $$m_{\text{{\tiny eff}}}\gg \vartheta ,M_{A},M_{\pm }\text{ .} \label{Condmat-lim}$$ In the limit $M_{A}\rightarrow 0,$ one has $M_{+}\sim \vartheta $; in this situation, the dimensionless parameter ${e_{3}^{2}}/{M}_{+}$ reduces to ${e_{3}^{2}}/{\vartheta },$ which now lies outside the perturbative regime, since $\vartheta $ is now small $\left( \sim meV\right) $. Therefore, at this energy scale, our results need not be restricted to the perturbative limit; adding the 2-photon term to Eqs. (\[V1\], \[V2\], \[V3\]) then becomes relevant in order to ensure the gauge invariance of these potentials. As a final result, one writes the three expressions for the effective gauge-invariant scattering potentials: $$\begin{aligned} V_{\text{eff}_{\uparrow \uparrow }}(r) &=&-{\frac{1}{2\pi }}2v^{2}y^{2}K_{0}(M_{H}r)+\frac{e_{3}^{2}}{2\pi }\biggl\{\left[ C_{+}-\frac{C}{m_{\text{{\tiny eff}}}}M_{+}^{2}\right] K_{0}(M_{+}r)+\biggl[C_{-}+\frac{C}{m_{\text{{\tiny eff}}}}M_{-}^{2}\biggr]K_{0}(M_{-}r)\biggr\} \nonumber \\ &&+\frac{1}{m_{\text{{\tiny eff}}}r^{2}}\left\{ l+\frac{e_{3}^{2}}{2\pi }Cr[M_{+}K_{1}(M_{+}r)-M_{-}K_{1}(M_{-}r)]\right\} ^{2}~, \label{Veff1}\end{aligned}$$ $$\begin{aligned} V_{\text{eff}_{\uparrow \downarrow }}(r) &=&-{\frac{1}{2\pi }}2v^{2}y^{2}K_{0}(M_{H}r)+\frac{e_{3}^{2}}{2\pi }\text{ }\left[ C_{+}K_{0}(M_{+}r)+C_{-}K_{0}(M_{-}r)\right] +\frac{1}{m_{\text{{\tiny eff}}}r^{2}}\biggl \{l+\frac{e_{3}^{2}}{2\pi }Cr[M_{+}K_{1}(M_{+}r)+ \nonumber \\ &&-M_{-}K_{1}(M_{-}r)]\biggr\}^{2}, \label{Veff2}\end{aligned}$$ $$\begin{aligned} V_{\text{eff}_{\downarrow \downarrow }}(r)& =-{\frac{1}{2\pi }}2v^{2}y^{2}K_{0}(M_{H}r)+\frac{e_{3}^{2}}{2\pi }\biggl\{\left[
C_{+}+\frac{C}{m_{\text{{\tiny eff}}}}M_{+}^{2}\right] K_{0}(M_{+}r)+\biggl[C_{-}-\frac{C}{m_{\text{{\tiny eff}}}}M_{-}^{2}\biggr]K_{0}(M_{-}r)\biggr\} \nonumber \\ & \text{ }+\frac{1}{m_{\text{{\tiny eff}}}r^{2}}\left\{ l+\frac{e_{3}^{2}}{2\pi }Cr[M_{+}K_{1}(M_{+}r)-M_{-}K_{1}(M_{-}r)]\right\} ^{2}, \label{Veff3}\end{aligned}$$ where $\frac{l^{2}}{m_{\text{{\tiny eff}}}r^{2}}$ represents the centrifugal barrier, and the term proportional to $C^{2}$ comes from the 2-photon exchange. In the energy scale given by condition (\[Condmat-lim\]), the proportionality coefficients of $V_{2}(r)$ become negligible: $$m_{\text{{\tiny eff}}}\gg \vartheta ,M_{A},M_{\pm }\text{ \ \ \ \ \ }\Longrightarrow \text{ \ \ }\frac{C}{m_{\text{{\tiny eff}}}}M_{+}^{2}\ll 1,\text{ }\frac{C}{m_{\text{{\tiny eff}}}}M_{-}^{2}\ll 1. \label{approximation}$$ As a consequence, one observes that only the first term of the expressions (\[Veff1\], \[Veff2\], \[Veff3\]) is attractive, which corresponds to the Higgs interaction. At the same time, the potential $V_{2}(r)$ turns out to be small compared with $V_{1}(r)$ and $V_{3}(r),$ leading to a simplification in the expressions $\left( \text{\ref{Veff1}}\right) ,$ $\left( \text{\ref{Veff2}}\right) ,$ $\left( \text{\ref{Veff3}}\right) $, which degenerate into a single form: $$\begin{aligned} V_{\text{eff}}(r)& =-{\frac{1}{2\pi }}2v^{2}y^{2}K_{0}(M_{H}r)+\frac{e_{3}^{2}}{2\pi }\text{ }\biggl[C_{+}K_{0}(M_{+}r)+C_{-}K_{0}(M_{-}r)\biggr]+\frac{1}{m_{\text{{\tiny eff}}}r^{2}}\biggl\{l+ \nonumber \\ & +\frac{e_{3}^{2}}{2\pi }Cr[M_{+}K_{1}(M_{+}r)-M_{-}K_{1}(M_{-}r)]\biggr\}^{2}. \label{Veffdegenerado}\end{aligned}$$ The fact that $C_{\pm }>0$ for all $\vartheta ,M_{A}$ makes the second term (proportional to $e_{3}^{2}/2\pi )$ of the equation above positive, revealing the repulsive nature of the gauge sector.
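The sign structure just described can be checked by evaluating Eq. (\[Veffdegenerado\]) term by term. The sketch below uses illustrative parameter values (arbitrary units, not fits to the model); the modified Bessel functions are computed from their standard integral representations, so the sketch is self-contained:

```python
import math

def kn(n, x, T=16.0, steps=1500):
    """Modified Bessel K_n(x) = integral_0^inf exp(-x cosh t) cosh(n t) dt,
    evaluated with a plain trapezoidal rule (adequate for this sketch)."""
    h = T / steps
    s = 0.5 * (math.exp(-x) + math.exp(-x * math.cosh(T)) * math.cosh(n * T))
    for i in range(1, steps):
        t = i * h
        s += math.exp(-x * math.cosh(t)) * math.cosh(n * t)
    return h * s

def v_eff_terms(r, l, v2, y, e3sq, m_eff, MH, MA, theta):
    """The three pieces of the degenerate potential: the attractive Higgs term,
    the repulsive gauge (K0) term, and the squared Aharonov-Bohm-like term."""
    root = math.sqrt(4.0 * MA ** 2 + theta ** 2)
    Cp, Cm, C = 0.5 * (1 + theta / root), 0.5 * (1 - theta / root), 1.0 / root
    Mp = math.sqrt(0.5 * ((2 * MA ** 2 + theta ** 2) + abs(theta) * root))
    Mm = math.sqrt(0.5 * ((2 * MA ** 2 + theta ** 2) - abs(theta) * root))
    higgs = -(2.0 * v2 * y ** 2) / (2.0 * math.pi) * kn(0, MH * r)
    gauge = e3sq / (2.0 * math.pi) * (Cp * kn(0, Mp * r) + Cm * kn(0, Mm * r))
    ab = (l + e3sq / (2.0 * math.pi) * C * r
          * (Mp * kn(1, Mp * r) - Mm * kn(1, Mm * r))) ** 2 / (m_eff * r ** 2)
    return higgs, gauge, ab

# Illustrative values only.
higgs, gauge, ab = v_eff_terms(r=1.0, l=0, v2=1.0, y=1.5, e3sq=1.0,
                               m_eff=100.0, MH=0.8, MA=1.0, theta=0.5)
V_total = higgs + gauge + ab
```

Whatever the parameter choice, the Higgs term is negative, the gauge $K_{0}$ term is positive (since $C_{\pm }>0$), and the squared term is non-negative, in line with the discussion above.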
This simple analysis shows that the potentials $\left( \text{\ref{Veff1}}\right) ,$ $\left( \text{\ref{Veff2}}\right) ,$ $\left( \text{\ref{Veff3}}\right) $ will be attractive only when the contribution originating from the Yukawa interaction overcomes the one coming from the gauge sector, which can be achieved by a suitable fitting of the model parameters. The fulfillment of this condition can render the formation of $e^{-}e^{-}$ bound states feasible, since the above potentials are “weak” in the sense of the Kato criterion, analyzed by Chadan [*et al.*]{} [@Chadan] in the context of low-energy scattering theory in $(1+2)$ dimensions. Finally, it is instructive to show how the gauge sectors of the potentials $\left( \text{\ref{Veff1}}\right) $, $\left( \text{\ref{Veff2}}\right) $, $\left( \text{\ref{Veff3}}\right) $ behave in the limit of vanishing Proca mass, $M_{A}\rightarrow 0$. In this case, the propagator of the gauge field reduces to the Maxwell-Chern-Simons one, leading to the following limits: $$M_{+}\longrightarrow\theta;M_{-}\longrightarrow0;C_{+}\longrightarrow 1;C_{-}\longrightarrow0;K_{1}(M_{-}r)\longrightarrow\frac{1}{M_{-}r};C\longrightarrow\frac{1}{\theta};$$ $$\lim_{M_{A}\longrightarrow 0}V_{_{\uparrow \uparrow }}(r)=\frac{e_{3}^{2}}{2\pi }(1-\frac{\theta }{m_{\text{{\tiny eff}}}})K_{0}(\theta r)+\frac{1}{m_{\text{{\tiny eff}}}r^{2}}\left[ l-\frac{e_{3}^{2}}{2\pi \theta }(1-\theta rK_{1}(\theta r))\right] ^{2}, \label{limit1}$$ $$\lim_{M_{A}\longrightarrow 0}V_{_{\uparrow \downarrow }}(r)=\frac{e_{3}^{2}}{2\pi }K_{0}(\theta r)+\frac{1}{m_{\text{{\tiny eff}}}r^{2}}\left[ l-\frac{e_{3}^{2}}{2\pi \theta }(1-\theta rK_{1}(\theta r))\right] ^{2},$$ $$\lim_{M_{A}\longrightarrow 0}V_{_{\downarrow \downarrow }}(r)=\frac{e_{3}^{2}}{2\pi }(1+\frac{\theta }{m_{\text{{\tiny eff}}}})K_{0}(\theta r)+\frac{1}{m_{\text{{\tiny eff}}}r^{2}}\left[ l-\frac{e_{3}^{2}}{2\pi \theta }(1-\theta rK_{1}(\theta r))\right] ^{2}.$$ One remarks that Eq.
(\[limit1\]) reproduces exactly the result achieved by Dobroliubov [*et al.*]{} [@Dobroliubov] and others [@Kogan], [@Groshev] for the scattering of two up-polarized electrons, which reinforces the generalization carried out in this paper.

Numerical Analysis
==================

The numerical procedure adopted here consists in the implementation of the variational method for the Schrödinger equation supplemented by the interaction potential (\[Veffdegenerado\]). In this sense, it is necessary to expose some properties of the wavefunction representing the $e^{-}e^{-}$ pair and of the two-dimensional Schrödinger equation.

The composite wave-function and the Schrödinger equation {#sec3}
--------------------------------------------------------

The Pauli exclusion principle states the antisymmetric character of the total two-electron wavefunction $(\Psi )$ under an electron-electron permutation: $\Psi ({\bf \rho }_{1},s_{1},{\bf \rho }_{2},s_{2})=-\Psi ({\bf \rho }_{2},s_{2},{\bf \rho }_{1},s_{1}).$ Assuming that no significant spin-orbit interaction takes place, the function $\Psi $ can be split into three independent factors: $\Psi (\rho _{1},s_{1},\rho _{2},s_{2})=\psi ({\bf R})\varphi ({\bf r})\chi \left( s_{1},s_{2}\right) $, which represent, respectively, the center-of-mass wavefunction, the relative one, and the spin wavefunction (${\bf R}$ and $s$ being the center-of-mass and spin coordinates, respectively, while ${\bf r}$ is the relative coordinate of the electrons).
Taking into account the Pauli principle, the total wavefunction $\Psi $ in the center-of-mass frame reads $$\Psi ^{S=1}=\varphi _{{\rm odd}}({\bf r})\chi _{{\rm even}}^{S=1}(s_{1},s_{2})~,\text{ \ \ }\Psi ^{S=0}=\varphi _{{\rm even}}({\bf r})\chi _{{\rm odd}}^{S=0}(s_{1},s_{2})~, \label{antisym2}$$ where $\chi ^{S=0},$ $\chi ^{S=1}$, $\varphi _{{\rm even}}({\bf r}),$ $\varphi _{{\rm odd}}({\bf r})$ stand respectively for the (antisymmetric) singlet spin-function, the (symmetric) spin triplet, the even space-function ($l=0$: $s$-wave, $l=2$: $d$-wave), and the odd space-function ($l=1$: $p$-wave, $l=3$: $f$-wave). Within the nonrelativistic approximation, the binding energy associated with an $e^{-}e^{-}$ pair is given by the planar Schrödinger equation for the relative space-function $\varphi (r),$ $$\frac{\partial ^{2}\varphi (r)}{\partial r^{2}}+\frac{1}{r}\frac{\partial \varphi (r)}{\partial r}-\frac{l^{2}}{r^{2}}\varphi (r)+2\mu _{{\rm eff}}[E-V(r)]\varphi (r)=0~, \label{diff1}$$ where $V(r)$ represents the interaction potential given by Eq. (\[Veffdegenerado\]), and $\mu _{{\rm eff}}=\frac{1}{2}m_{\text{{\tiny eff}}}$ is the effective reduced mass of the system. By means of the transformation $\varphi (r)=\frac{1}{\sqrt{r}}~g(r)$, one has $$\frac{\partial ^{2}g(r)}{\partial r^{2}}-\frac{l^{2}-\frac{1}{4}}{r^{2}}g(r)+2\mu _{{\rm eff}}[E-V(r)]g(r)=0~. \label{diff2}$$

The Variational Method and the Choice of the trial function
-----------------------------------------------------------

To work out the variational method, one must start from the choice of a trial function that captures the generic features of the $e^{-}e^{-}$ pair. The definition of a trial function must observe some conditions, such as the asymptotic behavior at infinity, the analysis of its free version, and its behavior at the origin.
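For completeness, the transformation $\varphi (r)=r^{-1/2}g(r)$ connecting Eq. (\[diff1\]) to Eq. (\[diff2\]) can be verified directly. Differentiating twice, $$\frac{\partial ^{2}\varphi }{\partial r^{2}}+\frac{1}{r}\frac{\partial \varphi }{\partial r}=r^{-1/2}\left[ \frac{\partial ^{2}g}{\partial r^{2}}+\frac{1}{4r^{2}}g\right] ,$$ so that Eq. (\[diff1\]) becomes $$r^{-1/2}\left\{ \frac{\partial ^{2}g}{\partial r^{2}}-\frac{l^{2}-\frac{1}{4}}{r^{2}}g+2\mu _{{\rm eff}}[E-V(r)]g\right\} =0,$$ which is precisely Eq. (\[diff2\]); the first-derivative term is traded for the $-\frac{1}{4r^{2}}$ shift of the centrifugal term.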
For a zero angular momentum ($l=0$) state, Eq. (\[diff2\]) becomes $$\biggl\{\frac{\partial ^{2}}{\partial r^{2}}+\frac{1}{4r^{2}}+2\mu _{{\rm eff}}[E+C_{s}K_{0}(M_{H}r)]\biggr\}g(r)=0,~ \label{diff3}$$ whose free version ($V(r)=0)$, for the $l=0$ state, $\left[ \frac{\partial ^{2}}{\partial r^{2}}+\frac{1}{4r^{2}}+k^{2}\right] u(r)=0~,$ has the solution $u(r)=B_{1}\sqrt{r}J_{0}(kr)+B_{2}\sqrt{r}Y_{0}(kr)$, with $B_{1}$ and $B_{2}$ arbitrary constants and $k=\sqrt{2\mu _{{\rm eff}}E}$. In the limit $r\rightarrow 0$, this solution behaves simply as $u(r)\longrightarrow \sqrt{r}+\lambda \sqrt{r}\ln (r).$ Since the second term in Eq. (\[diff3\]) behaves like an attractive potential, $-1/4r^{2}$, a bound state ($E<0$) could in principle be obtained even for $V(r)=0$ [@Chadan]. This is not physically acceptable, leading to a restriction on the required self-adjoint extension of the differential operator $-d^{2}/dr^{2}-1/4r^{2}$. Among the infinite number of self-adjoint extensions of this operator, the only physical choice corresponds to the Friedrichs extension ($B_{2}=0$), which behaves like $\sqrt{r}$ at the origin, indicating this same behavior for $u(r)$. In this way, the behavior of the trial function at the origin is determined. The complete equation, $V(r)\neq 0$, will preserve the self-adjointness of the free Hamiltonian if the potential is “weak” in the sense of the Kato condition: $\int_{0}^{\infty }r(1+|\ln (r)|)|V(r)|dr<\infty ~.$ Provided the interaction potential, given by Eq. (\[Veffdegenerado\]), satisfies the Kato condition, the self-adjointness of the total Hamiltonian is assured and the existence of bound states is allowed. On the other hand, at infinity the trial function must vanish asymptotically in order to be square integrable. A good choice is then given by $g(r)=f(r)\exp (-\beta r),$ where $f(r)$ is a well-behaved function satisfying the limiting condition $\lim_{r\rightarrow 0}f(r)=\sqrt{r}$.
For simplicity, the trial function (for zero angular momentum) reads $$g(r)=\sqrt{r}\exp (-\beta r)~, \label{funcaoteste1}$$ where $\beta $ is a free parameter whose variation determines an approximate energy minimum. An analogous procedure can be undertaken to determine the behavior of the trial function when the angular momentum is different from zero ($l\neq 0$). In this case, and in the limit $r\rightarrow 0$, Eq. (\[diff2\]) reduces to $\left[ \frac{\partial ^{2}}{\partial r^{2}}-\frac{l^{2}-\frac{1}{4}}{r^{2}}+k^{2}\right] u(r)=0~,$ whose general solution reads $u(r)=B_{1}r^{(l+1/2)}+B_{2}r^{(-l+1/2)}$. For $l>0,$ the choice $r^{(l+1/2)}$ yields a trial function that is well-behaved at the origin. Since the Schrödinger equation depends only on $l^{2}$, either choice, $l>0$ or $l<0$, is enough to provide the energy values of the physical states, and one gets $$g(r)=r^{1/2+l}\exp (-\beta r)~, \label{funcaoteste2}$$ where $\beta $ is again a free parameter to be numerically fixed in order to maximize the binding energy. Though this last result is mathematically correct, we should point out that the discussion of non-zero angular momentum states here is merely for the sake of completeness. The true wavefunction in this case should include the angular components, which remain precluded in this approach.

The Analysis of the Potential and the Numerical Data
----------------------------------------------------

The numerical analysis of the potentials $V_{\text{eff}_{\uparrow \uparrow }},V_{\text{eff}_{\uparrow \downarrow }},V_{\text{eff}_{\downarrow \downarrow }}$ is totally dependent on the parameters of the field-theoretical model. As a first step, it is convenient to carry out an analysis of the relevant parameters and only then initiate the numerical procedure.
The central purpose of this section is to demonstrate that the potentials obtained are attractive and lead to the formation of $e^{-}e^{-}$ bound states, whose energies lie in a range relevant to some Condensed Matter systems, such as the high-T$_{c}$ superconductors. As is well known, parallel-spin states (spin triplet) must be associated with a p-wave (spin triplet and $l=1$), whereas antiparallel-spin states (spin singlet) are linked to an s-wave (spin singlet and $l=0$). Here, despite the parity breakdown being associated with the $l=1$ state, the s-wave can also appear as a solution, since the breakdown is not necessarily manifest in all states. Given the degeneracy of the potentials $V_{\text{eff}_{\uparrow \uparrow }},V_{\text{eff}_{\uparrow \downarrow }},V_{\text{eff}_{\downarrow \downarrow }}$ into the reduced potential (\[Veffdegenerado\]), the issue concerning the wavefunction symmetry loses some of its status: both the s- and the p-wave appear as solutions for the system. According to Eqs. (\[funcaoteste1\]), (\[funcaoteste2\]), the implementation of the variational method requires a trial function with $r^{1/2}$-behaviour at the origin in the case of an s-wave and an $r^{3/2}$-behaviour for a p-wave. Before starting the numerical calculations, it is instructive to display the relevant parameters: $$\begin{aligned} e_{3}^{2}& =\frac{e^{2}}{l_{\perp }}=\frac{1}{137.04}\frac{1973.26}{l_{\perp }}=\frac{14.399}{l_{\perp }}, \\ \alpha & =\frac{\vartheta }{M_{A}}, \\ \zeta & <0,\text{ }\lambda \geq \frac{3}{4}\frac{|\zeta |}{\nu ^{2}}, \label{ineq} \\ \lambda & =\frac{3}{4}\frac{|\zeta |}{\nu ^{2}}\Longrightarrow M_{H}^{2}=\nu ^{2}|\zeta |, \label{Mhiggs4} \\ \lambda & =\frac{|\zeta |}{\nu ^{2}}\Longrightarrow M_{H}^{2}=2\nu ^{2}|\zeta |.
\label{Mhiggs5}\end{aligned}$$ Specifically, in $D=1+2$, the electromagnetic coupling constant squared, $e_{3}^{2}$, has dimension of mass, in contrast with the dimensionless coupling constant of the usual four-dimensional QED$_{4}$. This fact might be understood as a memory of the third dimension that appears (in the coupling constant) when one tries to work with a theory intrinsically defined in three space-time dimensions. This dimensional peculiarity can be better implemented through the definition of a new coupling constant in three space-time dimensions [@Kogan],[@Randjbar]: $e\rightarrow e_{3}=e/\sqrt{l_{\perp }}$, where $l_{\perp }$ represents a length orthogonal to the planar dimension. The smaller $l_{\perp }$ is, the smaller is the remnant of the frozen dimension, the more planar the model becomes, and the larger the coupling constant $e_{3}$ grows, which reveals its effective nature. In this sense, it is instructive to notice that the effective value of $e_{3}^{2}$ is larger than $e^{2}=1/137$ whenever $l_{\perp }<1973.26$ Å, since $1$ Å$^{-1}=1973.26$ $eV$. This particularity enhances the repulsive interaction for small $l_{\perp }$ and requires an even stronger Higgs contribution to account for a total attractive interaction. Finally, this parameter must be kept inside a range appropriate not to jeopardize the planar nature of the system, so that one requires: $2<l_{\perp }<15$ Å. The parameter $\alpha $ is defined as the ratio between the Chern-Simons mass and the Proca mass, while $\zeta ,\lambda $ are parameters of the potential $V$ and are important to assure a stable vacuum, a condition given by Eq. (\[ineq\]). The imposition of relations among $\zeta ,\lambda ,\nu ^{2}$, like Eqs. (\[Mhiggs4\]) and (\[Mhiggs5\]), implies expressions for the Higgs mass that depend only on $\nu ^{2}$ and $|\zeta |$. This set of conditions imposes a lower bound on the Higgs mass: $M_{H\min }^{2}=3\zeta ^{2}/(4\lambda )$.
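The numerical value of the effective coupling follows from $\hbar c\simeq 1973.26$ eV Å and $e^{2}=1/137.04$; a quick sketch of the conversion (with $e_{3}^{2}$ in eV and $l_{\perp }$ in Å):

```python
# Effective planar coupling e_3^2 = e^2 / l_perp, converted with
# hbar*c ~= 1973.26 eV*Angstrom and e^2 = 1/137.04 (natural units).
HBARC_EV_ANGSTROM = 1973.26
ALPHA = 1.0 / 137.04

def e3_squared(l_perp_angstrom):
    """e_3^2 in eV for a transverse length l_perp given in Angstrom."""
    return ALPHA * HBARC_EV_ANGSTROM / l_perp_angstrom

# Over the range 2 Angstrom < l_perp < 15 Angstrom quoted in the text:
values = {lp: e3_squared(lp) for lp in (2.0, 10.0, 15.0)}
```

The prefactor reproduces the $14.399/l_{\perp }$ quoted above, and the coupling indeed grows as the transverse length shrinks.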
Besides the factors above, the full determination of the potential (\[Veffdegenerado\]) also depends on $v^{2},$ the vacuum expectation value (v.e.v.), and on $y$, the parameter that measures the coupling between the fermions and the Higgs scalar. Being a free parameter, $v^{2}$ indicates the energy scale of the spontaneous breakdown of the local $U(1)$ symmetry, usually determined by experimental data associated with the phenomenology of the model under investigation, as occurs in the electroweak Weinberg-Salam model, for example. The $y$-parameter, on the other hand, works in fact as an effective constant that embodies the contributions of all possible mechanisms of electronic interaction via Higgs-type (scalar) excitations, such as the spinless bosonic interaction mechanisms: phonons, plasmons, and other collective excitations. This theoretical similarity suggests an identification of the field-theory parameter with an effective electron-scalar coupling (instead of an electron-phonon one): $y\rightarrow \lambda _{{\rm es}}$. The numerical analysis is carried out by applying the variational method to the Schrödinger equation supplemented by the degenerate potential. The procedure starts from the s-wave trial function $g\left( r\right) =r^{1/2}e^{-\beta r}$, given by Eq. $\left( \text{\ref{funcaoteste1}}\right) $.
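The variational step just described can be sketched numerically. The code below is an illustrative toy, not the paper's actual computation: it replaces $V_{\text{eff}}$ by a purely attractive two-dimensional Yukawa-type (Bessel $K_{0}$) term standing in for the Higgs-dominated limit, with arbitrary dimensionless couplings (`g_c`, `m_h`) and units $\hbar ^{2}/2\mu =1$. The s-wave ansatz $g(r)=r^{1/2}e^{-\beta r}$ for the reduced radial function is equivalent to $R(r)=e^{-\beta r}$ with the two-dimensional measure $r\,dr$, which is what is integrated here. The helper `xi_ab` encodes the fact that the trial density $|r^{\,l+1/2}e^{-\beta r}|^{2}$ peaks at $r=(2l+1)/(2\beta )$; converting a tabulated $\beta $ (in eV) with $\hbar c=1973.26$ eV Å reproduces the $(\beta ,\xi _{ab})$ pairs listed in the tables below, e.g. $\beta =51.1\rightarrow \xi _{ab}\approx 19.3\,$Å.

```python
import numpy as np
from scipy.special import k0
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

HBAR_C = 1973.26  # eV * Angstrom

def energy(beta, g_c=5.0, m_h=2.0):
    """Variational energy <T> + <V> for the 2D s-wave ansatz R(r) = exp(-beta*r),
    i.e. g(r) = r**0.5 * exp(-beta*r) for the reduced radial function, with a
    model attractive potential V(r) = -g_c * K0(m_h * r)."""
    w = lambda r: r * np.exp(-2.0 * beta * r)   # 2D radial weight R^2 * r
    norm = quad(w, 0.0, np.inf)[0]
    kin = beta**2 * norm                        # int |R'|^2 r dr = beta^2 * norm
    pot = quad(lambda r: -g_c * k0(m_h * r) * w(r), 0.0, np.inf)[0]
    return (kin + pot) / norm

def xi_ab(beta_eV, l=0):
    """Most probable e-e separation (Angstrom) of |r**(l+1/2)*exp(-beta*r)|^2."""
    return (2 * l + 1) * HBAR_C / (2.0 * beta_eV)

res = minimize_scalar(energy, bounds=(0.05, 20.0), method="bounded")
beta_opt, e_min = res.x, res.fun
print(f"optimal beta = {beta_opt:.3f}, variational energy = {e_min:.3f}")
```

Minimizing the energy functional over $\beta $ is all the variational method does here; with the true $V_{\text{eff}}$ and physical units, the analogous minimization underlies the table entries that follow.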
Tables \[table1\] and \[table2\] list the values of the $e^{-}e^{-}$ bound-state energy and the average length of the $e^{-}e^{-}$ state ($\xi _{ab})$ for $V_{\text{eff}},$ in accordance with the input parameters ($\nu ^{2},l_{\perp },\alpha ,y,\zeta )$, for $l=0.$ The degenerate potential obviously ensures the equalities [E]{}$_{ee\uparrow \uparrow }=$[E]{}$_{ee\downarrow \downarrow }=$[E]{}$_{ee\uparrow \downarrow },$ $\xi _{ab\uparrow \uparrow }=\xi _{ab\downarrow \downarrow }=\xi _{ab\uparrow \downarrow }.$ Table \[table3\] contains the numerical data generated by the variational method for $l=1,$ starting from the trial function $\varphi \left( r\right) =r^{3/2}e^{-\beta r},$ given by Eq. (\[funcaoteste2\]).

  -----------------------------------------------------------------------------------------------------------------------------------------------------------
   $v^{2}$(meV)   $l_{\perp }$(Å)    $y$    $\alpha $   $\zeta $ (eV)   $M_{H}=\sqrt{\nu ^{2}|\zeta |}$   $\beta $   $E_{e^{-}e^{-}}$(meV)   $\xi _{ab}$(Å)
  -------------- ----------------- ------- ----------- --------------- --------------------------------- ---------- ----------------------- ----------------
      $47.0$          $10.0$        $4.0$     $1.0$         $4.0$                  $433.0$                 $51.1$          $-59.2$               $19.3$
      $47.0$          $10.0$        $4.0$     $0.5$         $4.0$                  $433.0$                 $51.8$          $-23.7$               $19.0$
      $48.0$          $10.0$        $4.0$     $0.5$         $4.0$                  $438.0$                 $29.8$          $-50.2$               $16.6$
      $48.0$          $10.0$        $3.9$     $1.0$         $4.0$                  $438.0$                 $29.8$          $-24.8$               $33.1$
      $60.0$          $8.0$         $4.0$     $1.0$         $8.0$                  $693.0$                 $71.1$          $-33.3$               $13.9$
      $60.0$          $8.0$         $4.0$     $0.5$         $6.0$                  $600.0$                 $69.2$          $-32.8$               $14.3$
      $60.0$          $8.0$         $3.9$     $1.0$         $5.0$                  $548.0$                 $27.1$          $-30.4$               $36.4$
      $70.0$          $7.0$         $4.0$     $0.4$         $7.0$                  $700.0$                 $89.2$          $-62.7$               $11.6$
      $70.0$          $7.0$         $4.0$     $0.6$         $8.0$                  $748.0$                 $87.5$          $-54.0$               $11.3$
      $70.0$          $7.0$         $3.9$     $1.0$         $7.0$                  $700.0$                 $51.2$          $-32.3$               $19.3$
      $70.0$          $7.0$         $3.9$     $0.5$         $5.0$                  $590.0$                 $50.8$          $-38.5$               $19.4$
  -----------------------------------------------------------------------------------------------------------------------------------------------------------

  : Input parameters: $\protect\nu ^{2},l_{\perp },y,\protect\alpha ,\protect\zeta ,M_{H}^{2}=\protect\nu ^{2}|\protect\zeta |$ and $l=0$; output numerical data: $E_{e^{-}e^{-}}\ $and $\protect\xi _{ab}$. Trial function: $\protect\varphi \left( r\right) =r^{1/2}e^{-\protect\beta r}$. []{data-label="table1"}

   $v^{2}$ (meV)   $l_{\perp }$(Å)    $y$    $\alpha $   $\zeta $ (eV)   $M_{H}=\sqrt{2\nu ^{2}|\zeta |}$   $\beta $   $E_{e^{-}e^{-}}$(meV)   $\xi _{ab}$(Å)
  --------------- ----------------- ------- ----------- --------------- ---------------------------------- ---------- ----------------------- ----------------
      $40.0$           $12.0$        $4.0$     $1.0$         $2.0$                   $400.0$                 $56.1$          $-54.1$               $17.6$
      $40.0$           $12.0$        $4.0$     $0.5$         $2.0$                   $400.0$                 $59.2$          $-24.5$               $16.7$
      $40.0$           $12.0$        $4.0$     $0.3$         $2.0$                   $400.0$                 $58.1$          $-17.2$               $17.0$
      $40.0$           $12.0$        $4.0$     $1.0$         $2.5$                   $447.2$                 $57.9$          $-31.4$               $17.0$
      $50.0$           $10.0$        $4.0$     $1.5$         $6.3$                   $793.7$                 $79.1$          $-41.1$               $12.5$
      $50.0$           $10.0$        $4.0$     $1.5$         $5.3$                   $728.0$                 $79.1$          $-63.1$               $12.5$
      $60.0$           $8.0$         $4.0$     $0.5$         $3.0$                   $600.0$                 $69.2$          $-32.8$               $14.3$
      $60.0$           $8.0$         $3.9$     $0.1$         $2.0$                   $489.9$                 $51.2$          $-38.6$               $19.3$
      $60.0$           $8.0$         $3.9$     $1.0$         $2.0$                   $489.9$                 $27.2$          $-62.8$               $36.3$
      $80.0$           $6.0$         $4.0$     $0.5$         $4.0$                   $800.0$                 $79.1$          $-40.2$               $12.5$
      $80.0$           $6.0$         $4.0$     $0.1$         $3.0$                   $692.8$                 $78.1$          $-76.7$               $12.6$
      $80.0$           $6.0$         $3.9$     $0.5$         $2.5$                   $632.5$                 $27.1$          $-36.0$               $36.4$
      $80.0$           $6.0$         $3.9$     $0.6$         $2.5$                   $632.5$                 $29.8$          $-45.7$               $33.1$

  : Input parameters: $\protect\nu ^{2},l_{\perp },y,\protect\alpha ,\protect\zeta ,M_{H}^{2}=2\protect\nu ^{2}|\protect\zeta |$ and $l=0$; output numerical data: $E_{e^{-}e^{-}}\ $and $\protect\xi _{ab}$.
Trial Function: $\protect\varphi \left( r\right) =r^{1/2}e^{-\protect\beta r}$                                     []{data-label="table2"} ------------------------------------------------------------------------------------------------------------------------------------------------------------- $v^{2}$(meV) $l_{\perp }$(Å) $y$  $\alpha $ $\zeta $ (eV)  $M_{H}=\sqrt{2\nu ^{2}|\zeta |}$ $\beta $ $E_{e^{-}e^{-}}$ (meV) $\xi _{ab}$(Å)$\ $ -------------- ----------------- ------- ------------ ---------------- ---------------------------------- ---------- ------------------------ --------------- $30.0$ $16.0$ $4.0$ $2.0$ $-2.0$ $489.9$ $55.1$ $-71.5$ $53.7 $ $30.0$ $15.5$ $4.0$ $2.0$ $-3.0$ $489.9$ $40.7$ $-23.2$ $72.7 $ $30.0$ $15.5$ $4.0$ $3.0$ $-4.0$ $489.9$ $42.2$ $-56.2$ $70.1 $ $32.0$ $15.0$ $4.0$ $2.0$ $-3.0$ $438.2$ $70.7$ $-49.5$ $41.9 $ $32.0$ $15.0$ $4.0$ $1.0$ $-2.0$ $357.8$ $51.1$ $-18.0$ $58.9 $ $50.0$ $10.0$ $4.0$ $1.5$ $-5.3$ $728.0$ $80.9$ $-43.9$ $36.6 $ $50.0$ $10.0$ $4.0$ $1.5$ $-4.0$ $632.4$ $79.1$ $-77.3$ $37.4 $ $50.0$ $10.0$ $4.0$ $0.8$ $-3.0$ $547.7$ $72.4$ $-49.5$ $40.9 $ $50.0$ $10.0$ $4.0$ $0.5$ $-3.0$ $547.7$ $42.9$ $-25.0$ $45.0 $ $80.0$ $6.5$ $3.8$ $1.0$ $-4.0$ $800.0$ $61.3$ $-21.6$ $48.3$ $80.0$ $6.5$ $3.8$ $0.5$ $-3.0$ $692.8$ $50.7$ $-18.8$ $58.4$ $80.0$ $6.5$ $3.8$ $0.5$ $-2.5$ $632.5$ $51.8$ $-52.3$ $57.1$ ------------------------------------------------------------------------------------------------------------------------------------------------------------- : Input parameters: $\protect\nu ^{2},l_{\perp },\protect\alpha ,\protect\zeta ,M_{H}^{2}=2\protect\nu ^{2}|\protect\zeta |$ and $l=1$; output data: $E_{e^{-}e^{-}}\ $and $\protect\xi _{ab}$.  
Trial function: $\protect\varphi \left( r\right) =r^{3/2}e^{-\protect\beta r}$.[]{data-label="table3"}

From the data of Tables \[table1\], \[table2\], \[table3\], one can understand the influence of the parameters on the values of the $e^{-}e^{-}$ energy and $\xi _{ab}$. When $|\zeta |$ and $\nu ^{2}$ increase, the Higgs mass grows, reducing the range of the attractive interaction; this is noticed through a reduction of the binding energy. Likewise, raising the $\alpha $-parameter implies a larger Chern-Simons mass and a reduction of the range of the repulsive interaction, leading to an increase of the binding energy. The parameter $l_{\perp }$ acts directly on the coupling constant $e_{3}$: the larger $l_{\perp }$, the smaller the gauge coupling and the weaker the repulsive interaction, again favoring an increase of the binding energy. The parameters $\nu ^{2}$ and $y$ act on the Higgs coupling, promoting an appreciable rise of the binding energy. In the particular case of Table \[table3\], an appreciable enhancement of $\xi _{ab}$ is evident, a consequence of the isotropic trial function, which behaves as $r^{3/2}$ at the origin. This isotropic character is a non-realistic approximation, since the angular-momentum state $l=1$ must exhibit some anisotropy. This observation lends the data of Table \[table3\] a more qualitative character, without invalidating the fundamental result of this section: by a suitable fitting of the parameters, it is possible to obtain values of the energy and the correlation length for the $e^{-}e^{-}$ pairs on a scale usual for some solid-state systems.

General Conclusions
===================

The electron-electron interaction potentials, derived from MCS electrodynamics with spontaneous symmetry breaking, demonstrate the physical possibility of electronic pairing and the formation of bound states.
This theoretical prediction occurs when the parameters of the model are chosen such that the contribution stemming from the scalar (Higgs) sector overcomes the contribution induced by the gauge-boson exchange (always repulsive on the energy scale relevant for solid-state excitations, $\theta \ll m_{e}$). The numerical results, displayed in Tables \[table1\], \[table2\] and \[table3\], reveal binding energies on the meV scale and correlation lengths in the range $10$--$30\,$Å, which is a possible argument in favour of the MCS QED$_{3}$ adopted here to address the electronic pairing process in planar Condensed Matter systems with manifest parity breaking, such as Hall systems (there are also some references that discuss the nonconservation of parity symmetry in the context of the high-T$_{c}$ superconductors [@Parity-breaking]). Finally, we must observe that the present MCS model bypasses the difficulties found by several other models [@Kogan], [@Girotti], [@Dobroliubov], [@Groshev] that attempted to obtain $e^{-}e^{-}$ bound states considering only the exchange of vector bosons. The $v^{2}$ values listed in Tables \[table1\], \[table2\], \[table3\] reconfirm the energy scale $\left( 10{-}100\,meV\right) $ for the breaking of the local U(1) symmetry obtained in the framework of planar superconductors [@Bound], [@Tese] and in the case of a parity-preserving electronic pairing [@Tese], [@Bound2].

Appendix
========

In this Appendix, we present the $so(1,2)$ spinor algebra that generates the Dirac spinors, solutions of the Dirac equation in $D=1+2$ dimensions.
The adopted metric is $\eta ^{\mu \nu }=(+,-,-),$ and the Dirac equation is written as: $$\begin{aligned} \left( \rlap{\hbox{$\mskip1 mu /$}}p-m\right) u_{+}(p)& =0, \label{solution1} \\ \left( \rlap{\hbox{$\mskip1 mu /$}}p+m\right) u_{-}(p)& =0, \label{solution2}\end{aligned}$$ where $u_{+}(p),$ $u_{-}(p)$ stand for the positive-energy spinors with polarization “up” and “down”, respectively. The solutions of Eqs. (\[solution1\]), (\[solution2\]) are given by: $$\begin{aligned} u_{+}(p)& =\frac{\rlap{\hbox{$\mskip1 mu /$}}p+m}{\sqrt{2m(E+m)}}u_{+}(m,\overrightarrow{0}), \\ u_{-}(p)& =\frac{\rlap{\hbox{$\mskip1 mu /$}}p-m}{\sqrt{2m(E+m)}}u_{-}(m,\overrightarrow{0}),\end{aligned}$$ where $u_{+}(m,\overrightarrow{0})$ and $u_{-}(m,\overrightarrow{0})$ represent an up-electron and a down-electron, respectively, in the rest frame: $$u_{+}(m,\overrightarrow{0})=\left[ \begin{array}{c} 1 \\ 0 \end{array} \right] ;\text{ \ \ \ \ \ \ }u_{-}(m,\overrightarrow{0})=\left[ \begin{array}{c} 0 \\ 1 \end{array} \right]$$ In $D=1+2,$ the generators of the group SO(1,2) are given by: $$\Sigma ^{jl}=\frac{1}{4}[\gamma ^{j},\gamma ^{l}],$$ where the $\gamma $ matrices satisfy the $so(1,2)$ algebra $$\lbrack \gamma _{\mu },\gamma _{\nu }]=2i\epsilon _{\mu \nu \alpha }\gamma ^{\alpha },$$ and are taken as: $\gamma ^{\mu }=(\sigma _{z},-i\sigma _{x},i\sigma _{y}).$ With this convention, the spinors $u_{+}(p),$ $u_{-}(p)$ take the form: $$\begin{aligned} u_{+}(p)& =\frac{1}{\sqrt{2m(E+m)}}\left[ \begin{array}{c} E+m \\ -ip_{x}-p_{y} \end{array} \right] ;\overline{u}_{+}(p)=\frac{1}{\sqrt{2m(E+m)}}\left[ \begin{array}{cc} E+m & -ip_{x}+p_{y} \end{array} \right] , \\ u_{-}(p)& =\frac{1}{\sqrt{2m(E+m)}}\left[ \begin{array}{c} ip_{x}-p_{y} \\ E+m \end{array} \right] ;\overline{u}_{-}(p)=\frac{1}{\sqrt{2m(E+m)}}\left[ \begin{array}{cc} -ip_{x}-p_{y} & E+m \end{array} \right] .\end{aligned}$$ They obviously satisfy the normalization conditions:
$\overline{u}_{+}(p)u_{+}(p)=1$ and $\overline{u}_{-}(p)u_{-}(p)=-1.$ In the center-of-mass frame, the 3-momenta of the scattered electrons (elastic-scattering hypothesis) can be written as: $$\begin{aligned} p_{1}& =(E,p,0),\text{ \ \ }p_{1}^{^{\prime }}=(E,p\cos \phi ,p\sin \phi ), \\ p_{2}& =(E,-p,0),\text{ \ \ }p_{2}^{^{\prime }}=(E,-p\cos \phi ,-p\sin \phi ), \\ k& =\text{\ }p_{1}^{^{\prime }}-p_{1}=(0,p(\cos \phi -1),p\sin \phi ),\end{aligned}$$ where $\phi $ is the scattering angle defined (in relation to the initial direction) by the particles after the scattering. Adopting this convention, the current terms are evaluated as: $$\begin{aligned} \left[ \overline{u}_{+}(p_{1}^{^{\prime }})\gamma _{_{0}}u_{+}(p_{1})\right] & =\frac{(E+m)^{2}+p^{2}e^{i\phi }}{2m(E+m)}=\left[ \overline{u}_{+}(p_{2}^{^{\prime }})\gamma _{_{0}}u_{+}(p_{2})\right] ; \\ \left[ \overline{u}_{+}(p_{1}^{^{\prime }})\gamma _{_{1}}u_{+}(p_{1})\right] & =-\frac{p}{2m}(1+e^{i\phi })=-\left[ \overline{u}_{+}(p_{2}^{^{\prime }})\gamma _{_{1}}u_{+}(p_{2})\right] ; \\ \left[ \overline{u}_{+}(p_{1}^{^{\prime }})\gamma _{_{2}}u_{+}(p_{1})\right] & =\frac{-ip}{2m}(1-e^{i\phi })=-\left[ \overline{u}_{+}(p_{2}^{^{\prime }})\gamma _{_{2}}u_{+}(p_{2})\right] ;\end{aligned}$$ $$\begin{aligned} \left[ \text{\ }\overline{u}_{-}(p_{1}^{^{\prime }})\gamma _{_{0}}u_{-}(p_{1})\right] & =\frac{(E+m)^{2}+p^{2}e^{-i\phi }}{2m(E+m)}=\left[ \overline{u}_{-}(p_{2}^{^{\prime }})\gamma _{_{0}}u_{-}(p_{2})\right] ; \\ \left[ \overline{u}_{-}(p_{1}^{^{\prime }})\gamma _{_{1}}u_{-}(p_{1})\right] & =-\frac{p}{2m}(1+e^{-i\phi })=-\left[ \overline{u}_{-}(p_{2}^{^{\prime }})\gamma _{_{1}}u_{-}(p_{2})\right] ; \\ \left[ \overline{u}_{-}(p_{1}^{^{\prime }})\gamma _{_{2}}u_{-}(p_{1})\right] & =\frac{ip}{2m}(1-e^{-i\phi })=-\left[ \overline{u}_{-}(p_{2}^{^{\prime }})\gamma _{_{2}}u_{-}(p_{2})\right] .\end{aligned}$$ These current terms were used in the evaluation of the scattering amplitudes in the nonrelativistic
approximation: $p^{2}\ll m^{2}.$ Finally, given the correlation between mass and spin [@Binegar], valid in QED$_{3}$, it is reasonable to ask whether the spinor $u_{-}(p)$ might represent an antiparticle rather than the spin-down particle. This issue is settled in the Appendix of Ref. [@N.Cimento], where it is shown that the charge of the spinor $u_{-}(p)\ $is equal to that of the spinor $u_{+}(p).$

[**Acknowledgments:** ]{} M.M.F. Jr. is grateful to CCP-CBPF for the kind hospitality. J. A. Helayël-Neto expresses his gratitude to CNPq for the invaluable financial help.

J. G. Bednorz and K. A. Müller, Z. Phys. [**B64**]{}, 189 (1986). K. Chadan, N.N. Khuri, A. Martin and T.T. Wu, Phys. Rev. D [**58**]{}, 025014 (1998). S. Deser, R. Jackiw and S. Templeton, Phys. Rev. Lett. [**48**]{}, 975 (1982) and Ann. Phys. (N.Y.) [**140**]{}, 372 (1982). Ya.I. Kogan, JETP Lett. [**49**]{}, 225 (1989). S. Randjbar-Daemi [*et al.*]{}, Nucl. Phys. [**B340**]{}, 403 (1990). H.O. Girotti [*et al.*]{}, Phys. Rev. Lett. [**69**]{}, 2623 (1992); H.O. Girotti, M. Gomes and A.J. da Silva, Phys. Lett. B [**274**]{}, 357 (1992). C.R. Hagen, Phys. Rev. Lett. [**71**]{}, 202 (1993); H.O. Girotti [*et al.*]{}, Phys. Rev. Lett. [**71**]{}, 203 (1993). C.R. Hagen, Phys. Rev. D [**31**]{}, 848 (1985). M.I. Dobroliubov, D. Eliezer, I.I. Kogan, G.W. Semenoff and R.J. Szabo, Mod. Phys. Lett. A [**8**]{}, 2177 (1993). A. Groshev and E.R. Poppitz, Phys. Lett. B [**235**]{}, 336 (1990). Y. Georgelin and J.C. Wallet, Phys. Rev. D [**50**]{}, 6610 (1994). M. Covington [*et al.*]{}, Phys. Rev. Lett. [**79**]{}, 277 (1997); M. Fogelström [*et al.*]{}, Phys. Rev. Lett. [**79**]{}, 281 (1997); R.B. Laughlin, Phys. Rev. Lett. [**80**]{}, 5188 (1998); B.G. Levi, Phys. Today, November, 20 (1997). M.A. De Andrade, O.M. Del Cima and J.A. Helayël-Neto, Il Nuovo Cimento [**111**]{}, 1145 (1998). O.M. Del Cima, D.H.T. Franco, J.A. Helayël-Neto and O. Piguet, Phys. Lett. B [**410**]{}, 250 (1997) and Phys. Lett.
B [**416**]{}, 402 (1998). H. Belich, O.M. Del Cima, M.M. Ferreira Jr. and J.A. Helayël-Neto, [*“Electron-electron attractive interaction in Maxwell-Chern-Simons QED*]{}$_{3}$[* at zero temperature”*]{}, Int. J. Mod. Phys. [**A16**]{}, 4939 (2001). H. Christiansen, O.M. Del Cima, M.M. Ferreira Jr. and J.A. Helayël-Neto, [*“Electronic Bound States in Parity-Preserving QED*]{}$_{3}$[* Applied to High-T*]{}$_{c}$[* Cuprate Superconductors*]{}”, cond-mat/0107155, to appear in Int. J. Mod. Phys. A. M.M. Ferreira Jr., Ph.D. Thesis: “[*Investigation of Electron-Electron Bound States in the Framework of QED$_{3}$*]{}”, in Portuguese, CBPF-DCP (December 2001), Rio de Janeiro, Brazil. H. Belich, O.M. Del Cima, M.M. Ferreira Jr. and J.A. Helayël-Neto, [*“Electron-electron States in Parity-Preserving QED*]{}$_{3}$[*”*]{}, hep-th/0204024, submitted for publication. B. Binegar, J. Math. Phys. [**23**]{}, 1511 (1982); S. Deser and R. Jackiw, Phys. Lett. [**B263**]{}, 431 (1991); R. Jackiw and V.P. Nair, Phys. Rev. D [**43**]{}, 1933 (1991); J. Fröhlich and P.A. Marchetti, Lett. Math. Phys. [**16**]{}, 347 (1988). J.J. Sakurai, [*Advanced Quantum Mechanics*]{}, Addison-Wesley Publishing Company, 1967.

[^1]: [e-mails: belich@cbpf.br, delcima@gft.ucp.br, manojr@cbpf.br, helayel@gft.ucp.br]{}
--- abstract: 'In this article, we first derive the [*wavevector- and frequency-dependent, microscopic current response tensor*]{} which corresponds to the “macroscopic” ansatz ${\boldsymbol{D}} = \varepsilon_0\varepsilon_{\rm eff}{\boldsymbol{E}}$ and ${\boldsymbol{B}}= \mu_0\mu_{\rm eff}{\boldsymbol{H}}$ with wavevector- and frequency-[*independent*]{}, “effective” material constants $\varepsilon_{\rm eff}$ and $\mu_{\rm eff}$. We then deduce the electromagnetic and optical properties of this effective material model by employing exact, microscopic response relations. In particular, we argue that for recovering the standard relation $n^2=\varepsilon_{\rm eff}{\hspace{0.3pt}}\mu_{\rm eff}$ between the refractive index and the effective material constants, it is imperative to start from the microscopic wave equation in terms of the [*transverse dielectric function*]{}, $\varepsilon_{\rm T}({\boldsymbol{k}},\omega)=0$. At the same time, this result refutes again the relation $n^2=\varepsilon_{\rm r}{\hspace{0.3pt}}\mu_{\rm r}$ in terms of the microscopic response functions, and thus confirms the recently developed [*microscopic theory of the refractive index*]{} \[[Optik [**140**]{}, 62 (2017)](https://doi.org/10.1016/j.ijleo.2017.03.088)\]. On the phenomenological side, our result is especially relevant for metamaterials research, which draws directly on the standard relation for the refractive index in terms of effective material constants. Since for a wide class of materials, the current response tensor can be calculated from first principles and compared to the model expression derived here, this work also paves the way for a systematic search for new metamaterials.' author: - 'R. Starke' - 'G.A.H. 
Schober' bibliography: - '/net/home/lxtsfs1/tpc/schober/Ronald/masterbib.bib' title: | Microscopic theory of refractive index applied to metamaterials:\ Effective current response tensor corresponding to standard relation $n^2 =\varepsilon_{\rm eff} {\hspace{1pt}}\mu_{\rm eff}$ --- Introduction ============ The traditional approach to electrodynamics in media [@Jackson; @Griffiths; @Landau] is based on the division of electric charge and current densities into “free” and “bound” contributions, combined with the so-called “macroscopic” Maxwell equations which are usually written in the form $$\begin{aligned} \nabla\cdot{\boldsymbol{D}} & = \rho_{\rm f} \,, \label{eq_MaxMac1}\\[2pt] \nabla\times{\boldsymbol{E}} &= -\partial_t{\boldsymbol{B}} \,, \label{eq_MaxMac2}\\[2pt] \nabla\cdot{\boldsymbol{B}} &= 0 \,, \label{eq_MaxMac3}\\[2pt] \nabla\times{\boldsymbol{H}} &= {\boldsymbol{j}}_{\rm f} + \partial_t{\boldsymbol{D}} \,.\label{eq_MaxMac4}\end{aligned}$$ These equations have to be complemented by the so-called “constitutive laws”, which are—more often than not—assumed to be simple linear relations between ${\boldsymbol{D}}$ and ${\boldsymbol{E}}$ as well as between ${\boldsymbol{H}}$ and ${\boldsymbol{B}}$, i.e., $$\begin{aligned} {\boldsymbol{D}} & = \varepsilon_0 {\hspace{1pt}}\varepsilon_{\rm r} {\hspace{1pt}}{\boldsymbol{E}} \,, \label{eq_permitt}\\[2pt] {\boldsymbol{H}} & = \mu_0^{-1} \mu_{\rm r}^{-1} {\boldsymbol{B}} \,, \label{eq_permea}\end{aligned}$$ where $\varepsilon_{\rm r}$ and $\mu_{\rm r}$ are the [*relative*]{} permittivity and permeability. To lighten the notation, we will in the following suppress the subscript $\rm r$ and simply write $\varepsilon \equiv \varepsilon_{\rm r}$ and $\mu \equiv \mu_{\rm r}$ for these dimensionless quantities. (They should, however, not be confused with the corresponding [*absolute*]{} quantities given by $\varepsilon_0 {\hspace{0.3pt}}\varepsilon_{\rm r}$ and $\mu_0 {\hspace{0.3pt}}\mu_{\rm r}$.) 
In particular, the fundamental field equations – determine the involved field quantities $\{ {\boldsymbol{D}}, {\boldsymbol{H}}, {\boldsymbol{E}}, {\boldsymbol{B}}\}$ only once the constitutive laws are used (to substitute, for example, ${\boldsymbol{D}}$ and ${\boldsymbol{H}}$ in terms of ${\boldsymbol{E}}$ and ${\boldsymbol{B}}$). By contrast, without the constitutive laws the fields would remain underdetermined. The constitutive laws, for their part, are usually assumed to be given empirically. In the simplest case, one thinks of them as being formulated in terms of [*effective*]{} material [*constants*]{}, which are measurable in principle. In general, though, the constitutive laws could be much more complicated (involving retardation effects, non-linearities, etc.). Consequently, the field equation for, say, the divergence of the “magnetic field” ${\boldsymbol{H}}$ is material dependent in the Standard Approach described above. In principle, this divergence has to be determined by plugging the relation ${\boldsymbol{B}}={\boldsymbol{B}}[{\boldsymbol{H}}]$ into the field equation . However, along with the advent of [*ab initio materials physics*]{} [@Giuliani; @Kohanoff; @Martin], a new, [*microscopic*]{} approach to electrodynamics in media has been developed [@NozieresPines; @NozieresPines2; @Kittel; @Fliessbach; @Melrose1Book; @Melrose2Book]. Its basis is the division of both the electromagnetic source terms (i.e., charge and current densities) and the electromagnetic fields into their respective [*external*]{} and [*induced*]{} contributions [@Bruus; @MartinRothen; @Kaxiras; @SchafWegener; @Hanke; @Strinati], where the term “induced” means “generated under the influence of [*externally applied*]{} fields”. For convenience, one also considers “total” fields, which are defined as the sum of external and induced contributions.
In this microscopic approach, all electric and magnetic fields are uniquely determined by the microscopic Maxwell equations [@Zangwill], $$\begin{aligned} \nabla\cdot{\boldsymbol{E}}({\boldsymbol{x}}, t) &= \rho({\boldsymbol{x}}, t)/\varepsilon_0 \,, \\[2pt] \nabla\times{\boldsymbol{E}}({\boldsymbol{x}}, t) &= -\partial_t{\boldsymbol{B}}({\boldsymbol{x}}, t) \,, \\[2pt] \nabla\cdot{\boldsymbol{B}}({\boldsymbol{x}}, t) &= 0 \,, \\[2pt] \nabla\times{\boldsymbol{B}}({\boldsymbol{x}}, t) &= \mu_0{\hspace{1pt}}{\boldsymbol{j}}({\boldsymbol{x}}, t) + \varepsilon_0\mu_0{\hspace{1pt}}\partial_t{\boldsymbol{E}}({\boldsymbol{x}}, t) \,.\end{aligned}$$ These equations now hold separately for the external, induced and total fields in terms of the respective external, induced and total sources. Thus, all electric and magnetic fields are uniquely determined, [*independently of the material under consideration*]{}. The latter’s influence only comes into play when we consider [*response relations*]{}, which mostly express an induced field quantity in terms of an externally applied field via a corresponding [*response function*]{}. In principle, these microscopic response functions can be calculated from [*first principles*]{} (many-body Schrödinger equation combined with Kubo formalism). Hence, they do [*not*]{} constitute freely adjustable material parameters, but they can be theoretically predicted as well. An important quantity in the [*ab initio*]{} context is given, for example, by the [*density response function*]{} $\upchi$, which is implicitly defined by $$\label{def_density_response} \rho{_{\rm ind}}({\boldsymbol{x}},t) = \int \! {\mathrm d}^3 {\boldsymbol{x}}' \int \! c \, {\mathrm d}t'\, \upchi({\boldsymbol{x}},{\boldsymbol{x}}';t-t') {\hspace{1pt}}\varphi{_{\rm ext}}({\boldsymbol{x}}',t')\,,$$ where $\varphi{_{\rm ext}}$ denotes the externally applied scalar potential. 
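As a small numerical aside (an illustration added here, with an arbitrary toy kernel and toy external potential), one can check directly that a response law with a translation-invariant kernel, as in the definition above, reduces to a pointwise product after Fourier transformation on a periodic grid:

```python
import numpy as np

n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
chi = np.exp(-np.minimum(x, 2.0 * np.pi - x))      # kernel chi(x - x'), toy choice
phi_ext = np.cos(3.0 * x) + 0.5 * np.sin(7.0 * x)  # external potential, toy choice

# Real space: rho_ind(x_i) = sum_j chi(x_i - x_j) * phi_ext(x_j) * dx  (circular)
rho_real = np.array(
    [np.sum(chi[(i - np.arange(n)) % n] * phi_ext) * dx for i in range(n)]
)
# Fourier space: pointwise product chi(k) * phi_ext(k), transformed back
rho_fourier = np.fft.ifft(np.fft.fft(chi) * np.fft.fft(phi_ext)).real * dx

assert np.allclose(rho_real, rho_fourier)
```

The agreement of the two arrays is just the convolution theorem; for inhomogeneous systems the kernel depends on both arguments and no such diagonalization in ${\boldsymbol{k}}$ is available.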
In particular, in the microscopic approach, response functions are in general given in terms of non-local (possibly tensorial) integral kernels. Only for homogeneous systems (which, admittedly, constitute the most important practical application in theoretical materials science) do the response functions depend only on the coordinate difference, such that the response laws have a purely multiplicative structure [*in Fourier space*]{}, i.e., in the above example, $$\rho{_{\rm ind}}({\boldsymbol{k}},\omega) = \upchi({\boldsymbol{k}},\omega) {\hspace{1pt}}\varphi{_{\rm ext}}({\boldsymbol{k}},\omega) \,.$$ Once this microscopic density response function is given, a more traditional material property like the (relative) [*dielectric function*]{} can be calculated by means of the standard relation [@EDWave §5.1] $$\varepsilon^{-1}({\boldsymbol{k}},\omega) = 1 + \frac{\upchi({\boldsymbol{k}},\omega)}{\varepsilon_0|{\boldsymbol{k}}|^2} \,.$$ In principle, this quantity can be identified with the [*permittivity*]{} used in the Standard Approach (see Eq. ).
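As a concrete (purely illustrative) use of this standard relation, suppose the static density response takes the Thomas-Fermi-like form $\upchi (k)=-\varepsilon _{0}{\hspace{1pt}}k_{\rm TF}^{2}{\hspace{1pt}}k^{2}/(k^{2}+k_{\rm TF}^{2})$, with $k_{\rm TF}$ a screening wavevector; this model form is our choice here, made so that the algebra closes. The relation above then returns the familiar screened dielectric function $\varepsilon (k)=1+k_{\rm TF}^{2}/k^{2}$, as a short symbolic check confirms:

```python
import sympy as sp

k, k_tf, eps0 = sp.symbols("k k_TF varepsilon_0", positive=True)

# Illustrative static density response of Thomas-Fermi form
chi = -eps0 * k_tf**2 * k**2 / (k**2 + k_tf**2)

# Standard relation: 1/epsilon(k) = 1 + chi(k) / (eps0 * |k|^2)
eps = sp.simplify(1 / (1 + chi / (eps0 * k**2)))

assert sp.simplify(eps - (1 + k_tf**2 / k**2)) == 0
```

The same one-line substitution works for any model $\upchi ({\boldsymbol{k}},\omega )$, which is how first-principles density responses are converted into dielectric functions in practice.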
This has to be shown on the basis of the Fundamental Field Identifications being given by [@ED1; @ED2] $$\begin{aligned} {\boldsymbol{D}}({\boldsymbol{x}}, t) &= \varepsilon_0 {\hspace{0.3pt}}{\boldsymbol{E}}{_{\rm ext}}({\boldsymbol{x}}, t) \,,\\[1pt] {\boldsymbol{P}}({\boldsymbol{x}}, t) &= -\varepsilon_0 {\hspace{0.3pt}}{\boldsymbol{E}}{_{\rm ind}}({\boldsymbol{x}}, t) \,, \\[1pt] {\boldsymbol{E}}({\boldsymbol{x}}, t) &= {\boldsymbol{E}}{_{\rm tot}}({\boldsymbol{x}},t) \,,\end{aligned}$$ and by $$\begin{aligned} \mu_0 {\hspace{0.3pt}}{\boldsymbol{H}}({\boldsymbol{x}}, t) &= {\boldsymbol{B}}{_{\rm ext}}({\boldsymbol{x}}, t) \,, \\[1pt] \mu_0 {\hspace{0.3pt}}{\boldsymbol{M}}({\boldsymbol{x}}, t) &= {\boldsymbol{B}}{_{\rm ind}}({\boldsymbol{x}}, t) \,, \\[1pt] {\boldsymbol{B}}({\boldsymbol{x}}, t) &= {\boldsymbol{B}}{_{\rm tot}}({\boldsymbol{x}}, t) \,.\end{aligned}$$ As a matter of principle, these identifications relate the microscopic fields used in [*ab initio*]{} materials science to their macroscopic counterparts used in the Standard Approach. However, the Fundamental Field Identifications lead to the following problem which does not exist in the Standard Approach: if all electromagnetic fields (external, induced and total) are already completely determined by means of their respective Maxwell equations, while on the other hand the induced fields are also determined in terms of the external fields via the corresponding “direct” response functions (or in terms of the total fields via the “proper” response functions [@Refr §2.3]), then apparently there exists an [*overdetermination*]{} in the theory which could in principle lead to contradictions. 
For example, in the traditional approach the expression simply remains undetermined, while in the microscopic approach we necessarily obtain $\nabla\cdot{\boldsymbol{H}}=0$ on the basis of the Fundamental Field Identifications, although at the same time we have ${\boldsymbol{B}}=\mu{\boldsymbol{H}}$ (within the limits of linear response theory, where $\mu$ in general denotes a tensorial integral kernel). The resolution of this apparent paradox lies in another surprising feature of the microscopic approach, which distinguishes it sharply from its traditional macroscopic counterpart: the microscopic response functions cannot be prescribed arbitrarily. Instead, they are subject to [*constraints*]{} which guarantee the validity of the microscopic Maxwell equations. In particular, this also implies that the response functions are in general not independent of each other, but interrelated by the Universal Response Relations [@ED1]. Concretely, it turns out that the microscopic [*current response tensor*]{} determines all other linear electromagnetic response properties uniquely and explicitly [@ED1; @ED2]. Practically, the Universal Response Relations greatly facilitate both theoretical calculations and experimental measurements as they allow for the deduction of one response function from another one. Conceptually, however, the somewhat shocking implication of this microscopic approach is that the standard formula for the refractive index in terms of the relative permittivity and permeability, $$n^2 (\omega) \stackrel{?}{=} \varepsilon(\omega) {\hspace{1pt}}\mu(\omega) \label{eq_MaxwellStand} \,,$$ cannot be true [*in this form*]{} [@EDWave; @Refr; @EDFresnel], [*i.e., as a formula expressing the refractive index in terms of response functions*]{}. Instead, its allegedly approximate version, $$n^2 (\omega) = \varepsilon(\omega) \,, \label{eq_MaxwellMoreCorrect}$$ turns out to be the more correct formula, which can be justified microscopically [@Refr]. 
Here, it is understood that the involved quantities refer to the [*macroscopic limit*]{} (${\boldsymbol{k}}\rightarrow {\boldsymbol{0}}$) of microscopic response functions as they can be calculated from first principles. Fortunately, in most cases the failure of the standard relation does not pose any serious problems [@EDFresnel]. In fact, textbooks in condensed matter theory often even [*define*]{} [@Kittel; @Ashcroft; @Cardona] the refractive index by the allegedly approximate relation . Furthermore, a [*bulk material*]{} where the standard formula would apply with independently obtained material parameters $\varepsilon$, $\mu$ and $n$ is not known. In the research domain of so-called [*metamaterials*]{}, however, one draws directly on the original Maxwell relation , albeit with “effective” (i.e., not calculated from first principles) [*material parameters*]{} (not response functions) $\varepsilon_{\rm eff}(\omega)$ and $\mu_{\rm eff}(\omega)$ [@Pendry]. Concretely, it has been argued that $n$ should be regarded as a negative number if both $\varepsilon_{\rm eff} < 0$ and $\mu_{\rm eff} < 0$ [@Veselago]. Such a negative effective permeability can occur in [*artificial*]{} materials by exploiting the concept of a split-ring resonator [@Pendry; @Smith00]. Anomalous light refraction at metamaterials has been observed experimentally [@Shelby]. Therefore, metamaterials are regarded as promising candidates for technological applications such as [*superlenses*]{} [@PendrySuper; @SmithPendry] and [*invisibility cloaks*]{} [@Ergin]. With this state of affairs, we now face two questions: 1.
Although the refractive index is microscopically not determined by the standard formula , is it still possible to have a material whose microscopic response functions involve two (constant) [*material parameters,*]{} which have an interpretation as “effective” electric permittivity and permeability, such that the standard formula instead holds in terms of these “effective” material constants? 2. More generally, which [*current response tensor*]{} corresponds to the simple macroscopic ansatz – that is used in the traditional approach for the derivation of the standard formula ? To answer these questions is precisely the aim of the present article. Phenomenological derivation =========================== We start from the “macroscopic” Maxwell equations written in Fourier space as $$\begin{aligned} {\boldsymbol{k}} \cdot {\boldsymbol{B}}({\boldsymbol{k}}, \omega) & = 0 \,, \\[3pt] {\boldsymbol{k}} \times {\boldsymbol{E}}({\boldsymbol{k}}, \omega) - \omega {\boldsymbol{B}}({\boldsymbol{k}}, \omega) & = 0 \,, \\[3pt] {\mathrm i}{\boldsymbol{k}} \cdot {\boldsymbol{D}}({\boldsymbol{k}}, \omega) & = \rho_{\rm f}({\boldsymbol{k}}, \omega) \,, \\[3pt] {\mathrm i}{\boldsymbol{k}} \times {\boldsymbol{H}}({\boldsymbol{k}}, \omega) + {\mathrm i}\omega {\boldsymbol{D}}({\boldsymbol{k}}, \omega) & = {\boldsymbol{j}}_{\rm f}({\boldsymbol{k}}, \omega) \,.\end{aligned}$$ By means of the first two, homogeneous equations, we can introduce the potentials $$\begin{aligned} {\boldsymbol{B}}({\boldsymbol{k}}, \omega) & = {\mathrm i}{\boldsymbol{k}} \times {\boldsymbol{A}}({\boldsymbol{k}}, \omega) \,, \\[3pt] {\boldsymbol{E}}({\boldsymbol{k}}, \omega) & = -{\mathrm i}{\boldsymbol{k}} {\hspace{1pt}}\varphi({\boldsymbol{k}}, \omega) + {\mathrm i}\omega {\boldsymbol{A}}({\boldsymbol{k}}, \omega) \,.\end{aligned}$$ Furthermore, in the last two, inhomogeneous Maxwell equations, we substitute $$\begin{aligned} {\boldsymbol{D}}({\boldsymbol{k}}, \omega) & = \varepsilon_0
{\hspace{1pt}}\varepsilon_{\rm eff} {\hspace{1pt}}{\boldsymbol{E}}({\boldsymbol{k}}, \omega) \,, \\[3pt] {\boldsymbol{H}}({\boldsymbol{k}}, \omega) & = \mu_0^{-1} \mu^{-1}_{\rm eff} {\hspace{1pt}}{\boldsymbol{B}}({\boldsymbol{k}}, \omega) \,,\end{aligned}$$ with the [*effective*]{} permittivity and permeability, $\varepsilon_{\rm eff}$ and $\mu_{\rm eff}$, which are assumed to be (wavevector- and frequency-independent) [*constants.*]{} Thus, we obtain the inhomogeneous equations for the potentials (suppressing ${\boldsymbol{k}}$ and $\omega$ dependencies in the notation), $$\varepsilon_0 {\hspace{0.3pt}}\varepsilon_{\rm eff} {\hspace{1pt}}(|{\boldsymbol{k}}|^2 {\hspace{1pt}}\varphi - \omega {\hspace{1pt}}{\boldsymbol{k}} \cdot {\boldsymbol{A}}) = \rho_{\rm f} \,,$$ and $$\frac{1}{\mu_0 {\hspace{0.3pt}}\mu_{\rm eff}} {\hspace{1pt}}\big(|{\boldsymbol{k}}|^2 {\boldsymbol{A}} - {\boldsymbol{k}} {\hspace{1pt}}({\boldsymbol{k}} \cdot {\boldsymbol{A}}) \big) + \varepsilon_0 {\hspace{0.3pt}}\varepsilon_{\rm eff}{\hspace{1pt}}(\omega {\hspace{0.3pt}}{\boldsymbol{k}} {\hspace{1pt}}\varphi - \omega^2 {\boldsymbol{A}}) = {\boldsymbol{j}}_{\rm f} \,.$$ In matrix form, they can be rewritten in terms of Lorentz four-vectors as $$\label{zwischen_1} \begin{aligned} \mu_0 {\hspace{1pt}}\bigg( \!\! \begin{array}{c} c \rho_{\rm f} \\[3pt] {\boldsymbol{j}}_{\rm f} \end{array} \!\! \bigg) & = \varepsilon_{\rm eff} \, \Bigg( \! \begin{array}{cc} |{\boldsymbol{k}}|^2 & -\frac{\omega}{c} {\hspace{1pt}}{\boldsymbol{k}}^{\rm T} \\[3pt] \frac{\omega}{c} {\hspace{1pt}}{\boldsymbol{k}} & -\frac{\omega^2}{c^2} \end{array} \! \Bigg) \bigg( \!\! \begin{array}{c} \varphi/c \\[3pt] {\boldsymbol{A}} \end{array}\!\! \bigg) \\[3pt] & \quad \, + \frac{1}{\mu_{\rm eff}} {\hspace{1pt}}\bigg( \! \begin{array}{cc} 0 & 0 \\[3pt] 0 & |{\boldsymbol{k}}|^2 - {\boldsymbol{k}} {\boldsymbol{k}}^{\rm T} \end{array}\!\! \bigg) \bigg( \!\! \begin{array}{c} \varphi/c \\[3pt] {\boldsymbol{A}} \end{array}\!\! \bigg) \,. 
\end{aligned}$$ Finally, defining the $(4 \times 4)$ matrices $$\begin{aligned} M_{\rm e}({\boldsymbol{k}}, \omega) & \stackrel{\rm def}{=} \Bigg( \! \begin{array}{cc} |{\boldsymbol{k}}|^2 & -\frac{\omega}{c} {\hspace{1pt}}{\boldsymbol{k}}^{\rm T} \\[3pt] \frac{\omega}{c} {\hspace{1pt}}{\boldsymbol{k}} & -\frac{\omega^2}{c^2} \end{array} \! \Bigg) \,, \\[3pt] M_{\rm m}({\boldsymbol{k}}, \omega) & \stackrel{\rm def}{=} \bigg( \! \begin{array}{cc} 0 & 0 \\[3pt] 0 & |{\boldsymbol{k}}|^2 - {\boldsymbol{k}} {\boldsymbol{k}}^{\rm T} \end{array}\!\! \bigg) \,,\end{aligned}$$ we can rewrite Eq.  compactly as $$\label{compact_result} \mu_0 {\hspace{1pt}}j_{\rm f} = \bigg(\varepsilon_{\rm eff} {\hspace{1pt}}M_{\rm e} + \frac{1}{\mu_{\rm eff}} {\hspace{1pt}}M_{\rm m} \bigg) A \,,$$ where $j \equiv j^\mu = (c\rho, {\hspace{0.3pt}}{\boldsymbol{j}})^{\rm T}$ and $A^\nu = (\varphi/c, {\boldsymbol{A}})$ are the four-current and four-potential, respectively. In the following, this relation will form the basis for the identification of the [*fundamental response tensor*]{} [@ED1 §5.1] in this effective model for metamaterials. Proper fundamental response tensor ================================== In order to perform the transition from the traditional macroscopic approach to electrodynamics in media to its modern microscopic counterpart, we now identify $j_{\rm f} \equiv j_{\rm ext} $ and $A \equiv A_{\rm tot}$ [@ED1; @ED2] such that Eq.  turns into $$\label{jext} \mu_0 {\hspace{1pt}}j_{\rm ext} = \bigg(\varepsilon_{\rm eff} {\hspace{1pt}}M_{\rm e} + \frac{1}{\mu_{\rm eff}} {\hspace{1pt}}M_{\rm m} \bigg) A_{\rm tot} \,.$$ Next, we use that [@ED1 Eq. 
(3.30)] $$\mu_0 {\hspace{1pt}}j_{\rm tot}(k) = k^2 P_{\rm T}(k) {\hspace{1pt}}A_{\rm tot}(k) \,,$$ where $k^\mu=(\omega/c, {\hspace{1pt}}{\boldsymbol{k}})^{\rm T}$ denotes the four-momentum, $k^2 = k^\mu k_\mu = |{\boldsymbol{k}}|^2 - \omega^2/c^2$, and the [*Minkowskian transverse projector*]{} is given by [@EDFullGF §2.1 and §2.2] $$P_{\rm T}({\boldsymbol{k}}, \omega)= \frac{1}{|{\boldsymbol{k}}|^2 - \omega^2 /c^2} \, \Bigg( \! \begin{array}{cc} |{\boldsymbol{k}}|^2 & -\frac{\omega}{c} {\hspace{1pt}}{\boldsymbol{k}}^{\rm T} \nonumber \\[5pt] \frac{\omega}{c} {\hspace{1pt}}{\boldsymbol{k}} & |{\boldsymbol{k}}|^2 - \frac{\omega^2}{c^2} - {\boldsymbol{k}} {\boldsymbol{k}}^{\rm T} \end{array} \! \Bigg) \,,$$ which is equivalent to $$P_{\rm T}(k) = \frac{1}{k^2} {\hspace{1pt}}\big(M_{\rm e}(k) + M_{\rm m}(k) \big)\,.$$ Together, these formulae imply the identity $$\mu_0 {\hspace{1pt}}j_{\rm tot} = (M_{\rm e} + M_{\rm m}) {\hspace{1pt}}A_{\rm tot} \,.$$ Combining this with Eq.  yields $$\begin{aligned} \mu_0 {\hspace{1pt}}j_{\rm ind} & = \mu_0 {\hspace{1pt}}j_{\rm tot} - \mu_0 {\hspace{1pt}}j_{\rm ext} \\[2pt] & = \bigg((1-\varepsilon_{\rm eff}) {\hspace{1pt}}M_{\rm e} + \bigg( 1 - \frac{1}{\mu_{\rm eff}}\bigg) M_{\rm m} \bigg) {\hspace{1pt}}A_{\rm tot} \,.\end{aligned}$$ We now interpret the term in brackets as the [*proper*]{} fundamental response tensor [@ED1; @Refr; @EDOhm], which is hence given by $$\label{proper_fund} \mu_0 \, \widetilde \chi({\boldsymbol{k}}, \omega) \equiv (1-\varepsilon_{\rm eff}) {\hspace{1pt}}M_{\rm e}({\boldsymbol{k}}, \omega) + \bigg( 1 - \frac{1}{\mu_{\rm eff}}\bigg) M_{\rm m}({\boldsymbol{k}}, \omega) \,.$$ One easily checks that $M_{\rm e}$ and $M_{\rm m}$ separately fulfill the constraints $$k_\mu {\hspace{1pt}}M\indices{^\mu_\nu}(k) = M\indices{^\mu_\nu}(k) {\hspace{1pt}}k^\nu = 0\,,$$ and hence they are completely determined by their respective spatial parts, $$\begin{aligned} {\overset\leftrightarrow{M}}_{\rm e}({\boldsymbol{k}}, 
\omega) & = -\frac{\omega^2}{c^2} {\hspace{1pt}}{\overset\leftrightarrow{1}} \,, \\[3pt] {\overset\leftrightarrow{M}}_{\rm m}({\boldsymbol{k}}, \omega) & = |{\boldsymbol{k}}|^2 {\hspace{1pt}}{\overset\leftrightarrow{1}} - {\boldsymbol{k}} {\boldsymbol{k}}^{\rm T} = |{\boldsymbol{k}}|^2 {\hspace{1pt}}{\overset\leftrightarrow{P}}_{\rm T}({\boldsymbol{k}}) \,,\end{aligned}$$ where ${\overset\leftrightarrow{P}}_{\rm T}({\boldsymbol{k}})$ denotes the [*Cartesian transverse projector*]{} [@EDWave §2.1]. The proper fundamental response tensor therefore fulfills the same constraints, and it is completely determined by the [*proper current response tensor,*]{} $$\mu_0 {\hspace{1pt}}{\overset\leftrightarrow{\widetilde \chi}}({\boldsymbol{k}}, \omega) = (\varepsilon_{\rm eff} - 1) \, \frac{\omega^2}{c^2} {\hspace{1pt}}{\overset\leftrightarrow{1}} + \bigg( 1 - \frac{1}{\mu_{\rm eff}}\bigg) {\hspace{1pt}}|{\boldsymbol{k}}|^2 {\hspace{1pt}}{\overset\leftrightarrow{P}}_{\rm T}({\boldsymbol{k}}) \,.$$ We can write this equivalently as [@EffWW Appendix D.1] $$\label{eq_IsoForm} {\overset\leftrightarrow{\widetilde \chi}}({\boldsymbol{k}}, \omega) = \widetilde \chi_{\rm L}({\boldsymbol{k}}, \omega) {\hspace{1pt}}{\overset\leftrightarrow{P}}_{\rm L}({\boldsymbol{k}}) + \widetilde \chi_{\rm T}({\boldsymbol{k}}, \omega) {\hspace{1pt}}{\overset\leftrightarrow{P}}_{\rm T}({\boldsymbol{k}}) \,,$$ with the [*longitudinal*]{} and [*transverse*]{} proper current response functions $$\begin{aligned} \widetilde \chi_{\rm L}({\boldsymbol{k}}, \omega) & = \varepsilon_0 {\hspace{1pt}}(\varepsilon_{\rm eff} - 1) \, \omega^2 \,, \label{propL} \\[3pt] \widetilde \chi_{\rm T}({\boldsymbol{k}}, \omega) & = \varepsilon_0 {\hspace{1pt}}(\varepsilon_{\rm eff} - 1) \, \omega^2 + \frac{1}{\mu_0} {\hspace{1pt}}\bigg( 1 - \frac{1}{\mu_{\rm eff}}\bigg) {\hspace{1pt}}|{\boldsymbol{k}}|^2 \,. 
\label{propT}\end{aligned}$$ In particular, this shows that the phenomenological model defined by the fundamental response tensor  describes a homogeneous and isotropic system. Thus, Eq.  represents the first central result of this article. It gives the [*microscopic, wavevector- and frequency-dependent (proper) fundamental response tensor*]{} which corresponds to the traditional ansatz defined by Eqs. –. In fact, this microscopic response tensor depends on only two “effective” material constants, $\varepsilon_{\rm eff}$ and $\mu_{\rm eff}$. However, it remains to show that: (i) although these [*material constants*]{} do not coincide with the electric permittivity and the magnetic permeability (in the sense of [*response functions*]{}), they can still be interpreted as their “effective” versions; (ii) the microscopic wave equation leads to a refractive index which is simply given by the product of these effective material parameters, hence $n^2 = \varepsilon_{\rm eff} {\hspace{1pt}}\mu_{\rm eff}$. Effective permittivity and permeability ======================================= For an isotropic system, the dielectric tensor has a form analogous to Eq. . The resulting longitudinal and transverse dielectric functions are related to the corresponding proper current response functions by [@EDWave §5.1] $$\begin{aligned} \varepsilon_{\rm L}({\boldsymbol{k}}, \omega) & = 1 + \frac{1}{\varepsilon_0 {\hspace{1pt}}\omega^2} \, \widetilde \chi_{\rm L}({\boldsymbol{k}}, \omega) \,, \\[3pt] \varepsilon_{\rm T}({\boldsymbol{k}}, \omega) & = 1 + \frac{1}{\varepsilon_0 {\hspace{1pt}}(\omega^2 - c^2 |{\boldsymbol{k}}|^2)} \, \widetilde \chi_{\rm T}({\boldsymbol{k}}, \omega) \,.\end{aligned}$$ Using Eqs.
–, we therefore obtain $$\begin{aligned} \varepsilon_{\rm L}({\boldsymbol{k}}, \omega) & = \varepsilon_{\rm eff} \,, \\[3pt] \varepsilon_{\rm T}({\boldsymbol{k}}, \omega) & = \frac{\varepsilon_{\rm eff} \, \omega^2 - \mu_{\rm eff}^{-1} {\hspace{1pt}}c^2 |{\boldsymbol{k}}|^2}{\omega^2 - c^2 |{\boldsymbol{k}}|^2} \,. \label{epsT}\end{aligned}$$ Hence, both the longitudinal and the transverse dielectric function fulfill $$\lim_{|{\boldsymbol{k}}|\rightarrow 0}\varepsilon_{\rm L/T}({\boldsymbol{k}}, \omega) = \varepsilon_{\rm eff}\,,$$ and thus $\varepsilon_{\rm eff}$ can indeed be interpreted as an “effective” permittivity. In particular, the longitudinal dielectric function is even [*constant*]{} and given by $\varepsilon_{\rm eff}$. We remark, however, that this does not imply a proportionality between the external and the total electric field. Instead, we have the following relations between the longitudinal and transverse components of the respective fields: $$\begin{aligned} {\boldsymbol{E}}_{\rm ext, {\hspace{0.3pt}}L}({\boldsymbol{k}}, \omega) & = \varepsilon_{\rm eff} {\hspace{1pt}}{\boldsymbol{E}}_{\rm tot, {\hspace{0.3pt}}L}({\boldsymbol{k}}, \omega) \,, \\[5pt] {\boldsymbol{E}}_{\rm ext, {\hspace{0.3pt}}T}({\boldsymbol{k}}, \omega) & = \frac{\varepsilon_{\rm eff} {\hspace{1pt}}\omega^2 - \mu_{\rm eff}^{-1} {\hspace{1pt}}c^2 |{\boldsymbol{k}}|^2}{\omega^2 - c^2 |{\boldsymbol{k}}|^2} \, {\boldsymbol{E}}_{\rm tot, {\hspace{0.3pt}}T}({\boldsymbol{k}}, \omega) \,.\end{aligned}$$ In particular, this shows that $\varepsilon_0 {\hspace{0.3pt}}{\boldsymbol{E}}_{\rm ext}$ [*does not coincide with the displacement field ${\boldsymbol{D}}$ used in the phenomenological derivation.*]{} Instead, the Fundamental Field Identification holds only for the respective longitudinal parts, such that the transverse displacement field ${\boldsymbol{D}}_{\rm T}$ remains completely undetermined. 
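These expressions can be verified symbolically. The following sketch (Python with sympy; symbol names are illustrative, and $1/\mu_0 = \varepsilon_0 c^2$ is used to eliminate $\mu_0$) recovers the longitudinal and transverse dielectric functions, and their common macroscopic limit $\varepsilon_{\rm eff}$, directly from the proper current response functions:

```python
import sympy as sp

w, kk, c, eps0, eps_eff, mu_eff = sp.symbols(
    'omega k c epsilon_0 epsilon_eff mu_eff', positive=True)

# Proper current response functions, Eqs. (propL)-(propT), with 1/mu_0 = eps0*c^2
chiL = eps0 * (eps_eff - 1) * w**2
chiT = eps0 * (eps_eff - 1) * w**2 + eps0 * c**2 * (1 - 1/mu_eff) * kk**2

# Longitudinal and transverse dielectric functions
epsL = 1 + chiL / (eps0 * w**2)
epsT = 1 + chiT / (eps0 * (w**2 - c**2 * kk**2))

# epsilon_L is constant and equal to eps_eff
assert sp.simplify(epsL - eps_eff) == 0

# epsilon_T reproduces Eq. (epsT)
target = (eps_eff * w**2 - c**2 * kk**2 / mu_eff) / (w**2 - c**2 * kk**2)
assert sp.simplify(epsT - target) == 0

# Both reduce to eps_eff in the macroscopic limit |k| -> 0
assert sp.simplify(sp.limit(epsT, kk, 0) - eps_eff) == 0
```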
In principle, it would then also be unclear how the corresponding transverse response function can actually be measured. In practice, however, this does not pose any problems since in the microscopic approach, all field quantities are uniquely determined by their respective Maxwell equations. Correspondingly, [*we here consider Eqs. – as a heuristic ansatz, whose sole purpose lies in the deduction of the proper fundamental response tensor defined by Eq. .*]{} The interpretation of the material parameters appearing there as “effective” permittivity and permeability can be justified [*ex post,*]{} i.e., independently of the original macroscopic ansatz. For the electric case, this has already been shown by the above arguments. It remains to prove the analogous result for the magnetic material parameter. Thus, let us next investigate the magnetic properties of the model defined by Eq. . We first note that the [*direct*]{} fundamental response tensor [@Refr §2.3] again has the isotropic form , such that the longitudinal and transverse components can be calculated as [@EDWave §5.1] $$\begin{aligned} \chi_{\rm L}({\boldsymbol{k}}, \omega) & = \frac{\widetilde \chi_{\rm L}({\boldsymbol{k}}, \omega)}{\varepsilon_{\rm L}({\boldsymbol{k}}, \omega)} \,, \\[3pt] \chi_{\rm T}({\boldsymbol{k}}, \omega) & = \frac{\widetilde \chi_{\rm T}({\boldsymbol{k}}, \omega)}{\varepsilon_{\rm T}({\boldsymbol{k}}, \omega)} \,.\end{aligned}$$ From our previous results, we obtain $$\begin{aligned} \chi_{\rm L}({\boldsymbol{k}}, \omega) & = \varepsilon_0 \, \omega^2 {\hspace{1pt}}\bigg( 1 - \frac{1}{\varepsilon_{\rm eff}} \bigg) \,, \label{fundL} \\[5pt] \chi_{\rm T}({\boldsymbol{k}}, \omega) & = \varepsilon_0 {\hspace{1pt}}(\omega^2 - c^2 |{\boldsymbol{k}}|^2) \nonumber \\[1pt] & \quad \, \times \bigg( 1 - \frac{\omega^2 - c^2 |{\boldsymbol{k}}|^2}{\varepsilon_{\rm eff} {\hspace{1pt}}\omega^2 - \mu_{\rm eff}^{-1} {\hspace{1pt}}c^2 |{\boldsymbol{k}}|^2} \bigg) \,.\label{fundT}\end{aligned}$$
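As a cross-check, Eqs. (fundL)–(fundT) follow from dividing the proper response functions by the corresponding dielectric functions; the static limit of the resulting magnetic susceptibility, discussed in the following, comes out as $\mu_{\rm eff}-1$. A sympy sketch (illustrative symbol names):

```python
import sympy as sp

w, kk, c, eps0, eps_eff, mu_eff = sp.symbols(
    'omega k c epsilon_0 epsilon_eff mu_eff', positive=True)

# Proper response functions and dielectric functions from the previous section
chiL_p = eps0 * (eps_eff - 1) * w**2
chiT_p = eps0 * (eps_eff - 1) * w**2 + eps0 * c**2 * (1 - 1/mu_eff) * kk**2
epsL = eps_eff
epsT = (eps_eff * w**2 - c**2 * kk**2 / mu_eff) / (w**2 - c**2 * kk**2)

# Direct response functions chi = chi_proper / epsilon
chiL = sp.simplify(chiL_p / epsL)
chiT = sp.simplify(chiT_p / epsT)

# Eq. (fundL)
assert sp.simplify(chiL - eps0 * w**2 * (1 - 1/eps_eff)) == 0

# Eq. (fundT)
target_T = eps0 * (w**2 - c**2*kk**2) * (
    1 - (w**2 - c**2*kk**2) / (eps_eff*w**2 - c**2*kk**2/mu_eff))
assert sp.simplify(chiT - target_T) == 0

# Magnetic susceptibility chi_m = D0 * chi_T and its static limit mu_eff - 1
D0 = 1 / (eps0 * (c**2 * kk**2 - w**2))
chi_m = sp.simplify(D0 * chiT)
assert sp.simplify(chi_m.subs(w, 0) - (mu_eff - 1)) == 0
```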
Furthermore, with the Green function of the d’Alembert operator given by [@ED1 Eq. (3.9)] $$\mathbbmsl D_0({\boldsymbol{k}}, \omega) = \frac{1}{\varepsilon_0 {\hspace{1pt}}(c^2 |{\boldsymbol{k}}|^2 - \omega^2)} \,,$$ we can write the transverse current response function as $$\label{fundTalt} \chi_{\rm T}({\boldsymbol{k}}, \omega) = \mathbbmsl D_0^{-1}({\boldsymbol{k}}, \omega) {\hspace{1pt}}\bigg( \frac{\omega^2 - c^2 |{\boldsymbol{k}}|^2}{\varepsilon_{\rm eff} {\hspace{1pt}}\omega^2 - \mu_{\rm eff}^{-1} {\hspace{1pt}}c^2 |{\boldsymbol{k}}|^2} - 1 \bigg) \,.$$ In particular, we note that the density response function (see Eq. ) is determined by the longitudinal current response function as [@ED1 Eq. (7.11)] $$\upchi({\boldsymbol{k}}, \omega) = -\frac{|{\boldsymbol{k}}|^2}{\omega^2} {\hspace{1pt}}\chi_{\rm L}({\boldsymbol{k}}, \omega) \,.$$ From Eq. , we therefore obtain $$\upchi({\boldsymbol{k}}, \omega) = \varepsilon_0 {\hspace{1pt}}\bigg( \frac{1}{\varepsilon_{\rm eff}} - 1 \bigg) {\hspace{1pt}}|{\boldsymbol{k}}|^2 \,.$$ Finally, the [*magnetic susceptibility*]{} is determined by the transverse current response function as [@ED1 Eq. (7.9)] $$\chi_{\rm m}({\boldsymbol{k}}, \omega) = \mathbbmsl D_0({\boldsymbol{k}}, \omega) {\hspace{1pt}}\chi_{\rm T}({\boldsymbol{k}}, \omega) \,.$$ From Eq. , we directly obtain $$\label{zw_1} \chi_{\rm m}({\boldsymbol{k}}, \omega) = \frac{\omega^2 - c^2 |{\boldsymbol{k}}|^2}{\varepsilon_{\rm eff} {\hspace{1pt}}\omega^2 - \mu_{\rm eff}^{-1} {\hspace{1pt}}c^2 |{\boldsymbol{k}}|^2} - 1 \,.$$ In particular, the static susceptibility is given by $$\chi_{\rm m}({\boldsymbol{k}}, \omega = 0) {\hspace{1pt}}= {\hspace{1pt}}\mu_{\rm eff} - 1 \,,$$ and this shows that the material constant $\mu_{\rm eff}$ indeed has the interpretation of an effective permeability. Finally, we remark that Eq.  can also be derived directly from Eq.  by using the general identity [@ED1 Eq. 
(6.48)] $$\chi_{\rm m}({\boldsymbol{k}}, \omega) = \mu({\boldsymbol{k}}, \omega) - 1 \,,$$ together with the Universal Response Relation [@Refr Eq. (3.61)] $$\mu({\boldsymbol{k}}, \omega) = \frac{1}{\varepsilon_{\rm T}({\boldsymbol{k}}, \omega)}$$ between transverse response functions. Refractive index ================ On the microscopic level, the fundamental wave equation for transverse electromagnetic oscillations in (isotropic) materials is given in terms of the transverse dielectric function as [@Refr] $$\left(-\frac{\omega^2}{c^2} + |{\boldsymbol{k}}|^2 \right)\varepsilon_{\rm T}({\boldsymbol{k}}, \omega) {\hspace{1pt}}{\boldsymbol{E}}({\boldsymbol{k}},\omega) = 0 \,. \label{eq_MicrWave}$$ The standard wave equation of [*ab initio*]{} materials physics, $$\left(-\frac{\omega^2}{c^2}\varepsilon_{\rm L}({\boldsymbol{k}},\omega) + |{\boldsymbol{k}}|^2\right){\boldsymbol{E}}({\boldsymbol{k}},\omega) = 0 \,,$$ can be obtained from this under the usual assumption of [*coinciding longitudinal and transverse conductivities*]{} (see Ref. [@Refr §4.4]), i.e., $\widetilde \sigma_{\rm L}({\boldsymbol{k}}, \omega)=\widetilde \sigma_{\rm T}({\boldsymbol{k}}, \omega)$. However, from Eqs. – and from the Universal Response Relation $\widetilde \chi({\boldsymbol{k}}, \omega)={\mathrm i}\omega{\hspace{1pt}}\widetilde \sigma({\boldsymbol{k}}, \omega)$ between the current response tensor and the conductivity tensor [@ED2 §3.2.3], we conclude that this assumption is not fulfilled in our case, and hence we have to work directly with the fundamental, microscopic wave equation . We first note that in the vacuum case where $\varepsilon=1$, Eq.  would revert to the free wave equation (in Fourier space). In materials, however, we have $\omega\neq c|{\boldsymbol{k}}|$ such that the prefactor in brackets can be canceled. 
Thus, we obtain the dispersion relation $\omega_{{\boldsymbol{k}}}$ for the propagation of light in the material from the condition [@Refr; @EDLor; @Dolgov] $$\varepsilon_{\rm T}({\boldsymbol{k}}, \omega_{{\boldsymbol{k}}}) = 0 \,.$$ Using our result , this yields $$\omega_{{\boldsymbol{k}}}^2 = \frac{1}{\varepsilon_{\rm eff} {\hspace{0.3pt}}\mu_{\rm eff}} \, c^2 |{\boldsymbol{k}}|^2 \,.$$ Furthermore, the speed of light is given by the phase velocity, $$u_{{\boldsymbol{k}}} = \frac{\omega_{{\boldsymbol{k}}}}{|{\boldsymbol{k}}|} = \frac{c}{\sqrt{\varepsilon_{\rm eff} {\hspace{0.3pt}}\mu_{\rm eff}}} \,,$$ and this implies for the refractive index, $n_{{\boldsymbol{k}}}=c/u_{{\boldsymbol{k}}}$, the standard relation $$n^2 = \varepsilon_{\rm eff} {\hspace{0.3pt}}\mu_{\rm eff} \,.$$ In particular, this implies that the refractive index of our model is [*wavevector independent*]{}. We have thus shown that the standard formula for the refractive index is indeed recovered in this phenomenological model, but in terms of the [*effective*]{} permeability and permittivity. By contrast, the same does not hold true for the corresponding microscopic response functions. Conclusion {#conclusion .unnumbered} ========== We have derived a simple, phenomenological model for the microscopic current response tensor which corresponds to the macroscopic description of media in terms of “effective” permittivity and permeability constants. In particular, we have shown that the microscopic wave equation in media, $\varepsilon_{\rm r, {\hspace{0.3pt}}T}({\boldsymbol{k}},\omega)=0$, which is formulated in terms of the transverse, frequency- and wavevector-dependent dielectric function, yields back the standard equation $n^2=\varepsilon_{\rm eff}{\hspace{0.3pt}}\mu_{\rm eff}$ for the refractive index in terms of these effective material constants, but not in terms of the corresponding microscopic response functions, i.e., $n^2\neq\varepsilon_{\rm r} {\hspace{0.3pt}}\mu_{\rm r}$. 
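The steps above amount to solving $\varepsilon_{\rm T}({\boldsymbol{k}},\omega_{{\boldsymbol{k}}})=0$ for $\omega_{{\boldsymbol{k}}}^2$; since $\omega \neq c|{\boldsymbol{k}}|$ in the material, the zeros of $\varepsilon_{\rm T}$ are the zeros of its numerator. A short symbolic check (sketch):

```python
import sympy as sp

w, kk, c, eps_eff, mu_eff = sp.symbols('omega k c epsilon_eff mu_eff', positive=True)

# Dispersion relation from the numerator of epsilon_T, Eq. (epsT)
wk2 = sp.solve(eps_eff * w**2 - c**2 * kk**2 / mu_eff, w**2)[0]
assert sp.simplify(wk2 - c**2 * kk**2 / (eps_eff * mu_eff)) == 0

# Phase velocity u = omega_k/|k| and refractive index n = c/u
u = sp.sqrt(wk2) / kk
n = sp.simplify(c / u)
assert sp.simplify(n**2 - eps_eff * mu_eff) == 0   # standard relation recovered
```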
Since the microscopic current response tensor is in principle accessible from [*ab initio*]{} calculations, one could check whether for certain materials it reverts in some appropriate limit to the form with simultaneously negative material constants (as they are expected for metamaterials). Thus, our work also provides a criterion for the [*ab initio screening*]{}, i.e., the systematic search for new metamaterials based on modern first-principles calculations. Acknowledgments {#acknowledgments .unnumbered} =============== This research was supported by the DFG grant HO 2422/12-1. R.S. thanks the Institute for Theoretical Physics at TU Bergakademie Freiberg for its hospitality. The authors are grateful to Prof. em. Franz J. Wegner (Universität Heidelberg) for illuminating discussions and critical remarks.
--- address: - 'Army Research Laboratory, Materials and Manufacturing Science Division, Lightweight and Specialty Metals Branch, APG, MD 21014' - 'Center for Advanced Vehicular Systems, Mississippi State University, Mississippi State, MS 39762' - 'Air Force Research Laboratory, Wright-Patterson AFB, OH 45433' - 'School for Engineering of Matter, Transport, and Energy, Arizona State University, Tempe, AZ 85287' author: - 'M. A. Tschopp' - 'J. D. Miller' - 'A. L. Oppedal' - 'K. N. Solanki' title: 'Characterizing the local primary dendrite arm spacing in directionally-solidified dendritic microstructures' --- Introduction ============ Developing an enhanced understanding of mechanical behavior in materials relies upon sufficiently characterizing microstructure details at the relevant length scales that contribute to this behavior. Moreover, to truly enhance the predictive capability of processing-structure-property models that aim to improve material performance requires a quantitative stereological description of the relevant microstructure features and, thereby, the material itself. Predictive models that effectively capture the linkage between processing and properties (through microstructure) can be utilized within an integrated computational materials engineering (ICME) approach to design materials and accelerate their insertion into application. The focus of the present work is on single crystal nickel-based superalloys, which are used in turbine blades within the high temperature section of the modern turbine engine [@Ree2006; @Pol2006]. In single crystal nickel-based superalloys, there are a number of length scales of microstructure that contribute to mechanical behavior, ranging from the $\gamma^\prime$ precipitates to pores and eutectic particles to the dendrites themselves. 
At the largest microstructure length scale in directionally-solidified single crystal microstructures, the features of interest are the dendrites; many features at lower length scales (e.g., eutectic particles, precipitates, etc.) or at similar scales (e.g., porosity, freckle defects, etc.) are strongly associated with the dendrite arm spacing and morphology [@Whi2001; @Ell2004; @Mel2005; @Lam2007; @Bru2012]. Historically, the primary dendrite arm spacing (PDAS) has been found to correlate with processing (e.g., solidification rate) [@McC1981; @Hui2002; @Wan2003; @Mil2012; @Bru2011; @Bru2012] as well as with properties (e.g., creep strength, fatigue properties) [@Wil2008; @Lam2007]. For instance, Lamm and Singer [@Lam2007] produced single crystal nickel-based microstructures (PWA 1483) with a wide range of dendrite arm spacings (250 $\mu$m to 600 $\mu$m) and found that decreasing the mean dendrite arm spacing was associated with an increased high-cycle fatigue life. The fatigue cracks were found to originate at shrinkage porosity, and the largest pores correlated with a large PDAS. The traditional approach for measuring primary dendrite arm spacing in single crystal metals, whereby the number of dendrite cores in a specified area is related to the dendrite arm spacing [@Fle1974; @Jac1976; @McC1981], is given by: $$\lambda = c \sqrt{\frac{A}{n}} \label{lambda}$$ where $\lambda$ is the primary dendrite arm spacing, $A$ is the area analyzed, $n$ is the number of dendrites, and $c$ is a coefficient that depends on the microstructure. McCartney and Hunt [@McC1981] showed that $c=0.5$ for a random array of points, $c=1$ for a square array of points, and $c=1.075$ for a hexagonal array of points; they had to apply a correction to the bulk dendrite arm spacing $\lambda$ because processing conditions caused a change in the local environment of the dendrites.
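For illustration, Eq. \[lambda\] can be applied directly to counted dendrite cores; the sketch below (hypothetical numbers) confirms that $c=1$ recovers the spacing of an ideal square array:

```python
import math

def pdas(area, n_dendrites, c=1.0):
    """Bulk primary dendrite arm spacing, lambda = c * sqrt(A / n)."""
    return c * math.sqrt(area / n_dendrites)

# Ideal square array: 100 dendrites at 300 um spacing tile an area of 100 * 300^2 um^2,
# so with c = 1 the formula returns the spacing itself
a = 300.0                              # spacing in um (illustrative)
assert math.isclose(pdas(100 * a**2, 100, c=1.0), a)

# For the same core count, a hexagonal arrangement uses c = 1.075
lam_hex = pdas(100 * a**2, 100, c=1.075)
assert math.isclose(lam_hex, 1.075 * a)
```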
However, this approach is insufficient for capturing local arm spacings or the dendrite arm spacing distribution, and may pose problems for complex geometries such as turbine blades. In fact, part of the motivation for quantifying the local PDAS is that a narrow distribution (i.e., low standard deviation) of local PDAS values may result in a more homogeneous distribution of interdendritic microstructure features and, more importantly, a narrow distribution of mechanical properties. The research objective herein is to evaluate the capability of some recent approaches, as well as some modified versions of these approaches, for characterizing the local dendrite arm spacing within experimental dendritic microstructures. In this work, an experimental dendritic microstructure is used for this analysis along with three different techniques that are based on the nearest neighbor spacing and/or a Voronoi tessellation of the dendrite cores. Comparison of existing and new metrics with traditional primary dendrite arm spacing metrics is discussed for both local and global measures. The current methods investigated supply statistical information on local spacing and coordination number while introducing a technique for addressing edge effects and examining the parameter sensitivity of these different methods. In comparison to previous work [@Tsc2013], this work introduces and compares the statistical distributions of local dendrite arm spacings for the four methods, for a more quantitative analysis. It was found that augmenting existing techniques with Voronoi tessellations to define the subset of first nearest neighbors or refining existing Voronoi-based techniques resulted in a more physical description of the local dendrite arm spacing. Moreover, for certain cases, the mean local dendrite statistics can adequately approximate the PDAS found with the traditional bulk characterization technique (Eq. \[lambda\]).
Methodology =========== The approach utilized herein to measure the local dendrite arm spacing is based on a Voronoi tessellation of the spatial array of dendrite cores. The following analysis techniques were implemented in MATLAB R2012a (The MathWorks, Inc.). To illustrate how the present method works and differs from some other published methods, we generated a synthetic 5x5 cubic pattern of points with a small degree of noise (100% noise fraction, 0.20$a_0$ noise fraction \[2\]), as shown in Figure \[5x5\] [@Tsc2013]. For the purposes of describing several different methods shown in Figure \[various\_methods\], this synthetic pattern of points can be considered as the cores of primary dendrites. [0.3]{} ![The difference between various methods for defining the nearest neighbors (red dots) and their spacing for a single point (large black dot). (a) Initial 5 x 5 cubic pattern with noise fraction of 100% and noise level of 0.20$a_0$. (b) The Warnken–Reed method with $\alpha$ = 1.5 and $k_{initial} = 3$. The inner circle represents the average spacing, $d_{avg}$, of these neighbors and the outer circle represents the cutoff for adding the next neighbor, $d_{avg} + \alpha d_{std}$. (c) Voronoi tesselation diagram for the points. The potential first nearest neighbors are identified through shared vertices with each point. (d) The modified Warnken–Reed method with $\alpha = 1.5$ and $k_{initial} = 3$, whereby the neighbors are restricted to only those identified using the Voronoi tesselation. (e) Using only shared vertices (and connecting lines forming a polygon) of the Voronoi tesselation to identify the nearest neighbors ($d_{crit}=0.0$). 
(f) Modified tesselation-based technique whereby the nearest neighbors are identified as those with line lengths above a critical threshold fraction of the total perimeter line length $d_{crit}=0.10$ of the tesselated polygon for the point (Reprinted from [@Tsc2013]).[]{data-label="various_methods"}](Figure_1a "fig:"){width="\textwidth"} [0.3]{} ![The difference between various methods for defining the nearest neighbors (red dots) and their spacing for a single point (large black dot). (a) Initial 5 x 5 cubic pattern with noise fraction of 100% and noise level of 0.20$a_0$. (b) The Warnken–Reed method with $\alpha$ = 1.5 and $k_{initial} = 3$. The inner circle represents the average spacing, $d_{avg}$, of these neighbors and the outer circle represents the cutoff for adding the next neighbor, $d_{avg} + \alpha d_{std}$. (c) Voronoi tesselation diagram for the points. The potential first nearest neighbors are identified through shared vertices with each point. (d) The modified Warnken–Reed method with $\alpha = 1.5$ and $k_{initial} = 3$, whereby the neighbors are restricted to only those identified using the Voronoi tesselation. (e) Using only shared vertices (and connecting lines forming a polygon) of the Voronoi tesselation to identify the nearest neighbors ($d_{crit}=0.0$). (f) Modified tesselation-based technique whereby the nearest neighbors are identified as those with line lengths above a critical threshold fraction of the total perimeter line length $d_{crit}=0.10$ of the tesselated polygon for the point (Reprinted from [@Tsc2013]).[]{data-label="various_methods"}](Figure_1b "fig:"){width="\textwidth"} [0.3]{} ![The difference between various methods for defining the nearest neighbors (red dots) and their spacing for a single point (large black dot). (a) Initial 5 x 5 cubic pattern with noise fraction of 100% and noise level of 0.20$a_0$. (b) The Warnken–Reed method with $\alpha$ = 1.5 and $k_{initial} = 3$. 
(c) Voronoi tesselation diagram for the points; the potential first nearest neighbors are identified through shared vertices with each point. (Reprinted from [@Tsc2013]).[]{data-label="various_methods"}](Figure_1c "fig:"){width="\textwidth"} \ [0.3]{} ![(d) The modified Warnken–Reed method with $\alpha = 1.5$ and $k_{initial} = 3$, whereby the neighbors are restricted to only those identified using the Voronoi tesselation.](Figure_1d "fig:"){width="\textwidth"} [0.3]{} ![(e) Using only shared vertices (and connecting lines forming a polygon) of the Voronoi tesselation to identify the nearest neighbors ($d_{crit}=0.0$).](Figure_1e "fig:"){width="\textwidth"} [0.3]{} ![(f) Modified tesselation-based technique whereby the nearest neighbors are identified as those with edge lengths above a critical threshold fraction, $d_{crit}=0.10$, of the total perimeter of the tesselated polygon for the point.](Figure_1f "fig:"){width="\textwidth"}

One such method for measuring the local dendrite arm spacing is the Warnken–Reed method [@War2011; @War2011a]. It calculates the dendrite arm spacing for a single point (black dot) by starting with an initial set of nearest neighbors (the $k_{initial} = 3$ closest points) and iteratively adding potential nearest neighbors that fall within a cutoff distance defined by the already-added neighbors. For instance, the inner circle in Figure \[inner\_circle\] represents the average spacing, $d_{avg}$, of these neighbors and the outer circle represents the cutoff for adding the next neighbor, $d_{avg} + \alpha d_{std}$, where $d_{std}$ is the standard deviation of the nearest-neighbor spacings and $\alpha$ is a parameter that is typically between 1 and 2. Neighbors continue to be added until the cutoff admits no new neighbors. The local coordination number and dendrite arm spacing are then calculated from the added neighbors (shown as red dots).
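The iterative cutoff rule just described can be sketched in a few lines. The following is an illustrative reimplementation, not the authors' code: the function name, the NumPy-based structure, and the 20-neighbor cap follow the description in the text.

```python
import numpy as np

def warnken_reed_spacing(points, i, k_initial=3, alpha=1.5, k_max=20):
    """Iterative Warnken-Reed local spacing for point i (illustrative sketch)."""
    d = np.linalg.norm(points - points[i], axis=1)
    order = np.argsort(d)[1:]            # candidates sorted by distance, excluding i itself
    k = min(k_initial, len(order))
    while k < min(k_max, len(order)):
        d_nn = d[order[:k]]
        cutoff = d_nn.mean() + alpha * d_nn.std()   # d_avg + alpha * d_std
        if d[order[k]] <= cutoff:        # next-closest candidate falls inside the cutoff
            k += 1
        else:
            break
    d_nn = d[order[:k]]
    return d_nn.mean(), k                # local spacing and coordination number
```

For the central point of a perfect square lattice, for example, this returns the lattice spacing with a coordination number of 4, since the second-neighbor shell lies outside the zero-spread cutoff.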
However, if the standard deviation of the nearest-neighbor distances, $d_{std}$, or the parameter $\alpha$ is large, this technique can continue to add neighbors beyond the first nearest neighbors; our implementation stopped after 20 nearest neighbors. A method for restricting the number of candidate neighbors in such a technique is therefore necessary. A simple way of identifying the potential first nearest neighbors is to perform a Voronoi tessellation of the space surrounding the points, as shown in Figure \[voronoi\_tesselation\]. The polygon edges are equidistant between the points contained in the two adjacent polygons, and the triple points (where three edges merge) are equidistant between the points contained in the three adjacent polygons. Therefore, the first nearest neighbors (FNNs, shown as open circles in Fig. \[voronoi\_tesselation\]) correspond to the edges of the central polygon (the one containing the black dot). This subset of points sets the maximum number of nearest neighbors that the central point can have. Several techniques based on the Voronoi-identified FNNs have accordingly been proposed to quantify a local dendrite arm spacing [@Tsc2013]. For instance, the Voronoi Warnken–Reed method (Figure \[mod\_warnken\_reed\]) only considers the Voronoi FNNs as potential nearest neighbors and cannot expand beyond these, alleviating the potential problem of selecting second nearest neighbors or beyond. Another method using the Voronoi FNNs is to consider all of these potential nearest neighbors as nearest neighbors (Figure \[nearest\_neighbor\]), as in Brundidge et al. [@Bru2011]. Unfortunately, this approach is sensitive to small perturbations in the spatial positions of the neighbors.
For instance, if the lower right-hand neighbor in Figure \[nearest\_neighbor\] moves away from the central point, it no longer shares an edge with the polygon containing the black dot; in this scenario, the two adjacent polygons on either side effectively “pinch” off this neighbor. This behavior has a physical basis: the two adjacent dendrite cores mainly compete with the central core, while the lower right core has a much less prominent effect on it. The last method, which is examined in the present paper, utilizes a criterion based on the edge lengths of the Voronoi polygon. In Figure \[mod\_tesselation\], those neighbors with edge lengths less than a critical fraction, $d_{crit}$, of the total polygon perimeter are excluded as nearest neighbors (e.g., 10% in Figure \[mod\_tesselation\]). In the present study, the local dendrite arm spacing statistics are evaluated using these four techniques: Warnken–Reed, Voronoi Warnken–Reed, and the Voronoi technique with ($d_{crit}>0$) and without ($d_{crit}=0$) a line length threshold. As an example of a more disordered structure, Figure \[various\_methods2\] shows the four different methods for a different configuration of surrounding points (dendrite cores). In Figure \[inner\_circle2\], the iterative Warnken–Reed method continues to add neighbors unphysically beyond the first nearest neighbors because the initial three distances produce a large $d_{std}$. The Voronoi-modified version in Figure \[mod\_warnken\_reed2\] stops at four nearest neighbors despite the fact that several points lie within the outer boundary computed by this method. The Voronoi method with $d_{crit}=0.0$ clearly overestimates the number of nearest neighbors, while the four nearest neighbors identified with $d_{crit}=0.10$ (Figure \[mod\_tesselation2\]) perhaps offer a better approximation.
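As a sketch of the edge-length threshold criterion, the neighbors of a single point can be extracted with `scipy.spatial.Voronoi`, whose `ridge_points` and `ridge_vertices` arrays give, for each Voronoi edge, the pair of input points it separates and its endpoints. The function below is illustrative rather than the reference implementation, and it assumes the chosen point has a bounded Voronoi cell (i.e., an interior point).

```python
import numpy as np
from scipy.spatial import Voronoi

def voronoi_neighbors_with_threshold(points, i, d_crit=0.10):
    """Neighbors of point i whose shared Voronoi edge is at least a fraction
    d_crit of the cell perimeter (illustrative; assumes a bounded cell)."""
    vor = Voronoi(points)
    edges = []                                    # (neighbor index, edge length) pairs
    for (p, q), verts in zip(vor.ridge_points, vor.ridge_vertices):
        if i in (p, q) and -1 not in verts:       # finite ridges touching point i
            v0, v1 = vor.vertices[verts[0]], vor.vertices[verts[1]]
            edges.append((q if p == i else p, np.linalg.norm(v1 - v0)))
    perimeter = sum(length for _, length in edges)
    return [n for n, length in edges if length >= d_crit * perimeter]
```

On a lightly jittered square lattice, the long cell edges facing the four orthogonal neighbors survive the 10% threshold while the tiny corner edges facing diagonal neighbors are excluded.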
Interestingly, comparing the methods in Figures \[mod\_warnken\_reed2\] and \[mod\_tesselation2\], the coordination number is the same but the nearest neighbors identified are different. This is because the Warnken–Reed method is distance-based and selects the four closest neighbors, whereas the modified Voronoi technique selects neighbors by the edge lengths of the Voronoi polygon, which are not necessarily the closest points.

[0.3]{} ![The difference between various methods for defining the nearest neighbors (red dots) and their spacing for a single point (large black dot) with a distorted local environment. Parts (a)-(f) are as Figure 1: (a) initial pattern, (b) the Warnken–Reed method with $\alpha$ = 1.5 and $k_{initial} = 3$, (c) the Voronoi tesselation diagram for the points, (d) the Voronoi-modified Warnken–Reed method with $\alpha = 1.5$ and $k_{initial} = 3$, (e) the Voronoi method ($d_{crit}=0.0$), and (f) the modified tesselation-based technique with $d_{crit}=0.10$.[]{data-label="various_methods2"}](Figure_2a "fig:"){width="\textwidth"} [0.3]{} ![(b) The Warnken–Reed method with $\alpha$ = 1.5 and $k_{initial} = 3$.](Figure_2b "fig:"){width="\textwidth"} [0.3]{} ![(c) The Voronoi tesselation diagram for the points.](Figure_2c "fig:"){width="\textwidth"} \ [0.3]{} ![(d) The Voronoi-modified Warnken–Reed method with $\alpha = 1.5$ and $k_{initial} = 3$.](Figure_2d "fig:"){width="\textwidth"} [0.3]{} ![(e) The Voronoi method ($d_{crit}=0.0$).](Figure_2e "fig:"){width="\textwidth"} [0.3]{} ![(f) The modified tesselation-based technique with $d_{crit}=0.10$.](Figure_2f "fig:"){width="\textwidth"}

The traditional PDAS metric does not consider the order or disorder of the dendrites within the microstructure. Figure \[various\_methods2\] illustrates why a local metric for PDAS may be needed. For the fields of view given in Figs. \[5x5\] and \[5x52\], the bulk PDAS metric would be the same since the number of dendrites $n$ and the area $A$ are equal (see Eq. \[lambda\]). However, the disorder of the dendritic structure in the case of Fig. \[various\_methods2\] may yield (i) a more uneven distribution of solute elements, (ii) the formation of second phase particles, (iii) the formation of gas or shrinkage porosity, or (iv) the lateral growth of secondary dendrite arms. Hence, in addition to the bulk PDAS values, understanding how processing conditions impact the disorder of the dendritic structure may be important for understanding the properties of directionally-solidified alloys. Other techniques exist for quantifying the homogeneity or heterogeneity of primary dendrite arm spacing in directionally-solidified dendritic microstructures. For instance, the minimal spanning tree (MST) method [@Dus1986] provides a statistical analysis of the disorder in a system of points by connecting all points with the shortest possible line segments. The mean distance of all line segments ($m$) and their standard deviation ($\sigma$) then characterize the disorder of the system, and casting these values into an $m$–$\sigma$ design space allows for comparison between different point systems [@Dus1986].
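The MST edge-length statistics can be sketched with SciPy's graph routines. This is an illustrative sketch only; note that `minimum_spanning_tree` treats zero entries as missing edges, so coincident cores would need special handling.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_statistics(points):
    """Mean and standard deviation of the minimal-spanning-tree edge lengths."""
    dist = squareform(pdist(points))     # dense pairwise distance matrix
    mst = minimum_spanning_tree(dist)    # sparse result holding the n-1 MST edges
    lengths = mst.data                   # edge weights of the tree
    return lengths.mean(), lengths.std()
```

For three collinear points at $x = 0, 1, 3$ the tree consists of segments of length 1 and 2, giving $m = 1.5$ and $\sigma = 0.5$.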
This has been effectively applied to characterize the mean dendrite arm spacing, the PDAS distribution, and the disorder, first in Pb–Tl alloys [@Bil1991] and subsequently in other alloy systems [e.g., @Tew2002; @Hui2002; @Pen2013]. As an example of this technique, Figure \[MST\] plots the dendrite cores and connecting line segments for the single crystal nickel-based superalloy micrograph used in this study (Figure \[sx\_nickel\]). Other methods, such as radial distribution functions, fast Fourier transforms, and correlation functions, can also be used to characterize the dendrite arm spacing distribution. However, these approaches are not intended for local characterization of the dendrite arm spacing and are not as effective for correlating the local spacing with local microstructure features, as shown herein. Moreover, these techniques do not quantify the number of nearest neighbors and are often coupled with Voronoi polygons to compute the nearest-neighbor distributions. Rather, these analysis methods are more effective at characterizing and comparing the homogeneity/heterogeneity of the dendritic structure between different processing conditions. Hence, these techniques receive only limited discussion in the present work.

Results
=======

Application to dendritic microstructure
---------------------------------------

A micrograph of a directionally-solidified single crystal nickel-based superalloy microstructure that is polished and imaged perpendicular to the solidification direction is shown in Figure \[sx\_nickel\]. This microstructure was produced using the liquid metal cooling technique, as described in Miller [@Mil2011] and Elliott et al. [@Ell2004]. First, the dendrite cores were identified manually (white and black dots); automated methods to identify dendrite cores can be invaluable for future large-scale analysis [@Tsc2010a; @Tsc2010b]. Note that the white particles in this image are eutectic particles.
A total of 393 dendrite cores are contained in this image over an area of 24.25 mm$^2$, giving a PDAS of 248.4 $\mu$m using $c=1$ (Equation \[lambda\]). The remainder of the analysis uses this micrograph as a template for characterizing the local dendrite arm spacing.

Accounting for image/part edge effects
--------------------------------------

The ability to handle edge effects when computing local dendrite arm spacings is vital for quantifying statistics in thin sections, such as the wall of an airfoil blade that may contain only 1–3 dendrite cores across the section [e.g., @Tsc2010a; @Tsc2010b]. As a first example of one such technique, we have used a convex hull of the dendrite cores in Figure \[sx\_nickel\] to identify “edge” dendrite cores and quantify the dendrite arm spacing. The dendrite core locations are first extracted from the experimental image, as shown in Figure \[sx\_nickel2\]. Then, a convex hull is generated around the points; this is the minimum convex area that contains all the points. Next, the edge points (white dots in Fig. \[sx\_nickel\]) are identified by finding those points with Voronoi vertices that lie outside of the convex hull (dotted blue line in Figure \[hulla\]). Then, to apply Voronoi-based techniques to these points, a new polygon is generated by intersecting the initial polygon from the Voronoi tessellation with the convex hull; the new polygons of the edge dendrite cores are colored red in Figure \[hulla\] to distinguish them from those of the bulk dendrite cores. The polygons belonging to the interior and edge dendrites are shown in Figures \[hullb\] and \[hullc\], with a random coloring scheme used to delineate the different polygons. Last, the neighbors can be calculated using either a new criterion or the same criterion used for interior points.
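The identification step above (flagging cores whose Voronoi cells are unbounded or extend beyond the convex hull) might be sketched as follows. The polygon-clipping step is omitted, and the function name and tolerance are ours, not from a published implementation.

```python
import numpy as np
from scipy.spatial import ConvexHull, Voronoi

def edge_core_indices(points):
    """Indices of 'edge' dendrite cores: cores whose Voronoi cell is unbounded
    or has a vertex outside the convex hull of the cores (illustrative sketch)."""
    hull = ConvexHull(points)
    vor = Voronoi(points)
    # A point x lies inside the hull iff A @ x + b <= 0 for every facet equation.
    A, b = hull.equations[:, :-1], hull.equations[:, -1]
    edge = set()
    for i, region_index in enumerate(vor.point_region):
        region = vor.regions[region_index]
        if -1 in region:                       # unbounded Voronoi cell
            edge.add(i)
            continue
        for v in vor.vertices[region]:
            if np.any(A @ v + b > 1e-9):       # a cell vertex lies outside the hull
                edge.add(i)
                break
    return edge
```

For a jittered 5 x 5 grid, for example, this flags exactly the 16 perimeter cores and leaves the 9 interior cores unflagged.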
For the present analysis, the same criterion (Voronoi polygon with an edge length threshold) was used for all points, although only the interior dendrite cores are used to compare statistics with the other techniques and with the bulk PDAS values. More complicated techniques are needed to deal with complex geometries that include concave character and internal passages in order to eventually apply these methods to structures such as turbine blades. Multiple instantiations of microstructures with edge effects can shed light on the appropriate method for determining the local PDAS at edges, which may differ from that used in the interior.

[0.75]{} ![(a) Voronoi tessellation of dendritic structure from Figure \[sx\_nickel\]. The dotted blue line (surrounding the points) denotes the convex hull of the dendrite cores and the red polygons delineate the cores that intersect the convex hull. The interior and edge dendrites are shown in (b) and (c), respectively, with each polygon colored differently as a guide to the eye.[]{data-label="sx_nickel2"}](Figure_5a "fig:"){width="\textwidth"} [0.475]{} ![(b) The interior dendrite polygons.](Figure_5b "fig:"){width="\textwidth"} [0.475]{} ![(c) The edge dendrite polygons.](Figure_5c "fig:"){width="\textwidth"}

Spatial distribution of local primary dendrite arm spacings
-----------------------------------------------------------

[0.45]{} ![(a) Local dendrite arm spacing ($\mu$m) and (b) coordination number based on the Voronoi tessellation with edge length threshold of $d_{crit}$=0.12 or 12%.[]{data-label="dendrite"}](Figure_6a "fig:"){width="\textwidth"} [0.45]{} ![See caption above.](Figure_6b "fig:"){width="\textwidth"} \ [0.45]{} ![See caption above.](Figure_6c "fig:"){width="\textwidth"} [0.45]{} ![See caption above.](Figure_6d "fig:"){width="\textwidth"}

The spatial distribution of the local dendrite arm spacing and coordination number can provide insight into the order/disorder of the primary dendrites and can identify regions that potentially contain more (or fewer) interdendritic features and/or have different properties. For instance, the primary dendrite arm spacing and coordination number for the directionally-solidified superalloy micrograph (Figure \[sx\_nickel\]) are shown in Figure \[dendrite\]. In this example, we used the Voronoi tessellation-based technique with an edge length threshold of $d_{crit} = 0.12$.
Dendrite cores with a local PDAS similar to the mean PDAS of the bulk (248.4 $\mu$m) are colored white, and those with a PDAS above (below) the mean are red (blue); the lower and upper bounds of the colorbar are -25% and +25% of the mean PDAS value, respectively. In general, the exterior dendrite cores have a similar PDAS to the interior dendrite cores using this technique. A similar colorbar is used for the coordination number as well. As would be expected, the exterior dendrite cores tend to have a lower coordination number than the interior dendrite cores, with a few having only 2 nearest neighbors. However, the edge dendrite cores with a low coordination number are not consistently above or below the mean PDAS (i.e., they do not significantly bias the statistics from the edge dendrite cores). Future work will examine which techniques are most applicable for characterizing local dendrite arm spacings and coordination numbers for dendrite cores on free surfaces. It is envisioned that sectioning large numbers of instantiations of synthetically-generated microstructures with known bulk dendrite arm spacings can be used to understand the bias introduced by edge effects and to identify the best techniques for quantifying the local spacing.
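One possible reading of the synthetic patterns used earlier (e.g., the "5 x 5 cubic pattern with noise fraction of 100% and noise level of 0.20$a_0$" of Figure \[various\_methods\]) is sketched below; the exact noise model used to generate those figures is an assumption on our part.

```python
import numpy as np

def synthetic_cores(n=5, a0=1.0, noise_fraction=1.0, noise_level=0.20, seed=0):
    """Synthetic dendrite-core pattern (a sketch): an n x n cubic lattice with
    lattice parameter a0, where a random fraction of the cores is displaced by
    up to noise_level * a0 in each coordinate."""
    rng = np.random.default_rng(seed)
    pts = np.array([(x, y) for y in range(n) for x in range(n)], dtype=float) * a0
    move = rng.random(len(pts)) < noise_fraction          # which cores to perturb
    pts[move] += rng.uniform(-noise_level * a0, noise_level * a0, (move.sum(), 2))
    return pts
```

Feeding many seeded instantiations of such patterns through the local-spacing techniques is one way to study the edge-effect bias discussed above.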
[0.75]{} ![Local dendrite arm spacing ($\mu$m) for the three techniques not shown in Figure \[dendrite\]: (a) Voronoi tessellation with edge length threshold of $d_{crit}$=0.0, (b) Warnken–Reed technique with $\alpha=2.0$, and (c) Voronoi Warnken–Reed with $\alpha=2.0$.[]{data-label="dendrite2"}](Figure_7a "fig:"){width="\textwidth"} \ [0.45]{} ![See caption above.](Figure_7b "fig:"){width="\textwidth"} [0.45]{} ![See caption above.](Figure_7c "fig:"){width="\textwidth"} [0.45]{} ![See caption above.](Figure_7d "fig:"){width="\textwidth"}

The local dendrite arm spacing for the remaining three techniques is shown in Figure \[dendrite2\], using the same color bar for the local PDAS as in Figure \[dendrite\]. First, notice that the Voronoi tessellation-based technique with an edge length threshold of $d_{crit} = 0.0$ yields a much larger fraction of dendrite cores with a local PDAS above the bulk PDAS than below it (83.5% above 248.4 $\mu$m); the local primary dendrite arm spacing is clearly overpredicted in this case. The Warnken–Reed and Voronoi Warnken–Reed methods are shown in Figures \[wr\] and \[mwr\].
At first glance, a majority of the local PDAS values are very similar between the two methods ($\sim$79%). However, $\sim$21% of the cores show a difference between the two techniques, which is caused by the original Warnken–Reed method using neighbors outside of the FNNs identified from the Voronoi polygons. In every case, the Warnken–Reed method resulted in higher local PDAS values than the Voronoi Warnken–Reed method, as would be expected since the former is purely distance-based and subsequent additions can only increase the local PDAS.

[0.75]{} ![Difference in the local dendrite arm spacing ($\mu$m) between the Warnken–Reed and Voronoi Warnken–Reed techniques with $\alpha=2.0$. The Warnken–Reed technique had a greater PDAS value in every case ($\sim$21% of dendrite cores are different).[]{data-label="dendrite3"}](Figure_8a "fig:"){width="\textwidth"} \ [0.75]{} ![See caption above.](Figure_8b "fig:"){width="\textwidth"}

Figure \[dendrite3\] shows the difference in local PDAS values between the two techniques. In several cases, the difference is greater than 250 $\mu$m and/or 100% of the PDAS value quantified by the Voronoi Warnken–Reed method. The differing dendrite cores occur in approximately equal percentages among edge and interior dendrite cores. In some cases, it is apparent that one of the three closest dendrite cores is significantly closer or farther away than the other two, resulting in a larger standard deviation $d_{std}$ and a greater chance of adding multiple neighbors; this case is similar to that shown in Figure \[various\_methods2\].
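The Voronoi Warnken–Reed variant simply runs the same cutoff iteration over the Voronoi FNN candidates only. An illustrative sketch (function name and structure are ours, following the description in the text):

```python
import numpy as np
from scipy.spatial import Voronoi

def voronoi_warnken_reed(points, i, k_initial=3, alpha=2.0):
    """Warnken-Reed cutoff iteration restricted to the Voronoi first nearest
    neighbors of point i (illustrative sketch)."""
    vor = Voronoi(points)
    cand = [q if p == i else p
            for p, q in vor.ridge_points if i in (p, q)]   # Voronoi FNN candidates
    d = np.linalg.norm(points[cand] - points[i], axis=1)
    order = np.argsort(d)
    k = min(k_initial, len(cand))
    while k < len(cand):                                   # can never exceed the FNN set
        d_nn = d[order[:k]]
        if d[order[k]] <= d_nn.mean() + alpha * d_nn.std():
            k += 1
        else:
            break
    return d[order[:k]].mean(), k
```

For a central point with four equidistant Voronoi neighbors, both this and the unrestricted method agree; they differ when the distance-based cutoff would reach past the FNN set.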
Local primary dendrite arm spacing statistics
---------------------------------------------

The local dendrite arm spacing statistics are also calculated for the interior dendrite cores to compare with the traditional PDAS measurement. The cumulative distribution function of the local dendrite arm spacing is shown in Figure \[probability\] for the four techniques over a range of parameter values, which are given in the legend. The bulk PDAS measurement is shown as a vertical black line and the hexagonal star shows the 50$^\textrm{th}$ percentile intersection point. The local dendrite arm spacing distributions are characterized in terms of mean, standard deviation, skewness, and kurtosis (Table \[table1\]), while the coordination number distributions are characterized in terms of their mean and the percentages of 3, 4, 5, 6, and 7+ nearest neighbors (Table \[table2\]). The skewness and kurtosis measure the asymmetry of the distribution and the sharpness of the peak/thickness of the tail, respectively; they are 0 and 3, respectively, for a normal distribution.

[0.45]{} ![Probability distribution functions for the various local characterization methods, compared for the internal dendrites within the dendritic microstructure shown in Figure \[sx\_nickel\]. The four different techniques are compared with the bulk PDAS for a range of parameter values. The upper bound of the parameter range for each technique is shown as a dotted line. To facilitate the comparison, the Warnken–Reed and the Voronoi technique ($d_{crit}=0.0$) are shown in (a), and the remaining Voronoi-modified techniques are shown in (b).[]{data-label="probability"}](Figure_9a "fig:"){width="\textwidth"} [0.45]{} ![(b) See caption above.](Figure_9b "fig:"){width="\textwidth"}

There are distinct differences between the local dendrite arm spacing distributions calculated by the four techniques (Figure \[probability\], Table \[table1\]). The Warnken–Reed and Voronoi Warnken–Reed methods are compared first. For the Warnken–Reed method, the PDAS distribution is shifted towards large PDAS values at high $\alpha$ values (a positive skewness gives a long tail) and has a sharper peak and a longer, fatter tail (high kurtosis) than the other methods. This skewness is caused by an overestimation of the number of nearest neighbors in some cases, due to large values of either $d_{std}$ or the parameter $\alpha$. Hence, while the calculated mean PDAS can approach the bulk-measured value of 248.4 $\mu$m (within 0.1% for $\alpha$=1.8), this mean is highly sensitive to the large PDAS values. The overprediction of nearest neighbors, and its effect on the local PDAS distribution, is also apparent when comparing with the Voronoi Warnken–Reed method, in which the potential nearest neighbors are restricted to the FNNs defined by the Voronoi polygon. In this case, the long tail is absent and the skewness/kurtosis of the distribution tend more towards normality. However, the calculated mean PDAS with this method tends to underestimate the bulk-measured PDAS.
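When reproducing such distribution statistics, note the kurtosis convention: the text uses the Pearson definition (3 for a normal distribution), whereas `scipy.stats.kurtosis`, for example, defaults to the Fisher (excess) form. A quick check on a synthetic near-normal sample (the mean and spread are loosely modeled on the bulk values above and are purely illustrative):

```python
import numpy as np
from scipy import stats

# Synthetic near-normal spacing sample, for illustration only.
rng = np.random.default_rng(1)
spacings = rng.normal(248.4, 26.0, size=100_000)

skewness = stats.skew(spacings)                     # ~0 for a normal sample
kurtosis = stats.kurtosis(spacings, fisher=False)   # Pearson convention: ~3
```

With the default `fisher=True`, the same sample would instead report an excess kurtosis near 0.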
While the maximum number of nearest neighbors (8 for $\alpha \ge 1.2$) is more realistic, a large percentage of dendrite cores are predicted to have only 3 nearest neighbors, even for a large $\alpha$ parameter (48.6% for $\alpha =2.0$). It is also interesting that increasing the $\alpha$ parameter in the Voronoi Warnken–Reed method tends to shift the slope of the probability distribution function without affecting either the minimum or maximum local dendrite arm spacings. For comparison, the minimal spanning tree method (Fig. \[MST\]) was also included in Table \[table1\]. Not surprisingly, the mean distance of the connecting line segments is much shorter than the bulk PDAS calculated using Eq. \[lambda\] with $c=1$; recall that the MST is composed of the shortest line segments that connect all dendrites. Both the MST standard deviation and kurtosis values are larger than those of the Voronoi tesselation method (for all $d_{crit}$) and the Voronoi Warnken–Reed method (for all $\alpha$), indicating a wider distribution and a larger deviation from normality (kurtosis = 3). Moreover, the distribution is skewed towards a larger tail at lower distances (negative skewness), unlike the other techniques, which again is associated with the selection of the shortest line segments to characterize the spacing.

| Method | Parameter | Mean ($\mu$m) | Diff. from bulk (%) | Std. dev. ($\mu$m) | Skewness | Kurtosis |
|---|---|---|---|---|---|---|
| Bulk (Eq. \[lambda\], $c=1$) | - | 248.4 | 0.0 | - | - | - |
| Voronoi Tesselation | $d_{crit}$ = 0.00 | 272.9 | 9.9 | 28.0 | 0.1 | 3.3 |
| Voronoi Tesselation | $d_{crit}$ = 0.02 | 270.0 | 8.7 | 27.2 | 0.1 | 3.2 |
| Voronoi Tesselation | $d_{crit}$ = 0.04 | 266.4 | 7.2 | 26.8 | 0.1 | 3.1 |
| Voronoi Tesselation | $d_{crit}$ = 0.06 | 263.0 | 5.9 | 25.7 | 0.1 | 3.1 |
| Voronoi Tesselation | $d_{crit}$ = 0.08 | 258.4 | 4.0 | 26.1 | 0.2 | 3.1 |
| Voronoi Tesselation | $d_{crit}$ = 0.10 | 253.3 | 2.0 | 25.3 | 0.2 | 3.0 |
| Voronoi Tesselation | $d_{crit}$ = 0.12 | 247.6 | -0.3 | 26.0 | 0.2 | 2.9 |
| Voronoi Warnken–Reed | $\alpha$ = 1.0 | 230.0 | -7.4 | 25.0 | 0.4 | 3.5 |
| Voronoi Warnken–Reed | $\alpha$ = 1.2 | 230.9 | -7.0 | 26.0 | 0.4 | 3.4 |
| Voronoi Warnken–Reed | $\alpha$ = 1.4 | 232.7 | -6.3 | 27.4 | 0.4 | 3.2 |
| Voronoi Warnken–Reed | $\alpha$ = 1.6 | 236.1 | -5.0 | 29.5 | 0.4 | 3.1 |
| Voronoi Warnken–Reed | $\alpha$ = 1.8 | 239.0 | -3.8 | 30.5 | 0.4 | 3.0 |
| Voronoi Warnken–Reed | $\alpha$ = 2.0 | 242.2 | -2.5 | 31.8 | 0.3 | 3.0 |
| Warnken–Reed | $\alpha$ = 1.0 | 230.1 | -7.4 | 25.1 | 0.3 | 3.4 |
| Warnken–Reed | $\alpha$ = 1.2 | 231.8 | -6.7 | 26.9 | 0.5 | 3.5 |
| Warnken–Reed | $\alpha$ = 1.4 | 234.5 | -5.6 | 29.4 | 0.6 | 3.8 |
| Warnken–Reed | $\alpha$ = 1.6 | 239.2 | -3.7 | 33.0 | 0.8 | 4.0 |
| Warnken–Reed | $\alpha$ = 1.8 | 248.1 | -0.1 | 48.1 | 1.9 | 8.3 |
| Warnken–Reed | $\alpha$ = 2.0 | 259.2 | 4.3 | 64.2 | 1.9 | 6.5 |
| Minimal spanning tree | N/A | 215.2 | -13.4 | 34.1 | -0.5 | 4.8 |

: Local dendrite arm spacing statistics (mean, percent difference from the bulk PDAS, standard deviation, skewness, and kurtosis) for the bulk measurement and the local techniques.[]{data-label="table1"}

The Voronoi tessellation techniques are also compared. First, quantifying the coordination number and the local PDAS values using all FNNs identified by the Voronoi tessellation polygons ($d_{crit}=0$) clearly overestimates both measures; the mean PDAS is $\sim$10% off from the bulk-measured value and $\sim$20% of dendrite cores have more than 6 nearest neighbors.
As the edge length threshold parameter increases, fewer nearest neighbors are identified and the calculated mean PDAS approaches the bulk-measured value (within 0.3% for $d_{crit}=0.12$). For $d_{crit}=0.12$, the majority of dendrite cores have 4 nearest neighbors ($>$50%), followed by 5 nearest neighbors (26.9%) and 3 nearest neighbors (17.3%). Moreover, the local PDAS distribution has a low skewness (0.2) and a kurtosis of 2.9, indicating an approximately normal distribution. In general, the Voronoi tessellation-based technique with an edge length threshold of $d_{crit}=0.12$ tends to give the best agreement in terms of both the bulk-measured PDAS and the coordination number. Furthermore, this technique allows for calculating the local PDAS value and the local PDAS distribution, which may be important for assessing the homogeneity of dendrite growth and/or for identifying local regions where the growth conditions/properties differ from the norm.

| Method | Parameter | 3 NN (%) | 4 NN (%) | 5 NN (%) | 6 NN (%) | 7+ NN (%) | Mean CN |
|---|---|---|---|---|---|---|---|
| Voronoi Tesselation | $d_{crit}$ = 0.00 | 0.0 | 2.5 | 20.4 | 57.3 | 19.8 | 5.98 |
| Voronoi Tesselation | $d_{crit}$ = 0.02 | 0.0 | 4.0 | 26.0 | 57.0 | 13.0 | 5.80 |
| Voronoi Tesselation | $d_{crit}$ = 0.04 | 0.0 | 7.4 | 34.7 | 51.4 | 6.5 | 5.57 |
| Voronoi Tesselation | $d_{crit}$ = 0.06 | 0.0 | 11.8 | 45.2 | 39.9 | 3.1 | 5.35 |
| Voronoi Tesselation | $d_{crit}$ = 0.08 | 1.5 | 24.1 | 49.2 | 24.8 | 0.3 | 4.98 |
| Voronoi Tesselation | $d_{crit}$ = 0.10 | 5.3 | 43.0 | 41.5 | 10.2 | 0.0 | 4.57 |
| Voronoi Tesselation | $d_{crit}$ = 0.12 | 17.3 | 52.3 | 26.9 | 3.4 | 0.0 | 4.16 |
| Voronoi Warnken–Reed | $\alpha$ = 1.0 | 94.1 | 4.6 | 0.9 | 0.0 | 0.3 | 3.08 |
| Voronoi Warnken–Reed | $\alpha$ = 1.2 | 87.6 | 8.7 | 2.8 | 0.3 | 0.6 | 3.18 |
| Voronoi Warnken–Reed | $\alpha$ = 1.4 | 79.3 | 11.8 | 5.3 | 2.5 | 1.2 | 3.35 |
| Voronoi Warnken–Reed | $\alpha$ = 1.6 | 65.9 | 18.0 | 8.0 | 5.3 | 2.8 | 3.62 |
| Voronoi Warnken–Reed | $\alpha$ = 1.8 | 56.3 | 19.8 | 12.1 | 8.4 | 3.4 | 3.84 |
| Voronoi Warnken–Reed | $\alpha$ = 2.0 | 48.6 | 19.8 | 14.9 | 11.5 | 5.3 | 4.07 |
| Warnken–Reed | $\alpha$ = 1.0 | 91.6 | 7.1 | 0.9 | 0.0 | 0.3 | 3.10 |
| Warnken–Reed | $\alpha$ = 1.2 | 83.9 | 9.6 | 4.0 | 1.2 | 1.2 | 3.27 |
| Warnken–Reed | $\alpha$ = 1.4 | 72.8 | 14.6 | 7.1 | 2.8 | 2.8 | 3.52 |
| Warnken–Reed | $\alpha$ = 1.6 | 58.5 | 21.7 | 9.0 | 4.0 | 6.8 | 3.89 |
| Warnken–Reed | $\alpha$ = 1.8 | 52.0 | 19.2 | 10.8 | 6.2 | 11.8 | 4.63 |
| Warnken–Reed | $\alpha$ = 2.0 | 44.0 | 19.2 | 12.1 | 7.1 | 17.6 | 5.55 |

: Coordination number (CN) statistics, as percentages of dendrite cores with 3, 4, 5, 6, and 7+ nearest neighbors (NN), for the local techniques.[]{data-label="table2"}

Correlation with interdendritic features
----------------------------------------

The relationship between the occurrence of interdendritic features (e.g., pores or eutectic particles) and the local dendrite arm spacing (or the distance from cores, etc.) can provide insight into the importance of quantifying the local spacings.
We have examined how these metrics may relate to the formation of eutectic particles in this work by first segmenting the interdendritic particles and then computing probability distribution functions. The eutectic particles in Figure \[sx\_nickel\] were segmented using the following process. First, the intensity of the micrograph was leveled using a cubic polynomial with interaction terms; this step ensured that there was no shift in contrast from one side of the micrograph to the other (due to uneven etching, etc.). The threshold intensity was then selected by maximizing the difference in the mean intensity between the two distributions (eutectic particle and matrix). Then, a size threshold was enforced by discarding eutectic particles with fewer than 5 pixels (1 pixel $\sim{1.7}$ $\mu$m, i.e., 5 pixels $\approx 15.2$ $\mu$m$^2$). As an example of the segmentation, Figure \[subimages\_a\] shows a 1 mm x 1 mm region from Figure \[sx\_nickel\] and Figure \[subimages\_b\] shows the corresponding binary image of the segmented eutectic particles (in white). The Euclidean distance to the nearest dendrite core and the nearest Voronoi vertex was then calculated for each pixel within the micrograph. The Euclidean distance is the distance from each pixel to the nearest feature, in this case either a dendrite core centroid or a Voronoi vertex, and repeating this calculation over all pixels within the image creates a distance map. As an example, Figure \[subimages\_c\] shows the Euclidean distance map for the same 1 mm x 1 mm area utilizing the dendrite core centroids identified in Figure \[sx\_nickel\]. Darker intensity indicates shorter Euclidean distances to the dendrite cores, and the lightest pixels between the dendrite cores correspond to the boundaries of the Voronoi tessellation.
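The leveling, thresholding, and distance-map steps above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code: the threshold search below uses Otsu's population-weighted criterion as a robust stand-in for the raw difference of class means, and the seed-point coordinates are hypothetical inputs.

```python
import numpy as np
from scipy import ndimage

def level_intensity(img):
    """Remove large-scale shading by subtracting a least-squares-fitted
    cubic polynomial surface (all monomials x^i y^j with i + j <= 3)."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x, y = xx.ravel() / w, yy.ravel() / h
    A = np.stack([x**i * y**j for i in range(4) for j in range(4 - i)], axis=1)
    coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    return img - (A @ coef).reshape(h, w)

def segment_particles(img, min_pixels=5):
    """Threshold into particle/matrix classes, then discard connected
    components smaller than min_pixels (the 5-pixel size threshold)."""
    def otsu_score(t):
        lo, hi = img < t, img >= t
        return lo.mean() * hi.mean() * (img[hi].mean() - img[lo].mean())**2
    thresholds = np.linspace(img.min(), img.max(), 256)[1:-1]
    mask = img >= max(thresholds, key=otsu_score)
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return np.isin(labels, 1 + np.flatnonzero(sizes >= min_pixels))

def distance_to_points(shape, points_xy):
    """Euclidean distance map: distance from every pixel to the nearest
    seed point (dendrite core centroid or Voronoi vertex)."""
    seeds = np.ones(shape, bool)
    for cx, cy in points_xy:
        seeds[int(round(cy)), int(round(cx))] = False
    return ndimage.distance_transform_edt(seeds)
```

The same `distance_to_points` routine serves for both reference sets (core centroids or Voronoi vertices), so the two distance maps discussed below differ only in their seed points.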
[0.31]{} ![(a) A 1 mm x 1 mm subregion from Figure \[sx\_nickel\] is shown along with two corresponding images of the same area: (b) a binary image with segmented eutectic particles (white) and (c) a Euclidean distance map from the dendrite core centroids, where lighter intensity refers to further distances from the dendrite cores.[]{data-label="subimages"}](Figure_10a "fig:"){width="\textwidth"} [0.31]{} ![](Figure_10b "fig:"){width="\textwidth"} [0.31]{} ![](Figure_10c "fig:"){width="\textwidth"} The probability of encountering (or forming) a eutectic particle can then be calculated as a function of this Euclidean distance from the nearest dendrite core or the Voronoi vertex, as shown in Figure \[pdf\]. Based on the image segmentation, the area fraction of eutectic particles in Figure \[sx\_nickel\] is 3.6% and is shown as a red line in Figure \[pdf\]. The pixels lying within 100 $\mu$m of the image boundaries were excluded to eliminate the possibility that dendrite cores just outside of the field of view could affect the statistics. As can be seen from Fig.
\[pdf\_a\], the left (right) blue line marks the distance below (above) which the probability of encountering a eutectic particle is lower (higher) than the global area fraction (red line), i.e., where it is less (more) favorable for a eutectic particle to form. The transition distance of eutectic particle favorability is between 86-93 $\mu$m, i.e., approximately ${1/3}$ of the primary dendrite arm spacing (248.4 $\mu$m). This plot shows that it is not favorable for eutectic particles to form close to the primary dendrite core. Figure \[pdf\_b\] is a similar plot as a function of distance from the vertices of the Voronoi tessellation (see schematic). This plot was generated in a similar manner to Figure \[pdf\_a\]; a Euclidean distance map was first formed from the Voronoi vertices, then the pixels within 100 $\mu$m of the image boundaries were excluded, etc. There is an increased occurrence of eutectic particles at vertices, regardless of their distance from the dendrite core. This observation (along with the fact that the probability of occurrence is higher than in Figure \[pdf\_a\] by almost 2%) suggests that solute is forced near the Voronoi vertices, thereby increasing the probability of eutectic particle occurrence. The transition distance in this plot is between 67-90 $\mu$m, i.e., again at approximately ${1/3}$ of the primary dendrite arm spacing. While this analysis shows the preferential formation of eutectic particles based on the local distances, correlation with the size of particles is also important. [0.475]{} ![The probability of a eutectic particle as a function of the distance to (a) the nearest dendrite core or (b) the nearest Voronoi vertex. The inset schematic shows the reference point(s) for the Euclidean distance in each plot.[]{data-label="pdf"}](Figure_11a "fig:"){width="\textwidth"} [0.475]{} ![](Figure_11c "fig:"){width="\textwidth"} The eutectic particle size may be correlated with the distance from the dendrite cores or Voronoi vertices as well. Figure \[a50\] shows the eutectic particle size as a function of the distance from the nearest dendrite core and the nearest Voronoi vertex. The solid line shows the 50$^{th}$-percentile area, $A_{50}=410$ $\mu{m}^2$, which refers to the eutectic particle size where 50% of the eutectic particle area lies above/below this size. There is a noticeable tendency for the larger particles ($A>A_{50}$) to form further away from the dendrite core and closer to the Voronoi vertices, while the smaller eutectic particles ($A<A_{50}$) can form at all distances. However, it is difficult to tell quantitatively from this plot what the preference is for smaller or larger particles as a function of distance. Therefore, to further quantify this relationship with respect to the size of the particles, the probability associated with a eutectic particle pixel belonging to either a small or large particle is calculated in Figure \[size\]. Interestingly, in Figure \[size\_a\], at distances closer to the dendrite cores, there is a clear preference for smaller particles ($A<A_{50}$) to form over larger particles ($A>A_{50}$). At a distance of 84.5 $\mu{m}$ ($\sim{1/3}$ PDAS), as denoted by the solid line, there is a crossover in the probability function and larger particles are statistically favored to form over smaller particles.
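The probability curves described above amount to conditional frequencies over Euclidean-distance bins. A minimal sketch, assuming a precomputed distance map and binary particle mask as inputs (the bin width is an illustrative choice, not the value used for the figures):

```python
import numpy as np

def occurrence_probability(dist_map, particle_mask, bin_width=5.0):
    """P(pixel is eutectic | distance bin): eutectic pixel count divided
    by total pixel count in each Euclidean-distance bin."""
    edges = np.arange(0.0, dist_map.max() + bin_width, bin_width)
    idx = np.digitize(dist_map.ravel(), edges) - 1
    hits = np.bincount(idx, weights=particle_mask.ravel().astype(float),
                       minlength=len(edges))
    totals = np.bincount(idx, minlength=len(edges))
    with np.errstate(invalid="ignore"):
        return edges, hits / totals  # empty bins yield NaN

def first_crossover(edges, p_small, p_large):
    """First bin edge where the large-particle probability exceeds the
    small-particle probability (the transition distance)."""
    above = np.asarray(p_large) > np.asarray(p_small)
    return edges[np.argmax(above)] if above.any() else None
```

Running `occurrence_probability` once with the full particle mask reproduces the occurrence curve, and running it separately with masks restricted to particles below and above $A_{50}$ gives the two size-conditioned curves whose intersection `first_crossover` locates.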
In the case of distances from the Voronoi vertices, there is a similar behavior except that *larger* particles are favored at smaller distances (closer to Voronoi vertices). The crossover in the probability functions occurs at 79.3 $\mu{m}$ ($\sim{1/3}$ PDAS again). At distances greater than this, there is not as definitive a trend as with the dendrite cores, i.e., in some cases there is a greater probability for smaller particles to form and, in some cases, for larger particles to form. This lack of a well-defined trend at larger distances may be caused by the fact that these larger distances can lie either close to or far away from a dendrite core, further obscuring the trend. Clearly, though, the distance from the dendrite cores and, hence, the local primary dendrite arm spacing affect the probability of interdendritic particle formation. It is anticipated that similar relationships may hold for shrinkage porosity, gas porosity, and other interdendritic defects. [0.475]{} ![The eutectic particle size as a function of the distance to (a) the nearest dendrite core or (b) the nearest Voronoi vertex. The distance for each particle is the distance for the particle centroid. The 50$^{th}$-percentile area, $A_{50}=410$ $\mu{m}^2$, refers to the particle size where 50% of the eutectic particle area lies above/below this size.[]{data-label="a50"}](Figure_12a "fig:"){width="\textwidth"} [0.475]{} ![](Figure_12b "fig:"){width="\textwidth"} [0.475]{} ![The probability of a eutectic particle of a certain size occurring as a function of the distance to (a) the nearest dendrite core or (b) the nearest Voronoi vertex. Two particle sizes are considered: particle sizes below and above the 50$^{th}$-percentile area $A_{50}$. The solid line denotes the distance at which the probability functions first intersect, indicating a transition from the favorability of small particles to large particles (in \[size\_a\]) or vice versa (in \[size\_b\]).[]{data-label="size"}](Figure_13a "fig:"){width="\textwidth"} [0.475]{} ![](Figure_13b "fig:"){width="\textwidth"} Conclusions =========== In summary, characterizing the primary dendrite arm spacing in directionally-solidified microstructures is an important step for developing process-structure-property relationships by enabling the quantification of (i) the influence of processing on microstructure and (ii) the influence of microstructure on properties. Thin-walled directionally-solidified structures (e.g., a turbine blade) require new approaches for characterizing the dendrite arm spacing and the microstructure. In this work, we utilized a new Voronoi-based approach for spatial point pattern analysis that was applied to an experimental dendritic microstructure.
This technique utilizes a Voronoi tessellation of space surrounding the dendrite cores to determine nearest neighbors and the local primary dendrite arm spacing. In addition, we compared this technique to a recent distance-based technique, the Warnken–Reed method, and a modification to this using Voronoi tessellations, along with the minimal spanning tree method. Moreover, a convex hull-based technique was used to account for edge effects in such techniques, which can be important for thin specimens. These methods were used to quantify the distribution of local primary dendrite arm spacings as well as their spatial distribution for an experimental directionally-solidified superalloy micrograph. Last, eutectic particles were segmented to correlate distances from dendrite cores and Voronoi vertices to the occurrence and size of these interdendritic features. Interestingly, with respect to the distance from the dendrite core, it was found that there is a greater probability of occurrence of large eutectic particles ($>410$ $\mu$m$^2$) over small particles at distances greater than approximately ${1}/{3}$ of the bulk-measured primary dendrite arm spacing. In conclusion, this systematic study of the different techniques for quantifying local primary dendrite arm spacings, and their effect on microstructure, can be an important step for correlating with both processing and properties in single crystal nickel-based superalloys. Acknowledgments {#acknowledgments .unnumbered} =============== MAT would like to acknowledge AFOSR for support for this research through contract FA9550-12-1-0135 (PM: Dr. David Stargel, AFOSR/RSA). MAT would like to acknowledge support from the U.S. Army Research Laboratory (ARL) administered by the Oak Ridge Institute for Science and Education through an interagency agreement between the U.S. Department of Energy and ARL. [10]{} R. C. Reed, . Cambridge University Press, 2006. T. M. Pollock, S. Tin, , 22(2) (2006) 361–374. H. S. Whitesell, R. A.
Overfelt, , 318(1-2) (2001)264–276. A. J. Elliott, S. Tin, W. T. King, S. C. Huang, M. F. X. Gigliotti, T. M. Pollock, , 35A(10) (2004) 3221–3231. M. Melo, E. M. S. Rizzo, R. G. Santos, , 40(7) (2005) 1599–1609. M. Lamm, R. F. Singer, , 38A(6) (2007) 1177–1183. C. L. Brundidge, D. Vandrasek, B. Wang, T. M. Pollock, , 43A(3) (2012) 965–976. D. G. McCartney, J. D. Hunt, , 29(11) (1981)1851–1863. J. Hui, R. Tiwari, X. Wu, S. N. Tewari, R. Trivedi, , 33(11) (2002) 3499–3510. W. Wang, P. D. Lee, M. [McLean]{}, , 51(10) (2003) 2971–2987. J. D. Miller, T. M. Pollock, in: , John Wiley & Sons, Inc., 2012, pp. 653–662. C. L. Brundidge, J. D. Miller, T. M. Pollock, , 42A(9) (2011) 2723–2732. B. C. Wilson, E. R. Cutler, G. E. Fuchs, , 479(1-2) (2008) 356–364. M. C. Flemings, , 5(10) (1974) 2121–2134. H. Jacobi, K. Schwerdtfeger, , 7(6) (1976) 811–820. M.A. Tschopp, A.L Oppedal, J.D. Miller, M.A. Groeber, A.H. Rosenberger, K.N. Solanki, in: , John Wiley & Sons, Inc., 2013, pp. 299–310. N. Warnken, Roger C. Reed, , 42A(6) (2011) 1675–1683. N. Warnken, R. C. Reed, , 27 (2011) 1–5. C. Dussert, G. Rasigni, M. Rasigni, J. Palmari, A. Llebaria, , 34(5) (1986) 3528–3531. B. Billia, H. Jamgotchian, H. N. Thi, , 22(12) (1991) 3041–3050. S. N. Tewari, Y. H. S. Weng, G. L. Ding, R. Trivedi, , 33(4) (2002) 1229–1243. P. Peng, X. Z. Li, Y. Q. Su, D. M. Liu, J. J. Guo, H. Z. Fu, , 28(5) (2013) 740–746. J. D. Miller. PhD thesis, University of Michigan, 2011. M. A. Tschopp, M. A. Groeber, R. Fahringer, J. P. Simmons, A. H. Rosenberger, C. Woodward, , 62(6) (2010) 357–360. M. A. Tschopp, M. A. Groeber, J. P. Simmons, A. H. Rosenberger, C. Woodward, , 61(12) (2010) 1406–1417.
--- abstract: | We analyze longitudinal pion spectra from $E_{\rm lab}= 2A$ GeV to $\sqrt {s_{\rm NN}}=200$ GeV within Landau’s hydrodynamical model. From the measured data on the widths of the pion rapidity spectra, we extract the sound velocity $c_s^2$ in the early stage of the reactions. It is found that the sound velocity has a local minimum (indicating a softest point in the equation of state, EoS) at $E_{\rm beam}=30A$ GeV. This softening of the EoS is compatible with the assumption of the formation of a mixed phase at the onset of deconfinement. address: 'Institut für Theoretische Physik, J. W. Goethe Universität, 60438 Frankfurt am Main, Germany' author: - Marcus Bleicher title: | Evidence for the Onset of Deconfinement\ from Longitudinal Momentum Distributions?\ Observation of the Softest Point of the Equation of State --- Over recent years, a wealth of detailed data in the $20A-160A$ GeV energy regime has become available. The systematic study of these data revealed surprising (non-monotonous) structures in various observables around $30A$ GeV beam energy. The most notable irregular structures in this energy regime include: - the sharp maximum in the K$^+/\pi^+$ ratio [@Afanasiev:2002mx; @Gazdzicki:2004ef], - a step in the transverse momentum excitation function (as seen through $\langle m_\perp\rangle -m_0$ ) [@Gazdzicki:2004ef; @na49_blume], - an apparent change in the pion per participant ratio [@Gazdzicki:2004ef] and - increased ratio fluctuations (due to missing data at low energies it is unknown if this is a local maximum or an ongoing increase of the fluctuations) [@Roland:2005pr]. It has been speculated that these observations hint at the onset of deconfinement already at $30A$ GeV beam energy.
Indeed, increased strangeness production [@Koch:1986ud] and enhanced fluctuations have long been predicted as a sign of QGP formation [@Bleicher:2000ek; @Shuryak:2000pd; @Heiselberg:2000ti; @Muller:2001wj; @Gazdzicki:2003bb; @Gorenstein:2003hk] within different frameworks and observables. The suggestion of an enhanced strangeness to entropy ratio ($\sim K/\pi$) as an indicator for the onset of QGP formation was especially advocated in [@SMES]. The high and approximately constant $K^\pm$ inverse slopes of the $m_T$ spectra above $\sim 30A$ GeV - the ’step’ - were also found to be consistent with the assumption of a parton $\leftrightarrow$ hadron phase transition at low SPS energies [@Gorenstein:2003cu; @Hama:2004re]. Surprisingly, transport simulations (supplemented by recent lattice QCD (lQCD) calculations) have also suggested that partonic degrees of freedom might already lead to visible effects at $\sim 30A$ GeV [@Weber98; @MT-prl; @Bratkovskaya:2004kv]. Finally, the comparison of the thermodynamic parameters $T$ and $\mu_B$ extracted from the transport models in the central overlap region [@Bravina] with the experimental systematics on chemical freeze-out configurations [@Braun-Munzinger:1996mq; @Braun-Munzinger:1998cg; @Cleymans] in the $T-\mu_B$ plane also suggests that a first glimpse of a deconfined state might be possible around $10A-30A$ GeV.
For recent applications of Landau’s model to relativistic hadron-hadron and nucleus-nucleus interactions the reader is referred to [@Feinberg:1988et; @Stachel:1989pa; @Steinberg:2004vy; @Murray:2004gh; @Roland:2004] (and Refs. therein). The model has come back into focus after the remarkable observation that the rapidity distributions at all investigated energies can be well described by a single Gaussian at each energy. The energy dependence of the width can also be reasonably described by the same model. The main physics assumptions of Landau’s picture are as follows: The collision of two Lorentz-contracted nuclei leads to full thermalization in a volume of size $V/\sqrt{s_{\rm NN}}$. This justifies the use of thermodynamics and establishes the system size and energy dependence. Usually, a simple equation of state $p=c_s^2\epsilon$ with $c_s^2=1/3$ ($c_s$ denotes the speed of sound) is assumed. For simplicity, chemical potentials are not taken into account. From these assumptions follows a universal formula for the distribution of the produced entropy, determined mainly by the initial Lorentz contraction, and a Gaussian rapidity spectrum for the newly produced particles. Under the condition that $c_s$ is independent of temperature, the rapidity density is given by [@Shuryak:1972zq; @Carruthers:dw]: $$\frac{dN}{dy}=\frac{Ks_{\rm NN}^{1/4}}{\sqrt{2\pi \sigma_y^2}}\,\exp\left(-\frac{y^2}{2\sigma_y^2}\right) \label{eq1}$$ with $$\sigma_y^2=\frac{8}{3}\frac{c_s^2}{1-c_s^4}\,{\rm ln}({\sqrt {s_{\rm NN}}}/{2m_p})\quad, \label{eq2}$$ where $K$ is a normalisation factor and $m_p$ is the proton mass. The model relates the observed particle multiplicity and distribution in a simple and direct way to the parameters of the QCD matter under consideration. Let us now analyze the available experimental data on rapidity distributions of negatively charged pions in terms of the Landau model. Fig. \[rapwidth\] shows the measured root mean square $\sigma_y$ of the rapidity distribution of negatively charged pions in central Pb+Pb (Au+Au) reactions as a function of the beam rapidity. The dotted line indicates the Landau model predictions with the commonly used constant sound velocity $c_s^2=1/3$.
The full line shows a linear fit through the data points, while the data points [@na49_blume; @Roland:2004; @klay; @brahms] are depicted by full symbols. At first glance the energy dependence looks structureless. The data seem to follow a linear dependence on the beam rapidity $y_p$ without any irregularities. However, the general trend of the rapidity widths is also well reproduced by Landau’s model with an equation of state with a fixed speed of sound. Nevertheless, there seem to be systematic deviations. At low AGS energies and at RHIC, the experimental points are generally underpredicted by Eq. (\[eq2\]), while in the SPS energy regime Landau’s model overpredicts the widths of the rapidity distributions. It is precisely these deviations from the simple Landau picture that allow us to gain information on the equation of state of the matter produced in the early stage of the reaction. By inverting Eq. (\[eq2\]) we can express the speed of sound $c_s^2$ in the medium as a function of the measured width of the rapidity distribution: $$c_s^2=-\frac{4}{3}\frac{{\rm ln}({\sqrt {s_{\rm NN}}}/{2 m_p})}{\sigma_y^2} +\sqrt{\left[\frac{4}{3}\frac{{\rm ln}({\sqrt {s_{\rm NN}}}/{2 m_p})}{\sigma_y^2}\right]^2+1}\quad. \label{eq3}$$ Let us now investigate the energy dependence of the sound velocities extracted from the data. Figure \[cs2\] shows the speed of sound as a function of beam energy for central Pb+Pb (Au+Au) reactions as obtained from the data using Eq. (\[eq3\]). The sound velocities exhibit a clear minimum (usually called the softest point) around a beam energy of $30A$ GeV. A localized softening of the equation of state is a long predicted signal for the mixed phase at the transition energy from hadronic to partonic matter [@Hung:1994eq; @Rischke:1995pe; @Brachmann:1999mp]. Therefore, we conclude that the measured data on the rapidity widths of negatively charged pions are indeed compatible with the assumption of the onset of deconfinement at the lower SPS energy range.
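Equations (\[eq2\]) and (\[eq3\]) are straightforward to evaluate numerically. The following is a minimal sketch (assuming the proton mass $m_p=0.938$ GeV, with $\sqrt{s_{\rm NN}}$ in GeV); note that for the ideal-gas value $c_s^2=1/3$, Eq. (\[eq2\]) reduces to $\sigma_y^2={\rm ln}(\sqrt{s_{\rm NN}}/2m_p)$, which provides a convenient round-trip check:

```python
import math

M_P = 0.938  # proton mass in GeV (assumed value)

def sigma_y2(sqrt_s_nn, cs2):
    """Squared width of the pion rapidity distribution, Eq. (2)."""
    return (8.0 / 3.0) * cs2 / (1.0 - cs2**2) * math.log(sqrt_s_nn / (2.0 * M_P))

def cs2_from_width(sqrt_s_nn, sig2):
    """Speed of sound squared from the measured width, Eq. (3)."""
    a = (4.0 / 3.0) * math.log(sqrt_s_nn / (2.0 * M_P)) / sig2
    return math.sqrt(a * a + 1.0) - a
```

Feeding the measured widths of the pion rapidity distributions into `cs2_from_width` at each collision energy reproduces the excitation function whose minimum defines the softest point.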
However, at present we cannot rule out that an increased resonance contribution is the cause of the softening [@soft]. In conclusion, we have explored the excitation functions of the rapidity widths of negatively charged pions in Pb+Pb (Au+Au) collisions. - The rapidity spectra of pions produced in central nucleus-nucleus reactions at all investigated energies can be well described by single Gaussians. - The energy dependence of the width of the pion rapidity distribution follows the prediction of Landau’s hydrodynamical model if a variation of the sound velocity is taken into account. - The speed of sound excitation function extracted from the data has a pronounced minimum (softest point) at $E_{\rm beam}=30A$ GeV. - This softest point might be due to the formation of a mixed phase indicating the onset of deconfinement at this energy. Further exploration of this energy domain is needed and can be carried out at the future FAIR facility and by the CERN-SPS and BNL-RHIC experiments. Acknowledgements {#acknowledgements .unnumbered} ================ The author thanks C. Blume and M. Gazdzicki for fruitful and stimulating discussions. This work was supported by GSI, DFG and BMBF. This work used computational resources provided by the Center for Scientific Computing at Frankfurt (CSC). References {#references .unnumbered} ========== [10]{} S. V. Afanasiev [*et al.*]{} \[The NA49 Collaboration\], Phys. Rev. C [**66**]{} (2002) 054902 \[arXiv:nucl-ex/0205002\]. M. Gazdzicki [*et al.*]{} \[NA49 Collaboration\], J. Phys. G [**30**]{} (2004) S701 \[arXiv:nucl-ex/0403023\]. C. Blume, J. Phys. G: Nucl. Part. Phys. 31, S57 (2005) C. Roland \[NA49 Collaboration\], J. Phys. G [**31**]{} (2005) S1075. P. Koch, B. Muller and J. Rafelski, Phys. Rept.  [**142**]{} (1986) 167. M. Bleicher, S. Jeon and V. Koch, Phys. Rev. C [**62**]{} (2000) 061902 \[arXiv:hep-ph/0006201\]. E. V. Shuryak and M. A. Stephanov, Phys. Rev. C [**63**]{} (2001) 064903 \[arXiv:hep-ph/0010100\]. H.
Heiselberg and A. D. Jackson, Phys. Rev. C [**63**]{} (2001) 064904 \[arXiv:nucl-th/0006021\]. B. Muller, Nucl. Phys. A [**702**]{} (2002) 281 \[arXiv:nucl-th/0111008\]. M. Gazdzicki, M. I. Gorenstein and S. Mrowczynski, Phys. Lett. B [**585**]{}, 115 (2004) \[arXiv:hep-ph/0304052\]. M. I. Gorenstein, M. Gazdzicki and O. S. Zozulya, Phys. Lett. B [**585**]{}, 237 (2004) \[arXiv:hep-ph/0309142\]. M. Gazdzicki and M. I. Gorenstein, Acta Phys. Polon. B [**30**]{}, 2705 (1999). M. I. Gorenstein, M. Gazdzicki and K. A. Bugaev, Phys. Lett. B [**567**]{}, 175 (2003) \[arXiv:hep-ph/0303041\]. Y. Hama, F. Grassi, O. Socolowski, T. Kodama, M. Gazdzicki and M. Gorenstein, Acta Phys. Polon. B [**35**]{} (2004) 179. H. Weber, C. Ernst, M. Bleicher [*et al.*]{}, Phys. Lett. B [**442**]{}, 443 (1998). E. L. Bratkovskaya [*et al.*]{}, Phys. Rev. Lett. [**92**]{}, 032302 (2004) E. L. Bratkovskaya [*et al.*]{}, Phys. Rev. C [**69**]{}, 054907 (2004) L. V. Bravina [*et al.*]{}, Phys. Rev. C [**60**]{}, 024904 (1999), Nucl. Phys. A [**698**]{}, 383 (2002). P. Braun-Munzinger and J. Stachel, Nucl. Phys. A [**606**]{}, 320 (1996). P. Braun-Munzinger and J. Stachel, Nucl. Phys. A [**638**]{}, 3 (1998). J. Cleymans and K. Redlich, Phys. Rev. C [**60**]{}, 054908 (1999). E. Fermi, Prog. Theor. Phys.  [**5**]{}, 570 (1950). L. D. Landau, Izv. Akad. Nauk Ser. Fiz.  [**17**]{}, 51 (1953). S. Z. Belenkij and L. D. Landau, Usp. Fiz. Nauk [**56**]{}, 309 (1955). E. V. Shuryak, Yad. Fiz.  [**16**]{}, 395 (1972). P. Carruthers, Annals N.Y.Acad.Sci. 229, 91 (1974). P. Carruthers and M. Doung-van, Phys. Rev. D [**8**]{}, 859 (1973). P. Carruthers, LA-UR-81-2221 E. L. Feinberg, Z. Phys. C [**38**]{} (1988) 229. J. Stachel and P. Braun-Munzinger, Phys. Lett. B [**216**]{}, 1 (1989). P. Steinberg, arXiv:nucl-ex/0405022. M. Murray, arXiv:nucl-ex/0404007. G. Roland, Talk presented at Quark Matter 2004, see proceedings. J.Klay [*et al.*]{} \[E895 Collaboration\], Phys. Rev. 
C [**68**]{}, 054905 (2003) I.G. Bearden [*et al.*]{} \[Brahms Collaboration\], Phys. Rev. Lett. [**94**]{}, 162301 (2005) C. M. Hung and E. V. Shuryak, Phys. Rev. Lett.  [**75**]{}, 4003 (1995) \[arXiv:hep-ph/9412360\]. D. H. Rischke, Y. Pursun, J. A. Maruhn, H. Stoecker and W. Greiner, Heavy Ion Phys.  [**1**]{}, 309 (1995) \[arXiv:nucl-th/9505014\]. J. Brachmann, A. Dumitru, H. Stoecker and W. Greiner, Eur. Phys. J. A [**8**]{}, 549 (2000) \[arXiv:nucl-th/9912014\]. R. Hagedorn, Nuov. Cim. Suppl. 3, 147 (1965); J. Rafelski and R. Hagedorn, Bielefeld Symp., ed. H. Satz, pp. 253 (1980)
--- abstract: | The decay of an $\a$ particle from a nucleus is viewed as a quantum resonance state of a two-body scattering process of the $\a$+daughter nucleus pair governed by a novel nucleus-nucleus potential in squared Woods-Saxon form. By the application of the rigorous optical model (OM) potential scattering (S-matrix) theory, the genuineness of the potential for the system is established through a good explanation of the elastic scattering and reaction cross-section data of the $\a$+nucleus pair. From the pole position in the complex momentum (k) plane of the S-matrix defined above, the energy and width of the resonance state akin to the decaying state of $\a$ emission are extracted, and from this width the $\a$-decay half-life is derived, accounting for the experimental half-lives of a large number of $\a$-emitters including heavy and super-heavy nuclei. The S-matrix of the full OM calculation above is replaced by an analytical function expressed in terms of exact Schrödinger solutions of a global potential that closely represents the Coulomb-nuclear interaction in the interior and the pure Coulomb wave functions outside, and the resonant poles of this S-matrix in the complex momentum plane are used to give satisfactory results for the decay half-lives of $\a$ particles emitted from a variety of nuclei.\ PACS number(s): 23.60.+e, 21.10.Tg, 23.70.+j, 27.90.+b author: - 'Basudeb Sahu$^{1}$ and Swagatika Bhoi$^{2}$' title: 'The potential for $\a$ induced nuclear scattering, reaction and decay, and a resonance-pole-decay model with exact explicit analytical solutions ' --- INTRODUCTION ============ The process of decay of an alpha ($\alpha$) particle from heavy and super-heavy nuclei has been studied intensively in the past few years [@moh06; @xu05; @den05; @gam05; @dup06; @don05; @dong05; @chow06; @sahu08; @sahu10; @sah16; @sah13].
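Throughout this approach, the step from the extracted resonance width $\Gamma$ to a half-life is the standard quantum-mechanical relation $T_{1/2}=\hbar\,{\rm ln}\,2/\Gamma$ (mean life $\tau=\hbar/\Gamma$), which can be sketched as:

```python
import math

HBAR = 6.582119569e-22  # hbar in MeV s

def half_life(width_mev):
    """Half-life T_1/2 = hbar ln2 / Gamma for a resonance width Gamma in MeV."""
    return HBAR * math.log(2.0) / width_mev
```

For example, a width of $10^{-30}$ MeV corresponds to $T_{1/2}\approx 4.6\times 10^{8}$ s, i.e., a half-life of roughly 14 years; the enormous range of observed $\a$-decay half-lives thus maps onto an equally enormous range of resonance widths.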
In many papers a simple two-body model was applied [@sah16; @sah13; @gur87], and in most papers a potential was derived that was able to fit the measured $\a$-decay half-lives of the alpha emitters. In recent times, such a potential for the $\a$+nucleus two-body interaction has been generated microscopically in the double-folding ($t\rho \rho$ approximation) approach using explicitly the nuclear (both proton and neutron) densities [@gam03; @gam05]. However, most of the studies did not attempt to use these potentials for the description of other experimental quantities such as, for example, $\a$-scattering cross sections or reaction (fusion) cross sections. Using the potential extracted from the fitting of decay rate data, Denisov and Ikezoe [@den05] estimated the values of the fusion cross section as a function of energy by treating the fusion process as a one-dimensional barrier-passing mechanism. Bhagwat and Gambhir [@bha08] have tried to account for the measured results of fusion cross sections in some cases of $\a$+nucleus systems by a similar one-dimensional treatment of the fusion process, and found no success in explaining the data of fusion cross sections and the decay half-lives by using the potential obtained within the framework of mean field theory. In both the studies stated above, the potential under question has not been used or tested for the analysis of the angular variation of the experimental values of differential scattering cross sections at different incident energies. It is well known that the genuineness of a nucleus-nucleus potential rests on the satisfactory explanation of the above elastic scattering data in the optical model potential (OMP) analysis. Only through this analysis can one know the exact height and radial position of the Coulomb-nuclear potential barrier.
Using a potential, without proper verification of its barrier height and position, in the studies of other processes, namely fusion and decay, is inconsistent with a sound physical understanding of these processes. In principle, the application of a semi-classical model for tunneling is not necessary for the calculation of $\a$-decay half-lives and the fusion reaction cross section in the quantum mechanical two-body collision process of the $\a$-nucleus system. From the potential, using the rigorous S-matrix (SM) theory of potential scattering, one can directly obtain the energy and decay width from the poles of the SM in the lower half of the complex momentum (k) plane close to the real axis [@sahu10]. Further, this SM method can be amalgamated within a code developed for calculating phase-shifts and cross sections in the same $\a$+nucleus collision problem to explain the elastic scattering and reaction (fusion) cross section data in a unified way [@bsahu08; @bsahu12]. The motivation of this paper is to present a phenomenological potential for the $\a$+nucleus system which is consistent with the potential generated by relativistic mean field (RMF) theory [@kum16] and is suitable for the simultaneous description of three important events of $\a$ induced nuclear reactions, namely (i) elastic scattering, (ii) reaction, and (iii) $\a$ emission, by giving a satisfactory explanation of the measured quantities: elastic scattering cross section, reaction cross section and $\a$-decay half-life, with the calculated results obtained using the SM theory of potential scattering. Further, the combined Coulomb-nuclear potential adopted above is closely reproduced by an r-dependent potential expression. Using this potential form, we exactly solve the Schrödinger equation, match the solutions with the analytical Coulomb wave functions outside, and obtain an expression for the S-matrix explicitly as a function of the incident energy and the potential parameters.
Then, from the pole position of the S-matrix in the complex momentum ($k$) plane, we extract the energy and width of the resonance state corresponding to the decaying state of $\a$ emission from a nucleus. With this simple potential-scattering calculation we achieve a good account of the experimental $\a$-decay half-lives of several $\a$ emitters, including heavy and super-heavy nuclei. In Sec. II, the details of the OMP calculation and the derivation of the expression for the S-matrix of the exactly solvable potential are given. Section III discusses the application of the formulation to the experimental data on elastic scattering cross sections, reaction cross sections, and $\a$-decay half-lives. In Sec. IV, we present the summary and conclusions of the work.\

Theoretical formulation
=======================

The nuclear optical potential model is developed for the analysis of the scattering and reaction cross sections measured in nucleus-nucleus collisions. In this quantum collision theory, the reduced radial Schrödinger equation

$$\frac{\hbar^2}{2\mu}\frac{d^2\phi(r)}{dr^2}+\left[E-V(r)\right]\phi(r)=0, \qquad (1)$$

for a complex Coulomb-nuclear potential

$$V(r)=V_N(r)+V_C(r)+V_{\ell}(r), \qquad (2)$$

the sum of the complex nuclear potential $V_N(r)$, the electrostatic potential $V_C(r)$, and the centrifugal potential $V_{\ell}(r)$, is solved in the spatial region $0 < r \le R_{max}$, where $R_{max}$ is a distance beyond which the attractive nuclear potential vanishes, using Runge-Kutta (RK) numerical integration or the multistep potential approximation [@bs2008]; this yields the wave function $\phi(r)$ and its derivative $d\phi(r)/dr$ at the radial position $r=R_{max}$. In the outer region $r > R_{max}$, the potential of the $\a$+nucleus interaction is purely Coulombic, $V_C(r)$, plus the centrifugal term $V_{\ell}(r)=\frac{\hbar^2}{2\mu}\frac{\ell(\ell+1)}{r^2}$ for each angular momentum partial wave $\ell$.
Here, $\mu$ stands for the reduced mass of the two-body system. Using the exact Coulomb wave functions, i.e., $F_{\ell}(\rho)$ (regular) and $G_{\ell}(\rho)$ (irregular), and their derivatives $F_{\ell}^{\prime}(\rho)$ and $G_{\ell}^{\prime}(\rho)$ in the outer region $r>R_{max}$, and the wave function $\phi(r)$ and its derivative $\frac{d\phi(r)}{dr}$ on the inner side of $r=R_{max}$, and matching them at $r=R_{max}$, we obtain the partial-wave S-matrix $S_{\ell}$ as

$$S_{\ell}=2i\,C_{\ell}+1, \qquad (3)$$

where

$$C_{\ell}=\frac{k\,F_{\ell}^{\prime}-H\,F_{\ell}}{H\left(G_{\ell}+iF_{\ell}\right)-k\left(G_{\ell}^{\prime}+iF_{\ell}^{\prime}\right)}, \qquad (4)$$

$$H=\frac{1}{\phi}\frac{d\phi}{dr}\bigg|_{r=R_{max}}, \qquad (5)$$

with $k=\sqrt{\frac{2\mu}{\hbar^2}E}$ for the incident energy $E$. In (4), the prime ($^{\prime}$) denotes the derivative with respect to $\rho=kr$. We calculate the differential elastic scattering cross section in ratio to Rutherford scattering as a function of the scattering angle ($\theta$) by using the S-matrix $S_{\ell}$ given by (3) in the expression

$$\frac{\sigma_{Sc}(\theta)}{\sigma_{Ru}(\theta)}=\left|\,1+\frac{i}{\eta}\,e^{-2i\sigma_0}\left[\sin\frac{\theta}{2}\right]^{2(i\eta+1)}\sum_{\ell=0}^{\infty}(2\ell+1)\,e^{2i\sigma_{\ell}}\left(S_{\ell}-1\right)P_{\ell}(\cos\theta)\,\right|^{2}. \qquad (6)$$

Here, the Sommerfeld parameter $\eta=\frac{\mu}{\hbar^2}\frac{Z_1Z_2e^2}{k}$ is defined in terms of the wave number $k$, the reduced mass $\mu$, and the proton numbers $Z_1$ and $Z_2$ of the two interacting nuclei, with the charge value $e^2$=1.4398 MeV fm. Further, $\sigma_0$ stands for the s-wave Coulomb phase shift,

$$\sigma_0=\arg\Gamma(1+i\eta). \qquad (7)$$

The Coulomb phase shift $\sigma_{\ell}$ for higher partial waves is evaluated using the recurrence

$$\sigma_{\ell}=\sigma_{\ell-1}+\tan^{-1}\left(\frac{\eta}{\ell}\right). \qquad (8)$$

$P_{\ell}(\cos\theta)$ stands for the Legendre polynomials. For the total reaction cross section one can use the formula

$$\sigma_R=\frac{\pi}{k^2}\sum_{\ell}(2\ell+1)\left(1-\left|S_{\ell}\right|^2\right). \qquad (9)$$

In the optical model potential $V(r)$ given by (2) for the collision of two nuclei with mass numbers $A_1$ and $A_2$ and proton numbers $Z_1$ and $Z_2$, the complex nuclear potential is $V_N(r)=V_N^{R}(r)+iV_N^{I}(r)$, the sum of a real part $V_N^R(r)$ and an imaginary part $V_N^{I}(r)$. We express these in squared Woods-Saxon form as

$$V_N^R(r)=-V_0\left[1+\delta\,e^{-r^2/R_v^2}\right]\left[1+e^{(r-R_s)/a_s}\right]^{-2}, \qquad (10)$$

$$V_N^{I}(r)=-W_0\left[1+e^{(r-R_I)/a_I}\right]^{-2}. \qquad (11)$$
The radii $R_v$, $R_s$, and $R_I$ are expressed as $R_v=r_v(A_1^{1/3}+A_2^{1/3})$, $R_s=r_s(A_1^{1/3}+A_2^{1/3})$, and $R_I=r_I(A_1^{1/3}+A_2^{1/3})$, respectively, in terms of the distance parameters $r_v$, $r_s$, and $r_I$ in fermi units. The parameters $a_s$ and $a_I$ set the slopes of the real and imaginary parts of the potential, respectively. The depth parameters satisfy $V_0 >0$ and $W_0 > 0$ and are in energy (MeV) units. The real part $V_N^R(r)$ (10) contains a parameter $\delta$ which sets the depth to $V_0(1+\delta)$ near the origin. Potentials of this squared Woods-Saxon form were used successfully by Michel [*et al.*]{} [@mich83] in the description of $\alpha+^{16}$O elastic scattering and of the $\alpha$-cluster structure of $^{20}$Ne. Thus, the real nuclear potential $V_N^R(r)$ (10) is a five-parameter coordinate-dependent expression with the adjustable parameters $V_0$, $r_v$, $r_s$, $a_s$, and $\delta$. The imaginary nuclear potential $V_N^{I}(r)$ (11) is a three-parameter formula with the adjustable parameters $W_0$, $r_I$, and $a_I$. The Coulomb potential $V_C(r)$, based on a homogeneous charge distribution, is expressed as

$$V_C(r)=\begin{cases}\dfrac{Z_1Z_2e^2}{2R_C}\left(3-\dfrac{r^2}{R_C^2}\right), & r \le R_C,\\[2mm] \dfrac{Z_1Z_2e^2}{r}, & r > R_C,\end{cases} \qquad (12)$$

where the radius parameter $R_C=r_C(A_1^{1/3}+A_2^{1/3})$ with $r_C\simeq$ 1.2 fm. Thus, the complete OMP, $V(r)$ (2), is specified by altogether nine parameters: $V_0$, $r_s$, $a_s$, $\delta$, $r_v$, $r_C$, $W_0$, $r_I$, and $a_I$.\

Poles of S-matrix for resonance and decay rate
----------------------------------------------

It may be noted here that the real nuclear potential $V_N^R(r)$ given by (10), in combination with the Coulomb potential $V_C(r)$ (12) and the centrifugal term $V_{\ell}(r)=\frac{\hbar^2}{2\mu}\frac{\ell(\ell+1)}{r^2}$, generates a repulsive barrier in the outer region with a prominent pocket on the inner side in each partial-wave trajectory specified by $\ell=$0, 1, 2, 3, ....
The barrier along with the pocket found in a given $\ell$ gradually vanishes as $\ell$ increases. Such potentials with well-defined pockets can generate significant resonances in the reaction. These potential resonances are identified with the poles of the S-matrix as follows. Representing the S-matrix $S_{\ell}$ (3) by $S_{\ell}(k)=\frac{F(k)}{F(-k)}$ as a function of the wave number $k$, a zero of $F(-k)$ at $k=k_r-ik_i$ in the lower half of the complex $k$-plane gives rise to a pole of $S_{\ell}(k)$. When $k_i\ll k_r$, this corresponds to a resonance state with positive resonance energy $E_r$ and width $\Gamma_r$:

$$E_r=\left(\frac{\hbar^2}{2\mu}\right)\left(k_r^2-k_i^2\right), \qquad (13)$$

$$\Gamma_r=\left(\frac{\hbar^2}{2\mu}\right)\left(4k_rk_i\right). \qquad (14)$$

Thus, in the complex energy ($E$) plane $S_{\ell}(k)$ has a pole at $E=E_r-i\Gamma_r/2$. The width $\Gamma_r$, expressed in energy units, is related to the decay constant $\lambda_d$, the mean life $T$, and the half-life $T_{1/2}$ through the relation

$$\frac{1}{\lambda_d}=T=\frac{T_{1/2}}{0.693}=\frac{\hbar}{\Gamma_r}. \qquad (15)$$\

Decay rate with exact explicit analytic solution
------------------------------------------------

Let us study the nature of the potential adopted above for the successful explanation of scattering, reaction, and decay of an $\a$+nucleus system. In Fig. 1, we plot (dashed curve) the real nuclear part $V_N^R(r)$ (10) combined with the Coulomb potential $V_C(r)$ (12) for the $\ell=0$ trajectory of the $\a$+daughter pair $\a+^{208}$Pb. It is clearly seen that there is a well-defined pocket followed by a prominent barrier of height $V_B$=20.27 MeV positioned at $r=R_B$=10.75 fm. This potential (dashed curve) lies close to the dotted curve, which represents the potential calculated using the energy-density profiles of the nucleons (protons and neutrons) in RMF theory [@kum16], except near the origin, where the depth of the RMF potential is smaller. As mentioned in Sec. II, this combined Coulomb-nuclear potential is responsible for generating resonances, or quasi-molecular states, that eventually decay.
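Equations (13)–(15) tie a pole position in the complex $k$ plane to an observable half-life. A minimal numerical sketch (physical constants rounded; the $^{212}$Po numbers are those used in the analysis below) inverts the relations for the experimental half-life and then runs them forward again as a consistency check:

```python
import math

HBAR = 6.58212e-22        # MeV s
HBARC = 197.327           # MeV fm
AMU = 931.494             # MeV

# reduced mass of the alpha + 208Pb pair (MeV/c^2)
mu = 4.0 * 208.0 / (4.0 + 208.0) * AMU
B = HBARC**2 / (2.0 * mu)             # hbar^2/(2 mu) in MeV fm^2

Q = 8.954                 # MeV, Q-value of 212Po alpha decay
T_half = 2.99e-7          # s, experimental half-life

# width implied by the half-life, Eq. (15): Gamma = hbar * ln 2 / T_1/2
Gamma = HBAR * math.log(2.0) / T_half

# invert Eqs. (13)-(14) for the pole k = k_r - i k_i (k_i << k_r)
k_r = math.sqrt(Q / B)                # fm^-1
k_i = Gamma / (4.0 * B * k_r)         # fm^-1, tiny for a narrow resonance

# forward again through Eqs. (13)-(15)
E_r = B * (k_r**2 - k_i**2)
Gamma_r = B * 4.0 * k_r * k_i
T_back = HBAR * math.log(2.0) / Gamma_r

print(f"k_r = {k_r:.4f} fm^-1, k_i = {k_i:.3e} fm^-1")
print(f"E_r = {E_r:.3f} MeV, T_1/2 = {T_back:.3e} s")
```

The imaginary part of the pole comes out more than fifteen orders of magnitude smaller than the real part, which is why locating it requires the careful iterative search described in Sec. III.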
In order to match the combined Coulomb-nuclear potential described above, we adopt the effective form

$$V_{eff}(r)=H_0\left\{\xi_1-(\xi_1-\xi_2)\,\rho(r)\right\}, \qquad (16)$$

where $\rho(r)$=\[cosh$^2(\frac{R_0-r}{d})]^{-1}$ is the well-known Eckart form factor and the strength parameter $H_0 >0$. The Schrödinger equation with this potential (16) can be solved exactly. It has five parameters, namely $H_0$, $\xi_1$, $\xi_2$, $R_0$, and $d$. With the radial position of the barrier obtained from the global formula [@brog81] $R_B$=$r_0(A_1^{1/3}+A_2^{1/3})$+2.72 fm with $r_0=$1.07 fm, the barrier height $V_B$=$\frac{Z_1Z_2e^2}{R_B}(1-\frac{a_g}{R_B})$ with $a_g=$0.63 fm, and setting $R_0$=$R_B$, $H_0\xi_2$=$V_B$, the depth $H_0\xi_1$=$-$100 MeV, and the diffuseness $d$=9.63 fm, the effective potential $V_{eff}(r)$ (16) is shown by the solid curve in Fig. 1 in the spatial region $0 < r < R_B$. As we see, it closely matches, in the region $0 < r < R_B$, the dashed curve representing the Coulomb+nuclear optical potential for $\ell$=0 described above. For the potential (16), the s-wave radial Schrödinger equation can be written as

$$\frac{d^2u}{dr^2}+\left[\kappa^2+\xi\,k_0^2\,\rho(r)\right]u=0, \qquad (17)$$

where $\kappa^2=k^2-k_0^2\xi_1$, $\xi=\xi_1-\xi_2$, and $k_0^2=\frac{2\mu}{\hbar^2}H_0$. The exact solution of Eq. (17) is

$$u(r)=A\,Z^{i\kappa d/2}F(a,b,c,Z)+B\,Z^{-i\kappa d/2}F(a^{\prime},b^{\prime},c^{\prime},Z), \qquad (18)$$

where $Z=[\cosh^2\frac{R_0-r}{d}]^{-1}$ and $F(a,b,c,Z)$ is the hypergeometric function. The other terms are

$$a=\frac{1}{2}\left(\lambda+i\kappa d\right),\quad b=\frac{1}{2}\left(1-\lambda+i\kappa d\right),\quad c=1+i\kappa d, \qquad (19)$$

$$a^{\prime}=\frac{1}{2}\left(\lambda-i\kappa d\right),\quad b^{\prime}=\frac{1}{2}\left(1-\lambda-i\kappa d\right),\quad c^{\prime}=1-i\kappa d, \qquad (20)$$

$$\lambda=\frac{1}{2}\left[1-\left\{1-(2k_0d)^2\,\xi\right\}^{1/2}\right]. \qquad (21)$$

Using the boundary condition $u(r=0)=0$, we get

$$Z(r=0)=Z_0=\left[\cosh^2\left(\frac{R_0}{d}\right)\right]^{-1}, \qquad (22)$$

and

$$C=-\frac{B}{A}=Z_0^{i\kappa d}\,\frac{F(a,b,c,Z_0)}{F(a^{\prime},b^{\prime},c^{\prime},Z_0)}. \qquad (23)$$

For cosh$^2\frac{R_0}{d}\gg1$, $Z_0\ll1$,

$$C\simeq\exp\left(-2i\kappa R_0\right). \qquad (24)$$

The logarithmic derivative of the wave function at $r=R_0$, evaluated from the solution (18), is

$$f(R_0)=\frac{1}{u}\frac{du}{dr}\bigg|_{r=R_0}. \qquad (25)$$
In the region $r>R_0=R_B$, where the potential is purely Coulombic, the Coulomb wave functions (regular $F_0$ and irregular $G_0$) and their derivatives ($F_0^{\prime}$ and $G_0^{\prime}$) for the $\ell=0$ case are expressed as [@abr65]

$$F_0=\frac{1}{2}\,\zeta\,\exp(\Delta),\qquad F_0^{\prime}=\left(\zeta^{-2}+\frac{t^{-2}\zeta^{4}}{8\eta}\right)F_0, \qquad (26)$$

$$G_0=\zeta\,\exp(-\Delta),\qquad G_0^{\prime}=\left[-\zeta^{-2}+\frac{t^{-2}\zeta^{4}}{8\eta}\right]G_0, \qquad (27)$$

$$t=\frac{\rho}{2\eta},\qquad \zeta=\left[\frac{t}{1-t}\right]^{1/4}, \qquad (28)$$

$$\Delta=2\eta\left(\left[t(1-t)\right]^{1/2}+\arcsin t^{1/2}-\frac{\pi}{2}\right), \qquad (29)$$

$$k=\sqrt{\frac{2\mu E}{\hbar^2}},\qquad \rho=kR_0,\qquad \eta=\frac{\mu}{\hbar^2}\frac{Z_1Z_2e^2}{k}. \qquad (30)$$

Requiring continuity at $r=R_0$, the wave functions and their derivatives are matched at $r=R_0$ to obtain the scattering matrix $S(k)$ as

$$S(k)=\frac{k\left(G_0^{\prime}-iF_0^{\prime}\right)-f(R_0)\left(G_0-iF_0\right)}{k\left(G_0^{\prime}+iF_0^{\prime}\right)-f(R_0)\left(G_0+iF_0\right)}, \qquad (31)$$

where $f(R_0)$ is given by (25). A pole of $S(k)$, arising from a zero of the denominator of (31) in the lower half of the complex $k$-plane, gives us the resonance energy, equal to the Q-value, and the decay half-life as described in Sec. II.\

Results and discussion
======================

In applying the above formulation to the measured data on scattering, reaction, and $\a$ decay in an $\a$+daughter-nucleus system, we select the $\a$+$^{208}$Pb reaction, which has been the subject of extensive measurements of the elastic scattering cross section, the reaction cross section, and the rate of $\a$ decay from the parent $^{212}$Po nucleus. The values of the nine potential parameters we use for the total optical model potential for this reaction are $V_0$= 22 MeV, $r_s$=1.27 fm, $a_s$=0.62 fm, $r_v$=0.66 fm, $\delta$=3.5, $r_C$=1.2 fm, $W_0$=5 MeV, $a_I$=0.28 fm, and $r_I$=1.2 fm for $E_{lab}$= 19 MeV, $r_I$=1.3 fm for $E_{lab}$= 20 MeV, and $r_I$=1.37 fm for $E_{lab}$= 22 MeV, where $E_{lab}$ stands for the incident energy in the laboratory frame. Using the S-matrix $S_{\ell}$ (3) in the expression (6), we obtain the angular variation of the differential elastic scattering cross section $\sigma_{Sc}$ in ratio to the Rutherford cross section $\sigma_{Ru}$ and compare it (full curves) with the corresponding measured data (solid circles) obtained from Ref.
[@bar1974] in Fig. 2. The fits to the data at the three energies around the s-wave barrier height (=20.27 MeV) are quite good. With this, the nuclear optical potential adopted in the present analysis passes the test of authenticity. The same potential must now be tested against the reaction cross section and the $\a$-decay rate. Using expression (9), the total reaction cross section $\sigma_R$ is obtained as a function of bombarding energy; it is shown by the solid curve in Fig. 3 and compared with the corresponding experimental data, represented by solid circles [@bar1974]. It is clearly seen that our calculated results (solid curve) account for the data quite satisfactorily at the different energies around the barrier. It may be pointed out that the values of $\sigma_{R}$ at low incident energies ($<20$ MeV) analyzed here are sometimes treated as fusion cross sections [@bha08]. Turning to the calculation of the resonance energy and decay width through the poles of the S-matrix, we first locate the resonance energy, equal to the Q-value of $\a$ decay, from the position of a peak in $\sigma_R$ as a function of energy, using the same set of potential parameters found successful in explaining the elastic and $\sigma_R$ data above. To place the resonance energy exactly at the Q-value we may need to vary the depth or the diffuseness parameter of the real nuclear potential marginally, which does not noticeably affect the elastic or reaction cross sections obtained earlier. This resonance energy, equal to the Q-value, is used as the trial value of the real part ($k_r^0$) of the pole position of the S-matrix $S_{\ell}$ (3). The trial value of the imaginary part ($k_i^0$) of the pole position is initially taken to be very small ($\approx$ 0.001 fm$^{-1}$).
Starting from these trial values, the Newton-Raphson iterative technique is used to locate the zero of the Jost function of the Coulomb-nuclear S-matrix $S_{\ell}$ (3), which corresponds to the resonance or quasi-bound-state pole. From this pole, using Eq. (13), the resonance energy $E_r$ is obtained, representing the Q-value. The corresponding width is obtained using (14), and from it, using relation (15), the decay half-life, denoted $\hlomp$, is obtained within the framework of the optical model calculation. In the case of the $\a$ emitter $^{212}$Po, with Q-value $\q$=8.954 MeV, we find $\hlomp$= 2.89$\times 10^{-7}$ s, which is very close to the experimental half-life $\hlex$= 2.99$\times 10^{-7}$ s. For other $\a$+daughter-nucleus pairs, we can use the same optical model potential parameters as in the analysis of the $\a$+$^{208}$Pb pair and estimate the $\a$-decay half-life $\hlomp$ of the parent nuclei. Having fixed all other parameters, namely $V_0$=22 MeV, $r_s$=1.27 fm, $a_s$=0.62 fm, and $r_C$=1.2 fm, one has only to vary the parameter $r_v$ marginally around 0.66 fm and $\delta$ around 3 to place the resonance exactly at the Q-value of a given pair. This formulation can easily be applied to estimate the decay half-lives for $\a$ particles emitted with angular momentum $\ell > 0$. For this, one simply generates the resonance at the energy equal to the given Q-value of the decay in the specified partial-wave trajectory $\ell$ by varying $r_v$ and $\delta$ as outlined above. We calculate the decay half-lives in decimal logarithm, $\lghlomp$, for several $\a$ emitters among the polonium (Po) isotopes for the $\ell$=0 state and compare them with the corresponding experimental data, denoted $\lghlexpt$, in Table I. We find that our calculated results are close to the respective measured data in most cases of $\a$+daughter-nucleus pairs.
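The pole search itself is ordinary complex root-finding. The sketch below applies the Newton-Raphson scheme, with a numerical derivative, to a toy Jost-type function: an invented analytic function with a zero planted at a hypothetical position (it is *not* the Jost function of $S_{\ell}$, whose evaluation requires the full wave-function calculation). The starting point mimics the trial values described above.

```python
import cmath

# hypothetical pole position for the toy model (NOT a fitted value)
K_POLE = 1.2966 - 1e-5j

def jost(k):
    """Toy Jost-type function: a simple zero times a smooth background."""
    return (k - K_POLE) * cmath.exp(0.3 * k)

def newton_raphson(f, k0, tol=1e-12, h=1e-7, max_iter=100):
    """Newton-Raphson in the complex k plane, central-difference derivative."""
    k = k0
    for _ in range(max_iter):
        df = (f(k + h) - f(k - h)) / (2.0 * h)
        step = f(k) / df
        k = k - step
        if abs(step) < tol:
            break
    return k

# trial values as in the text: Re k from the Q-value, tiny imaginary part
k_trial = 1.29 - 0.001j
k_found = newton_raphson(jost, k_trial)
print(k_found)
```

Because the physical Jost function is analytic near the pole, the iteration converges quadratically once the trial point is close, which is why a good Q-value-based starting guess matters.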
We now calculate $\hl$ of the $\a$+daughter system from the resonance pole of the S-matrix $S(k)$ (31), derived using the exact wave functions of the Coulomb-nuclear interaction. The same procedure adopted above in the OMP calculation is used here to locate the pole of $S(k)$ representing the resonance at the energy equal to the Q-value of the decay. In this case, the diffuseness parameter $d$ of the potential (16) is varied to place the resonance at the Q-value. From this pole of $S(k)$ we derive the decay $\hl$ using formula (15) and denote the results by $\hlanal$, since they are based on exact analytical solutions, unlike the $\hlomp$ values derived from the pole of $S_{\ell}$ (3) within the OMP calculation. In this potential model calculation, we obtain $\hlanal$= 3.02$\times 10^{-7}$ s for the $\a$ decay of the $^{212}$Po nucleus, which is very close to the experimental result $\hlex$=2.99$\times 10^{-7}$ s with $\q$=8.954 MeV. The result for $\hlanal$ is also very close to the value $\hlomp$=2.89$\times 10^{-7}$ s. This closeness between our calculated $\hlanal$ and $\hlomp$ indicates that the effective potential (16), with parameters $r_0$=1.07 fm, $a_g$= 0.63 fm, and $d$=9.63 fm, is a good approximation to the Coulomb-nuclear interaction potential of the $\a$+$^{208}$Pb pair for estimating the decay half-life from the exact solution for the S-matrix and its resonance pole. As the potential (16) uses a global formula for its barrier position and height, one can apply it to other $\a$+daughter-nucleus pairs, with some variation of the diffuseness parameter $d$ around 9 fm to generate the resonance at the energy equal to the Q-value of the given pair, and obtain the decay half-life from the pole of this resonance.
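Two ingredients of the analytic $S(k)$ are easy to verify numerically: the global barrier parameterization [@brog81] with the values quoted in the text ($r_0$=1.07 fm, $a_g$=0.63 fm, $d$=9.63 fm), and the approximate barrier-region Coulomb functions of Sec. II (our reading of the Abramowitz-Stegun forms), which should obey the exact Wronskian $F_0^{\prime}G_0-F_0G_0^{\prime}=1$. A sketch for $\a+^{208}$Pb at the $^{212}$Po Q-value; note the global formula gives $V_B\approx$ 20.7 MeV, slightly above the OMP barrier height of 20.27 MeV shown in Fig. 1:

```python
import math

# --- global barrier formula (Broglia-Winther values quoted in the text) ---
A1, A2, Z1, Z2 = 4, 208, 2, 82
E2 = 1.4398                                          # MeV fm
R_B = 1.07 * (A1**(1/3) + A2**(1/3)) + 2.72          # fm
V_B = Z1 * Z2 * E2 / R_B * (1.0 - 0.63 / R_B)        # MeV

# Eckart-type effective potential with R0 = R_B, H0*xi1 = -100 MeV, H0*xi2 = V_B
D = 9.63                                             # fm, diffuseness
def v_eff(r, depth=-100.0):
    rho = 1.0 / math.cosh((R_B - r) / D)**2          # Eckart form factor
    return depth - (depth - V_B) * rho               # equals V_B at r = R_B

# --- approximate (WKB-type) Coulomb functions under the barrier ---
mu = A1 * A2 / (A1 + A2) * 931.494                   # MeV
B = 197.327**2 / (2.0 * mu)                          # hbar^2/(2 mu), MeV fm^2
E = 8.954                                            # MeV, Q-value of 212Po
k = math.sqrt(E / B)                                 # fm^-1
eta = Z1 * Z2 * E2 / (2.0 * B * k)                   # Sommerfeld parameter
rho0 = k * R_B
t = rho0 / (2.0 * eta)                               # t < 1: under the barrier
zeta = (t / (1.0 - t))**0.25
delta = 2.0 * eta * (math.sqrt(t * (1.0 - t))
                     + math.asin(math.sqrt(t)) - math.pi / 2.0)

F0 = 0.5 * zeta * math.exp(delta)                    # exponentially small
G0 = zeta * math.exp(-delta)                         # exponentially large
s = t**-2 * zeta**4 / (8.0 * eta)                    # common log-derivative term
F0p = (zeta**-2 + s) * F0
G0p = (-zeta**-2 + s) * G0

wronskian = F0p * G0 - F0 * G0p
print(f"R_B = {R_B:.2f} fm, V_B = {V_B:.2f} MeV, Wronskian = {wronskian:.6f}")
```

The unit Wronskian falls out algebraically for these forms, so the check mainly guards against transcription slips when coding them up.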
We calculate the results ($\hlanal$) for several $\a$ emitters among the polonium (Po) isotopes for the $\ell$=0 state and compare them with the corresponding experimental data in Table I. We find that our calculated half-lives in decimal logarithm, $\lghlanal$, closely reproduce the results $\lghlomp$ obtained in the OMP calculation above and are close to the respective measured data $\lghlexpt$ in most cases. From this analysis we learn that, for $\a$ emission in the $\ell$=0 case, instead of using the poles of $S_{\ell}$ (3), which require tedious numerical calculation of the wave functions of the full OMP, it suffices to use the poles of $S(k)$ (31), which is expressed analytically in terms of the exact solution of a global potential and the analytical Coulomb wave functions, to estimate the decay half-lives $\hlanal$ for various $\a$+daughter-nucleus pairs. In Table II, we present the results of $\hlanal$ for several $\a$ emitters. On comparison with the corresponding experimental data, denoted $\hlex$, in the same Table II, we find that our results provide a satisfactory account of the measurements in most cases. For emission of an $\a$ particle with $\ell>0$, we have to use the full optical potential in the different trajectories specified by $\ell$ and estimate the results, $\hlomp$, from the poles of $S_{\ell}$ (3) as described above. In Table III, we present our half-lives in decimal logarithm, $\lghlomp$, for several $\a$ emitters in different $\ell >$0 situations and compare them with the corresponding experimental results, denoted $\lghlexpt$. We find that the agreement with the measured data is quite satisfactory in most cases of decay.
In a few cases, namely $^{159}_{73}X$, $^{214}_{91}X$, $^{229}_{91}X$, and $^{257}_{101}X$, the angular momenta assigned in the experimental results differ from the $\ell$ values we need to consider for a proper fit of the $\q$-values and the corresponding half-lives. The $\ell$ values indicated by our systematic calculation are noted in parentheses beside the measured $\ell$s for these four nuclei. Having succeeded in explaining all three $\a$-induced nuclear collision processes (elastic scattering, reaction, and decay) with the nuclear potential (10) in squared Woods-Saxon form, a few words are in order in favour of this potential.\
(i) The potential has a surface part, defined by the parameters $V_0$, $r_s$, and $a_s$ as in the normal Woods-Saxon form, which takes care of the proper description of the measured elastic scattering data, elastic scattering being a surface phenomenon. \(ii) There is a volume part in the potential expression (10), governed by the parameters $r_v$ and $\delta$, which controls the diffuseness of the potential on the interior side. Interestingly, variation of these parameters does not much disturb the fit to the elastic scattering cross section provided by the surface part described in point (i). On the other hand, by selecting values around $r_v\sim$0.66 fm and $\delta\sim$ 3, we find, in combination with the repulsive Coulomb part, an effective potential that falls slowly toward the inner side of the Coulomb barrier. This bulging character of the Coulomb+nuclear potential provides the remarkable explanation of the experimental $\a$-decay half-lives and also of the reaction cross sections at energies near and below the barrier in a large number of $\a$+daughter-nucleus collision events.
From the successful application of this form of nuclear potential we understand that the volume part, which decides the diffuseness of the Coulomb barrier on the interior side, is the key to the explanation of the decay rate and the reaction or fusion cross section in $\a$+nucleus collisions. \(iii) This sort of potential, with less diffuseness on the interior side of the Coulomb barrier, is consistent with the potential calculated using the nucleon density profiles of RMF theory [@kum16].\

Summary and Conclusion
======================

Treating the decay of an $\a$ particle from a parent nucleus as a two-body quantum collision of the $\a$+daughter-nucleus pair, three processes, namely decay, elastic scattering, and reaction (fusion), are addressed on one platform within the framework of three-dimensional optical-model potential-scattering (S-matrix) theory. A novel expression for the nuclear potential in squared Woods-Saxon form is adopted for the nucleus-nucleus collision. Using the S-matrix of the complex nuclear plus electrostatic potential, the measured elastic scattering and reaction cross-section data are explained, proving the genuineness of the potential. From the poles of the same S-matrix in the complex momentum plane, we extract the energy and width of the resonance state corresponding to the decaying state of $\a$ emission, and from this width the decay half-life is obtained, accounting for the experimental half-life data for a large number of $\a$ emitters, including heavy and super-heavy nuclei.
In this comprehensive analysis of three physical phenomena, we find that the versatile form of nuclear potential adopted in this paper explains the elastic scattering cross-section data by virtue of its surface part, while its volume part, controlling the diffuseness of the potential on the interior side, determines the reaction cross section and the decay rate, yielding a good account of the respective measured data. The sum of the above nuclear potential (real part) and the Coulomb potential based on a homogeneous charge distribution is, for the s-wave, closely represented by an analytical expression as a function of the radial distance $r$; this expression is solved exactly, so that the S-matrix can be written in terms of explicit analytical Schrödinger solutions and Coulomb wave functions. From the resonant poles of this well-defined S-matrix of a globally soluble potential, decay half-lives are obtained that satisfactorily explain the corresponding experimental data for several heavy as well as super-heavy $\a$-emitting nuclei. In conclusion, we believe that the emission of an $\a$ particle from a radioactive nucleus is governed by the fundamental principle of the quantal decay of a charged particle from a resonance state generated by a two-body ($\a$+daughter nucleus) potential that also describes the elastic and reaction cross sections of the $\a$+daughter-nucleus collision, and that the width of the resonant pole of the S-matrix of this potential yields the decay half-life.
Further, the S-matrix of the full optical model calculation, which involves tedious numerical computation of wave functions, can be replaced by an S-matrix expressed in terms of the exact analytical solutions of a soluble potential that closely represents the real part of the potential describing the elastic scattering data, together with expressions for the Coulomb wave functions; the resonant poles of this S-matrix in the complex momentum plane give satisfactory results for the $\a$-decay half-lives.\

ACKNOWLEDGMENTS

We would like to thank Bharat Kumar, Ph.D. Scholar, Institute of Physics, Bhubaneswar, India, for supplying the results of the potential for the $\a$+$^{208}$Pb system using RMF theory. We acknowledge the research facilities extended to us by the Institute of Physics, Bhubaneswar, India.

[99]{} Peter Mohr, Phys. Rev. C [**73**]{}, 031301(R) (2006). C. Xu and Z. Ren, Nucl. Phys. A [**753**]{}, 174 (2005). V. Yu. Denisov and H. Ikezoe, Phys. Rev. C [**72**]{}, 064613 (2005). Y. K. Gambhir, A. Bhagwat, and M. Gupta, Phys. Rev. C [**71**]{}, 037301 (2005). Z. A. Dupré and J. J. Bürvenich, Nucl. Phys. A [**767**]{}, 81 (2006). T. Dong and Z. Ren, Eur. Phys. J. A [**26**]{}, 69 (2005). T. Dong and Z. Ren, Phys. Rev. C [**72**]{}, 064331 (2005). P. R. Chowdhury, C. Samanta, and D. N. Basu, Phys. Rev. C [**73**]{}, 014612 (2006). B. Sahu, Phys. Rev. C [**78**]{}, 044608 (2008); Phys. Rev. C [**84**]{}, 037607 (2011); Phys. Rev. C [**85**]{}, 057601 (2012). B. Sahu, Y. K. Gambhir, and C. S. Shastry, Mod. Phys. Lett. A [**25**]{}, 535 (2010). B. Sahu and Swagatika Bhoi, Phys. Rev. C [**93**]{}, 044301 (2016). B. Sahu, R. Paira, and B. Rath, Nucl. Phys. A [**908**]{}, 40 (2013). S. A. Gurvitz and G. Kälbermann, Phys. Rev. Lett. [**59**]{}, 262 (1987). Y. K. Gambhir, A. Bhagwat, M. Gupta, and A. K. Jain, Phys. Rev. C [**68**]{}, 044316 (2003). A. Bhagwat and Y. K. Gambhir, J. Phys. G: Nucl. Part. Phys. [**35**]{}, 065109 (2008). B. Sahu, G. S. Mallick, B. B.
Sahu, S. K. Agarwalla, and C. S. Shastry, Phys. Rev. C [**77**]{}, 024604 (2008). B. Sahu and B. Sahu, Int. J. Mod. Phys. E [**21**]{}, 1250067 (2012). B. Kumar, S. K. Biswal, S. K. Singh, C. Lahiri, and S. K. Patra, Int. J. Mod. Phys. E [**25**]{}, 1650020 (2016). B. Sahu, B. B. Sahu, and S. K. Agarwalla, Pramana [**70**]{}, 27 (2008). F. Michel, J. Albinski, P. Belery, Th. Delbar, Gh. Gregoire, B. Tasiaux, and G. Reidemeister, Phys. Rev. C [**28**]{}, 1904 (1983). R. A. Broglia and A. Winther, [*Heavy-Ion Reactions Lecture Notes*]{} (Addison-Wesley, Redwood, 1981). M. Abramowitz and I. A. Stegun, [*Handbook of Mathematical Functions*]{} (Dover, New York, 1965), p. 542. A. R. Barnett and J. S. Lilley, Phys. Rev. C [**9**]{}, 2010 (1974). Yuejiao Ren and Zhongzhou Ren, Phys. Rev. C [**85**]{}, 044608 (2012). Dongdong Ni and Zhongzhou Ren, Nucl. Phys. A [**825**]{}, 145 (2009). G. Royer, Nucl. Phys. A [**848**]{}, 279 (2010). [TABLE I.]{} [Comparison of the experimental $\a$-decay half-lives in decimal logarithm, $\lghlexpt$ (third column), with the calculated results $\lghlomp$ (fourth column), obtained from the poles of the S-matrix $S_{\ell}$ (3), and $\lghlanal$ (fifth column), obtained from the poles of the analytical S-matrix $S(k)$ given by (31). The sixth column lists the values of the diffuseness parameter $d$ used in the calculation of $\lghlanal$. For the calculation of $\lghlomp$, the real optical potential parameters $V_0$=22 MeV, $r_s$=1.27 fm, $a_s$=0.62 fm, and $r_C$=1.2 fm are kept the same for all nuclei, and the parameters $r_v$ and $\delta$ are varied around 0.66 fm and 3, respectively, to reproduce exactly the experimental $\q$ value of a given $\a$-decaying isotope. Experimental data are taken from Ref. [@ren2012].
]{}\

| Decay | $\q$ (MeV) | $\lghlexpt$ | $\lghlomp$ | $\lghlanal$ | $d$ (fm) |
|---|---|---|---|---|---|
| $^{218}$Po$\rightarrow$$^{214}$Pb | 6.115 | 2.27 | 2.36 | 2.28 | 7.9070 |
| $^{216}$Po$\rightarrow$$^{212}$Pb | 6.906 | $-$0.84 | $-$0.75 | $-$0.89 | 7.9915 |
| $^{214}$Po$\rightarrow$$^{210}$Pb | 7.833 | $-$3.38 | $-$3.71 | $-$3.97 | 8.0996 |
| $^{212}$Po$\rightarrow$$^{208}$Pb | 8.954 | $-$6.52 | $-$6.68 | $-$6.52 | 9.6357 |
| $^{210}$Po$\rightarrow$$^{206}$Pb | 5.407 | 7.08 | 6.83 | 6.40 | 8.9432 |
| $^{208}$Po$\rightarrow$$^{204}$Pb | 5.215 | 7.96 | 7.96 | 7.47 | 8.8738 |
| $^{206}$Po$\rightarrow$$^{202}$Pb | 5.327 | 7.14 | 6.97 | 6.88 | 8.8554 |
| $^{204}$Po$\rightarrow$$^{200}$Pb | 5.485 | 6.28 | 6.14 | 6.05 | 8.8442 |
| $^{202}$Po$\rightarrow$$^{198}$Pb | 5.701 | 5.15 | 5.06 | 4.96 | 8.8425 |
| $^{200}$Po$\rightarrow$$^{196}$Pb | 5.981 | 3.74 | 3.74 | 3.64 | 8.8511 |
| $^{198}$Po$\rightarrow$$^{194}$Pb | 6.309 | 2.27 | 2.30 | 2.19 | 8.8677 |
| $^{196}$Po$\rightarrow$$^{192}$Pb | 6.657 | 0.77 | 0.90 | 0.79 | 8.8765 |
| $^{194}$Po$\rightarrow$$^{190}$Pb | 6.987 | $-$0.41 | $-$0.31 | $-$0.43 | 8.9045 |
| $^{190}$Po$\rightarrow$$^{186}$Pb | 7.693 | $-$2.61 | $-$2.65 | $-$2.77 | 8.9461 |

[TABLE II.]{} [Comparison of the experimental $\a$-decay half-lives with the calculated ones for nuclei with neutron number N $>$ 126. The first and second columns give the elemental symbol and the mass number of the parent nucleus. The third and fourth columns are, respectively, the experimental decay energies ($\q$ values) and half-lives ($\hlex$) of $\a$ decay, taken from Ref. [@ni2009]. The half-lives $\hlanal$, calculated from the poles of the analytical S-matrix $S(k)$ given by (31), are presented in the fifth column.
In the sixth column, the values of the diffuseness parameter $d$ used in the calculation are listed.]{}\

| | A | $\q$ (MeV) | $\hlex$ (s) | $\hlanal$ (s) | $d$ (fm) |
|---|---|---|---|---|---|
| Pb | 210 | 3.792 | 3.69$\times10^{16}$ | 3.00$\times10^{16}$ | 8.73 |
| Po | 212 | 8.954 | 2.99$\times10^{-7}$ | 3.02$\times10^{-7}$ | 9.63 |
| | 214 | 7.833 | 1.64$\times10^{-4}$ | 1.02$\times10^{-4}$ | 8.10 |
| | 216 | 6.906 | 1.45$\times10^{-1}$ | 1.24$\times10^{-1}$ | 7.99 |
| | 218 | 6.115 | 1.86$\times10^{2}$ | 1.91$\times10^{2}$ | 7.90 |
| Rn | 214 | 9.208 | 2.70$\times10^{-7}$ | 1.22$\times10^{-7}$ | 8.26 |
| | 216 | 8.200 | 4.50$\times10^{-5}$ | 5.10$\times10^{-5}$ | 8.14 |
| | 218 | 7.263 | 3.5$\times10^{-2}$ | 4.68$\times10^{-2}$ | 8.03 |
| | 220 | 6.405 | 5.56$\times10^{1}$ | 9.1$\times10^{1}$ | 7.93 |
| | 222 | 5.590 | 3.31$\times10^{5}$ | 6.1$\times10^{5}$ | 7.85 |
| Ra | 216 | 9.526 | 1.82$\times10^{-7}$ | 1.05$\times10^{-7}$ | 8.29 |
| | 218 | 8.546 | 2.56$\times10^{-5}$ | 3.10$\times10^{-5}$ | 8.17 |
| | 220 | 7.592 | 1.81$\times10^{-2}$ | 2.36$\times10^{-2}$ | 8.06 |
| | 222 | 6.679 | 3.92$\times10^{1}$ | 5.37$\times10^{1}$ | 7.96 |
| | 224 | 5.789 | 3.33$\times10^{5}$ | 5.96$\times10^{5}$ | 7.86 |
| | 226 | 4.871 | 5.35$\times10^{10}$ | 12.56$\times10^{10}$ | 7.76 |
| Th | 218 | 9.849 | 1.09$\times10^{-7}$ | 0.89$\times10^{-7}$ | 8.33 |
| | 220 | 8.953 | 9.70$\times10^{-6}$ | 1.33$\times10^{-5}$ | 8.22 |
| | 222 | 8.127 | 2.05$\times10^{-3}$ | 2.80$\times10^{-3}$ | 8.13 |
| | 224 | 7.298 | 1.33$\times10^{0}$ | 1.61$\times10^{0}$ | 8.03 |
| | 226 | 6.451 | 2.46$\times10^{3}$ | 3.94$\times10^{3}$ | 7.94 |
| | 228 | 5.520 | 8.49$\times10^{7}$ | 1.65$\times10^{8}$ | 7.84 |
| | 230 | 4.770 | 3.12$\times10^{12}$ | 7.82$\times10^{12}$ | 7.77 |
| | 232 | 4.082 | 5.69$\times10^{17}$ | 2.21$\times10^{18}$ | 7.71 |
| U | 222 | 9.500 | 1.40$\times10^{-6}$ | 2.62$\times10^{-6}$ | 8.29 |
| | 224 | 8.620 | 9.40$\times10^{-4}$ | 5.66$\times10^{-4}$ | 8.18 |
| | 226 | 7.701 | 2.69$\times10^{-1}$ | 4.14$\times10^{-1}$ | 8.08 |
| | 228 | 6.803 | 5.75$\times10^{2}$ | 9.78$\times10^{2}$ | 7.98 |
| | 230 | 5.993 | 2.67$\times10^{6}$ | 5.08$\times10^{6}$ | 7.90 |
| | 232 | 5.414 | 3.20$\times10^{9}$ | 7.32$\times10^{9}$ | 7.85 |
| | 234 | 4.858 | 1.09$\times10^{13}$ | 2.52$\times10^{13}$ | 7.80 |
| | 236 | 4.673 | 1.00$\times10^{15}$ | 3.08$\times10^{15}$ | 7.79 |
| | 238 | 4.270 | 1.78$\times10^{17}$ | 8.82$\times10^{17}$ | 7.78 |
| Pu | 232 | 6.716 | 1.71$\times10^{4}$ | 1.77$\times10^{4}$ | 7.99 |
| | 234 | 6.310 | 7.73$\times10^{5}$ | 12.31$\times10^{5}$ | 7.96 |
| | 236 | 5.867 | 1.30$\times10^{8}$ | 2.11$\times10^{8}$ | 7.93 |
| | 238 | 5.593 | 3.90$\times10^{9}$ | 6.62$\times10^{9}$ | 7.92 |
| | 240 | 5.256 | 2.84$\times10^{11}$ | 6.82$\times10^{11}$ | 7.91 |
| | 242 | 4.985 | 1.52$\times10^{13}$ | 3.95$\times10^{13}$ | 7.90 |
| | 244 | 4.666 | 3.17$\times10^{15}$ | 8.11$\times10^{15}$ | 7.88 |
| Cm | 240 | 6.398 | 3.30$\times10^{6}$ | 3.75$\times10^{6}$ | 8.02 |
| | 242 | 6.216 | 1.90$\times10^{7}$ | 2.82$\times10^{7}$ | 8.03 |
| | 244 | 5.902 | 7.48$\times10^{8}$ | 1.16$\times10^{9}$ | 8.01 |
| | 246 | 5.475 | 1.82$\times10^{11}$ | 3.20$\times10^{11}$ | 7.98 |
| | 248 | 5.162 | 1.43$\times10^{13}$ | 3.03$\times10^{13}$ | 7.97 |
| Cf | 240 | 7.719 | 9.09$\times10^{1}$ | 8.77$\times10^{1}$ | 8.17 |
| | 242 | 7.517 | 2.62$\times10^{2}$ | 4.77$\times10^{2}$ | 8.17 |
| | 244 | 7.329 | 1.55$\times10^{3}$ | 2.45$\times10^{3}$ | 8.17 |
| | 246 | 6.862 | 1.62$\times10^{6}$ | 2.12$\times10^{5}$ | 8.14 |
| | 248 | 6.361 | 3.54$\times10^{7}$ | 4.37$\times10^{7}$ | 8.10 |
| | 250 | 6.128 | 4.88$\times10^{8}$ | 6.48$\times10^{8}$ | 8.09 |
| | 252 | 6.217 | 1.02$\times10^{8}$ | 2.10$\times10^{8}$ | 8.14 |
| | 254 | 5.927 | 2.04$\times10^{9}$ | 7.07$\times10^{9}$ | 8.12 |
| Fm | 246 | 8.374 | 1.55$\times10^{0}$ | 2.69$\times10^{0}$ | 8.31 |
| | 248 | 8.002 | 4.56$\times10^{1}$ | 4.67$\times10^{1}$ | 8.29 |
| | 250 | 7.557 | 2.28$\times10^{3}$ | 2.08$\times10^{3}$ | 8.25 |
| | 252 | 7.153 | 1.09$\times10^{5}$ | 8.30$\times10^{4}$ | 8.22 |
| | 254 | 7.308 | 1.37$\times10^{4}$ | 1.78$\times10^{4}$ | 8.28 |
| | 256 | 7.027 | 1.35$\times10^{5}$ | 2.78$\times10^{5}$ | 8.27 |
| No | 252 | 8.550 | 4.18$\times10^{0}$ | 3.96$\times10^{0}$ | 8.38 |
| | 254 | 8.226 | 7.14$\times10^{1}$ | 4.58$\times10^{1}$ | 8.37 |
| | 256 | 8.581 | 3.64$\times10^{0}$ | 2.89$\times10^{0}$ | 8.45 |
| Rf | 256 | 8.930 | 2.02$\times10^{0}$ | 1.32$\times10^{0}$ | 8.46 |
| | 258 | 9.250 | 9.23$\times10^{-2}$ | 1.36$\times10^{-1}$ | 8.54 |

[TABLE III.]{} [Comparison of the experimental half-lives of $\a$ decay in decimal logarithm, log$_{10}T^{(expt)}_{1/2}$, with the corresponding results of the present calculation, log$_{10}T^{(OMP)}_{1/2}$, obtained from the poles of $S_{\ell}$ (3). The experimental $\q$ values, half-lives, and $\l$ values are taken from Ref. [@roy2010]. Where our systematic calculation requires a different $\l$, it is noted in parentheses beside the measured value.]{}

| Nucleus | $\q$ (MeV) | $\l$ | log$_{10}T^{(expt)}_{1/2}$ | log$_{10}T^{(OMP)}_{1/2}$ |
|---|---|---|---|---|
| $^{112}_{53}$X | 2.990 | 4 | 5.45 | 5.54 |
| $^{149}_{65}$X | 4.077 | 2 | 4.97 | 4.96 |
| $^{151}_{65}$X | 3.496 | 2 | 8.82 | 8.80 |
| $^{159}_{73}$X | 5.681 | 5(0) | 0.11 | 0.12 |
| $^{162}_{73}$X | 5.010 | 1 | 3.68 | 3.62 |
| $^{175}_{77}$X | 5.400 | 2 | 3.02 | 3.38 |
| $^{181}_{79}$X | 5.751 | 2 | 3.39 | 3.34 |
| $^{191}_{83}$X | 6.778 | 5 | 2.85 | 2.96 |
| $^{193}_{83}$X | 6.304 | 5 | 4.50 | 4.22 |
| $^{195}_{83}$X | 5.832 | 5 | 6.79 | 6.34 |
| $^{210}_{85}$X | 5.631 | 2 | 7.73 | 7.34 |
| $^{210}_{87}$X | 6.650 | 2 | 2.43 | 2.69 |
| $^{212}_{83}$X | 6.207 | 5 | 4.57 | 4.18 |
| $^{212}_{85}$X | 7.824 | 5 | $-$0.42 | $-$0.42 |
| $^{212}_{87}$X | 6.529 | 2 | 4.10 | 4.10 |
| $^{213}_{83}$X | 5.982 | 5 | 5.15 | 5.18 |
| $^{214}_{83}$X | 5.621 | 5 | 7.16 | 7.20 |
| $^{214}_{87}$X | 8.589 | 5 | $-$2.27 | $-$2.20 |
| $^{214}_{89}$X | 7.350 | 2 | 1.23 | 1.35 |
| $^{214}_{91}$X | 8.430 | 4(1) | $-$2.10 | $-$2.32 |
| $^{216}_{89}$X | 9.235 | 5 | $-$3.31 | $-$3.30 |
| $^{220}_{87}$X | 6.801 | 1 | 1.62 | 1.82 |
| $^{221}_{87}$X | 6.457 | 2 | 2.55 | 2.73 |
| $^{223}_{89}$X | 6.783 | 2 | 2.60 | 2.51 |
| $^{224}_{89}$X | 6.327 | 1 | 5.73 | 5.31 |
| $^{225}_{89}$X | 5.935 | 2 | 6.23 | 6.34 |
| $^{225}_{91}$X | 7.390 | 2 | 0.39 | 0.47 |
| $^{226}_{89}$X | 5.537 | 2 | 9.25 | 9.42 |
| $^{228}_{91}$X | 6.264 | 3 | 7.60 | 7.23 |
| $^{229}_{91}$X | 5.835 | 1(3) | 10.03 | 10.02 |
| $^{230}_{91}$X | 5.439 | 2 | 11.31 | 10.95 |
| $^{235}_{93}$X | 5.194 | 1 | 13.94 | 13.62 |
| $^{235}_{95}$X | 6.610 | 1 | 5.17 | 5.35 |
| $^{237}_{93}$X | 4.958 | 1 | 16.19 | 15.75 |
| $^{239}_{95}$X | 5.922 | 1 | 11.11 | 10.92 |
| $^{241}_{95}$X | 5.638 | 1 | 12.60 | 12.50 |
| $^{243}_{95}$X | 5.439 | 1 | 14.16 | 13.67 |
| $^{245}_{97}$X | 6.455 | 2 | 9.37 | 9.79 |
| $^{245}_{99}$X | 7.909 | 3 | 3.52 | 3.37 |
| $^{249}_{97}$X | 5.525 | 2 | 13.61 | 13.55 |
| $^{252}_{99}$X | 6.790 | 1 | 7.83 | 7.85 |
| $^{257}_{101}$X | 7.558 | 1(4) | 7.57 | 7.83 |
--- abstract: 'Waveguide mirrors possess nano-structured surfaces which can potentially provide a significant reduction in thermal noise over conventional dielectric mirrors. To avoid introducing additional phase noise from motion of the mirror transverse to the reflected light, however, they must possess a mechanism to suppress the phase effects associated with the incident light translating across the nano-structured surface. It has been shown that with carefully chosen parameters this additional phase noise can be suppressed. We present an experimental measurement of the coupling of transverse to longitudinal displacements in such a waveguide mirror designed for light. We place an upper limit on the level of measured transverse to longitudinal coupling of one part in seventeen thousand with 95% confidence, representing a significant improvement over a previously measured grating mirror.' author: - 'S Leavey$^{1}$' - 'B W Barr$^{1}$' - 'A S Bell$^{1}$' - 'N Gordon$^{1}$' - 'C Gräf$^{1}$' - 'S Hild$^{1}$' - 'S H Huttner$^{1}$' - 'E-B Kley$^{2}$' - 'S Kroker$^{2}$' - 'J Macarthur$^{1}$' - 'C Messenger$^{1}$' - 'M Pitkin$^{1}$' - 'B Sorazu$^{1}$' - 'K Strain$^{1}$' - 'A Tünnermann$^{3}$' bibliography: - 'sidemotion.bib' title: Upper Limit to the Transverse to Longitudinal Motion Coupling of a Waveguide Mirror --- Correspondence: [s.leavey.1@research.gla.ac.uk](s.leavey.1@research.gla.ac.uk) 1\. SUPA, School of Physics and Astronomy, The University of Glasgow, Glasgow, G12 8QQ, UK\ 2. Friedrich-Schiller-University, Abbe Center of Photonics, Institute of Applied Physics, Max-Wien-Platz 1, 07743 Jena, Germany\ 3. Fraunhofer Institute of Applied Optics and Precision Engineering, Albert-Einstein-Str. 7, 07745 Jena, Germany Introduction {#sec:intro} ============ Major upgrades to the worldwide network of gravitational wave detectors are currently under way.
New designs for the Advanced LIGO [@Harry2010], Advanced Virgo [@Avirgo2009], KAGRA [@Somiya2012] and GEO-HF [@Willke2006] detectors will provide unmatched ability to detect gravitational waves in the audio spectrum. At their most sensitive frequencies, these detectors are expected to be limited by Brownian thermal noise arising from the reflective coatings on the detectors’ test masses [@Levin1998; @Nakagawa2002; @Harry2002; @Crooks2002]. In order to help mitigate this limitation beyond the next generation of detectors, efforts are under way to develop mirror coatings with lower thermal noise [@Flaminio2010; @Bassiri2013]. In the case of Advanced LIGO, each end test mass (ETM) consists of a substrate with 19 pairs of sub-wavelength coatings which produce a transmission of for light [@Dannenberg2009]. Each layer within this stack contributes to the overall thermal noise [@Harry2002; @Crooks2002]. The approach taken by Levin to calculate the thermal noise of mirrors [@Levin1998] shows that mechanical loss at the front surface of a mirror contributes more to the Brownian noise level than loss from an equivalent volume in the substrate. Additionally, typical coating materials tend to exhibit mechanical loss orders of magnitude higher than typical substrate materials [@Harry2002; @Crooks2002]. For these reasons particular attention is being given to the reduction of coating thermal noise to improve the sensitivity of future detectors. One strategy, to be applied for example in KAGRA, is to cool the mirrors to cryogenic temperatures. While this can potentially reduce the thermal noise of the mirrors [@Uchiyama2012], the application of cryogenic mirrors requires new infrastructure, different choices of mirror substrate and coating materials and poses the challenge of heat extraction from the mirror without spoiling its seismic isolation and thermal noise performance. 
Efforts in the application of cryogenics are also under way to identify suitable substrate and coating materials for ET-LF, the low frequency interferometer as part of the proposed Einstein Telescope [@Punturo2010; @Martin2010; @Hild2011; @Abernathy2011]. Apart from using different coating materials [@Granata2013; @Cole2013] or different beam shapes [@Mours2006; @DAmbrosio2004; @Bondarescu2006] such as with LG33 modes [@Sorazu2013], another potential approach is to utilise waveguide mirrors (WGMs) [@Brueckner2008; @Brueckner2009; @Brueckner2010; @Friedrich2011]. These mirrors can possess high reflectivity at a wavelength determined by their structure. In contrast to conventional dielectric mirrors, mirrors possessing waveguide coatings can exhibit high reflectivity without requiring multiple stacks [@Bunkowski2006]. A waveguide coating instead presents incident light with a periodic grating structure of high refractive index material $n_H$ on top of a substrate with low refractive index $n_L$ (see Figure \[fig:waveguide\_reflection\]). Light is forced into a single reflective diffraction order, the 0^th^. In transmission, only the 0^th^ and $\pm$1^st^ diffraction orders are allowed as long as the condition in Equation \[eq:grating\_equation\] for the grating period, $p$, and the light’s wavelength in vacuum, $\lambda$, is fulfilled [@Brueckner2008]. The light diffracted into the $\pm$1^st^ orders undergoes total internal reflection at the substrate boundary where it excites resonant waveguide modes. Light leaving the waveguide then contains a phase shift with respect to the 0^th^-order transmitted light, causing destructive interference such that most of the incident light is reflected [@Sharon1997]. $$\frac{\lambda}{n_{H}} < p < \frac{\lambda}{n_{L}} \label{eq:grating_equation}$$ ![Propagation of light within a waveguide mirror. The grating and waveguide layers have refractive index $n_H$, and sit atop a substrate of refractive index $n_L$.
Blue arrows represent incident light and red arrows represent reflected light. In realisations of waveguide mirrors such as this, a thin etch-stop layer is placed between the grating and waveguide layers to assist fabrication [@Friedrich2011].[]{data-label="fig:waveguide_reflection"}](waveguide_reflection.pdf){width="0.7\columnwidth"} A recent set of calculations by Heinert *et al.* [@Heinert2013] showed that a suitably optimised WGM can provide a reduction in coating thermal noise amplitude of a factor of 10 at cryogenic temperature compared to mirrors employed in Advanced LIGO. Previous efforts to demonstrate grating structures as alternatives to dielectric mirrors have identified phase noise in the light reflected from the grating not otherwise present in dielectric mirrors [@Wise2005; @Freise2007]. This effect arises from transverse motion of grating mirrors with respect to the incident light. Incident light at angle $\alpha$ is reflected into the m^th^ diffraction order, exiting at angle $\beta_m$ (see Figure \[fig:grating\_propagation\]). The change in path length $\delta l_L$ between the reflected and incident light is then $$\delta l_L = \zeta_a + \zeta_b = \delta y \left( \sin{\alpha} + \sin{\beta_m} \right),$$ where $\zeta_a$ and $\zeta_b$ represent the relative optical path length of each depicted ray. The phase modulation induced in the light reflected from the WGM is proportional to Fourier frequency with a phase lead over the transverse motion [@Barr2011]. The noise added to the reflected light can be large enough to negate the improvement in coating thermal noise, as witnessed in a study of order Littrow gratings [@Barr2011]. Although WGMs also possess gratings, the resonant waveguide structure has been shown in simulations by Brown *et al.* to be free of transverse to longitudinal coupling [@Brown2013]. ![Optical path length changes $\zeta_a$ and $\zeta_b$ due to transverse motion of a Littrow grating.
Incident light diffracted into a different order undergoes a path length change $\delta l_L = \zeta_a + \zeta_b$.[]{data-label="fig:grating_propagation"}](grating_propagation.pdf){width="0.5\columnwidth"} **Parameter** **Value** ------------------ ----------- Materials , , Design $\lambda$ Grating depth Waveguide depth Etch stop depth Grating period Fill factor 0.38 Reflectivity 96% : Design parameters of the WGM produced by Friedrich-Schiller Jena for the experiment to measure transverse to longitudinal coupling. It is similar to the one used in [@Friedrich2011], with increased reflective surface area.[]{data-label="tab:waveguide_parameters"} There are two mechanisms by which grating mirrors can couple transverse motion into longitudinal phase changes (see Figure \[fig:waveguide\_scanning\]). The first is through transverse motion of the grating, which can in principle be minimised with appropriate suspension design. The second mechanism is the coupling of changes in the opposite cavity mirror’s alignment into the spot position on the grating mirror. This effect is of particular importance to gravitational wave observatories, where longer arm lengths can increase its detrimental impact. For this reason the second mechanism is considered in more detail in this work. In order to quantify its transverse coupling, a WGM was produced in collaboration with Friedrich-Schiller University Jena, Germany (see Table \[tab:waveguide\_parameters\] for its properties). It was designed for light of wavelength , and consisted of an etched grating structure on top of a waveguide layer, both tantala, on a silica substrate. This article details an experiment carried out to measure its transverse coupling level. ![Two ways in which light can be scanned across the surface of the WGM. 
The left panel shows the effect of WGM motion with respect to a static beam, while the right panel shows the effect of light beam motion (due to rotation of the cavity mirror opposite the WGM) with respect to a static WGM. The latter effect is the one primarily considered in this article.[]{data-label="fig:waveguide_scanning"}](waveguide_scanning.pdf){width="0.5\columnwidth"} Experiment ========== The fabricated WGM was used as the input coupler for a [Fabry-Pérot ]{}cavity, held on resonance using the Pound-Drever-Hall (PDH) technique [@Drever1983]. The error signal provided by the PDH technique represents changes in cavity length, and this can be fed back to the laser’s frequency *via* a frequency stabilisation servo. Cavity Length Signals {#sec:lengthsignals} --------------------- A non-zero WGM transverse to longitudinal coupling, $\omega_1$, produces a phase shift on the reflected light. This manifests itself as an effective change in cavity length, $\delta l_W$, as the laser light is scanned across its grooves by a rotation of the ETM: $$\delta l_W \left( \theta, \kappa, \omega_1 \right) = \theta \kappa \omega_1, \label{eq:wgm_length_change}$$ where $\theta$ is the ETM’s rotation angle and $\kappa$ is the cavity’s coefficient of ETM rotation to transverse WGM spot motion. Additional cavity length changes are also produced *via* two geometrical effects (see Figure \[fig:mirror\_longitudinal\_effect\]). The first effect, $\delta l_s$, is due to the position of the beam with respect to the centre of the mirror’s surface. For a rotation $\theta$, a beam offset from the centre of the mirror by a displacement $y$ will receive a change in (longitudinal) path length of $$\delta l_s \left( y, \theta \right) = y \tan{\theta} \approx y \theta \label{eq:offset_effect}$$ for small angles. The second effect, $\delta l_d$, is due to the depth $d$ of the mirror, proportional to the rotation angle $\theta$. 
The position of the centre of the mirror with respect to the zero rotation case, $y_d$, is then $$y_d \left( d, \theta \right) = \frac{d}{2} \tan{\frac{\theta}{2}} \approx \frac{d}{4} \theta,$$ and the change in path length this causes is $$\delta l_d \left( d, \theta \right) = y_d \tan{\theta} \approx \frac{d}{4} \theta^2.$$ The total longitudinal effect $\delta l_E$ caused by the rotation of the ETM is therefore $$\delta l_E \left(y, \theta, d \right) = \delta l_s + \delta l_d \approx y \theta + \frac{d}{4} \theta^2. \label{eq:etm_length_change}$$ ![Geometrical ETM longitudinal effects. For a given rotation $\theta$ and spot centre position offset $y$, the (longitudinal) position change in the surface of the mirror (shown in blue) as seen by the reflected light is approximately $y \theta + \frac{d}{4} \theta^2$. The straight, solid red line in the figure shows this longitudinal change.[]{data-label="fig:mirror_longitudinal_effect"}](mirror_longitudinal_effect.pdf){width="0.5\columnwidth"} Considering the ETM’s level of rotation and its dimensions and mass, it is possible to calculate the cavity length change due to the two geometrical effects shown in Equation \[eq:etm\_length\_change\] and then, from the residual cavity length change, infer the WGM’s coupling level. The phase effect associated with transverse to longitudinal coupling is expected to be independent of spot position, whereas the geometrical effect changes phase (sign) about the ETM’s centre of rotation. It is therefore expected that a spot position will exist, for a non-zero WGM transverse coupling level, offset from the ETM’s centre of rotation, for which there is a cavity error signal minimum. This effect arises as a result of $\delta l_W$ and $\delta l_{E}$ combining coherently (see Figure \[fig:individual\_factors\]). The spot position corresponding to the cavity error signal minimum allows the WGM’s transverse to longitudinal coupling level to be inferred.
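The combined cavity length change of Equations \[eq:wgm\_length\_change\] and \[eq:etm\_length\_change\] is straightforward to evaluate numerically. The sketch below is illustrative only; the parameter values are placeholders rather than the experimental ones:

```python
def cavity_length_change(theta, kappa, omega_1, y, d):
    """Total cavity length change for an ETM rotation theta (rad).

    Combines the WGM transverse-to-longitudinal term (theta*kappa*omega_1)
    with the two small-angle geometrical ETM terms (y*theta + d*theta**2/4).
    """
    dl_wgm = theta * kappa * omega_1      # WGM coupling term
    dl_spot = y * theta                   # spot offset from the centre of rotation
    dl_depth = (d / 4.0) * theta ** 2     # finite mirror depth
    return dl_wgm + dl_spot + dl_depth

# Placeholder values for illustration only
theta = 1e-6         # ETM rotation (rad)
kappa = 10.0         # transverse spot motion per unit ETM rotation (m/rad)
omega_1 = 1 / 17000  # coupling level at the upper limit reported here
y = 1e-3             # spot offset from the centre of rotation (m)
d = 0.05             # mirror depth (m)

dl = cavity_length_change(theta, kappa, omega_1, y, d)
```

Setting `omega_1 = 0` recovers the purely geometrical effect, which vanishes at the centre of rotation apart from the small quadratic depth term.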
![Simulations of indicative cavity longitudinal error signals during ETM rotation for different levels of WGM coupling. The signals are functions of the transverse position of the reflected light relative to the ETM’s centre of rotation, the angle of rotation, the mirror depth and the WGM’s coupling level. The rotation to longitudinal coupling of the ETM (black dashed line) combines with the transverse to longitudinal coupling of the WGM (red, green and blue dashed lines) to produce cavity length changes (red, green and blue solid lines). In this example configuration, the ETM rotation is , the ETM’s depth is and the corresponding WGM coupling levels are 1:370 (red), 1:3700 (green) and 1:37000 (blue).[]{data-label="fig:individual_factors"}](individual_factors.pdf){width="1.2\columnwidth"} Examples of WGM coupling levels yielding cavity length changes smaller than (blue), larger than (red) and roughly equivalent to (green) the ETM’s effects are shown in Figure \[fig:individual\_factors\]. For cases where the WGM’s coupling level yields a significant cavity length change with respect to that of the ETM’s rotation, coherent combination creates a trough offset from the ETM’s centre of rotation. The Glasgow 10 m Prototype {#sec:glasgow10m} -------------------------- The Glasgow prototype facility provided a test bed in which the WGM’s transverse to longitudinal coupling could be quantified. The prototype is housed in a Class 1000 clean room and consists of an input bench at atmospheric pressure and a vacuum envelope able to reach pressures of order $10^{-5}$mBar. The envelope consists of nine diameter steel tanks, each connected by steel tubes, arranged into two parallel arms of length , with a shorter arm for input optics situated between them. In the experiment, laser light was passed through a single-mode fibre to provide spatial filtering and an electro-optic modulator (EOM) to impose RF sidebands on the light to facilitate PDH control. 
The light was then coupled into the vacuum system *via* a periscope. This configuration can be viewed in Figure \[fig:prototype\_setup\]. Tanks 2 and 3 housed a beam splitter and steering mirror, respectively, attached to double stage suspensions. In tanks 4 and 5 were sets of two triple suspension chains based on the GEO-600 design [@Plissi2000]. A viewport present to the rear of tank 5, and to the side of tank 1, allowed for light to exit the vacuum envelope for the purposes of sensing and control. ![The experimental setup in the prototype facility. The laser light is passed through input optics (not shown), a mode cleaning fibre and an EOM before being coupled into the vacuum system *via* a periscope. It then travels to tank 2 where it is reflected off a beam splitter and directed into one of the arms of the prototype by a steering mirror in tank 3. The two cavity mirrors in tanks 4 and 5 form a [Fabry-Pérot ]{}cavity. The cavity mirrors are suspended from triple stage suspensions, and the beam splitter and steering mirror are both suspended from double suspensions.\ \ The ETM is rotated in yaw using the source. It is fed to a coil driver where it is coupled into tank 5 *via* a vacuum feedthrough. Coil formers on the front edges of the reaction mass contain wound copper wire connected to the vacuum feedthrough. Magnets are attached to the back of the ETM. The reaction mass is behind the ETM, containing a hole in its centre to allow light to exit the vacuum tank where it can be viewed with the CCD camera. A larger version of the contents of tank 5 can be viewed in the panel to the right of the figure.\ \ The cavity is held on resonance by the frequency stabilisation servo. 
This feeds back to the light’s frequency *via* the laser crystal’s temperature below and its PZT above up to a unity gain frequency of .[]{data-label="fig:prototype_setup"}](waveguide_arm_in_prototype.pdf){width="0.9\columnwidth"} The WGM was attached to an aluminium block of mass and suspended from tank 4’s cascaded (triple) pendulum, forming the cavity’s ITM. A silica test mass, also , with a transmission coating, was used as the ETM, suspended from a similar triple pendulum in tank 5. On the rear surface of the ETM were three magnets for the purpose of actuation, the positions of which are shown in Figure \[fig:etm\_rear\]. With optimal alignment the mirrors formed an overcoupled cavity with finesse . ![The positions of the magnets on the rear surface of the ETM. The magnet designations used in this article are shown in red text. The top magnet is positioned at the centre of yaw, near the top of the mass. The left and right magnets are positioned either side of the centre of yaw. Coils on the ETM’s reaction mass (not shown) are positioned coaxially behind each magnet.[]{data-label="fig:etm_rear"}](etm_back.pdf){width="0.3\columnwidth"} A three-stage reaction chain was placed behind the triple pendulum of the ETM to provide voice coil actuation upon the magnets on the ETM’s rear surface. The upper and intermediate stages were identical to those of the chain carrying the ETM; however, for the purposes of another experiment (not reported here), the lower stage was split into two parts separately suspended from the intermediate stage. The part closer to the ETM was an aluminium block that carried the voice coils. The other part was an aluminium block required to balance the suspension. **Parameter** **Description** ------------------------- ----------------- Cavity input power Approx.
ETM transmissivity $40$ ppm ETM radius of curvature ETM spot size ITM transmissivity ITM radius of curvature $\infty$ ITM spot size Cavity length Cavity finesse Cavity g-factor Beam waist size Beam waist position At ITM Sideband frequency : Cavity parameters.[]{data-label="tab:cavity_parameters"} Measuring Cavity Length Changes ------------------------------- An RF photodetector was placed at the viewport on tank 1, where it could view the light reflected from the cavity. By using PDH demodulation, the signal from this photodetector provided an error signal for the cavity length. This signal was fed back to the laser *via* the frequency stabilisation servo to maintain cavity resonance. The frequency stabilisation servo’s high frequency feedback signal, a voltage applied across the laser’s piezoelectric transducer (PZT), provided a means of calibrating cavity length changes at frequencies greater than . Using the PZT’s frequency response, , the cavity length change $\delta l$ per error signal volt could be calculated to be . Measurements and Analysis {#sec:measurements} ========================= From the orientation of the WGM’s gratings, it was expected that actuation of the ETM in yaw, which would scan the cavity light across the WGM’s surface transverse to the direction of its grooves, would exhibit WGM transverse to longitudinal coupling if present. For the purposes of actuation upon the ETM, two sinusoidal signals $V_L$ and $V_R$ (corresponding to the left and right voice coils on the ETM’s reaction mass, respectively) were produced using separate, phase locked signal generators. A signal frequency of was chosen so as to be above the suspensions’ pole frequencies but low enough to provide an adequate signal-to-noise ratio. The signals $V_L$ and $V_R$, with suitable balancing (see below), could then be actuated in- or out-of-phase to produce longitudinal or yaw actuation upon the ETM, respectively.
When $V_L$ and $V_R$ were identical in magnitude but out-of-phase, the ETM’s movement contained a linear combination of rotational and longitudinal components due to force imbalances between the voice coils. To ensure that actuation upon the ETM contained only a yaw component, the cavity’s longitudinal error signal was minimised during out-of-phase actuation by changing the gain of $V_L$. This balanced the magnitude of the torque applied by each actuator to the left and right sides of the ETM. Any WGM transverse to longitudinal coupling present would act with phase orthogonal to this voice coil actuation and would thus be unchanged by the torque balancing. Pitch actuation upon the ETM, which would scan the cavity light in a direction parallel to the WGM’s grooves, was not expected to contribute to the cavity’s error signal via the WGM’s coupling. However, unintended pitch actuation upon the ETM would couple into the cavity’s length *via* the same geometrical mechanism as yaw shown in Equation \[eq:etm\_length\_change\]. To minimise the ETM’s pitch component during actuation in yaw, the cavity’s error signal was minimised by applying an offset voltage to the top coil. In practice, minimal pitch coupling was achieved when the offset signal was zero. Actuator Calibration -------------------- To calibrate the cavity’s longitudinal response to voice coil actuation, the voice coils were actuated with the balanced $V_L$ and $V_R$ signals in-phase at a frequency $f = \SI{70}{\hertz}$ for a period of . This, along with the ETM’s mass $m$, could then be used to obtain the force applied to the ETM by the voice coils: $$F = 4 \pi^2 f^2 m \delta l. \label{eq:force_calibration}$$ Measurement of Waveguide Mirror Transverse to Longitudinal Coupling {#sec:length_changes} ------------------------------------------------------------------- Four spot positions corresponding to $y$ in Equation \[eq:offset\_effect\] were chosen across the surface of the ETM. 
The input beam was aligned to the cavity axis corresponding to each spot position using the beam splitter and steering mirror nearest to the ITM, and the cavity mirrors were aligned to create a fundamental mode resonance. The voice coil signals $V_L$ and $V_R$ were set out-of-phase to produce motion on the ETM in yaw. The magnitudes of $V_L$ and $V_R$ were not altered between the longitudinal calibration and this yaw actuation, so it was expected that the previously outlined minimisation of yaw to tilt actuation would also result in minimal longitudinal to tilt actuation. The cavity length signal was recorded for a period of . For each nominal spot position an additional measurement was taken with $V_L$ set to $\pm \SI{0.1}{\volt}$ from its balanced setting for a period of . This allowed two additional data points to be obtained for each spot position. By calculating the gradient (cavity length change per spot position with respect to the centre of yaw) of the central and inner-left spot positions, it was possible to assign an effective spot position for each of the offset points. The spot positions used to obtain cavity error signals are shown in Table \[tab:spot\_positions\]. These positions are shown with respect to the centre of the ETM’s reflective surface. The spot positions were subject to two sources of error: the measurement of the spot positions with respect to the centre, and the error in the ETM’s centre of rotation due to misalignment between the voice coils and their corresponding magnets. The spot position error was assumed to be +/- from visual inspection of the suspensions, measured *via* the CCD camera placed in transmission of the ETM, using the known width of the ETM’s reaction mass as a calibration. The error in the spot position measurements dominated the error in voice coil alignment. 
Although misaligned voice coils could have led to a change in the expected ETM force coupling (leading to a change in the centre of rotation of the ETM), it was found from separate measurements that the effect of any possible misalignment during the experiment could only account for a drop in force of %. This contributed a negligible error (+/-) to the results. Knowledge of the distance of the ETM’s voice coils from the centre of rotation, $y_c$; the ETM’s moment of inertia, $I$; the coil driving frequency, $f$; and the force calibration from Equation \[eq:force\_calibration\], allowed the rotation angle to be obtained geometrically using the relation $$\theta = \frac{F y_c}{4 \pi^2 f^2 I}. \label{eq:rotation_calibration}$$ The numerical simulation tool *FINESSE* [@Freise2004] was then used to calculate $\kappa$ for the cavity parameters shown in Table \[tab:cavity\_parameters\]. This was determined to be . The WGM’s transverse displacement was then the product of $\kappa$ and $\theta$. Analysis of the Coupling Level {#sec:simulations} ------------------------------ Using the known contribution to the cavity length signal from the rotation of the ETM, $\delta l_E$, and the cavity length signals $\delta l$ measured during the experiment, the WGM’s coupling level could be calculated statistically using Bayes’ theorem. For this experiment, Bayes’ theorem can be expressed mathematically as: $$p \left( \vec{\omega} | \mathcal{D} \right) \propto p \left( \mathcal{D} | \vec{\omega} \right) p \left( \vec{\omega} \right), \label{eq:bayes}$$ where $p \left( \vec{\omega} | \mathcal{D} \right)$ is the probability density distribution of the experimental parameters, $\vec{\omega}$, given the observed data, $\mathcal{D}$ (the *posterior*); $p \left( \mathcal{D} | \vec{\omega} \right)$ is the likelihood and $p \left( \vec{\omega} \right)$ is the probability distribution of the experimental parameters (the *prior*).
The observed data $\mathcal{D}$ are the measured cavity error signals for each of the spot positions. In this analysis we are primarily interested in estimates of the model parameters. We are therefore free to ignore the constant evidence factor $p \left( \mathcal{D} \right)$ present in Bayes’ theorem when calculating the posterior. In the future it may be of interest to compare different models for the coupling level (or lack thereof), in which case the evidence could be calculated to obtain a model odds ratio. ### Model and Parameters To obtain a posterior for the WGM’s coupling level, it was necessary to build a model and state prior belief of the parameters’ probability distributions. In the model, the ETM’s geometrical longitudinal effect at arbitrary spot position $y$ (Equation \[eq:etm\_length\_change\]) for the rotation and mirror depth used in the experiment was combined coherently with a specified level of WGM transverse to longitudinal coupling, $\omega_1$. It was then possible to predict the total change in cavity length $\delta l$ as a function of spot position $y$, given the fixed parameters $\theta$, $\kappa$ and $d$, using equations \[eq:wgm\_length\_change\] and \[eq:etm\_length\_change\]: $$\begin{split} \delta l \left( \vec{\omega}, y, \theta, \kappa, d \right) & = \delta l_W \left( \theta, \kappa, \omega_1 \right) + \delta l_E \left( y, \theta, d \right) \\ & \approx \theta \kappa \omega_1 + y \theta + \frac{d}{4} \theta^2. \end{split}$$ The effect of *beam smearing* was also considered. The suspended optics contain residual displacement noise, leading to a broadening of the trough at which the ETM’s longitudinal coupling and any WGM coupling cancel (see Figure \[fig:individual\_factors\]). To model this effect, the assumption was made that the motion of the spots on the ETM followed a Gaussian distribution about their nominally measured position. 
Eight hundred small ‘offset distances’ $\delta y$, drawn from a randomly generated Gaussian distribution, were applied to each spot position. The number of offset distances was chosen as a compromise between adequate statistical significance and technical constraints. Calculating the cavity length change as a function of spot position for each of these offset positions, and combining them in an uncorrelated sum, allowed an average, ‘smeared’ signal to be modelled which more closely resembled the measurements. The standard deviation of the Gaussian distribution was an additional parameter, $\omega_2$, provided as an input to the model. The summing of signals introduced by the modelling of beam smearing led to an artificial increase in the magnitude of the model’s predicted cavity length signals. To compensate for this effect, a further parameter was introduced: a multiplicative scaling factor, $\omega_3$, applied uniformly to the model. This factor also had the additional effect of compensating for the uncertainty in the calibrated cavity length signals. By marginalising over a suitable distribution of scaling factors, it was possible to account for this uncertainty in the analysis of the WGM’s coupling level. The model used in the analysis to predict the smeared, scaled cavity length change, $\delta l'$, was then: $$\delta l' \left( \vec{\omega}, y, \theta, \kappa, d \right) = \omega_3 \sqrt{\sum_{i=1}^{800} \delta l \left( \vec{\omega}, y + \delta y_i, \theta, \kappa, d \right)^2}, \label{eq:model}$$ where $\delta y_i$ is the $i^\text{th}$ offset distance, drawn from a Gaussian distribution with standard deviation $\omega_2$.
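Equation \[eq:model\] can be sketched directly in code. The following is an illustrative implementation only, not the analysis code used for the experiment; the inputs are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def smeared_length_change(omega, y, theta, kappa, d, n_offsets=800):
    """Smeared, scaled cavity length change, as in Eq. (model).

    omega = (omega_1, omega_2, omega_3): WGM coupling level, spot-smearing
    standard deviation, and calibration scaling factor.
    """
    omega_1, omega_2, omega_3 = omega
    # Offset distances drawn from a Gaussian with standard deviation omega_2
    dy = rng.normal(0.0, omega_2, n_offsets)
    # Per-offset cavity length change: WGM term plus geometrical ETM terms
    dl = theta * kappa * omega_1 + (y + dy) * theta + (d / 4.0) * theta ** 2
    # Uncorrelated (quadrature) sum over offsets, then the overall scaling
    return omega_3 * np.sqrt(np.sum(dl ** 2))
```

The quadrature sum grows with the number of offsets, which is precisely the artificial amplitude increase that the scaling factor $\omega_3$ compensates for.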
### Likelihood

The likelihood function assumed for the model was a Gaussian distribution, $$p \left( \mathcal{D} | \vec{\omega} \right) \propto \exp \left( -\frac{1}{2} \sum_{i=1}^{N} \frac{\left( \mathcal{D}_i - \delta l' \left( \vec{\omega}, y_i, \theta, \kappa, d \right) \right)^2}{\sigma^2} \right), \label{eq:likelihood}$$ where $N$ is the number of spot positions and $\sigma^2$ is the (identical) variance of each of the measured cavity length signals.

### Priors

Bayes’ theorem requires an assumption of probability distributions (*priors*) for each of the free parameters before the measured data are considered. The assumptions made for each free parameter in the model can be found in Table \[tab:priors\]. The upper bound on coupling was assumed to be a factor better than the grating mirror measured in [@Barr2011], given the indication from [@Brown2013] that no coupling is present. The bounds on the scaling factor and spot smearing standard deviation were chosen from earlier observations of the behaviour of the signals during the experiment. All priors were assumed to be uniform.
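A direct transcription of this log-likelihood (up to an additive normalisation constant) is short; `model` here stands for any callable returning the smeared prediction $\delta l'$ at a spot position, and is an assumption of this illustration rather than the analysis code itself:

```python
import numpy as np

def log_likelihood(omega, y_data, d_data, sigma, model):
    """Gaussian log-likelihood, up to an additive constant: minus half
    the chi-squared of the residuals between the measured signals and
    the model prediction at each spot position."""
    pred = np.array([model(y, omega) for y in y_data])
    return -0.5 * np.sum((d_data - pred) ** 2) / sigma ** 2
```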
  **Parameter**                             **Symbol**   **Distribution**                                **Dimensions**
  ----------------------------------------- ------------ ----------------------------------------------- -----------------------------------------------------------------------------------
  WGM transverse to longitudinal coupling   $\omega_1$   Uniform, $\left[ 0, \frac{1}{1000} \right]$     $\frac{\SI{}{\meter} \text{ (longitudinal)}}{\SI{}{\meter} \text{ (transverse)}}$
  Spot smearing noise standard deviation    $\omega_2$   Uniform, $\left[ 0, 3 \times 10^{-3} \right]$   $\SI{}{\meter} \text{ (transverse)}$
  Calibration scaling                       $\omega_3$   Uniform, $\left[ 0, \frac{1}{10} \right]$       dimensionless

  : The distributions assumed for each of the free parameters in the model, along with their dimensions, prior to the computation of the posterior.[]{data-label="tab:priors"}

### Algorithm

A form[^1] of the Metropolis-Hastings Markov-Chain Monte-Carlo (MCMC) algorithm [@Hastings1970] was applied to the model to marginalise over the three parameters. The outputs of the MCMC are a chain of samples (values for each parameter) drawn from the posterior distribution. A histogram of the samples for a given parameter gives the marginal posterior distribution for that parameter, from which the mean and standard deviation can be calculated. To ensure the convergence of the MCMC on the posterior, an initial ‘burn-in’ set of iterations was performed and discarded; the convergence was verified manually following completion. A further set of iterations was then used to sample from the posterior, and this second set provides the results below.

Results {#sec:summary}
=======

From the parameter marginalisation it was possible to produce a posterior probability density distribution for the coupling level, as shown in Figure \[fig:final\_result\_prob\]. The coupling level predicted from the distribution is bounded between 0 and 1:17000 with 95% confidence, with a mean coupling level of 1:27600.
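For illustration, the sampling step can be sketched with a minimal random-walk Metropolis-Hastings sampler in Python (the analysis itself used the Matlab ‘yamm’ implementation; the step size, burn-in length, and target density are placeholders):

```python
import numpy as np

def metropolis(log_post, x0, step, n_burn, n_keep, seed=0):
    """Minimal random-walk Metropolis-Hastings sampler: propose a
    Gaussian step, accept with probability min(1, posterior ratio),
    discard the burn-in iterations and return the remaining chain."""
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    lp = log_post(x)
    chain = []
    for i in range(n_burn + n_keep):
        prop = x + rng.normal(0.0, step, size=x.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
            x, lp = prop, lp_prop
        if i >= n_burn:                          # keep only post-burn-in samples
            chain.append(x.copy())
    return np.array(chain)
```

A histogram of `chain[:, k]` then approximates the marginal posterior of parameter $k$, and a 95% credible upper limit is simply `np.quantile(chain[:, k], 0.95)`.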
The probability density distributions for the scaling and standard deviation parameters are shown in Figure \[fig:final\_posteriors\]. The scaling posterior distribution indicates a mean value of with standard deviation . The posterior distribution for the beam smearing parameter indicates a range of possible values between and . The measured cavity length signals as well as the 95% upper limit and mean WGM coupling level predicted by the analysis are shown in Figure \[fig:final\_result\]. The phase discrepancy between the model and the measurements, as witnessed in this figure most profoundly for the spot positions around , is thought to be an artefact from the modelling of the beam smearing effect. The residual test mass motion that motivated the inclusion in the model of beam smearing may have contained some non-Gaussian behaviour. The upper limit on the predicted coupling level, 1:17000, represents a significant improvement over previously measured grating designs such as the order Littrow grating measured in [@Barr2011], where the coupling factor was of order 1:100. ![Posterior probability density distribution of WGM coupling levels (in units of meters longitudinal per metre transverse) yielded by statistical analysis of the data. The red shaded region shows the coupling levels falling within the most probable 95% of the distribution.[]{data-label="fig:final_result_prob"}](final_prob_3.pdf){width="\columnwidth"} ![Posterior probability density distribution of other parameters used in the analysis: scaling applied to the model’s predicted longitudinal signal (left plot) and the standard deviation assumed for the Gaussian distribution used to model beam smearing (right plot). Both distributions lie well within their prior ranges (see Table \[tab:priors\]).[]{data-label="fig:final_posteriors"}](final_posteriors.pdf){width="\columnwidth"} ![Measurements and simulations of the cavity length signal for spot positions with respect to the ETM’s centre of yaw. 
The calibrated cavity length change per radian (vertical axis) from the measurements is shown (blue stars) alongside the model’s simulated cavity length changes per radian for the mean (red), 95% upper limit (green) and zero (black) WGM coupling levels. The simulated plots use a scaling factor of and a beam smearing standard deviation of .\ Error bars are shown on the measured spot positions corresponding to their uncertainty. The errors in cavity length change are obtained from the noise floor surrounding each measurement. The noise floors were approximately constant for all measurements, with mean value . Phase error bars are visible for the central values. The errors on each phase measurement, from left to right, are: +/-, +/-, +/-, +/-, +/-, +/-, +/-, +/-, +/-, +/-, +/- and +/- degrees.[]{data-label="fig:final_result"}](final.pdf){width="\columnwidth"}

Acknowledgements
----------------

The authors would like to thank members of the LIGO Scientific Collaboration for fruitful discussions. The Glasgow authors are grateful for the support from the Science and Technology Facilities Council (STFC) under grant number ST/L000946/1. The Jena authors are grateful for the support from the Deutsche Forschungsgemeinschaft under project Sonderforschungsbereich Transregio 7. [^1]: *“Yet Another Matlab MCMC code”* by Matthew Pitkin. Available as of time of writing at <https://github.com/mattpitkin/yamm>.
---
address:
- CERN
- Northwestern University
- University of Minnesota
- Florida International University
- Florida Institute of Technology
- Fermi National Accelerator Laboratory
author:
- Jordan Damgov
- Phillip Dudero
- Richard Kellogg
- Luis Lebolo
- Jeremiah Mans
- Mayda Velasco
- Igor Vodopiyanov
bibliography:
- 'auto\_generated.bib'
title: 'Performance of CMS Hadron Calorimeter Timing and Synchronization using Test Beam, Cosmic Ray, and LHC Beam Data'
---

Introduction
============

The primary goal of the Compact Muon Solenoid (CMS) experiment [@cite:jinstcms] is to explore particle physics at the TeV energy scale exploiting the proton-proton collisions delivered by the Large Hadron Collider (LHC) [@cite:jinstlhc]. The central feature of the CMS apparatus is a superconducting solenoid, of 6 m internal diameter, providing a field of 3.8 T. Within the field volume are the silicon pixel and strip tracker, the crystal electromagnetic calorimeter (ECAL) and the brass/scintillator hadron calorimeter (HCAL). Muons are measured in gas-ionization detectors embedded in the steel return yoke. In addition to the barrel and endcap detectors, CMS has extensive forward calorimetry. The primary purpose of the HCAL is the measurement of hadronic energy from collisions in CMS. In addition to the energy measurement, the HCAL is also able to perform a precise time measurement for each energy deposit. Precise time measurements are valuable for excluding calorimeter noise and energy deposits from beam halo and cosmic ray muons. Time information can also be valuable for identifying new physics signals such as long-lived particle decays and slow high-mass charged particles [@cite:cmsHSCPnote]. This paper is organized as follows. Section \[sec:HCALdesc\] reviews the pertinent details of the HCAL construction and their fundamental impact on timing resolution.
Section \[sec:reco\] discusses the method used to extract a time value from the digitized HCAL signal. In Section \[sec:tbandsplash\], the validation of the method is presented based on measurements in test beam and initial beam operations of the LHC in September 2008. Section \[sec:suppression\] details the performance of timing filters in the suppression of non-collision-based backgrounds and the effect these timing filters have on simulated physics events. CMS Hadron Calorimeter Description {#sec:HCALdesc} ================================== A detailed description of the HCAL can be found elsewhere [@cite:jinstcms]. Briefly, the HCAL consists of a set of sampling calorimeters. The barrel [@cite:epjcHB] and endcap [@cite:jinstcmsHE] calorimeters utilize alternating layers of brass as absorber and plastic scintillator as active material. The scintillation light is converted by wavelength-shifting fibers embedded in the scintillator and channeled to hybrid photodiode detectors via clear fibers. The outer calorimeter [@cite:epjcHO] utilizes the CMS magnet coil/cryostat and the steel of the magnet return yoke as its absorber, and uses the same active material and readout system as the barrel and endcap calorimeters. The forward calorimeter is based on Cherenkov light production in quartz fibers and is not discussed in this paper, due to its different signal time structure. The HCAL is segmented into individual calorimeter cells along three coordinates, $\eta$, $\phi$, and depth. The $\phi$ coordinate is the azimuthal angle and $\eta$ is the pseudorapidity. The depth is an integer coordinate that enumerates the segmentation longitudinally, along the direction from the center of the nominal interaction region. The layout of the barrel, endcap and outer calorimeter cells is illustrated in Fig. \[fig:HCALdiagram\]. Most cells include several scintillator layers; for example, in most of the barrel all 17 scintillator layers are combined into a single depth segment. 
Calorimetric measurements are acquired using the HCAL readout electronics, shown schematically in Fig. \[fig:HCALreadout\]. Each electronics channel collects and processes the signal of a single cell. One calorimetric measurement is acquired with each LHC clock tick (25 ns) from each cell in the HCAL. This defines a “time sample”. The HCAL front-end electronics does not sample the signal instantaneously; rather, the electric current collected from the photodetectors is integrated over each clock period and then sampled. As a consequence, the sample clock is most commonly termed the “integration clock”. The integration clock can be delayed with respect to the LHC clock on a per-channel basis by programmable settings, referred to as sampling delay settings, that have a resolution of 1 ns. The purpose of these settings is to compensate for channel-dependent timing variations at the hardware level, and to permit the energy estimate from each LHC crossing to be reconstructed consistently for use in the Level-1 trigger for all pulse amplitudes and for all channels. They also provide an initial coarse timing calibration. Sources of timing variation and uncertainty {#subsec:sources} ------------------------------------------- There are two dominant sources of channel-dependent timing variation: the different time-of-flight of particles from the interaction region to each HCAL cell and the different signal propagation times through optical fibers of different lengths. Within the barrel, the first effect tends to skew reconstructed times later for higher $\eta$. The second effect, because of the location of the front-end electronics, tends to skew reconstructed times earlier for higher $\eta$, and this is the effect that dominates. The combination of these two effects induces a timing dependence on $\eta$, with a spread in each half-barrel of about 15 ns. Smaller variations as a function of $\phi$ are induced by clock distribution differences and other effects. 
By applying proper sampling delay settings, this spread can be reduced substantially. In addition, the reconstruction software utilizes a set of per-channel calibration constants to synchronize the mean timing of energy measurements to less than 1 ns. Fiber lengths also differ within each calorimeter cell along the radial coordinate, but in this case there are no means available to compensate for these differences. Since the signals from all the scintillator layers comprising one cell are optically summed, and the optical path lengths are not equalized across the layers, the resulting signal is smeared in time. For hadrons, which exhibit large shower-to-shower fluctuations in longitudinal development, the optical summing of layers imposes a limit on the timing resolution, estimated to be approximately 1 ns. For signals with uniform energy deposit in each layer (such as those arising from the beam-collimator interactions described later), the resolution is not limited by shower-development fluctuations and can be significantly smaller than 1 ns. HCAL Time Reconstruction {#sec:reco} ======================== When the Level-1 trigger system [@cite:craftL1trig] identifies an event of potential physics interest, a set of 10 consecutive time samples per channel containing the triggered bunch-crossing is collected and sent to the high-level trigger software. The capability of the HCAL system to reconstruct the arrival time of the signal more precisely than the sample clock period derives from the spread of the HCAL pulse shape over three to four time-integrated samples (Fig. \[fig:HCALpulse\]). 
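The integrate-then-sample behaviour can be illustrated with a toy pulse shape (purely schematic, not the measured HCAL pulse): integrating it over consecutive 25 ns clock periods spreads the signal over a few samples, which is what makes sub-period time reconstruction possible.

```python
import numpy as np

def integrate_samples(pulse, t0, n_samples=10, period=25.0, dt=0.1):
    """Toy model of the HCAL integration clock: integrate a continuous
    pulse shape arriving at time t0 over consecutive clock periods."""
    out = []
    for k in range(n_samples):
        t = np.arange(k * period, (k + 1) * period, dt)
        out.append(np.sum(pulse(t - t0)) * dt)  # left Riemann sum
    return np.array(out)

# Schematic pulse: zero before arrival, fast rise, exponential fall.
pulse = lambda t: np.where(t >= 0, (t / 10.0) * np.exp(-t / 10.0), 0.0)
```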
The time reconstruction software calculates a first order time estimate from a center-of-gravity technique using the three samples centered on the peak, $$\label{eq:wpkbin} \textrm{Weighted peak bin} = \bigg[\frac{(p-1)A_{p-1} + pA_p + (p+1)A_{p+1}}{A_{p-1} + A_p + A_{p+1}}\bigg]\times C\quad,$$ where $A_i$ represents the amplitude of an arbitrary time sample $i$, and $p$ is the value of $i$ such that $A_p$ is maximum over the set of samples. In the case of multiple samples with the same amplitude, the earliest one is picked. The constant $C$ is an amplitude-independent normalization constant that rescales the first order estimate to a range from zero to one. The weighted peak bin is then used to determine a second order correction (Fig. \[fig:corfun\]) that compensates for the asymmetry of the pulse shape, yielding the phase of the signal within the peak time sample. An additional additive correction determined in test bench measurements [@cite:timeslew], $$\Delta_\mathrm{slew} = (3.59\;\mathrm{ns}) \log_{10} \left[ \frac{1\;\mathrm{TeV}}{E} \right]\quad,$$ compensates for an electronics effect that delays the measured time for signals with pulse energy $E$ less than 1 TeV. The maximum value of $\Delta_\mathrm{slew}$ is taken to be 10 ns. This algorithm yields accurate time measurements except when consecutive bunch crossings yield energy measurements of similar amplitude within the same HCAL cell. Such events can be identified by their anomalous pulse shape, and the time measurement can be marked as having poor resolution.

Detector Timing/Synchronization Commissioning {#sec:tbandsplash}
=============================================

This section describes the performance characterization and commissioning results for HCAL timing and synchronization.
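The weighted-peak-bin estimate and slew correction described above can be transcribed as the following sketch (illustrative Python, not the CMS reconstruction code; the second-order correction-function lookup of Fig. \[fig:corfun\] is omitted, and the peak is assumed not to be an edge sample):

```python
import numpy as np

def weighted_peak_bin(samples, C=1.0):
    """Centre-of-gravity first-order time estimate from the three
    samples centred on the peak. np.argmax returns the earliest
    maximum, matching the tie-breaking rule in the text."""
    p = int(np.argmax(samples))
    a0, a1, a2 = samples[p - 1], samples[p], samples[p + 1]
    return C * ((p - 1) * a0 + p * a1 + (p + 1) * a2) / (a0 + a1 + a2)

def slew_correction_ns(energy_tev):
    """Additive time-slew correction in ns, capped at 10 ns."""
    return min(3.59 * np.log10(1.0 / energy_tev), 10.0)
```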
Beam tests at the CERN SPS H2 area (hereafter referred to as “test beam”) produced key results [@cite:epjcHBTB] that define the timing performance of the HCAL barrel and endcap; these results are presented first. The results from the LHC 2008 beam commissioning data for the barrel, endcap and outer calorimeters are then described; these results validate and extend the initial synchronization calibration constants determined from beam tests.

H2 beam test data
-----------------

For test beams in 2004 and 2006, a section of the HCAL barrel was mounted on a table that was adjustable in $\eta$ and $\phi$ coordinates. The unit under test was exposed to pion beams with energies in the 3–300 range. The delivered beam was asynchronous with the 40 MHz integration clock, so all relative sampling phases could be investigated. From these tests the following important timing benchmarks for the HCAL were accurately measured.

The HCAL pulse shape was successfully reconstructed from data. Since the pulse is integrated before it is digitized, this required a numerical differentiation technique that utilized the phase between particle arrival and the integration clock, as measured by a time-to-digital converter (TDC) running 32 times faster than the sample clock rate. This allowed the separation of the data sample into subsamples $S_n$, with $n=$1..32 ordered by TDC phase. These subsamples represent the integration of the pulse $P(t)$ shifted successively by the TDC bit resolution ($\Delta t = 0.78$ ns).
Then, $$\label{eq:recopulse} \langle I_m \rangle_{S_{n+1}}-\langle I_m \rangle_{S_n} \simeq \int_\tau^{\tau+\Delta t}P(t)dt \simeq P_\mathrm{avg}(\tau)\Delta t\quad, ~~~~~~P_\mathrm{avg}(\tau)\simeq\frac{\Delta \langle I \rangle}{\Delta t}\bigg\vert_\tau\quad,$$ where $\langle I_m \rangle_{S_n}$ is the average integrated amplitude for time bin $m$ of data subsample $S_n$, $\tau=m\Delta t$ is the phase within that time bin, and $P_\mathrm{avg}(\tau)$ is the average value of $P(t)$ over $\Delta t$ at $t=\tau$. Equation \[eq:recopulse\] is valid to reconstruct the first 25 ns of the pulse shape, for which $m=0$. For each remaining 25 ns interval, bin $m$ is incremented and a second integral term appears in the equation that corresponds to the $\Delta t$-sized portion of the pulse “leaving” the time bin. In this manner, the whole pulse was reconstructed to approximately 0.8 ns resolution, as is shown in Fig. \[fig:recopulses\]. The functional form matched to these results is used in the simulation of the CMS detector response, and also to determine the second-order time reconstruction correction function (Fig. \[fig:corfun\]). To measure both the channel-to-channel variations and the hadronic timing resolution limit of the HCAL, a 150 pion beam was directed at every cell in the unit under test. From these data the variation of the timing of signals as a function of $\eta$ between HCAL cells in the barrel was mapped. As the test setup mimicked the geometry and fiber lengths of the final barrel detector, a set of final sampling delay settings could be derived. Section \[sec:splash\] discusses the performance of these settings with LHC beam data. The HCAL timing resolution was determined by removing the channel-to-channel variations and combining the data. A Gaussian function was then fitted to the distribution of the leading energy deposit from each event, producing a best-fit width of $\sigma=1.2$ ns (Fig. \[fig:Tspread\]). 
This result is consistent with the expected effect of the uncorrectable sources of timing spread described in Sec. \[subsec:sources\], averaged over many hadronic shower events. The measured resolution is consistent across the full set of channels studied. The variation of HCAL timing resolution as a function of energy was also measured with pion beam data. In this measurement the HCAL unit under test was paired with its corresponding section of the ECAL in front, as in the CMS detector. Events in two categories were considered: those with no interaction in the ECAL ($<$1 energy deposited) and those with a significant interaction ($>$5 energy deposited). The results indicate that the presence of the ECAL does not substantially alter the timing resolution of the HCAL as a function of energy (Fig. \[fig:timeResEnvelopesAndFilter\](a)). CMS software was subsequently adjusted by smearing the times of simulated HCAL energy deposits so that simulated data of the test beam configuration exhibited the same energy-dependent time resolution (Fig. \[fig:timeResEnvelopesAndFilter\](b)). ![(a) Timing resolution as a function of energy measured during test beam runs in 2006, showing the consistency of time reconstruction for particles that begin showering in the ECAL (lines) and those that do not (areas). (b) Timing resolution as a function of energy of reconstructed deposits generated from CMS simulations. The lines marked “Time Filter” indicate the boundaries of a timing window discussed later and used to accept or reject deposits for distinguishing signal from background.[]{data-label="fig:timeResEnvelopesAndFilter"}](fig/tb06TimeResEnvelope "fig:"){width="48.00000%"} ![(a) Timing resolution as a function of energy measured during test beam runs in 2006, showing the consistency of time reconstruction for particles that begin showering in the ECAL (lines) and those that do not (areas). 
(b) Timing resolution as a function of energy of reconstructed deposits generated from CMS simulations. The lines marked “Time Filter” indicate the boundaries of a timing window discussed later and used to accept or reject deposits for distinguishing signal from background.[]{data-label="fig:timeResEnvelopesAndFilter"}](fig/MCtimeResEnvelopeAndFilter "fig:"){width="48.00000%"} LHC beam data {#sec:splash} ------------- Data from LHC beam commissioning were used to validate the barrel sampling delay settings derived from test beam data, and to derive settings to compensate for the $\eta$ dependence of timing in the endcap and outer calorimeters. Data from LHC beam operations were recorded for the first time on September 10, 2008. In preparation for the arrival of LHC beam at CMS, the $\eta$-dependent sampling delays measured from test beam were loaded into the barrel front-end electronics. These delay values had been calculated to synchronize channels for data from collisions. Delay values for endcap and outer sectors were set to zero, since no timing measurements for these sectors were available at the time. The LHC beam delivered to CMS consisted of a single 450 proton bunch circulating in the ring, first in one direction and then in the other. The LHC beam was also steered into collimators located 150 m upstream in either direction of the detector interaction point. Such events are referred to as “beam splash” events. Because of the large amount of material in the collimator and along the path of the secondaries, only muons were able to penetrate the entire CMS detector and form the signal in the calorimeter system. The millions of muons produced from the dumping of the beam passed through the entire detector, depositing large signals in every cell of the HCAL, with equivalent energy ranging between hundreds of  to a few . A schematic representation of the geometry of the “beam splash” setup is shown in Fig. \[fig:JordansSplashDiag\]. 
The signal from “beam splash” muons has a well-defined time with respect to the arrival of the LHC bunch and to the LHC clock. The time-of-flight of the muons from the collimators to the geometrical centers of each cell was calculated and used to predict the expected timing versus $\eta$ for all HCAL cells. The data were averaged over all $\phi$ values at each $\eta$ value to obtain an HCAL $\eta$ timing profile. The resulting distribution of the differences between the measured and predicted times is shown in Fig. \[fig:uncorrectedVsCorrectedSplash\](a). The timing measurements for the barrel region are distributed around zero within $\pm1$ ns, demonstrating the correctness of the delay settings calculated from beam tests. The timing for the endcap before any phase corrections exhibits an average time difference that is 10 ns later than the barrel. The x-shaped pattern and the structure at $|\eta|>1.4$ in the outer calorimeter are understood to originate from the pattern of optical fibers in the scintillator trays and the positioning of readout electronics. ![Difference between measured and predicted time as a function of $\eta$ from LHC “beam splash” events. (a) Per-channel delay settings that compensate for timing variations along $\eta$ were loaded into the front-end electronics for the barrel (depths 1–2, $|\eta|<1.4$) but not for the endcap (depths 1–3, $|\eta|>1.4$) or outer (depth 4) calorimeters. Events from both $+z$ and $-z$ directions are included. (b) Compensating sample delay settings were loaded for all three calorimeters, barrel, endcap and outer. Only events from $+z$ direction were available and are included.[]{data-label="fig:uncorrectedVsCorrectedSplash"}](fig/uncorrectedSplashSept10 "fig:"){width="48.00000%"} ![Difference between measured and predicted time as a function of $\eta$ from LHC “beam splash” events. 
(a) Per-channel delay settings that compensate for timing variations along $\eta$ were loaded into the front-end electronics for the barrel (depths 1–2, $|\eta|<1.4$) but not for the endcap (depths 1–3, $|\eta|>1.4$) or outer (depth 4) calorimeters. Events from both $+z$ and $-z$ directions are included. (b) Compensating sample delay settings were loaded for all three calorimeters, barrel, endcap and outer. Only events from $+z$ direction were available and are included.[]{data-label="fig:uncorrectedVsCorrectedSplash"}](fig/correctedSplashSept18 "fig:"){width="48.00000%"} From these data, $\eta$-dependent sampling delays were derived for the endcap and outer calorimeters. The delays were loaded into the front-end electronics, and new “beam splash” data were taken from interactions on the $\emph{+z}$ collimator on September 18, 2008. The same analysis used to derive the original delays was repeated on these data; the results are shown in Fig. \[fig:uncorrectedVsCorrectedSplash\](b). Some outer calorimeter data points are omitted from the plot due to a technical issue at the time of data-taking that has since been resolved. Figure \[fig:uncorrectedVsCorrectedSplash\](b) shows that, as a result of the sampling delay derivation from “beam splash” data, barrel and endcap channel synchronization along $\eta$ is verified to within $\pm2$ ns. Although “beam splash” event data from both directions were used to derive the settings for endcap and outer, only event data from the $\emph{+z}$ direction were available to verify these settings. For this reason Fig. \[fig:uncorrectedVsCorrectedSplash\](b) exhibits systematic deviations, particularly near the barrel-endcap boundary. For cells that are significantly slanted relative to the arrival direction of the muons, the “beam splash” will illuminate the various layers of the cell at different times. The effect is canceled when considering events from both directions. 
Additional possible systematic effects, such as an offset in timing between the positive and negative endcaps, will be studied with future “beam splash” data from both directions. The results shown in the last two sections indicate that the methods of time reconstruction and synchronization are effective both in the H2 test beam environment and with the HCAL integrated in CMS. The final offline time corrections, which will be determined with collision data when they become available, are expected to provide synchronization with a spread in the per-channel mean times of less than 1 ns. Impact of Timing in Analysis {#sec:suppression} ============================ Many interesting physics processes that the CMS experiment was designed to study or discover contain the signature of an undetected particle. Examples include the Standard Model neutrino and the lightest stable particle in several hypothetical supersymmetric particle spectra. Such particles would pass undetected through CMS, carrying some of the energy and momentum that would otherwise counterbalance the other collision products registered in the calorimeters. At hadron colliders, the initial energy and longitudinal momentum of the colliding hadronic constituents are not known, but the initial transverse momentum is known to be negligible in comparison and can be treated as zero. Therefore, the benchmark used to infer the existence of an unobserved particle in any given event is the calorimetric missing transverse energy [@cite:cmsMETnote]. The missing transverse energy (MET) is reconstructed from calorimeter towers, which are reconstructed objects containing the sum of the signals from HCAL and ECAL cells at the same $(\eta,\phi)$ coordinates. 
The MET is calculated by taking the magnitude of the vector sum of the transverse energy contributions of towers, $$\label{eq:met} MET = \sqrt{(\sum_\mathrm{towers}E_i\sin\theta_i\cos\phi_i)^2 + (\sum_\mathrm{towers}E_i\sin\theta_i\sin\phi_i)^2}\quad,$$ where $E_i$ and $(\theta_i,\phi_i)$ are the energy and angle coordinates of each tower. Multiple complications arise in the accurate measurement of MET. The energies of jets or leptons can be mismeasured due to detector miscalibration or shower fluctuations in the dead material of the detector. The MET in a collision event can also be affected by energy deposits in HCAL cells from non-collision sources. Without a separate identification of these sources the energy will be misassigned to the event of interest, inducing fake MET. Example sources include cosmic ray muons, detector noise, and beam halo. This section explores the use of HCAL timing as a potential tool to identify sources of fake MET. The CMS design ensures that the phase of collision products relative to the LHC clock is fixed to high precision. The results presented in Section \[sec:tbandsplash\] indicate that the HCAL system is capable of being synchronized to approximately 1–2 ns. Since sources of fake MET are generally uncorrelated with the collision event one can significantly suppress their contributions to MET using their reconstructed times. In the case of beam halo particles, although their arrival is correlated to the arrival of proton bunches in the beam, the exact arrival time to a given detector cell is generally distinguishable from the arrival time of particles produced in the interaction region. Impact on out-of-time background {#sec:bkgdtiming} -------------------------------- The strategy for using timing to reject backgrounds is implemented at the level of reconstructed cell measurements, rather than higher-level objects like hadronic jets or MET. 
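Equation \[eq:met\] translates directly into a short sketch; tower energies and $(\theta, \phi)$ angles are assumed here to be plain arrays:

```python
import numpy as np

def met(energies, thetas, phis):
    """Missing transverse energy: magnitude of the vector sum of the
    tower transverse-energy components."""
    e, th, ph = (np.asarray(a, dtype=float) for a in (energies, thetas, phis))
    et = e * np.sin(th)  # transverse energy per tower
    return float(np.hypot(np.sum(et * np.cos(ph)), np.sum(et * np.sin(ph))))
```

Two back-to-back towers of equal transverse energy cancel, giving zero MET, while a single unbalanced deposit contributes its full transverse energy.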
It is expected that during data taking triggered events will contain collision data in around 1000 HCAL cells, over which additional out-of-time energy deposits may be superimposed. These out-of-time deposits can be individually removed from the calculation of MET rather than rejecting the entire event. The application of such a technique is appropriate for the analysis of prompt physics signals, while an analysis focused on long-lived particles might perform the reverse selection, preferring energy deposits that are out of time. The sources of irreducible timing uncertainty described in Section 2 will limit the tightness of the timing requirements that can be applied. The degradation of time resolution for low energy particles requires that any time filtering must take this energy dependence into account in order to avoid biasing the measurement. The timing filter for individual cell measurements was defined according to the resolution measurement shown in Fig. \[fig:timeResEnvelopesAndFilter\]. The filter, shown in Fig. \[fig:timeResEnvelopesAndFilter\](b), is made wider than the 95% confidence level by a factor of two, to allow for the additional uncertainty associated with the spread of primary vertex locations, particularly for the endcap calorimeters. The effect of the primary vertex distribution is expected in collision data and included in the CMS simulation but not included in Fig. \[fig:timeResEnvelopesAndFilter\](b), which is intended to replicate the results of test beam data. The larger envelope is also considered a conservative choice for startup when the detector is expected to be less well calibrated, and other sources of irreducible uncertainty will not yet have been identified. For hits with energies below 4, no requirement is applied except that the peak amplitude should be in the triggered data sample. 
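A per-cell version of this filter logic might look as follows; the `window_ns` parametrisation is a placeholder for the energy-dependent envelope of Fig. \[fig:timeResEnvelopesAndFilter\](b), and the low-energy floor of 4 (units elided in the text above, presumably GeV) is taken from the text:

```python
def passes_time_filter(energy, time_ns, window_ns, floor=4.0):
    """Keep a cell measurement if its reconstructed time lies inside
    the energy-dependent window; below the low-energy floor no timing
    requirement is applied (the peak-sample check is omitted here)."""
    if energy < floor:
        return True
    t_lo, t_hi = window_ns(energy)
    return t_lo <= time_ns <= t_hi
```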
Using the strategy outlined above, the timing filter was then applied to two datasets: beam background data collected during LHC beam commissioning in September 2008, and cosmic ray muon and CMS detector noise background data collected in October and November 2008, during the month-long data-taking exercise known as the Cosmic Run At Four Tesla (CRAFT) [@cite:CRAFTGeneral]. For each event, the MET was calculated twice for comparison. One calculation of MET used all energy deposits (referred to as “unfiltered”), while the second did not include energy deposits with a measured time outside the window (referred to as “time-filtered”). The unfiltered measurements include energy depositions in bunch crossings other than the triggered one, as a consequence of the triggering and reconstruction algorithms used during these periods. The comparison between the unfiltered MET and the time-filtered MET for HCAL-triggered events, cosmic ray events, and beam events is shown in Fig. \[fig:bkgdMETplots\]. The HCAL-triggered events were collected using a Level-1 trigger in which a single calorimetric tower energy exceeds a 5 GeV threshold. Therefore, this sample includes photo-detector and other instrumental noise events [@cite:anomevents]. Events with a Level-1 muon trigger were removed from the sample. The cosmic ray muon events were collected during CRAFT with a Level-1 trigger in which a single muon is identified by the CMS muon stations. The beam events were taken with a single circulating 450 GeV proton bunch on September 18, 2008. All events from this sample are included in the plots regardless of trigger source, which could be beam pickup, coincidence of scintillator or forward calorimeter energy deposits, or any of several muon triggers. The beam sample thus includes beam halo and beam-gas events as well as cosmic ray muons and accidental overlaps with calorimeter noise events.
In all cases, the physical MET value would be zero, so a net migration of events from high MET to low MET is expected when comparing the unfiltered and time-filtered MET distributions. Figure \[fig:bkgdMETplots\] also shows the fraction of events that survive a minimum MET criterion as a function of the threshold value. This was done for both the time-filtered and unfiltered distributions for each background type. Figure \[fig:bkgdMETplots\], therefore, indicates that a reduction by a factor of 5–10 in the rate of events with fake MET can be achieved with time filtering for all three sources of background studied. This rejection factor is consistent with the ratio of the filter window width to the width of the four time samples considered in the reconstruction of an HCAL measurement. The fake MET production for all three samples is due predominantly to a small number of high energy measurements, against which the filter is most effective. The filter can be tuned to enhance further the background rejection performance. More sophisticated algorithms could be employed to filter clusters of cells with mixed in-time and out-of-time signals if the members of the cluster share common characteristics that identify them as background. For instance, beam halo muons may create a cluster of deposits in consecutive cells along $\eta$ for the same $\phi$, most of which will be out-of-time. This pattern, once identified, could allow the removal of nominally in-time background as well. Impact on physics processes --------------------------- To estimate the impact of this technique on signals that have not yet been produced in the CMS detector, studies have been performed on two simulated processes, one that contains MET at the event generator level and a second that does not. An interesting example of the first case is the real MET induced by the hypothetical weakly-interacting supersymmetric (SUSY) [@cite:susy] particle called the neutralino. 
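The survival curves shown in Fig. \[fig:bkgdMETplots\] for the background samples above are simple cumulative fractions, which can be sketched as below. The MET values here are synthetic numbers chosen for illustration, not CMS data.

```python
import numpy as np

def survival_fraction(met_values, threshold):
    """Fraction of events whose MET exceeds the given threshold."""
    met = np.asarray(met_values, dtype=float)
    return float((met > threshold).mean())

# Synthetic MET values (not CMS data): the unfiltered sample has a
# high-MET tail from a few large out-of-time deposits; time filtering
# removes most of it.
unfiltered = [5.0, 8.0, 40.0, 55.0, 60.0, 75.0, 90.0, 120.0]
filtered = [5.0, 8.0, 4.0, 6.0, 3.0, 7.0, 9.0, 35.0]

r_unf = survival_fraction(unfiltered, 30.0)   # 6 of 8 events survive
r_fil = survival_fraction(filtered, 30.0)     # 1 of 8 events survive
```

In this invented example the event rate above a MET threshold of 30 drops by a factor of six after filtering, of the same order as the factor of 5–10 quoted for the real samples.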
By applying an ill-chosen time filter, one may underestimate the MET induced by the neutralino by filtering out calorimetric deposits associated with the recoiling jet. A typical process without significant MET at generator level is a hadronic dijet event, commonly referred to as a QCD dijet event. In this case one may also infer a (fake) MET in an event in which no MET is expected, causing a tail at the high end of the MET distribution. Therefore, an ill-chosen time filter could degrade the discovery potential of a neutralino search by reducing the measured MET of the neutralino signal while simultaneously increasing the (fake) MET of a major background. To assess whether the chosen timing cuts induce a bias on processes with significant MET, the timing filter was applied to a sample of simulated events containing neutralinos and generated under the assumptions of $\sqrt{s}=10\TeV$ and no overlapping events ($L=10^{30}~\mathrm{cm}^{-2}\mathrm{s}^{-1}$). The selected SUSY sample was generated using ISASUSY [@cite:isasusy] and Pythia 6.4 [@cite:pythia] at a “low mass” point referred to as “LM1” [@cite:susylm1] in the phase space of one favored model of SUSY known as mSUGRA [@cite:susy]. The early discovery or exclusion of this point is potentially within reach of the LHC experiments given the integrated luminosity expected from the first physics run. The left plots in Fig. \[fig:signalMETplots\] show that the same energy-dependent timing window used in Section \[sec:bkgdtiming\] preserves MET distributions for simulated SUSY events with high efficiency. The impact of the timing filter for QCD dijet events was also studied. The QCD dijet process was generated by Pythia for a range of hard scattering scales $15\GeVc < \hat{p}_T < 3\TeVc$ with the same center-of-mass energy and luminosity assumptions as the SUSY sample.
While these events have no inherent MET, mismeasurements and dead material will result in the reconstruction of some MET in these events, particularly for events with large total transverse energy. The results for the QCD dijet sample are shown in the right column of Fig. \[fig:signalMETplots\] and indicate that the MET distribution is maintained by the filtering process. It is important to note that this filter is not intended to remove QCD backgrounds, which are inherently in-time. The point of the comparison is to verify that an increase in MET is not inadvertently created by the filtering process. In order to apply the time filter to the analysis of physics processes, the actual performance of the filter will have to be rederived using collision data and processes such as $W\rightarrow\ell\nu$ events containing jets. The generation of fake MET will be measured using processes such as $\gamma$ + jet, where no MET is expected and the recoil against the photon provides good control of possible jet energy mismeasurement. The efficiency of background rejection can be further studied using dijet event samples containing a cosmic ray or beam halo muon as identified by the muon system. Summary ======= This paper has presented the technique used to determine the arrival time of energy deposits in the CMS HCAL to less than 25 ns, and the use of this technique to suppress out-of-time backgrounds. The performance of the technique and of the synchronization system of the HCAL have been demonstrated using data from test beam operations as well as “beam splash” data from the LHC in September 2008. The ultimate resolution of the technique for hadron showers was measured to be 1.2 ns using test beam data, and the time spread of HCAL channels after hardware alignment in $\eta$ was measured to be $\pm$2 ns, with an expected potential offline synchronization of less than 1 ns.
Using data from CRAFT and beam operations in 2008, the use of a timing filter to suppress fake missing transverse energy was studied. This technique was effective in reducing the rate of MET produced by cosmic ray muons, detector noise, and beam background by a factor of 5–10. At the same time, the integrity of simulated signal MET distributions under the same filtering technique was verified. This technique and the timing results from HCAL in general are available for use in CMS analyses to suppress backgrounds or select particular signals. Acknowledgments {#acknowledgments .unnumbered} =============== We thank the technical and administrative staff at CERN and other CMS Institutes, and acknowledge support from: FMSR (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES (Croatia); RPF (Cyprus); Academy of Sciences and NICPB (Estonia); Academy of Finland, ME, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); OTKA and NKTH (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); NRF (Korea); LAS (Lithuania); CINVESTAV, CONACYT, SEP, and UASLP-FAI (Mexico); PAEC (Pakistan); SCSR (Poland); FCT (Portugal); JINR (Armenia, Belarus, Georgia, Ukraine, Uzbekistan); MST and MAE (Russia); MSTDS (Serbia); MICINN and CPAN (Spain); Swiss Funding Agencies (Switzerland); NSC (Taipei); TUBITAK and TAEK (Turkey); STFC (United Kingdom); DOE and NSF (USA). Individuals have received support from the Marie-Curie IEF program (European Union); the Leventis Foundation; the A. P. Sloan Foundation; and the Alexander von Humboldt Foundation. The CMS Collaboration \[app:collab\] ====================================
--- abstract: | With the ongoing construction of several large and medium scale laser interferometric gravitational wave antennas around the globe, the problem of the detection of gravitational waves has acquired great impetus. Since gravitational wave signals from astrophysical sources are expected to be weak, despite the state of the art technology being employed, the development of optimal signal extraction techniques and the consequent accurate determination of the parameters of the signal is of major importance. Coalescing binary systems are one of the most promising sources of gravitational waves. One reason is that such sources are easier to model and thus one can design detection strategies tuned to such signals. Much attention has been devoted in the literature to the study of such techniques, and most of the work has revolved around matched filtering and maximum likelihood estimation. In a previous work, we carried out Monte Carlo simulations of the detection process using matched filtering for the initial LIGO/VIRGO configuration for the first post-Newtonian corrected coalescing binary waveform. We had compared the results of our simulations with available estimates, obtained from covariance matrix considerations, of the errors in the determination of the parameters. Our results showed that the covariance matrix underestimates, by over a factor of two, the actual errors in the estimation of parameters even when the signal-to-noise ratio (SNR) is as high as 10. Sources having SNR higher than 10 are expected to be few and hence this issue is of major concern. In this paper we probe the question as to why the Monte Carlo simulations give such high errors as opposed to those obtained via the covariance matrix. We present a [*computationally viable*]{} statistical model of the distribution of the maximum likelihood estimates (MLE) of the parameters.
This model reproduces the essential features of the Monte Carlo simulations, thereby explaining the large root mean square errors in the estimates obtained in numerical experiments. The chief reason for the large errors seems to be the fact that the probability distribution of the estimated parameters is multimodal. Though only the central peak (corresponding to the actual values of the parameters) is dominant, the subsidiary peaks occur ‘far’ away, thereby contributing to large variances. We suggest that the variance or the standard deviation of an estimated parameter may not provide the best measure of the error, for the kind of situation we encounter here. We therefore propose another criterion by which the MLE should be judged. In order to illustrate the model we have considered the Newtonian as well as the first post-Newtonian corrected waveform. We have assumed Gaussian noise, with a power spectrum typical of the LIGO/VIRGO type of detectors. The model we have used, however, is quite general, and robust, and will be relevant to many other parameter estimation problems. address: | Inter-University Centre for Astronomy and Astrophysics,\ Post Bag 4, Ganeshkhind, Pune 411 007, India. author: - 'R. Balasubramanian and S. V. Dhurandhar' title: 'Estimation of Parameters of Gravitational Wave Signals from Coalescing Binaries.' --- Introduction ============ Large scale laser interferometric detectors of gravitational waves, namely, the LIGO [@LIGO] and VIRGO [@VIRGO], and the medium scale detectors, GEO and TAMA, are expected to be operational by the turn of this century. Compact coalescing binary systems of black holes and/or neutron stars are relatively ‘clean’ systems to model during their inspiral and their inspiral waveform can be predicted with a fair degree of reliability. This makes them the most promising sources for broad band detectors, and in particular, the upcoming interferometric detectors cited above.
Binary systems are also valuable sources of astrophysical information as one can probe the universe up to cosmological distances. For instance, statistical analysis of several binary coalescences enables the estimation of the Hubble constant to an accuracy better than 10% [@SCH86; @Mar93; @Fin96]. Events that produce a high signal-to-noise ratio can potentially be used to observe such non-linear effects as gravitational wave tails, and to put general relativity to the test in the strongly non-linear regime [@BS95]. Due to the weak coupling of gravitational radiation with matter the signal waveform has a very low amplitude and will not stand above the detector noise. In addition to the on-going efforts to reduce the noise, and hence increase the sensitivity of the detector, a considerable amount of research activity has gone into the development of efficient and robust data analysis techniques to extract signals buried in very noisy data. For a review on gravitational waves from compact objects and their detection see Thorne [@Th95a; @Th95b]. Various data analysis schemes have been proposed for the detection of the ‘chirp’ waveform from such systems. Among them the technique of matched filtering is the most promising [@Th87; @HEL; @SCH89]. Briefly, this technique involves correlating the detector output with a set of templates, each of which is tuned to detect the signal with a particular set of parameters. In order to obtain a high value for the correlation the signal waveform should be known to a high level of accuracy. The matched filtering technique is very sensitive to the [*phase*]{} of the signal: if the template and the signal mismatch by even half a cycle, the correlation integral is drastically reduced. The fully general relativistic waveform from a coalescing binary system of stars is as yet unavailable. In the absence of such an exact solution, there have been efforts to find solutions perturbatively.
Most of the work in this area strives towards computing the waveform correct to a high degree of accuracy so that the theoretical templates based on it will obtain the maximum possible SNR. The signal is said to be detected if the maximum value of the correlation over all the parameters of the signal crosses a preassigned threshold, which has been set by the false alarm probability one is prepared to tolerate. Once the signal is detected, the maximum likelihood estimates (MLE) of the parameters of the binary are those of the template with which the maximum correlation is obtained. The errors involved in such an estimation have been worked out by several authors [@BS95; @Fin92; @FC93; @BS94; @Kr; @CF94; @KLM93; @PW95], for the case of ‘high’ SNR and for the Newtonian and post-Newtonian waveforms, using a single detector as well as a network of detectors. In [@BSD96] exhaustive Monte Carlo numerical simulations were carried out to compute the errors in the estimation of parameters and the covariances among them, for the case of the initial LIGO configuration, taking into account only the first post-Newtonian corrections and assuming circular orbits. It was found that the errors as obtained from the simulations were larger by a factor of two or more than the errors as computed via the covariance matrix at astrophysically relevant SNRs. The discrepancy disappears as the SNR increases beyond a certain value, typically 15 in the Newtonian case and 25 in the post-Newtonian case. The comparison with other, less stringent, lower bounds has also not resolved the discrepancy. Nicholson and Vecchio [@NV97] have recently computed Bayesian bounds, such as the Weiss-Weinstein and Ziv-Zakai bounds, for the case of Newtonian signals. They conclude that though these bounds are tighter than the Cramer-Rao bound, the numerical errors are still much larger than the computed lower bounds. In this paper we explain the discrepancy between the Monte Carlo simulations and the results obtained from the covariance matrix.
We demonstrate that the probability distribution of the estimated parameters cannot be obtained from ‘local’ methods such as the ones used earlier [@Fin92]. Since our main purpose in this paper is to demonstrate the validity of the statistical model, we initially use the simplified Newtonian model of the waveform. In the Newtonian case there are fewer parameters of the signal that one has to deal with, making the investigations simpler, analytically as well as computationally. We then specialize to the first post-Newtonian case. Here, although the discrepancy is larger, the problem is similar at a qualitative level. Following Finn [@Fin92] we obtain an equation which relates the estimated parameters to the actual parameters of the signal for a given noise realisation. This equation is non-linear. Finn linearises the equation and obtains the errors in terms of the covariance matrix. Here we do not linearise the equation. We find that the equation has multiple solutions for the parameters in a certain sense, and it is this multiplicity of roots, forming islands in the parameter space, which contributes significantly to the errors. Thus the problem is of a global nature and any local approximation will be inadequate in explaining the errors. Moreover, we suggest that the variance/covariance of the parameters is not a proper measure of the errors in this case. We therefore propose a new criterion by which the MLE should be judged. The paper is organized as follows. In section \[waveform\] we briefly describe the gravitational wave signal waveform and the parameters on which it depends, namely, the amplitude, the time of arrival, the phase at arrival and the ‘chirp times’, which characterize the time taken by the binary to evolve from some fiducial time to the actual merger. These parameters are found to be very convenient when we carry out Monte Carlo simulations.
It turns out that the covariance matrix is independent of these parameters and hence it is sufficient to carry out the simulations only for a particular set of parameters. Further, in this section we also describe the characteristics of the noise that we assume will be present in the detector and briefly review the matched filtering technique. In section \[MCres\] we present the results of the Monte Carlo simulations for the Newtonian case. We carry out transformations of the parameter space which bring out the chief features of the distribution of the estimated parameters. We show that the estimated parameters do not lie in a simply connected region around the signal parameters, but instead are distributed in multiple ‘islands’ in the parameter space considered. We next present a geometric representation of our statistical model. We then apply this model to the Newtonian chirp waveform, and compare the model with the Monte Carlo simulations. In section \[PNwave\] we deal with the post-Newtonian waveform. Finally, in section \[sec\_con\] we summarise our results. We propose an alternative measure for the error which performs reasonably better than the variance as a measure of the error. The Signal and the Noise {#waveform} ======================== The Chirp Signal {#chirp} ---------------- When constructing templates for on-line detection, it is sufficient to work with the so called [*restricted*]{} post-Newtonian gravitational waveform. In this approximation the post-Newtonian corrections are incorporated only in the phase of the waveform, while ignoring the corresponding corrections to the amplitude [@3mn]. Consequently, the restricted post-Newtonian waveforms only contain the dominant frequency, equal to twice the orbital frequency of the binary, computed up to the relevant order.
In the restricted post-Newtonian approximation the gravitational waves from a binary system of stars, modeled as point masses orbiting about each other in a circular orbit, induce a strain $s(t)$ at the detector given by $$s(t) = {\cal A} (\pi f(t) )^{2/3} \cos \left [\varphi (t) + \Phi\right], \label {wave}$$ where $f(t)$ is the instantaneous gravitational wave frequency, the constant $\cal A$ involves the distance to the binary, its reduced and total mass, and the antenna pattern of the detector [@Th87] and $\Phi$ is the initial phase of the wave at some fiducial time $t=t_s$. The phase of the waveform $\varphi (t)$ contains several pieces corresponding to different post-Newtonian contributions which can be schematically written as $$\varphi(t) = \varphi_0(t) + \varphi_1(t) + \varphi_{1.5}(t) + \ldots. \label {phase}$$ The evolution of the phase depends on the masses of the two components of the binary, characterised by the reduced mass, $\mu$ and the total mass, $M$ of the system. Here $\varphi_0(t)$ is the dominant Newtonian part of the phase and $\varphi_n(t)$ represents the $n$th order post-Newtonian correction to it. The Newtonian part of the phase is sensitive only to a particular combination of the masses of the two stars, frequently characterised by its ‘chirp mass’, ${\cal M} = \mu^{3/5}M^{2/5}$. 
We give below the waveform correct to the first post-Newtonian order: $$\begin{aligned} \varphi_0 (t) &=& {16 \pi f_s\tau_0 \over 5} \left [ 1 - \left ({f\over f_s}\right )^{-5/3} \right] \nonumber,\\ \varphi_1(t)&=& 4 \pi f_s\tau_1 \left [ 1 - \left ( {f\over f_s} \right )^{-1} \right ] \label {phaseN}\end{aligned}$$ where $f(t)$ is given implicitly by, $$t - t_s= \tau_0 \left [ 1 - \left ( {f \over f_s} \right )^{-8/3} \right ]+ \tau_1\left[1 - \left ( {f \over f_s} \right )^{-2} \right] \label {frequencyN},$$ where $\tau_0$ and $\tau_1$ are constants having dimensions of time given by $$\begin{aligned} \tau_0 &=& {5 \over 256} {\cal M}^{-5/3} (\pi f_s)^{-8/3},\nonumber\\ \tau_1 &=& {5 \over 192\mu (\pi f_s)^2} \left ({743\over 336} + {11\over 4} \eta \right ), \label{NCT}\end{aligned}$$ where $\eta=\mu/M$, and $f_s$ is the instantaneous gravitational wave frequency of the signal at $t=t_s.$ The time elapsed starting from an epoch when the gravitational wave frequency is $f_s$ till the epoch when it becomes infinite will be referred to as the [*chirp time*]{} of the signal. In the quadrupole approximation $\tau_0$ is the chirp time whereas it is $\tau_0+\tau_1$ for the first post-Newtonian case [@Sat94]. The Newtonian part of the phase is characterised by three parameters: (i) the [*time of arrival*]{} $t_s$ when the signal first becomes [*visible*]{} in the detector, (ii) the [*phase*]{} $\Phi$ of the signal at the time of arrival and (iii) the chirp mass. At this level of approximation two coalescing binary signals of the same chirp mass but of different sets of individual masses would be degenerate and thus exhibit exactly the same time evolution. This degeneracy is removed when post-Newtonian corrections are included. 
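Eq. (\[NCT\]) is straightforward to evaluate once the masses are converted to geometric units (seconds). The sketch below is our own; only the formulas come from the text, and the mass-to-seconds constant is the standard value of $GM_\odot/c^3$.

```python
import math

G_MSUN_OVER_C3 = 4.925491e-6   # G * (solar mass) / c^3, in seconds

def chirp_times(m1, m2, f_s=40.0):
    """Newtonian and first post-Newtonian chirp times (tau0, tau1) in s.

    m1, m2: component masses in solar masses; f_s: lower cutoff in Hz.
    Implements Eq. (NCT) with G = c = 1, masses expressed in seconds.
    """
    M = (m1 + m2) * G_MSUN_OVER_C3                 # total mass
    mu = (m1 * m2 / (m1 + m2)) * G_MSUN_OVER_C3    # reduced mass
    eta = mu / M
    chirp_mass = mu**0.6 * M**0.4                  # {\cal M} = mu^{3/5} M^{2/5}
    tau0 = (5.0 / 256.0) * chirp_mass**(-5.0 / 3.0) * (math.pi * f_s)**(-8.0 / 3.0)
    tau1 = (5.0 / (192.0 * mu * (math.pi * f_s)**2)) * (743.0 / 336.0 + 2.75 * eta)
    return tau0, tau1
```

For a 1.4–1.4 solar mass binary and $f_s = 40$ Hz this gives $\tau_0 \approx 25$ s and $\tau_1 \approx 1.4$ s, so the first post-Newtonian term lengthens the chirp by a few per cent.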
The parameters $t_s$ and $\Phi$ are [*kinematical*]{} parameters that fix the origin of the measurement of time and phase, respectively, while the Newtonian and the post-Newtonian chirp times are [*dynamical*]{} parameters in the sense that they dictate the evolution of the phase and the amplitude of the signal. The parameters $\tau_0$, $\tau_1$ and $t_s$ have the dimensions of time. We convert them to dimensionless parameters by multiplying them by $2\pi f_s$. Thus we have the parameter set, $$\bbox{\mu} \equiv \{\mu^0,\mu^1,\mu^2,\mu^3,\mu^4\} \equiv \{A,2\pi f_st_s, \Phi, 2\pi f_s\tau_0, 2\pi f_s\tau_1\}.$$ In the stationary phase approximation the Fourier transform of the restricted first post-Newtonian chirp waveform for positive frequencies is given by [@Th87; @SD91; @FC93; @CF94] $$\tilde s (f) = A {\cal N} f^{-7/6} \exp \left [i\sum_{j=1}^4\chi_j(f)\mu^j - i {\pi \over 4} \right ] \ , \label {FT}$$ where $A$ is the amplitude parameter, depending on the distance to the binary as well as the chirp mass, and $\cal N$ is a normalization constant to be fixed by means of a scalar product to be introduced later, and $$\begin{aligned} \label {eqs1} \chi_1 & = & {f\over f_s}, \nonumber\\ \chi_2 & = & -1, \nonumber\\ \chi_3 & = & {f\over f_s} -{ 8 \over 5}+ {3\over 5} \left ( {f\over f_s} \right )^{-5/3},\nonumber\\ \chi_4 & = & {f\over f_s} - 2 + {f_s\over f}.\end{aligned}$$ For $f<0$ the Fourier transform is computed using the identity $\tilde s(-f) = \tilde s^*(f)$ obeyed by real functions $s(t).$ The noise {#noise} --------- The output of a gravitational wave detector such as the LIGO will comprise data segments, each of duration $T$ seconds, uniformly sampled with a sampling interval of $\Delta$, giving the number of samples in a single data train to be $N = T/\Delta$. Each data train can be considered as an $N$-tuple ${\bf x} \equiv \{x^0,x^1,\ldots,x^{N-1}\}$, $x^a$ being the value of the output of the detector at time $a\Delta$.
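A quick numerical check of the phase functions $\chi_j$ of eqn. (\[eqs1\]) above: by construction $\chi_3$ and $\chi_4$ vanish at $f = f_s$, so at the cutoff frequency the Fourier phase depends only on the kinematical parameters. The code is our own sketch of those four expressions.

```python
def chi(f, f_s=40.0):
    """The four phase functions chi_1..chi_4 of eqn. (eqs1)."""
    x = f / f_s
    chi1 = x
    chi2 = -1.0
    chi3 = x - 8.0 / 5.0 + (3.0 / 5.0) * x**(-5.0 / 3.0)
    chi4 = x - 2.0 + 1.0 / x
    return chi1, chi2, chi3, chi4
```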
The set of all such $N$-tuples constitutes an $N$-dimensional vector space $\cal V$ where the addition of two vectors is accomplished by the addition of corresponding time samples. For later convenience we allow each sample to take complex values. A natural basis for this vector space is the [*time basis*]{} ${\bf e}_r^a = \delta^a_r$ where $r$ and $a$ are the vector and component indices respectively. Another basis which we shall use extensively is the Fourier basis. A gravitational wave signal from a coalescing binary system can be characterised by a set of parameters $\mu^a, a = 0,1, ...,m-1$ belonging to some open set of the $m$-dimensional real space $R^m$. The set of such signals ${\bf s}(t; \bbox{\mu})$ constitutes an $m$-dimensional manifold $\cal S$ which is embedded in the vector space $\cal V$. Note that Greek characters in boldface denote the full collection of parameters characterising the signal. The parameters of the binary can be regarded as coordinates on the manifold. The basic problem of signal analysis is thus to determine whether the detector output vector $\bf x$ is the sum of a signal vector and a noise vector, ${\bf x} = {\bf s} + {\bf n}$, or just the noise vector, ${\bf x} = {\bf n}$, and furthermore to identify which particular signal vector is present, among all possible ones. The latter problem is the one relevant to this paper, where we are interested in estimating the parameters and also the errors made in such an estimation. Errors in the estimation arise because the noise contaminates the data. The noise in ground based laser interferometric detectors will have, in general, both a Gaussian and a non-Gaussian component. The main sources of the Gaussian component are the shot noise due to photon counting, the thermal noise in the mirror suspensions along with the mirror itself, and seismic noise. The non-Gaussian component can be contributed by numerous sources, such as sudden strain releases in the mirror suspension wires or even lightning strikes.
It should be possible to remove most of the non-Gaussian component by using environmental monitors and looking for coincidences among detectors located at widely separated sites. It is, therefore, usually assumed that the detector noise will be a Gaussian random process. Over a time scale of hours, it can also be assumed to be stationary. The power spectral density of the Gaussian noise component rises very steeply towards the low frequency end due to seismic effects. At the high frequency end it is dominated by photon shot noise, which leads to a rise towards higher frequencies. Thus the data will have to be bandpassed with a low frequency seismic cutoff, $f_s$, and a high frequency cutoff, $f_c$. We use the power spectral density expected for the initial LIGO as given in [@LIGO]. Accordingly, we choose $f_s = 40$ Hz and $ f_c = 800$ Hz. In the absence of the signal the output will contain only noise drawn from a stochastic process which can be described by a probability distribution on the vector space $\cal V$. We assume that the noise has zero mean, that is, $\overline{n^a} = 0$, where the overbar denotes an ensemble average. Then the covariance matrix of the noise ${\cal C}^{ab}$ is defined as, $${\cal C}^{ab} = \overline{n^a n^b}.$$ If the noise is assumed to be stationary and ergodic then there exists a noise autocorrelation function $K(t)$ such that ${\cal C}^{ab} = K(|a-b|\Delta)$. In the Fourier basis it can be shown that the components of the noise vector are statistically independent [@HEL] and the covariance matrix in the Fourier basis will contain only diagonal terms whose values will be strictly positive: $\tilde{\cal C}^{aa} = \overline{\tilde n^a\tilde n^{*a}}$. This implies that the covariance matrix has strictly positive eigenvalues. The diagonal elements of this matrix $\tilde {\cal C}^{aa}$ constitute the discrete representation of the power spectrum of the noise $S_n(f)$.
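The statement that stationary noise has a diagonal covariance in the Fourier basis can be verified numerically. The illustration below uses a periodic (circulant) autocorrelation, for which diagonalization by the DFT is exact; for a finite stationary (Toeplitz) data segment it holds only asymptotically. The exponential form of $K$ is an arbitrary choice made for the demonstration.

```python
import numpy as np

N = 64

def K(lag):
    """Toy autocorrelation, made periodic on the ring of N samples."""
    d = min(lag, N - lag)
    return np.exp(-d / 8.0)

# covariance matrix C^{ab} = K(|a - b|)
C = np.array([[K(abs(a - b)) for b in range(N)] for a in range(N)])

F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary DFT matrix
C_fourier = F @ C @ F.conj().T           # covariance in the Fourier basis

off_diag = np.abs(C_fourier - np.diag(np.diag(C_fourier))).max()
diag_min = np.diag(C_fourier).real.min()
```

Here `off_diag` comes out at machine precision, and the diagonal entries, the discrete power spectrum, are strictly positive.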
Gaussian noise can be described by the distribution, $$\label{ndis} p_0 ({\bf n}) = M_n{\exp\left[ -\frac{1}{2}\sum\limits_{a,b=0}^{N-1}{[{\cal C}^{-1}]_{ab} n^a n^{b}}\right]},$$ where $M_n$ is a normalization constant given by, $$M_n = { \left[\ (2\pi)^N \det\left[{\cal C}^{ab}\right]\ \right]^{-1/2}}.$$ Equivalently, in the Fourier domain this can be written as, $$\begin{aligned} p_0({\bf n}) &=& M_n {\exp\left[ -\frac{1}{2}\sum\limits_{a,b=0}^{N-1}{[\tilde {\cal C}^{-1}]_{ab} \tilde n^a \tilde n^{b*}}\right]}\nonumber\\ &=& M_n{\exp\left[ -\frac{1}{2}\sum\limits_{a=0}^{N-1}{{\tilde n^a \tilde n^{a*}}/{\tilde {\cal C}^{aa}}}\right]},\end{aligned}$$ where in the last step we have used the diagonal property of the matrix $\tilde {\cal C}^{ab}$, which implies that $[\tilde {\cal C}^{-1}]_{aa} = 1/\tilde {\cal C}^{aa}$. In the presence of the signal ${\bf s} (\bbox{\check\mu})$ the above probability density function (pdf) gets modified, but in a very simple way since we have assumed that the noise is additive. We have, $$p_1 ({\bf x}) = p_0 ({\bf x} - {\bf s}(\bbox{\check\mu})),$$ where $p_1 ({\bf x})$ is the pdf of ${\bf x}$ when the signal ${\bf s} (\bbox{\check\mu})$ is present in the data stream. In this paper we shall assume the noise to have a power spectrum consistent with the initial LIGO instrument. We use the fit to the noise curve as given in [@CF94]: $$\label{psd} S_n(f) = S_0 \left[\left(\frac{f}{200}\right)^{-4} + 2\left(1+\left(\frac{f}{200}\right)^{2}\right)\right],$$ with $f$ in Hz. The value of $S_0$ will not concern us since what matters is only the ratio of the signal amplitude to that of the noise, in other words the SNR. We shall set the value of $S_0$ to be unity and accordingly adjust the amplitude of the signal to get the required SNR. The Matched Filter {#mf} ------------------ In the absence of any prior information it must be assumed that all the parameter values, within their respective ranges, are equally likely to occur.
In such a case, the method of maximum likelihood can be used. When the noise is a stationary Gaussian random process, the method of MLE reduces to the so called matched filtering technique. Matched filtering involves correlating the detector output with a bank of matched filters, each of which corresponds to the signal waveform for a fixed set of parameters. To this end, we define a scalar product on $\cal V$. In the continuum limit the scalar product between two vectors ${\bf x}$ and ${\bf y}$ is given by, $$\label{scal} \left\langle{\bf x},{\bf y}\right\rangle = \int_{0}^{\infty}df\,{1\over S_n(f)}\, \left( \, \tilde{x}(f)\, \tilde{y}^{\ast} (f)\, + \,\tilde{x}^{\ast}(f) \,\tilde{y}(f)\, \right) \; ,$$ where the Hermitian property of the Fourier transform of a real function has been used to restrict the domain of integration to positive frequencies, and $S_n(f)$ is the power spectral density of the noise. The Fourier domain is convenient since stationarity of the noise has been assumed. The norm of a vector ${\bf z}$ will be denoted by $\|{\bf z}\| = \left\langle{\bf z},{\bf z}\right\rangle^{1/2}$. In eqn. (\[FT\]) we had left $\cal N$ undefined. We choose the value of $\cal N$ such that $\|{\bf s}(\bbox{\mu})\| = A$. From the definition of the scalar product in eqn. (\[scal\]) and from eqn. (\[FT\]) it follows that, $$\label{normdef} \frac{1}{{\cal N}^2} = 2\int\limits_0^\infty \frac{f^{-7/3}}{S_n(f)}df.$$ The integrand in the above equation, $I(f)$, is plotted in Fig. \[fig1\]. This function peaks at around $f=135$ Hz and a major contribution to the integral comes from a region around $135$ Hz. We shall use normalized matched filters, that is, we set ${\bf h}(\mu^j) = {\bf s}(\bbox{\mu})/A$. Henceforth we shall use the symbols $a,b,\ldots$ to indicate indices whose range of values includes $0$, which as an index denotes the amplitude parameter, [*i.e. $\mu^0 = A$*]{}. Indices $i,j,\ldots$ will not include the amplitude parameter and will never take the value $0$.
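Both the normalization integral of eqn. (\[normdef\]) and the location of the peak of $I(f)$ are easy to evaluate on a grid. Following the text, we take the Cutler–Flanagan form of the noise fit, $S_n(f) = (f/f_0)^{-4} + 2[1+(f/f_0)^2]$ with $f_0 = 200$ Hz and $S_0 = 1$, and restrict the integral to the band $[f_s, f_c] = [40, 800]$ Hz rather than taking it to infinity; the grid resolution is our own choice.

```python
import numpy as np

def S_n(f, f0=200.0):
    """Initial-LIGO noise fit, with S_0 = 1 and f in Hz."""
    return (f / f0)**(-4) + 2.0 * (1.0 + (f / f0)**2)

f = np.linspace(40.0, 800.0, 100001)     # seismic to high-frequency cutoff
I = f**(-7.0 / 3.0) / S_n(f)             # integrand of eqn. (normdef)

f_peak = f[np.argmax(I)]                 # location of the maximum of I(f)

# trapezoidal rule for 1/N^2 = 2 * integral of I(f) df over the band
inv_N2 = np.sum((I[1:] + I[:-1]) * np.diff(f))
norm = 1.0 / np.sqrt(inv_N2)
```

The maximum of the integrand lands near 133 Hz on this grid, consistent with the "around 135 Hz" quoted in the text.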
The output of the matched filter with parameters $\mu^j$, called the correlation $c(\mu^j)$, is then just the scalar product given by, $$c(\mu^j) = \left\langle{\bf x},{\bf h}(\mu^j)\right\rangle. \label{cor}$$ Given the data vector ${\bf x}$, the correlation is computed for the entire feasible range of parameters, continuously over the kinematic parameters $t_s$ and $\Phi$ and for discrete values of the dynamical parameters $\tau_0$ and $\tau_1$. The filters corresponding to the discrete values of $\tau_0$ and $\tau_1$ constitute the filter bank. The maximum of $c(\mu^j)$ is computed and compared with a preassigned threshold determined from the false alarm probability. A detection is announced if the maximum of $c(\mu^j)$ crosses the threshold. The parameters $\hat \mu^j$ for which the correlation is maximised are the MLE of the parameters of the signal. However, these will in general differ from the actual signal parameters $\check \mu^j$ due to the presence of noise. The difference $\Delta \mu^j = \check \mu^j - \hat \mu^j$ is the error made in estimating the parameters. When the signal is weak compared to the noise (low SNR), the estimated parameters will in general differ by a large margin from the true ones, while in the limit of high SNR, the two will almost coincide. Thus in general $\Delta \mu^j$ is not small. Let the data vector be $${\bf x} = \check A {\bf h}(\check \mu^j) + {\bf n},$$ where $\check A$ is the amplitude of the signal and $\check \mu^j$ are the remaining parameters. We have separated out the amplitude since it occurs in the signal as just a scale factor multiplying the signal. The SNR is defined as, $$\label{SNR} \rho = \left\langle{\bf s}(\check{\bbox{\mu}}),{\bf s}(\check{\bbox{\mu}})\right\rangle^{1/2} = {\check A}.$$ From eqs. (\[FT\]) and (\[scal\]) we see that the SNR is equal to the amplitude parameter of the signal, $\check A$. When we consider an ensemble of noise realisations, $c(\mu^j)$ becomes a random variable.
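To make the procedure concrete, here is a toy illustration (our own sketch, not the actual chirp analysis: white noise, an unweighted dot product in place of the noise-weighted scalar product of eqn. (\[scal\]), and orthogonal sinusoids standing in for the chirp filter bank) of picking the template that maximises the correlation:

```python
# Toy matched filter: white noise, orthogonal sinusoidal templates
# standing in for the chirp filter bank (illustration only).
import numpy as np

rng = np.random.default_rng(0)
N = 512
t = np.arange(N) / N

# Bank of unit-norm templates, one per trial frequency (cycles per record).
freqs = np.arange(5, 25)
bank = np.array([np.sin(2 * np.pi * k * t) for k in freqs])
bank /= np.linalg.norm(bank, axis=1, keepdims=True)

true_idx = 7                     # index of the "signal" template
A = 10.0                         # signal-to-noise ratio
x = A * bank[true_idx] + rng.standard_normal(N)   # data = signal + noise

c = bank @ x                     # correlation of data with each template
best = int(np.argmax(c))
print(best == true_idx, c[best])
```

The maximum of the correlation picks out the correct template, and the value of the correlation there fluctuates about the injected SNR of $10$ by an amount of order unity, in line with the discussion above.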
For a fixed set of parameters $\mu^j$, $c(\mu^j)$ is a Gaussian random variable since it is a linear function of the noise (eqn. (\[cor\])). The process of maximization over the parameters is however a nonlinear process and hence the statistics of $c(\mu^j)$ will be non-Gaussian. For a given realization of noise, we assume that the global maximum of $c(\mu^j)$ will also be a local maximum. This assumption depends on the specific nature of the waveform and the filter bank. The equations which will be satisfied at the estimated parameters $\mu^j={\hat\mu^j}$, are, $$\begin{aligned} \label{m1} &&\frac{\partial c}{\partial\mu^i} =0\mbox{\ \ \ }\nonumber\\ &\Rightarrow& \left\langle\check A {\bf h}(\check\mu^j) + {\bf n}, \frac{\partial {\bf h}}{\partial\mu^i}\right\rangle = 0\nonumber\\ &\Rightarrow& \left\langle{\bf h}(\check\mu^j), \frac{\partial {\bf h}}{\partial\mu^i}\right\rangle = -\frac{1}{\check A}\left\langle{\bf n}, \frac{\partial {\bf h}}{\partial\mu^i}\right\rangle.\end{aligned}$$ In the limit of high SNR the mean square errors in the measurement of the parameters are characterised by the so-called covariance matrix $C^{ab}$ [@HEL] defined as, $$\begin{aligned} \label{cov} \Gamma_{ab} &=& \left\langle\frac{\partial{\bf s}}{\partial \mu^a}(\bbox{\mu}), \frac{\partial {\bf s}}{\partial \mu^b}(\bbox{\mu})\right\rangle\\ C^{ab} &=& \left[\Gamma^{-1}\right]^{ab}\end{aligned}$$ From the expression for the Fourier transform in the stationary phase approximation (eqn. (\[FT\])) and from the definition of the scalar product (eqn. (\[scal\])) we see that the phase factors in the Fourier transforms simply cancel out and only the amplitude remains. We also find that $C^{ab} \propto \check A^{-2}$. The idea now is to compare the results of the Monte Carlo simulations with those obtained via eqn. (\[cov\]). In the limit of high SNR, it is expected that the errors will agree with those obtained from the covariance matrix.
In the following sections we briefly mention the results of the Monte Carlo simulations and then analyse eqn. (\[m1\]) in detail. The goal is first to check the agreement between the two and then to understand how the additional errors arise by studying the consequences of eqn. (\[m1\]). The Newtonian Waveform {#MCres} ====================== Monte Carlo Simulations {#MCres1} ----------------------- In this section we shall restrict ourselves to the Newtonian waveform. We have the four parameters: $$\bbox{\mu} \equiv \mu^a \equiv \{\mu^0,\mu^1,\mu^2,\mu^3\} \equiv \{A, 2\pi f_st_s, \Phi, 2\pi f_s\tau_0\}.$$ Simulations were carried out for a test case of $\check\tau_0 = 5.558$ s. This corresponds to a binary comprised of a $10M_\odot$ black hole and a $1.4M_\odot$ neutron star. The waveform was cut off at a frequency of $800$ Hz and the data train was sampled at $2000$ Hz. As mentioned before, the power spectral density was chosen to be consistent with the initial LIGO interferometer. The range of $\tau_0$ for the filters was $[5.358,5.758]$ s with a spacing of $1$ ms. The simulations presented here were carried out for an SNR of $10$, [*i.e.*]{}, $\check A = 10$. We considered $12000$ realizations of noise. The covariance matrix for the Newtonian case is given below: $$\label{cov1} {\bf C}_{(\bbox{\mu})}\equiv \left[C^{ab}_{(\bbox{\mu})}\right] = \frac{1}{{\check A}^2}\left(\begin{tabular}{cccc} ${\check A}^2$&0.0&0.0&0.0\\ 0.0\ \ &222.50&322.26&-227.25\\ 0.0&322.26&469.16&-328.65\\ 0.0&-227.25&-328.65&232.26\\ \end{tabular}\right)$$ In the above, the diagonal values denote the variances in the parameters and the nondiagonal components denote the covariances between the parameters. Since the components do not depend on the parameters $\mu^j$ (see [@Sat94; @BSD96] for details), we can choose a new set of parameters in which the covariance matrix is diagonal. The transformation can be effected by means of an orthogonal transformation.
We have the new parameter set $\bbox{\nu}\equiv\{A,\nu^1,\nu^2,\nu^3\}$, related to $\bbox{\mu}$ by means of the relation $\bbox{\nu}={\bf S} \bbox{\mu}$, where $\bf S$ is an orthogonal matrix, which for our particular case is: $$\label{smat} {\bf S} = \left(\begin{tabular}{cccc} $1.0$&0.0&0.0&0.0\\ 0.0\ \ &-0.794 & 0.129 & -0.594\\ 0.0&0.359& -0.690 & -0.629\\ 0.0&-0.491 & -0.713 & 0.501\\ \end{tabular}\right)$$ In this new coordinate system the covariance matrix is ${\bf C}_{(\nu)} = {\bf S}{\bf C}_{(\mu)}{\bf S^{-1}}$, given by, $$\label{cov2} {\bf C}_{(\bbox{\nu})} = \frac{1}{{\check A}^2}\left(\begin{tabular}{cccc} ${\check A}^2$&0.0&0.0&0.0\\ 0.0\ \ &0.024&0.0&0.0\\ 0.0&0.0&1.640&0.0\\ 0.0&0.0&0.0&922.3\\ \end{tabular}\right)$$ In the high SNR limit the root mean square errors in the parameters $\nu^a$ are given by, $$\sigma_{\nu^a} = \sqrt{C^{aa}}.$$ For an SNR of $10$ the values are $\{1.0,\ 0.015,\ 0.128,\ 3.037\}$. In a previous work [@BSD96] we performed detailed Monte Carlo simulations to study the variation of the errors with the SNR. The simulations carried out in that paper were with a slightly different power spectrum, since we have used a simple fit to the noise curve in this paper. We reproduce in Fig. \[bsdfig\] the variation of errors in the parameters with the SNR as given in Figure 5 of [@BSD96] for the parameters $\tau_0$ and $t_a$, except that we use a log scale for both axes, as is conventional in the statistical literature. The continuous line represents the errors computed via the covariance matrix; in this approximation the errors are inversely proportional to the SNR. The dotted line represents the errors as obtained from the Monte Carlo simulations. The rest of this paper tries to explain the discrepancy between the two curves in this figure at low SNRs, $\rho\approx10$. The equation for the errors {#MCres2} --------------------------- The expression for the Fourier transform of the chirp in eqn.
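The quoted diagonalization can be cross-checked numerically: the eigenvalues of the nontrivial $3\times3$ block of ${\check A}^2\,{\bf C}_{(\bbox{\mu})}$ should reproduce the diagonal entries of ${\check A}^2\,{\bf C}_{(\bbox{\nu})}$, and the rows of $\bf S$ should be orthonormal to the precision quoted. A small sketch (our own check, using the matrix entries as printed above):

```python
# Check that the quoted orthogonal transformation diagonalizes the
# Newtonian covariance matrix (entries from the text; A-check^2 factored out).
import numpy as np

C_mu = np.array([[ 222.50,  322.26, -227.25],
                 [ 322.26,  469.16, -328.65],
                 [-227.25, -328.65,  232.26]])

S = np.array([[-0.794,  0.129, -0.594],
              [ 0.359, -0.690, -0.629],
              [-0.491, -0.713,  0.501]])

# Rows of S are orthonormal to the precision quoted in the text.
print(np.abs(S @ S.T - np.eye(3)).max())

# Eigenvalues of C_mu should match the diagonal entries 0.024, 1.640, 922.3,
# up to the rounding of the printed matrix entries.
evals = np.sort(np.linalg.eigvalsh(C_mu))
print(evals)
```

The agreement is limited only by the two-decimal rounding of the printed matrix entries.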
(\[FT\]) in the new coordinates retains its form, but the functions $\chi_i(f)$ are now transformed to, $$\eta_i(f) = \sum\limits_{j=1}^3\chi_j \left[S^{-1}\right]^j_i.$$ Rewriting eqn. (\[m1\]) in the new parameters we have, at $\nu^j=\hat\nu^j$, $$\label{m1nu} \kappa_i = -\frac{1}{\check A}\left\langle{\bf n}, \frac{\partial {\bf h}}{\partial\nu^i}\right\rangle = \left\langle{\bf h}(\check\nu^j), \frac{\partial {\bf h}}{\partial\nu^i}\right\rangle.$$ Using the definition of the scalar product in eqn. (\[scal\]), we have, $$\label{kap2} \kappa_i = 2 {\cal N}^2 \int_{0}^{\infty}\frac{f^{-7/3}\eta_i(f)\sin(\sum\limits_{j=1}^3 \eta_j(f)\Delta\nu^j)}{S_n(f)}df,$$ where $\Delta\nu^j = \hat\nu^j-\check\nu^j$. In the Newtonian case these constitute a set of three nonlinear equations connecting $\Delta\nu^i$ to $\kappa_i$. The quantities $\kappa_i$ are random variables. Since $\hat\nu^i$ depend upon $\bf n$, $\kappa_i$ are in general nonlinear functions of the noise. This implies that the statistics of $\kappa_i$ can in general be non-Gaussian. However, for chirp signals of the type we consider, the statistics of $\kappa_1$ and $\kappa_2$ will turn out to be Gaussian. This will be demonstrated in the next section. We define the total phase difference $\theta$ between the signal and the filter as, $$\label{def1} \theta(f,\Delta\nu^i) = \sum\limits_{j=1}^3 \eta_j(f)\Delta\nu^j,$$ which will be needed below. In the high SNR limit the errors $\Delta\nu^j$ are small and hence we can make the approximation $\sin(\theta)\simeq\theta$. This corresponds to using the covariance matrix to provide an estimate of the errors. At astrophysically interesting SNRs, $\rho\approx 10$, this assumption does not hold. It is clear that a good correlation between two waveforms is obtained when the phase difference between the two waveforms is a multiple of $2\pi$ and is roughly constant over the time for which the waveforms last. This assumption is found to be true in the simulations.
Even though there are far more outlying points than predicted by the covariance matrix, the phase difference between the signal and the optimally matching template is found to be roughly constant. A typical case is illustrated in Fig. \[fig2\], in which $\theta$ is plotted as a function of frequency for $\Delta\nu^i\equiv\{1.62, -8.49, 7.3\}$. The function $\theta(f)$ is found to be close to $-4\pi$ in the region around $135$ Hz, from which the maximum contribution to the SNR is obtained. Note that the values of $\Delta\nu^1$ and $\Delta\nu^2$ are far beyond the rms errors predicted by the covariance matrix. Since the function $I(f)$ in Fig. \[fig1\] peaks at $f\approx135$ Hz, we define $\theta_m$ to be the value of $\theta$ at $f=135$ Hz, [*i.e.*]{}, $$\label{thetam} \theta_m = -3.87615\Delta\nu^1 + 0.739347\Delta\nu^2 - 0.0149334\Delta\nu^3.$$ In Fig. \[fig3\] we illustrate how $\theta_m$ is distributed across many realizations of noise. It is clear from the figure that $\theta_m$ is strongly clustered around multiples of $2\pi$. Since $\theta_m$ is a linear combination of the parameters, it is clear that at least some of the parameters will show similar clustering properties. The variable $\theta_m$ is a very convenient indicator of the rough location of the parameters in the parameter space. If the absolute value of $\theta_m$ is large then the estimated parameters are far away from the actual values of the parameters. Fig. \[fig4\] is a $\nu^1-\nu^2$ scatter plot, where each point corresponds to a realization of noise. The plot shows that the $\nu^1$ and $\nu^2$ parameters are clustered in well separated ‘islands’ in the parameter space considered. The variable $\theta_m$ computed for points in any specific island yields values close to a specific integral multiple of $2\pi$, depending upon the island we choose. Multiple solutions to eqn. (\[kap2\]) are responsible for the islands.
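The clustering near multiples of $2\pi$ can be checked for the quoted example: substituting $\Delta\nu^i\equiv\{1.62, -8.49, 7.3\}$ into eqn. (\[thetam\]) gives a value within about $0.1$ of $-4\pi$ (a one-line check of the arithmetic, using the coefficients quoted above):

```python
# Evaluate theta_m (eqn. thetam) for the example errors quoted in the text
# and compare with -4*pi.
import math

dnu = (1.62, -8.49, 7.3)
coeff = (-3.87615, 0.739347, -0.0149334)
theta_m = sum(c * d for c, d in zip(coeff, dnu))
print(theta_m, theta_m + 4 * math.pi)
```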
Since the pdf in the $\kappa_i$ space is largest at the origin, the islands correspond to $\theta(f)\simeq 2k\pi$, where $k$ is an integer. We give an explanation of this below. The phase parameter $\mu^2\equiv\Phi$ is constrained to lie in the interval $[-\pi,\pi]$. Using the relation $\mu^i=\left[S^{-1}\right]^i_j\nu^j$, we have, $$\label{phase1} \Phi\equiv\mu^2=0.129212\nu^1 - 0.689524\nu^2 - 0.712644\nu^3.$$ Since the rms error in $\nu^3$ as calculated from the covariance matrix at an SNR of $10$ is $3.04$, it is highly likely that the absolute value of the last term in eqn. (\[phase1\]) (which dominates the other terms in the same equation, since the rms errors in the other parameters are much smaller) exceeds $\pi$. This forces the parameters $\nu^1$ and $\nu^2$ to [*jump*]{} to values such that $\Phi$ remains in the range $[-\pi,\pi]$. In order to calculate the amount by which the parameters jump, we consider eqn. (\[thetam\]). Since $\theta_m$ must be a multiple of $2\pi$ in order to get a good match, each of the first two terms on the right hand side of eqn. (\[thetam\]) contributes $\pi$. This implies that $\nu^1$ jumps by $\pi/3.87615 \approx 0.81$ and $\nu^2$ jumps by $\pi/0.739347 \approx 4.25$. Since the two terms are of opposite signs, we find that $\nu^1$ increases when $\nu^2$ decreases and vice versa. We now return to eqn. (\[kap2\]). Since $\theta$ is close to an integral multiple of $2\pi$ in the frequency region of interest, we can write, $$\label{approx} \sin(\theta) \approx \left(\sum\limits_{j=1}^3\eta_j(f)\Delta\nu^j - 2k\pi\right),$$ for some integer $k$.
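A quick check of the jump arithmetic (requiring each of the first two terms of eqn. (\[thetam\]) to change by $\pi$):

```python
# Jump sizes implied by a change of pi in each of the first two
# terms of eqn. (thetam).
import math

jump_nu1 = math.pi / 3.87615   # jump in nu^1
jump_nu2 = math.pi / 0.739347  # jump in nu^2
print(round(jump_nu1, 3), round(jump_nu2, 3))
```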
We have, $$\label{kap3} \kappa_i = 2 {\cal N}^2\sum\limits_{j=1}^3\int_{0}^{\infty}\frac{f^{-7/3} \eta_i(f)\eta_j(f)\Delta\nu^j}{S_n(f)}df - 4k\pi{\cal N}^2\int_{0}^{\infty}\frac{f^{-7/3} \eta_i(f)}{S_n(f)}df.$$ Since the $\bbox{\nu}$ coordinate system is orthogonal, $$\label{ortho} \int_{0}^{\infty}\frac{f^{-7/3} \eta_i(f)\eta_j(f)}{S_n(f)}df = 0 \ \ \ \mbox{for\ \ } i\neq j,$$ and $$\label{ortho2} 2 {\cal N}^2\int_{0}^{\infty}\frac{f^{-7/3} \eta_i(f)^2}{S_n(f)}df = \frac{1}{C^{ii}_{(\nu)}}.$$ Therefore, $$\label{kap4} \kappa_i= \frac{\Delta\nu^i}{C^{ii}_{(\nu)}} - 2k\pi G_i,$$ where, $$\label{kap5} G_i = 2 {\cal N}^2\int_{0}^{\infty}\frac{f^{-7/3} \eta_i(f)}{S_n(f)}df.$$ The amount by which the parameters $\nu^1$ and $\nu^2$ jump can be calculated more accurately using eqn. (\[kap4\]). For two successive values of the integer $k$, the parameter $\nu^1$ has to jump by $2\pi G_1 C^{11}_{(\nu)}$ in order to have the same value of $\kappa_1$. This works out to be $0.812$ for the parameter $\nu^1$. Similarly, the value for the parameter $\nu^2$ turns out to be $4.332$. These values are consistent with the scatter plot in Fig. \[fig4\]. It is clear from Fig. \[fig4\] that the distribution of the parameters $\nu^1$ and $\nu^2$ will be markedly multimodal. However, the distribution of $\nu^3$ is relatively smoother. This is illustrated in Fig. \[fig5\], which is a scatter plot on the $\nu^2-\nu^3$ plane. Though there are gaps along the $\nu^2$ axis, $\nu^3$ takes all values in the range shown. The parameter $\nu^3$ is relatively well behaved, [*i.e.*]{} it does not exhibit any sudden jumps like the other parameters. The variances as obtained from the Monte Carlo simulations in the parameters $\nu^i$ are $\Sigma_{\nu^i}\equiv\{0.421, 2.26, 2.53\}$, whereas the values predicted by the covariance matrix are $\sigma_{\nu^i}\equiv \{0.0153, 0.1281, 3.037\}$.
In the case of the parameters $\nu^1$ and $\nu^2$ the observed variances are much larger than the lower bounds, due to the jumps which these parameters make. However, we notice that the observed variance in the case of $\nu^3$ is actually [*less*]{} than the lower bound. This is due to the fact that the Cramér-Rao bounds are applicable only to parameters which are allowed to vary freely. For instance, the variance of the parameter $\Phi\equiv\mu^2$ can have a maximum value of $\pi^2$ whatever the SNR. The restriction of the range of $\mu^2$ puts a constraint on the values of the parameters $\nu^j$, and this accounts for the observed error being less than the Cramér-Rao bound. In the next section we give a more quantitative model for the distribution of the parameters. Geometrical Perspective {#geom} ----------------------- In this section we use differential geometry to arrive at a statistical model for the distribution of the parameters. The model described here is quite general and is applicable to the estimation of parameters obtained by means of the maximum likelihood method, for an arbitrary signal in the presence of Gaussian noise. The set of signal vectors ${\bf s}(\bbox{\nu}) \equiv s(t;\bbox{\nu})$, where $\bbox{\nu} \equiv \{\nu^0,\ldots,\nu^{m-1}\}$ is an $m$-dimensional parameter vector, describes an $m$-dimensional manifold $\cal S$ embedded in ${\cal V}$, the vector space of all detector outputs. (See [@BSD96; @Ow96] for an introduction to the use of differential geometry in gravitational wave data analysis, and [@Am] for the application of differential geometry to statistics.) Let the output of a detector, $\bf x$, contain a signal ${\bf s}(\bbox{\check{\nu}})$. Then $\bf x = {\bf s}(\bbox{\check{\nu}}) + \bf n$, where $\bf n$ is a noise vector drawn from the noise distribution.
The distance $D(\bbox{\nu})$ between $\bf x$ and a point $ {\bf s}(\bbox{\nu})$ is given by, $$\begin{aligned} \label{dist} D(\bbox{\nu}) &=& \| \bf{x - s}(\bbox{\nu})\| = \left\langle\bf{x - s}(\bbox{\nu}), {\bf x - s}(\bbox{\nu})\right\rangle^{1/2},\\ &=& \left[\left\langle{\bf x},{\bf x}\right\rangle - 2\left\langle{\bf x},\bf{s}(\bbox{\nu})\right\rangle + \left\langle {\bf s}(\bbox{\nu}), {\bf s}(\bbox{\nu})\right\rangle\right]^{1/2}. \end{aligned}$$ The MLE of the parameters is that point $\bbox{\hat{\nu}}$ on the parameter space which minimises $D(\bbox{\nu})$. This is equivalent to maximising the correlation $c(\bbox{\nu}) = \left\langle {\bf x},{\bf s}(\bbox{\nu}) \right\rangle$ provided we keep $\left\langle {\bf s}(\bbox{\nu}), {\bf s}(\bbox{\nu})\right\rangle$ constant. In the limit of high SNR, ${\bf s}(\bbox{\hat{\nu}})$ will lie in a small region around ${\bf s}(\bbox{\check{\nu}})$ on the manifold, effectively the tangent space to the manifold at ${\bf s}(\bbox{\check{\nu}})$. In this case, the difference ${\bf s}(\bbox{\hat{\nu}}) - {\bf s}(\bbox{\check{\nu}})$ can be satisfactorily approximated as the projection of the noise vector onto the tangent space. This implies that ${\bf s}(\bbox{\hat{\nu}}) - {\bf s}(\bbox{\check{\nu}})$ is a linear function of $\bf n$. Further, if the parameters form a Cartesian system of coordinates, then they too will be linear in $\bf n$ and the distribution of the parameters can be described by a multivariate Gaussian [@Fin92]. The covariance matrix of this distribution defines a lower bound on the errors in estimation and is termed the Cramér-Rao bound.
If the global minimum of $D(\bbox{\nu})$ is also a local minimum then, at $\bbox{\nu}=\bbox{\hat\nu}$, $$\partial D(\bbox{\nu})/\partial\hat\nu^a = 0,\ \ \mbox{for}\ \ a=0,\ldots,m-1$$ which implies, $$\label{max} \left\langle{\bf s}(\bbox{\check{\nu}}) + {\bf n} - {\bf s}(\bbox{\hat{\nu}}) ,\frac{\partial {\bf s}}{\partial \nu^a}(\bbox{\hat{\nu}})\right\rangle = 0.$$ These are a set of $m$ equations, one for each parameter. We interpret these equations geometrically as follows. The equations imply that the vector ${\bf x} - {\bf s}(\bbox{\hat{\nu}})$ is orthogonal to each of the coordinate basis vectors, $\partial/\partial\nu^a$, evaluated at $\bbox{{\nu}}=\bbox{\hat{\nu}}$. Thus $\bbox{\hat{\nu}}$ is a local extremum when the tip of the vector $\bf x$ lies on that $N-m$ dimensional hyperplane ${\cal B}_{\bbox{\hat{\nu}}}$, which passes through ${\bf s}(\bbox{\hat{\nu}})$, and is orthogonal to the $m$-dimensional tangent plane at ${\bf s}(\bbox{\hat{\nu}})$. This hyperplane, ${\cal B}_{\bbox{\hat{\nu}}}$, is the intersection of the $(N-1)$-dimensional hyperplanes, ${\cal N}^a_{\bbox{\hat{\nu}}}$, each orthogonal to a coordinate basis vector ${\partial}/{\partial\nu^a}$ at $\bbox{{\nu}}=\bbox{\hat{\nu}}$. Let ${\bf l}_a$ be the normalized coordinate basis vectors at $\bbox{\hat{\nu}}$, [*i.e.*]{} $$\label{ells} {\bf l}_a = \frac{\partial {\bf s}}{\partial \nu^a}(\bbox{\hat{\nu}})\bigg/ \left\| \frac{\partial {\bf s}}{\partial \nu^a}(\bbox{\hat{\nu}})\right\|.$$ We define $r_a$ to be the minimal distance from ${\bf s}(\bbox{\check{\nu}})$ to the hyperplane ${\cal N}^a_{\bbox{\hat{\nu}}}$, which is given by $$r_a=\left\langle{\bf s}(\bbox{\hat{\nu}})- {\bf s}(\bbox{\check{\nu}}) ,{\bf l}_a\right\rangle.$$ A schematic illustration of the above discussion is given in Fig. \[fig6\]. The pdf for the vector $\bf x$ to lie on ${\cal N}^a_{\bbox{\hat{\nu}}}$ can depend only on ${\bf l}_a$ and $r_a$. 
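The statement that the pdf depends only on ${\bf l}_a$ and $r_a$ rests on the projection of Gaussian noise onto a normalized direction being a unit-variance Gaussian. A small Monte Carlo sketch illustrates this in a simplified setting (our own example: white noise with identity covariance and a plain Euclidean dot product in place of the noise-weighted scalar product):

```python
# Projection of white Gaussian noise onto a normalized vector has zero
# mean and unit variance (simplified: identity noise covariance,
# Euclidean inner product standing in for the noise-weighted one).
import numpy as np

rng = np.random.default_rng(1)
N = 256
l = rng.standard_normal(N)
l /= np.linalg.norm(l)            # normalized "tangent" direction l_a

noise = rng.standard_normal((20000, N))   # ensemble of noise realizations
r = noise @ l                             # r_a = <n, l_a> per realization

print(r.mean(), r.var())
```

The sample mean is consistent with zero and the sample variance with unity, as the definition of $\gamma_{ab}$ in the next step requires.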
If the vector $\bf x$ is to lie on ${\cal B}_{\bbox{\hat{\nu}}}$, then it must simultaneously lie on each of the normal hyperplanes, ${\cal N}^a_{\bbox{\hat{\nu}}}$. The pdf for $\bf x$ to lie on ${\cal B}_{\bbox{\hat{\nu}}}$ is given by the expression, $$\label{rdis2} p(r_a) = \int_{\cal V}\left[\prod_{a=0}^ {m-1}\delta(\left\langle ({\bf n} - r_a{\bf l}_a),{\bf l}_a\right\rangle)\right] \ p({\bf n}) \ d^{\scriptscriptstyle N}n,$$ where $\delta$ denotes the Dirac delta function. Substituting the Gaussian distribution for the noise $p({\bf n})$ in the equation above and integrating, we get, $$\label{rdis1} p(r_0,r_1,\ldots,r_{m-1}) = {{\exp\left[ -\frac{1}{2}\sum\limits_{a,b=0}^{m-1}\left[\gamma^{-1}\right]^{ab} r_ar_b\right]}\over{\left[\left(2\pi\right)^m \mbox{det}\left[\gamma_{ab}\right]\right]^{1/2}}} ,$$ where $\gamma_{ab} = \left\langle{\bf l}_a,{\bf l}_b\right\rangle$. The integration, though tedious, is quite straightforward. Note that each of the Gaussian random variables $r_a$ will have unit variance, as is obvious from the definition of $\gamma_{ab}$. Moreover, the matrix $\gamma_{ab}$ is very closely related to the Fisher information matrix $\Gamma_{ab}$ as defined in eqn. (\[cov\]). Whereas the components of the Fisher information matrix are obtained by taking scalar products of the tangent vectors on the manifold, the components of the matrix $\gamma_{ab}$ are obtained by taking scalar products of the [*normalized*]{} tangent vectors on the manifold. Statistical model for the Newtonian Chirp {#appnewt} ----------------------------------------- We now specialise to the case of the Newtonian chirp. We use the parameters $\nu^a$ defined in the previous section.
Since ${\bf s}(\bbox{\hat{\nu}}) = \hat A {\bf h}(\hat\nu^j)$, the above equations for ${\bf l}_a$ and $r_a$ give, $$\begin{aligned} {\bf l}_0 &=& {\bf h}(\hat{\nu}^j),\\ {\bf l}_k &=& \frac{\partial {\bf h}}{\partial\nu^k}\left(\hat{\nu}^j\right)\bigg/\left\| \frac{\partial {\bf h}}{\partial\nu^k}\left(\hat{\nu}^j\right) \right\|,\\ r_0(\hat\nu^k,\hat A) &=& \check A \left\langle{\bf h}(\check{\nu}^k),{\bf h}(\hat{\nu}^k) \right\rangle - \hat A,\\ \label{r2} r_j(\hat\nu^k) &=& \check A\left\langle {\bf h}(\check\nu^k),\frac{\partial {\bf h}}{\partial\nu^j}\left(\hat{\nu}^k\right)\right\rangle\bigg/\left\| \frac{\partial {\bf h}}{\partial\nu^j}\left(\hat{\nu}^k\right) \right\|.\end{aligned}$$ Since the $\bbox{\nu}$ coordinate system is orthogonal, $\gamma_{ab}$ turns out to be nothing but the identity matrix. The distribution of the variables $r_a$ is thus a joint Gaussian given by the expression, $$\label{rdisnu} p(r_0,\ldots,r_3) = \frac{1}{\left(2\pi\right)^2}\exp\left[ -\frac{1}{2}\sum\limits_{a=0}^{3}r_a^2\right].$$ The $r_j$ variables are closely related to the $\kappa_i$ variables defined in section \[MCres2\] in eqn. (\[m1nu\]): $$r_i \ =\ \check A \left\| \frac{\partial {\bf h}}{\partial\nu^i}\left(\hat{\nu}^k\right) \right\|^{-1}\kappa_i \ =\ \frac{\kappa_i}{\sqrt{\Gamma_{ii}}},$$ where we have used the definition of $\Gamma_{ab}$ in eqn. (\[cov\]). Thus they differ only by a constant factor, which follows from eqs. (\[scal\]) and (\[FT\]). This is a consequence of the intrinsic flatness of the manifold [@BSD96] and the particular parameterization adopted. From eqn. (\[rdisnu\]) we would expect the marginal probability distribution of each of the variables $r_j$ to be a Gaussian distribution with unit variance. Using the ensemble of estimated parameters from the Monte Carlo simulations we can obtain the histograms and consequently the distributions of the variables $r_j$. In Fig.
\[fig7\] we plot the probability distributions of the variables $r_j$, and compare them with the expected Gaussian distributions. It is clear that though $r_1$ and $r_2$ are Gaussian random variables to a good approximation, $r_3$ shows a pronounced non-Gaussian behaviour. The reason for this discrepancy can be traced to the fact that the phase parameter is constrained to lie in the range $[-\pi,\pi]$. In Fig. \[fig5\] we observe that in the central island the $\nu^3$ parameter gets abruptly cut off at values of about $4.5$ and $-4.5$. Since the points on the central island are ‘close’ to the point corresponding to the actual values of the parameters, we can apply eqn. (\[kap4\]) with $k=0$. This gives a value of $\kappa_3\approx 0.00488$ and consequently $r_3\approx1.5$. We observe in Fig. \[fig7\] that the dip in the distribution of $r_3$ occurs at the same point, [*i.e.*]{} at $r_3\approx1.5$. To further elaborate on this point, we plot in Fig. \[fig8\] the three variables $r_i$ versus the variable $\theta_m$ defined earlier. The figure clearly illustrates that while $r_1$ and $r_2$ take on their entire range of values in the central island, $r_3$ does not. This establishes a connection between the dip in the marginal distribution and the phase parameter being constrained to the range $[-\pi,\pi]$. Although a deeper understanding of this point is desirable, we continue our analysis assuming the distribution of $r_3$ to be Gaussian. Had the map from $\bbox{\hat{\nu}}$ to $\bf r$ been bijective, it would have been possible to write the distribution for the estimated parameters simply as a product of the pdf for the variables $r_a$ and a Jacobian factor. However, a given set of values for ${\bf r} \equiv \{r_a\}$ would in general correspond to multiple parameter vectors $\bbox{\hat{\nu}}^{(l)}$, where the range of values of $l$ depends on the number of solutions. This is clear from Fig.
\[fig8\], where we observe that the same value of $r_j$ occurs for different values of the variable $\theta_m$, and hence in different ‘islands’. We also observe that $r_3$ takes almost all its possible values in the central island and the two adjoining ones. Moreover, we notice that it is approximately true that $r_3$ takes different values in each of the three islands. Thus if we restrict ourselves only to the central and two adjoining islands, then the map between $\nu^j$ and $r_j$ is bijective to a good degree of approximation. For a fixed set of values of $\{r_j\}$ we therefore have a unique solution $\nu^j$ satisfying eqs. (\[r2\]), if we restrict ourselves to the three islands identified above. We shall henceforth refer to the islands identified above as the [*primary group*]{} of islands, and the solution there will be called the [*primary*]{} solution. (It is to be emphasized that the above discussion is applicable only to the Newtonian waveform. As we shall see later for the post-Newtonian case, the primary group of islands contains more islands on either side of the central island.) There will of course be other solutions in the other islands. It is to be noted that the number of solutions for a given set of values of $\{r_j\}$ depends on $\{r_j\}$, and we do [*not*]{} imply that each island admits a solution $\nu^j$ for a fixed set of values of $\{r_j\}$. The reason for the term [*primary*]{} solution is to be found in Fig. \[fig9\], where we have plotted a histogram of the variable $\theta_m$, which gives us an idea of how many points occur in each island. We observe that $99\%$ of the points lie in the central and the two adjoining islands for the Newtonian case. Thus there is an overwhelming probability for the points to lie in one of these islands, and the primary solutions occur much more frequently than the other solutions for a fixed value of $r_j$.
The problem now is to determine the various solutions $\bbox{\hat{\nu}}^{(l)}$ for a fixed set of values of $\{r_a\}$, and to compute the probability that a particular $\bbox{\hat{\nu}}^{(m)}$ will be selected among the others for a given $\bf r$. This is essentially the probability that the amplitude $\hat A^{(m)}$ is greater than the amplitude at the other solutions. We shall denote this probability by $P(\bbox{\hat{\nu}}^{(m)}|{\bf r})$. For a fixed set of values of $\{r_j\}$, we will have multiple solutions, $\{\hat\nu^j\}^{(1)}, \{\hat\nu^j\}^{(2)},\ldots$  to eqn. (\[r2\]). The correlation obtained at these points will be $$\hat A^{(l)} = \left\langle{\bf x},{\bf h}(\hat\nu^j)^{(l)}\right\rangle.$$ It is to be noted that for a [*fixed*]{} $\{\hat\nu^j\}$, $\hat A$ will be a Gaussian random variable with a variance of unity. In order to calculate $P(\{\hat\nu^j\}^{(l)}|\{r_j\})$ we need to identify all the solutions corresponding to a fixed set of values $r_a$. The identification of the multiple roots is quite difficult, and so we make the following approximation. We assume that for one of the solutions, namely the primary solution, which we shall denote by $\{\hat\nu^j\}^{(1)}$, corresponding to $\{r_j\}$, the probability $P(\{\hat\nu^j\}^{(1)}|\{r_j\})$ is almost unity. If this is true, then we only need to compute the probability that the correlation at an arbitrary point in the parameter space exceeds the correlation at the primary solution point which shares the same set of values of $\{r_j\}$. This corresponds to computing the probability of the Gaussian random variable $\hat A^{(l)}$ exceeding $\hat A^{(1)}$. Of course, $P(\{\hat\nu^j\}^{(1)}|\{r_j\})$ is set to unity. The justification for the above procedure is essentially the fact that nearly $99\%$ of the points lie in the primary group of islands. So for evaluating the distribution of the estimated parameters at $\nu^k=\hat\nu^k$ we adopt the following procedure: 1.
Determine $r_j(\hat\nu^k)$ using eqn. (\[r2\]). 2. Determine $\{\hat\nu^k\}^{(1)}$ using eqn. (\[kap4\]) for the appropriate value of $k$, [*i.e.*]{} $k$ takes one of the values $\{-1,0,1\}$. 3. Determine the probability for $\hat A = \left\langle{\bf x},{\bf h}(\hat\nu^k)\right\rangle$ to be greater than $\hat A^{(1)} = \left\langle{\bf x},{\bf h}(\hat\nu^k)^{(1)}\right\rangle$. (If $\hat\nu^k$ is already a primary solution then this probability is set to unity.) 4. Set $P(\{\nu^a\}|\{r_a\})$ equal to the calculated probability. 5. Write the pdf of the estimated parameters as $$\label{ndis1} p(\bbox{\hat\nu})= \frac{1}{\left(2\pi\right)^2} J\left(\bbox{\hat\nu}\right)P\left(\bbox{\hat\nu}|{\bf r}\right) \exp\left[-\frac{1}{2}\sum\limits_{a=0}^3 r_a^2(\bbox{\hat\nu})\right],$$ where $ J(\bbox{\hat{\nu}})$ is the Jacobian of the transformation from ${\hat{\nu}^a}$ to the variables $r_a$, which is essentially, $$\label{jac} J(\bbox{\hat\nu})=\mbox{det}\left[\frac{\partial r_a}{\partial\nu^b}(\bbox{\hat\nu})\right].$$ Since the amplitude parameter is not of primary interest to us, we shall integrate the distribution over $\hat A$. We use a threshold of $7$, which means that we reject any realization whose measured value $\hat A$ is less than $7$ in the simulations. The amplitude parameter enters only via $r_0$. So the distribution for the remaining three parameters $\hat\nu^k$ is, $$\begin{aligned} \label{ndis2} p(&&\hat{\nu^k})= \frac{1}{\left(2\pi\right)^{3/2}} J\left(\hat\nu^k\right)P\left(\hat\nu^k|{\bf r}\right) \exp\left[-\frac{1}{2}\sum\limits_{i=1}^3 r_i^2(\hat\nu^k)\right] \;\mbox{\bf\LARGE$\times$}\nonumber\\ && \frac{1}{\left(2\pi\right)^{1/2}}\int_{7.0}^\infty \exp\left[-\frac{1}{2} \left(\check{A}\left\langle h({\check{\nu^k}}),h({\hat{\nu^k}}) \right\rangle-\hat{A}\right)^2\right] d\hat{A}.\end{aligned}$$ Since the SNR is chosen to be $10$, the second factor in the equation above is very close to unity.
We now compare the Monte Carlo results with the distribution given in eqn. (\[ndis2\]). We compare the one-dimensional marginal distribution $p(\hat\nu^1)$ with the histogram obtained via the Monte Carlo method. The marginal distribution $p(\hat\nu^1)$ is obtained by integrating eqn. (\[ndis2\]) with respect to $\nu^2$ and $\nu^3$. This is done numerically. Though the parameter $\nu^1$ has the least root mean square error of $0.015$ as predicted by the covariance matrix, its distribution has the most pronounced non-Gaussian behaviour. In plots (a) and (b) of Fig. \[fig10\] we display the distributions $p(\hat\nu^1)$ obtained from the Monte Carlo simulations and the statistical model respectively. Plots (c) and (d) in the same figure zoom in on the first two maxima occurring to the right of the central maximum. It can be seen that in the case of plot (d) the match is not very good even though the locations of the peaks match fairly well. The difference here could come from either the Monte Carlo method or the statistical model. In the histogram of the variable $\theta_m$ illustrated in Fig. \[fig9\] we have seen that about $79.5\%$ of the points in the simulations fall in the central island and about $10\%$ in each of the adjoining islands at an SNR of $10$. We can obtain the corresponding numbers from our theoretical model, as given in eqn. (\[ndis1\]), by integrating the distribution over all the parameters in the domain corresponding to each island. We obtain the values $82\%$ for the central island and about $9.5\%$ for each of the adjoining islands. Thus the statistical model shows good agreement with the Monte Carlo results by this criterion. The number of points in the subsequent islands falls off rapidly. The contribution to the variance from each island depends on the number of points in that region and the location of the island in the parameter space.
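The island fractions quoted here can be extracted from either the simulated points or the model pdf by simple binning in $\theta_m$. A sketch (the window width of $2\pi$ per island and the synthetic stand-in data are our assumptions for illustration):

```python
import numpy as np

def island_fractions(theta, width=2.0 * np.pi, n_islands=3):
    """Fraction of points in the central island and in the islands on
    either side, assuming islands are windows of `width` centred on
    integer multiples of `width` (illustrative assumption)."""
    k = np.rint(theta / width).astype(int)   # island index of each point
    return {j: float(np.mean(k == j)) for j in range(-n_islands, n_islands + 1)}

# Stand-in data mimicking ~80% central / ~10% side-island occupation:
rng = np.random.default_rng(0)
centres = rng.choice([-1, 0, 1], size=20000, p=[0.1, 0.8, 0.1])
theta = centres * 2.0 * np.pi + rng.normal(0.0, 0.5, size=20000)
frac = island_fractions(theta)
print(frac[0], frac[1], frac[-1])
```

The same binning applied to the actual Monte Carlo output, or to samples drawn from eqn. (\[ndis1\]), reproduces the island-by-island comparison made in the text.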
It is found that the maximum contribution to the variance comes from the islands immediately adjoining the central island.

Post-Newtonian waveform {#PNwave}
=======================

In the post-Newtonian case we have the five parameters, $$\bbox{\mu} \equiv \mu^a \equiv \{\mu^0,\mu^1,\mu^2,\mu^3,\mu^4\} \equiv \{A, 2\pi f_st_s, \Phi, 2\pi f_s\tau_0, 2\pi f_s\tau_1\}.$$ Simulations (12000 realizations) were carried out again for an SNR of 10. The signal parameters were $\check\tau_0 = 5.558$ s and $\check\tau_1 = 0.684$ s, corresponding to a binary comprised of a $10M_\odot$ black hole and a $1.4M_\odot$ neutron star. The simulation box taken was $\{5.058\,\mathrm{s}\leq\tau_0\leq 5.859\,\mathrm{s},\; 0.484\,\mathrm{s}\leq\tau_1\leq 0.985\,\mathrm{s}\}$, with filter spacings of $10$ ms in $\tau_0$ and $5$ ms in $\tau_1$. Our analysis of the post-Newtonian case will be very similar to that of the Newtonian case in section \[MCres\]. All the variables defined there can be defined for the post-Newtonian case as well, and we will use the same names to avoid introducing more symbols. Here again we diagonalise the covariance matrix by making a coordinate transformation from the $\bbox{\mu}$ to the $\bbox{\nu}$ coordinate system. The diagonal covariance matrix in the $\bbox{\nu}$ parameters is given by, $$C_{\nu} = \begin{pmatrix} {\check A}^2 & 0.0 & 0.0 & 0.0 & 0.0\\ 0.0 & 0.018 & 0.0 & 0.0 & 0.0\\ 0.0 & 0.0 & 1.1974 & 0.0 & 0.0\\ 0.0 & 0.0 & 0.0 & 403.64 & 0.0\\ 0.0 & 0.0 & 0.0 & 0.0 & 15601.0 \end{pmatrix}.$$ Therefore the root mean square errors in the parameters computed from the covariance matrix are $\sigma_{\nu^i}~\equiv~\{0.013, 0.109, 2.009, 12.49\}$. On the other hand, the observed errors from the Monte Carlo simulations are $\Sigma_{\nu^i}~\equiv~\{1.33574, 7.43948, 5.34089, 23.049\}$. One finds that, as compared to the Newtonian case, the factors $\Sigma_{\nu^i}/\sigma_{\nu^i}$ are larger on average.
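As a quick sanity check, the quoted rms errors follow from the square roots of the diagonal covariance entries scaled by the SNR (the scaling $\sigma_{\nu^i}=\sqrt{C^{ii}_{\nu}}/\rho$ with $\rho=10$ is our reading of the quoted numbers, not stated explicitly in the text):

```python
import math

# Diagonal entries of the covariance matrix in the nu coordinates
# (amplitude entry omitted), as listed above.
diag = [0.018, 1.1974, 403.64, 15601.0]
snr = 10.0

# Root mean square error per parameter, assuming sigma = sqrt(C_ii)/SNR.
sigma = [math.sqrt(c) / snr for c in diag]
print([round(s, 3) for s in sigma])   # [0.013, 0.109, 2.009, 12.49]
```

All four values reproduce the $\sigma_{\nu^i}$ quoted above, which the observed Monte Carlo errors $\Sigma_{\nu^i}$ then exceed by large factors.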
The reason for this is that here we have one more parameter, which means there are more filters which can match the data and consequently there are more outlying points. This is evident from Fig. \[fig13\], as explained below. In Figs. \[fig11\] and \[fig12\] we give the scatter plots of the four parameters $\nu^i$, the former on the $\nu^1-\nu^2$ plane and the latter on the $\nu^3-\nu^4$ plane. We observe that while there are gaps in the distribution of the parameters $\nu^1$ and $\nu^2$, there are none in the distribution of the parameters $\nu^3$ and $\nu^4$. Here the phase parameter $\mu^2\equiv\Phi$ is given by, $$\Phi = -0.109838\nu^1 - 0.621516\nu^2 + 0.695335\nu^3 + 0.343747\nu^4,$$ and the variable $\theta_m$ is given by, $$\theta_m = 4.18241\Delta\nu^1+ 0.892242\Delta\nu^2 + 0.0186845\Delta\nu^3 + 0.00272864\Delta\nu^4.$$ Since $\sigma_{\nu^3}$ and $\sigma_{\nu^4}$ are comparable, both these parameters contribute to $\Phi$, as opposed to the Newtonian case where only the $\nu^3$ parameter dominates. Thus we find that both the parameters $\nu^3$ and $\nu^4$ assume all their values in their respective ranges. The spacing between the islands in the $\nu^1$ and $\nu^2$ scatter plot is calculated to be $2\pi G_1/C^{11}_{\nu} = 0.69$ and $2\pi G_2/C^{22}_{\nu} = 3.90$ respectively. This is consistent with Fig. \[fig11\]. In Fig. \[fig11\] we notice that there are more islands to the right than to the left of the central island. This is caused by the simulation box being asymmetrical, [*i.e.*]{} the parameters of the signal are not at the center of the simulation box. This is because not all combinations of $\tau_0$ and $\tau_1$ correspond to valid masses for the components of the binary. This leads to a shift in the mean of the estimated parameters from their actual values. The observed means in the $\Delta\nu^i$ are $\{0.10167, 0.574784, -0.282783, 1.56494\}$.
For higher SNRs the estimated parameters will tend to lie only on the islands close to the central island, and consequently within a symmetrical region around the actual values of the parameters. Therefore the bias will disappear at higher SNRs. In Fig. \[fig13\] we plot the histogram of the distribution of the variable $\theta_m$ in the post-Newtonian case. Here, for the same SNR of $10$, there are more outlying points than in the Newtonian case. This is to be expected since we have an additional parameter, and there is an additional degree of freedom to find the filter which matches best with the signal. The variance now gets substantial contributions even from far away islands, as opposed to the Newtonian case. In Fig. \[fig14\] we plot the variables $r_i$ versus $\theta_m$. Here again only $r_4$ does not attain all its possible values within the central island, but now $r_4$ attains its entire range of values only when we include two islands on either side. Moreover, the overlap of the values of $r_4$ in these islands is much more pronounced than in the Newtonian case, and further investigations are needed to identify the primary solution correctly. In Fig. \[fig15\] we plot the probability distributions of the variables $r_i$. The histograms are plotted using the Monte Carlo data and the continuous curve is a Gaussian with a variance of unity. It is clear that the $r_i$ are Gaussian random variables to a good degree of approximation. The approximation is better than in the Newtonian case (see Fig. \[fig7\]). As in the Newtonian case, the $r_i$ and $\kappa_i$ are related by only a constant factor. Therefore the $\kappa_i$ are also Gaussian random variables. However, we note that this is true only for a special class of waveforms to which the chirp belongs. The chirp signal manifold is intrinsically flat and the parameterization of the waveform is such that the coordinates are essentially Cartesian.
If some other parameterization is chosen ([*e.g.*]{} the chirp mass $\cal M$), the coordinates will no longer be so and the $\kappa_i$ will not be Gaussian random variables even though the variables $r_i$ will remain Gaussian. We can again, as in the Newtonian case, write down the expression for the pdf of the estimated parameters. We do not write it explicitly here since the expression is formally the same as in eqn. (\[ndis1\]), except that now the index $a$ runs from $0$ to $4$. We therefore only refer to eqn. (\[ndis1\]) for the required pdf.

Conclusions {#sec_con}
===========

In this paper we have addressed the question of wide discrepancies between the theoretically predicted lower bounds on the errors in the estimation of parameters and the errors observed in numerical experiments. We now summarize the main results of our paper and indicate future directions.

- We find that the problem is of a [*global*]{} nature, in that the estimated values of the parameters fall into [*disjoint*]{} islands. Though there are very few points in the islands which are far from the actual value of the parameters, they contribute substantially to the variance. Thus the discrepancy between the Monte Carlo simulations and the Cramer-Rao bounds cannot be resolved by using perturbative analysis. The restriction of the parameter $\Phi$ to the range $[-\pi,\pi]$ plays a major role in the non-local distribution of parameters, as explained in the text.

- The problem is more transparent when we reparameterize the chirp signal so that the covariance matrix is diagonal in the new parameters. The parameters $\bbox{\nu}$ correspond to choosing orthogonal coordinates on the chirp manifold. Since the covariances are zero for the variables $r_a$, the pdf is computationally simpler to handle.

- A statistical model has been given which matches well with the distribution obtained from Monte Carlo simulations. The model is derived from geometrical considerations.
We have identified certain variables $r_i$ as Gaussian random variables, and these play an important role in writing down the expression for the distribution of the estimated parameters.

- Since the distribution of the estimated parameters is multimodal, the variance is not a good indicator of the performance of the MLE. A more reasonable approach would be to judge the performance of the MLE by means of confidence intervals, [*i.e.*]{} to compute the probability that the estimated parameters lie within a certain region around the actual value. As a concrete example, we compute the probability that the deviation of the estimated parameter is less than thrice the root mean square error at that SNR as predicted by the covariance matrix. We will use the symbol $P_{3\sigma}$ to denote the fraction of points which lie within the $3\sigma$ range for a given parameter. However, this criterion will in general depend on the specific parameter chosen. Since $\tau_0$ is one such physical parameter, we use it to compute $P_{3\sigma}$. In Figures \[fig16\] and \[fig17\] we plot $P_{3\sigma}$ versus SNR for the parameter $\tau_0$ in the Newtonian and the post-Newtonian cases respectively. The values of $P_{3\sigma}$ for parameters such as $\tau_0$ and $\tau_1$ will be independent of the actual value of the chirp times. For this purpose we use the results of simulations carried out earlier in [@BSD96]. It is to be noted that whereas the simulations carried out in this paper use a single curve to fit the noise power spectrum, $S_n(f)$, we had used a more accurate representation of $S_n(f)$ in our earlier simulations [@BSD96]. We see that the MLE works moderately well even at low SNRs of $\rho\approx10$. It is to be remarked that the assessment of an estimator depends upon how we use the estimates to calculate astrophysically interesting quantities.

- We required about 2 days of computation on a 300 MFlops (peak rating) machine to generate the results of this paper.
The use of an integration routine specifically suited to the integrand, and/or the use of lookup tables for computing the integrand, would further speed up the computation substantially. The intrinsic flatness of the manifold turns out to be very convenient for our purpose. This property holds true for the first post-Newtonian waveform as well. There is one more parameter in the post-Newtonian waveform and consequently one more integration to perform to get the marginal distribution of the parameters. For higher post-Newtonian corrections and/or for inclusion of parameters such as spins, it might be computationally expensive to compute the marginal distributions. However, it is to be noted that performing Monte Carlo simulations in such cases would also call for a huge amount of computational effort. Further research into the above issues is in progress.

R.B. would like to thank CSIR, India for the senior research fellowship. S.D. would like to thank Bernard Schutz and Andrzej Krolak for fruitful discussions.

A. Abramovici [*et al.*]{}, Science [**256**]{}, 325 (1992).

C. Bradaschia [*et al.*]{}, Nucl. Instrum. Methods Phys. Res., Sect. A 518 (1990).

B.F. Schutz, in [*Gravitational Collapse and Relativity*]{}, edited by H. Sato and T. Nakamura (World Scientific, Singapore, 1986), pp. 350-368.

D. Marković, Phys. Rev. D [**48**]{}, 4738 (1993).

S. Finn, Phys. Rev. D [**53**]{}, 2878 (1996).

L. Blanchet and B.S. Sathyaprakash, Phys. Rev. Lett. [**74**]{}, 1067 (1995).

K.S. Thorne, [*Gravitational waves from compact objects*]{}, to be published in [*Proceedings of IAU symposium 165, Compact stars in binaries*]{}, edited by J. van Paradijs, E. van den Heuvel, and E. Kuulkers (Kluwer Academic Publishers).

K.S. Thorne, [*Gravitational waves*]{}, to be published in [*Proceedings of the Snowmass 95 Summer Study on Particle and Nuclear Astrophysics and Cosmology*]{}, edited by E.W. Kolb and R. Peccei (World Scientific, Singapore).

K.S.
Thorne, in [*300 Years of Gravitation*]{}, S.W. Hawking and W. Israel (eds.) (Cambridge Univ. Press, 1987).

C.W. Helstrom, [*Statistical Theory of Signal Detection*]{}, 2nd ed. (Pergamon Press, London, 1968).

B.F. Schutz, in [*The Detection of Gravitational Radiation*]{}, edited by D. Blair (Cambridge, 1989), pp. 406-427.

L.S. Finn, Phys. Rev. D [**46**]{}, 5236 (1992).

L.S. Finn and D.F. Chernoff, Phys. Rev. D [**47**]{}, 2198 (1993).

L. Blanchet and B.S. Sathyaprakash, Class. Quantum Grav. [**11**]{}, 2807 (1994).

A. Krolak, in [*Gravitational wave data analysis*]{}, edited by B.F. Schutz (Dordrecht: Kluwer, 1989), pp. 59-69.

C. Cutler and E. Flanagan, Phys. Rev. D [**49**]{}, 2658 (1994).

A. Krolak, J.A. Lobo and B.J. Meers, Phys. Rev. D [**48**]{}, 3451 (1993).

E. Poisson and C.M. Will, Phys. Rev. D [**52**]{}, 848 (1995).

R. Balasubramanian, B.S. Sathyaprakash and S.V. Dhurandhar, Phys. Rev. D [**53**]{}, 3033 (1996).

D. Nicholson and A. Vecchio, [*Bayesian bounds on parameter estimation accuracy for coalescing binary gravitational wave signals*]{}, gr-qc/9705064.

C. Cutler [*et al.*]{}, Phys. Rev. Lett. [**70**]{}, 2984 (1993).

B.S. Sathyaprakash, Phys. Rev. D [**50**]{}, R7111 (1994).

B.S. Sathyaprakash and S.V. Dhurandhar, Phys. Rev. D [**44**]{}, 3819 (1991).

B.J. Owen, Phys. Rev. D [**53**]{}, 6749 (1996).

Shun-ichi Amari, [*Differential Geometric Methods in Statistics*]{} (Springer-Verlag, 1987).
--- abstract: 'We discuss various properties of the variational class of continuous matrix product states, a class of ansatz states for one-dimensional quantum fields that was recently introduced as the direct continuum limit of the highly successful class of matrix product states. We discuss both attributes of the physical states, *e.g.* by showing in detail how to compute expectation values, as well as properties intrinsic to the representation itself, such as the gauge freedom. We consider general translation non-invariant systems made of several particle species and derive certain regularity properties that need to be satisfied by the variational parameters. We also devote a section to the translation invariant setting in the thermodynamic limit and show how continuous matrix product states possess an intrinsic ultraviolet cutoff. Finally, we introduce a new set of states which are tangent to the original set of continuous matrix product states. For the case of matrix product states, this construction has recently proven relevant in the development of new algorithms for studying time evolution and elementary excitations of quantum spin chains. We thus lay the foundation for similar developments for one-dimensional quantum fields.' author: - Jutho Haegeman - 'J. Ignacio Cirac' - 'Tobias J. Osborne' - Frank Verstraete bibliography: - 'paperslibrary.bib' - 'manuallibrary.bib' - 'books.bib' title: Calculus of continuous matrix product states --- Introduction ============ Many revolutions and breakthroughs in quantum physics, and quantum many body physics in particular, were stimulated by guessing a suitable variational ansatz that captures the relevant correlations for the systems under consideration. 
Feynman’s ansatz for the roton in superfluid Helium[@Feynman:1954aa; @Feynman:1956aa], the Bardeen-Cooper-Schrieffer wave function for superconductivity[@1957PhRv..106..162B] and the Laughlin wave function for the fractional quantum Hall effect[@PhysRevLett.50.1395] are only a few prominent examples. For gapped one-dimensional quantum spin systems, the set of matrix product states[@1987PhRvL..59..799A; @1988CMaPh.115..477A; @1992CMaPh.144..443F; @2008AdPhy..57..143V; @2009JPhA...42X4004C] is a very general ansatz that can describe a range of different phenomena and different physical phases, including normal symmetric and symmetry-broken phases as well as the more exotic symmetry-protected topologically ordered phases such as the Haldane phase[@Haldane:1983aa; @Haldane:1983ab; @2010PhRvB..81f4439P]. Indeed, with the benefit of hindsight, we now understand White’s powerful density matrix renormalization group algorithm[@1992PhRvL..69.2863W; @1993PhRvB..4810345W] as a variational optimization over the set of matrix product states[@1995PhRvL..75.3537O; @1997PhRvB..55.2164R]. Until recently, few equally general ansatzes that surpass mean field theory were available for extended quantum systems in the continuum, *i.e.* quantum fields. Numerical approaches require a finite number of degrees of freedom in order to fit the problem in the memory of a computer. For compact systems such as nuclei, atoms and molecules, an expansion in terms of a finite-dimensional basis is possible, but for extended systems this eventually results in a discretization to an effective lattice system. A new variational ansatz for field theories in $d=1$ spatial dimensions was developed by Verstraete and Cirac in 2010 [@2010PhRvL.104s0405V]. This ansatz is formulated in the continuum and does not require an underlying lattice approximation.
It can be considered to be the continuum limit of a special subclass of matrix product states (MPS) and is therefore called the *continuous matrix product state* (cMPS) class. The aim of the current paper is to discuss in greater detail the properties of cMPS. Section \[s:def\] reviews the different definitions and representations of these states in the current literature. We then derive a set of regularity conditions that become relevant in the case of systems with multiple particle species in Section \[s:regularity\]. Section \[s:expectval\] discusses how to (efficiently) evaluate expectation values with respect to these states. Section \[s:gauge\] is devoted to the gauge invariance and the existence of canonical forms in the continuous matrix product state representation for generic systems without translation invariance. We also discuss uniform continuous matrix product states in the thermodynamic limit and illustrate how continuous matrix product states possess a natural ultraviolet cutoff in Section \[s:ti\]. Finally, Section \[s:tangent\] provides an intuitive construction of tangent vectors to the variational set and discusses their representation properties as well, both for finite systems and in the thermodynamic limit. These tangent states are relevant when studying time evolution or elementary excitations along the lines of analogous MPS algorithms [@2011arXiv1103.0936H; @2011arXiv1103.2286H; @2012PhRvB..85c5130P; @2012arXiv1207.0691M]. We do not strive for absolute mathematical rigor, but merely attempt to explain in full detail the prerequisites for using cMPS in numerical algorithms. For example, due to the intrinsic difficulty of the various infinite-dimensional function spaces involved, we do not include a rigorous proof that the set of continuous matrix product states constitutes a smooth (complex) manifold and that the construction of a tangent space is justified. 
Various definitions of the variational class {#s:def} ============================================ Setting {#ss:def:setting} ------- Consider a quantum system defined on a one-dimensional continuum ${\ensuremath{\mathcal{R}}}=[-L/2,+L/2]$ with length $\lvert{\ensuremath{\mathcal{R}}}\rvert=L$ that accommodates $q$ bosonic and/or fermionic particle species, which are labeled by the greek index $\alpha=1,\ldots,q$. Throughout this paper, we restrict to non-relativistic systems. A state of the quantum system containing $N_{\alpha}$ particles of type $\alpha$ is then described by a square integrable function on $\prod_{\alpha=1}^{q}{\ensuremath{\mathcal{R}}}^{(N_{\alpha})}_{\eta_{\alpha}}$, where $\eta_{\alpha}=+1$ ($-1$) if particle species $\alpha$ is bosonic (fermionic) and ${\ensuremath{\mathcal{R}}}^{(N_{\alpha})}_{+}$ (${\ensuremath{\mathcal{R}}}^{(N_{\alpha})}_{-}$) corresponds to the symmetric (antisymmetric) subspace of ${\ensuremath{\mathcal{R}}}^N$, the Cartesian product of $N$ copies of ${\ensuremath{\mathcal{R}}}$. The space of the square integrable functions on this domain is a Hilbert space that is denoted as $${\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{\{N_{\alpha}\}_{\alpha=1,\ldots,q}}=L^2\left(\prod_{\alpha=1}^{q}{\ensuremath{\mathcal{R}}}^{(N_{\alpha})}_{\eta_{\alpha}}\right).\label{eq:defNalphaspace}$$ Following the principles of second quantization, we now define the Fock space $${\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{(\text{F})}=\bigoplus_{N_1=0}^{+\infty}\cdots \bigoplus_{N_q=0}^{+\infty}{\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{\{N_{\alpha}\}_{\alpha=1,\ldots,q}}\label{eq:deffockspace}$$ which captures an arbitrary state of the quantum system. In addition, we denote the unique vacuum state as $\ket{\Omega}\in {\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{\{N_\alpha=0\}_{\alpha=1,\ldots,q}}$. 
Particles of type $\alpha$ are created and annihilated at position $x\in{\ensuremath{\mathcal{R}}}$ with the operators ${\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(x)$ and ${\ensuremath{\hat{\psi}}}_{\alpha}(x)$ with $\alpha=1,\ldots,q$. These satisfy the general commutation or anticommutation relations $$\begin{aligned} {\ensuremath{\hat{\psi}}}_{\alpha}(x){\ensuremath{\hat{\psi}}}_{\beta}(y)-\eta_{\alpha,\beta} {\ensuremath{\hat{\psi}}}_{\beta}(y){\ensuremath{\hat{\psi}}}_{\alpha}(x)&=0,&{\ensuremath{\hat{\psi}}}_{\alpha}(x){\ensuremath{\hat{\psi}^\dagger}}_{\beta}(y)-\eta_{\alpha,\beta} {\ensuremath{\hat{\psi}^\dagger}}_{\beta}(y){\ensuremath{\hat{\psi}}}_{\alpha}(x)&=\delta_{\alpha,\beta}\delta(x-y),\label{eq:commrelations}\end{aligned}$$ where $\eta_{\alpha,\beta}=-1$ if both $\alpha$ and $\beta$ represent fermionic particles and $\eta_{\alpha,\beta}=1$ when at least one of the two particles species $\alpha$ or $\beta$ is bosonic. Clearly $\eta_{\alpha,\alpha}=\eta_{\alpha}$. We always write sums over the species index $\alpha$ explicitly and do not use Einstein’s summation convention with respect to this index. Original definition {#ss:def:original} ------------------- A cMPS is defined to be the state [@2010PhRvL.104s0405V] $$\begin{gathered} \ket{\Psi[Q,R_{1},\ldots,R_{q}]}{\ensuremath{\triangleq}}\operatorname{tr}\left(B \mathscr{P}\!\exp\left[\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\, Q(x)\otimes {\ensuremath{\hat{{\openone}}}}+\sum_{\alpha=1}^{q}R_{\alpha}(x) \otimes {\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(x) \right]\right)\ket{\Omega},\label{eq:defcmps}\end{gathered}$$ where $\mathscr{P}\!\exp$ is the path ordered exponential (that orders its argument from left to right for increasing values of $x$) and $\ket{\Omega}$ is the empty vacuum that is annihilated by ${\ensuremath{\hat{\psi}}}_{\alpha}(x)$, $\forall \alpha=1,\ldots,N$. 
The trace operation acts on an auxiliary space $\mathbb{C}^D$, also called the ancilla space, where $D$ is the bond dimension. The variational parameters correspond to the functions $Q, R_{\alpha}: {\ensuremath{\mathcal{R}}}\to \mathbb{C}^{D\times D}$ that take value in $\mathbb{L}(\mathbb{C}^D){\ensuremath{\triangleq}}\mathbb{C}^{D\times D}$, the space of linear operators acting on the ancilla space. For now, we do not impose any continuity or regularity conditions on these functions, and we refer to Section \[s:regularity\] for a detailed discussion. Finally, the boundary operator $B\in \mathbb{L}(\mathbb{C}^D)$ encodes the boundary conditions. For a system with periodic boundary conditions the boundary operator has full rank and is typically chosen to be $B={\openone}_{D}$. In case of open boundary conditions, we can choose $B=\bm{v}_{\mathrm{R}}\bm{v}^{\dagger}_{\mathrm{L}}$ with $\bm{v}_{\mathrm{L}}$ and $\bm{v}_{\mathrm{R}}$ $D$-dimensional boundary vectors. Note that the matrix functions $Q$ and $R_{\alpha}$ themselves need to satisfy certain boundary conditions which are imposed by the physical setting. We discuss this in more detail in Section \[s:bc\]. More formally, we can identify the cMPS construction as a map between the function spaces ${\ensuremath{\mathcal{R}}}\to \mathbb{C}^{D\times D}$ and the Fock space ${\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{(\text{F})}$: $$\begin{split} \Psi:&({\ensuremath{\mathcal{R}}}\to \mathbb{C}^{D\times D})^{q+1} \to {\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{(\text{F})}:\\ &\qquad(Q,R_1,\ldots,R_q)\mapsto \ket{\Psi[Q,R_1,\ldots,R_q]}. \end{split}$$ The range of the map $\Psi$ defines a variational set ${\ensuremath{\mathcal{V}}}_{\mathrm{cMPS}(D)}\subset {\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{(\text{F})}$, where we often omit the explicit specification of the bond dimension. 
Henceforth, we compactly denote a cMPS $\ket{\Psi[Q,R_{1},\ldots,R_{q}]}$ as $\ket{\Psi[Q,\{R_{\alpha}\}]}$. It will always be clear from the context how many and which particle species are present. The variational set ${\ensuremath{\mathcal{V}}}_{\text{cMPS}(D)}$ is not a vector space, since the representation of the sum of two elements $\ket{\Psi[Q,\{R_{\alpha}\}]}+\ket{\Psi[Q',\{R_{\alpha}'\}]}$ requires in the most general case a cMPS $\ket{\tilde{\Psi}[\tilde{Q},\{\tilde{R}_{\alpha}\}]}\in{\ensuremath{\mathcal{V}}}_{\text{cMPS}(\tilde{D})}$ with bond dimension $\tilde{D}=2D$, where we choose ($\forall x\in[-L/2,+L/2]$) $$\begin{aligned} \tilde{Q}(x)&=Q(x)\oplus Q'(x),\\ \tilde{R}_{\alpha}(x)&=R_{\alpha}(x)\oplus R_{\alpha}'(x),&\forall \alpha=1,\ldots,q\\ \tilde{B}&=B\oplus B'.\end{aligned}$$ The variational set does however contain almost complete rays of states, since for any state $\ket{\Psi[Q,\{R_{\alpha}\}]}\in{\ensuremath{\mathcal{V}}}_{\text{cMPS}(D)}$ and any $\lambda\in\mathbb{C}_{0}=\mathbb{C}\setminus\{0\}$ we can also represent $\lambda\ket{\Psi[Q,\{R_{\alpha}\}]}$ as a cMPS with bond dimension $D$ as $\ket{\Psi[Q',\{R'_{\alpha}\}]}$, where $Q'(x)=Q(x)+\mu(x) {\openone}_{D}$ and $R_{\alpha}'(x)=R_{\alpha}(x)$ with $$\exp\left(\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\,\mu(x)\right)=\lambda.$$ A special case is obtained for $\lambda=0$, since this requires us to redefine $Q(x)$ as $Q'(x)=Q(x)-\infty {\openone}_{D}$. Hence, the null state is not contained within ${\ensuremath{\mathcal{V}}}_{\text{cMPS}(D)}$ but only in its closure. Correspondingly, the variational set ${\ensuremath{\mathcal{V}}}_{\text{cMPS}(D')}$ with $D'<D$ is not a subset of ${\ensuremath{\mathcal{V}}}_{\text{cMPS}(D)}$.
For example, if the boundary matrices are fixed to $B'={\openone}_{D'}$ and $B={\openone}_{D}$ (periodic boundary conditions), then a representation of the cMPS $\ket{\Psi'[Q',\{R_{\alpha}'\}]}$ with bond dimension $D'$ as a cMPS $\ket{\Psi[Q,\{R_{\alpha}\}]}$ with bond dimension $D>D'$ requires $Q=Q'\oplus (-\infty \times {\openone}_{D-D'})$ and $R_{\alpha}=R_{\alpha}'\oplus (0\times {\openone}_{D-D'})$, hence ${\ensuremath{\mathcal{V}}}_{\text{cMPS}(D')}$ is only included in the closure of ${\ensuremath{\mathcal{V}}}_{\text{cMPS}(D)}$. Note that this differs from the case of MPS on the lattice, where ${\ensuremath{\mathcal{V}}}_{\text{MPS}(D')}\subset {\ensuremath{\mathcal{V}}}_{\text{MPS}(D)}$ for $D\geq D'$. Fock space embedding {#ss:def:fockembedding} -------------------- The embedding of $\ket{\Psi[Q,\{R_{\alpha}\}]}\in{\ensuremath{\mathcal{V}}}_{\text{cMPS}(D)}$ in the Fock space ${\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{(\text{F})}$ for finite $\lvert{\ensuremath{\mathcal{R}}}\rvert$ can be made explicit by expanding the path ordered exponential as $$\begin{gathered} \ket{\Psi[Q,\{R_{\alpha}\}]}=\sum_{N=0}^{+\infty} \int_{-L/2\leq x_{1}\leq \cdots \leq x_{N}\leq L/2}{\ensuremath{\mathrm{d}}}x_{1}\cdots {\ensuremath{\mathrm{d}}}x_{N}\\ \operatorname{tr}\Bigg[ B \bigg(Q(x_1)\otimes {\ensuremath{\hat{{\openone}}}}+\sum_{\alpha_1=1}^{q}R_{\alpha_1}(x_1) \otimes {\ensuremath{\hat{\psi}^\dagger}}_{\alpha_1}(x_1) \bigg)\times \cdots\\ \times \bigg(Q(x_N)\otimes {\ensuremath{\hat{{\openone}}}}+\sum_{\alpha_N=1}^{q}R_{\alpha_N}(x_N) \otimes {\ensuremath{\hat{\psi}^\dagger}}_{\alpha_N}(x_N) \bigg)\Bigg]\ket{\Omega}.\end{gathered}$$ We can then expand the round brackets and reorder the sum in terms of the actual number of created particles by grouping subsequent occurrences of the $Q$ term, so as to obtain $$\begin{gathered} \ket{\Psi[Q,\{R_{\alpha}\}]}=\sum_{N=0}^{+\infty} \sum_{\alpha_1,\ldots,\alpha_N=1}^{q} \int_{-L/2\leq x_{1}\leq 
\cdots \leq x_{N}\leq L/2}{\ensuremath{\mathrm{d}}}x_{1}\cdots {\ensuremath{\mathrm{d}}}x_{N}\\ \operatorname{tr}\bigg[ B M_Q(-L/2,x_1) R_{\alpha_1}(x_1) M_Q(x_1,x_2) \cdots R_{\alpha_N}(x_N) M_Q(x_N,L/2) \bigg]\\ {\ensuremath{\hat{\psi}^\dagger}}_{\alpha_1}(x_1){\ensuremath{\hat{\psi}^\dagger}}_{\alpha_2}(x_2)\cdots {\ensuremath{\hat{\psi}^\dagger}}_{\alpha_N}(x_N)\ket{\Omega},\label{eq:cmpsfockembedding}\end{gathered}$$ with $$M_Q(x,y)=\sum_{k=0}^{+\infty} \int_{x\leq z_1\leq \cdots \leq z_k \leq y} {\ensuremath{\mathrm{d}}}z_1\cdots {\ensuremath{\mathrm{d}}}z_k Q(z_1) \cdots Q(z_k)= \mathscr{P}{\ensuremath{\mathrm{e}}}^{\int_{x}^{y} Q(z) {\ensuremath{\mathrm{d}}}z}.$$ Eq.  shows how a cMPS can be interpreted as a superposition over the different particle number sectors in the Fock space. Note that this is not completely equivalent to the different sectors ${\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{\{N_{\alpha}\}_{\alpha=1,\ldots,q}}$ in the direct product construction of ${\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{(\text{F})}$ \[Eq. \], since now only the total number of particles $N=\sum_{\alpha=1}^{q} N_{\alpha}$ is fixed. If we define the $N$-particle wave functions as $$\phi_{\alpha_{1},\ldots,\alpha_{N}}(x_{1},\ldots,x_{N})=\braket{\Omega|{\ensuremath{\hat{\psi}}}_{\alpha_{N}}(x_{N})\cdots {\ensuremath{\hat{\psi}}}_{\alpha_{1}}(x_{1})|\Psi[Q,\{R_{\alpha}\}]},\label{eq:defphiN}$$ then we can infer from Eq.  that $$\begin{gathered} \phi_{\alpha_{1},\ldots,\alpha_{N}}(x_{1},\ldots,x_{N}) =\\ \operatorname{tr}\bigg[ B M_Q(-L/2,x_1) R_{\alpha_1}(x_1) M_Q(x_1,x_2) \cdots R_{\alpha_N}(x_N) M_Q(x_N,L/2) \bigg]\label{eq:cmpsNparticle}\end{gathered}$$ only when $x_1\leq x_2\leq \cdots \leq x_N$. It can be extended to any other order of the arguments by reordering the annihilation operators in Eq.  according to the given commutation or anticommutation relations in Eq. .
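For constant $Q$ and $R$ the path-ordered exponential $M_Q(x,y)$ collapses to an ordinary matrix exponential, so the $N$-particle wave functions above can be evaluated with plain matrix algebra. A sketch for $N=2$, a single species and bond dimension $D=2$ (the matrix entries are made up for illustration):

```python
import numpy as np

def expm_series(M, terms=40):
    """Matrix exponential via truncated Taylor series, adequate for the
    small, well-conditioned matrices used in this illustration."""
    out = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Illustrative constant Q and R (bond dimension D = 2) with open
# boundary conditions B = v_R v_L^dagger; the numbers are made up.
L = 1.0
Q = np.array([[-0.5, 0.2], [0.0, -1.0]], dtype=complex)
R = np.array([[0.0, 1.0], [0.3, 0.0]], dtype=complex)
B = np.outer([1.0, 0.0], [1.0, 0.0])

def M_Q(x, y):
    """M_Q(x, y) = P exp(int_x^y Q dz) = exp((y - x) Q) for constant Q."""
    return expm_series((y - x) * Q)

def phi2(x1, x2):
    """Two-particle wave function
    tr[B M_Q(-L/2, x1) R M_Q(x1, x2) R M_Q(x2, L/2)] for x1 <= x2."""
    assert -L / 2 <= x1 <= x2 <= L / 2
    return np.trace(B @ M_Q(-L / 2, x1) @ R @ M_Q(x1, x2) @ R @ M_Q(x2, L / 2))

print(phi2(-0.2, 0.3))
```

For position-dependent $Q$ and $R$ the same structure applies, with each $M_Q$ replaced by a numerically accumulated ordered product.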
The non-relativistic kinetic energy requires that these functions are sufficiently regular, which together with the extension to arbitrary order of the arguments imposes certain non-trivial constraints on the matrix functions $Q$ and $R_{\alpha}$ that are to be discussed in Section \[s:regularity\]. The continuum limit of matrix product states {#ss:def:continuum} -------------------------------------------- The cMPS $\ket{\Psi[Q,\{R_{\alpha}\}]}$ was originally constructed in Ref.  as the continuum limit of a certain subset of MPS, where the subset was selected in such a way as to obtain a valid continuum limit. We explore this construction in greater detail and elaborate on some of the non-trivial implications regarding ultraviolet cutoffs and correlation lengths (infrared cutoffs). We approximate the continuum ${\ensuremath{\mathcal{R}}}=[-L/2,L/2]$ by a lattice ${\ensuremath{\mathcal{L}}}$ with lattice spacing $a$ and $N=L/a$ sites, where we send $a\to 0$. On every site of the lattice we can create and annihilate particles of type $\alpha$ by acting with the creation and annihilation operators ${\ensuremath{\hat{c}}}_{\alpha}^{\dagger}(n)$ and ${\ensuremath{\hat{c}}}_{\alpha}(n)$. We can relate them to the field operators by $$\begin{aligned} {\ensuremath{\hat{c}}}_{\alpha}(n)=\int_{na}^{(n+1) a} {\ensuremath{\hat{\psi}}}_{\alpha}(x)\, {\ensuremath{\mathrm{d}}}x\end{aligned}$$ and its hermitian conjugate. The local basis on site $n$ thus consists of the states $\ket{0}_{n}$ (no particles), $\ket{\alpha}_{n}=c_{\alpha}^{\dagger}(n)\ket{0}_{n}$, $\ket{\alpha,\beta}_{n}=c_{\alpha}^{\dagger}(n)c_{\beta}^{\dagger}(n)\ket{0}_{n}$, … On this lattice, we can define an MPS $\ket{\Psi[A]}$ with matrices $A^{s}(n)$ where $s$ can take values $0$, $\alpha$, $(\alpha,\beta)$, … If the local basis is infinite-dimensional, this MPS definition is only formal, *i.e.* it cannot be used for practical computations. 
In the limit $a\to 0$, the number of sites $L/a$ in the lattice ${\ensuremath{\mathcal{L}}}$ goes to infinity. On an infinite number of lattice sites, two arbitrary MPS are generally orthogonal due to the (infrared) orthogonality catastrophe[@Anderson:1967aa]. Since we now aim to create quantum field states within the Fock space ${\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{(\text{F})}$, we need to restrict to a special subset of MPS where the total number of particles is finite (on average, so that $\braket{ {\ensuremath{\hat{N}}}}$ is finite). Since a finite number of particles has to be distributed over a diverging number of sites $L/a$, most of the sites in the lattice ${\ensuremath{\mathcal{L}}}$ are empty on average. So $A^{0}$ has to be the dominant matrix, and it turns out that the cMPS $\ket{\Psi[Q,\{R_{\alpha}\}]}\in{\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{(\text{F})}$ can be obtained from the continuum limit ($a\to 0$) of the MPS $\ket{\Psi[A]}\in{\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{L}}}}$ by identifying ${\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(n a)={\ensuremath{\hat{c}}}^{\dagger}_{\alpha}(n)/\sqrt{a}$ and $$\begin{aligned} A^{0}(n)&={\openone}_{D}+a Q(n a),\nonumber\\ A^{\alpha}(n) &= \sqrt{a} R_{\alpha}(n a),\nonumber\\ A^{(\alpha,\beta)}(n) &= \begin{cases} \frac{a}{2} [ R_{\alpha}(n a) R_{\beta}(n a)+\eta_{\alpha,\beta} R_{\beta}(n a) R_{\alpha}(n a)],& \alpha\neq \beta\\ \frac{a}{2} R_{\alpha}(n a)^{2},&\alpha=\beta \end{cases}\label{eq:correspondencemps}\\ &\ldots\nonumber\end{aligned}$$ together with $\ket{\Omega}=\ket{\bm{0}}=\otimes_{n\in{\ensuremath{\mathcal{L}}}} \ket{0}_{n}$, $\forall n=-L/2a,-L/2a+1,\ldots,+L/2a-1$. 
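As a small numerical illustration of this correspondence (a sketch assuming NumPy and SciPy; the uniform $Q$, the choice $B={\openone}_D$, and the vacuum amplitude as the probe are illustrative choices, not the only check one could make), the zero-particle amplitude of the lattice MPS, $\operatorname{tr}[B\,(A^{0})^{N}]$ with $A^{0}={\openone}_{D}+aQ$ and $N=L/a$, converges to the cMPS value $\operatorname{tr}[B\,\mathrm{e}^{LQ}]$ at first order in $a$:

```python
import numpy as np
from scipy.linalg import expm

D, L = 3, 1.0
rng = np.random.default_rng(1)
Q = 0.3 * (rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D)))
B = np.eye(D)                       # periodic boundary conditions

def vacuum_amplitude(a):
    """<0...0|Psi[A]> on a lattice with spacing a: tr[B (A^0)^N], A^0 = 1 + a Q."""
    N = int(round(L / a))
    A0 = np.eye(D) + a * Q
    return np.trace(B @ np.linalg.matrix_power(A0, N))

continuum = np.trace(B @ expm(L * Q))
errors = [abs(vacuum_amplitude(a) - continuum) for a in (1e-2, 1e-3, 1e-4)]
assert errors[0] > errors[1] > errors[2]    # error shrinks linearly with a
assert errors[2] < 1e-3
```

The same identification reproduces the $N$-particle amplitudes once factors $A^{\alpha}=\sqrt{a}\,R_{\alpha}$ are inserted at the occupied sites.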
This equivalence can be obtained from a Taylor expansion of the $\exp$-operator, although this is only completely rigorous when the entries of $Q$ and $R_{\alpha}$ are finite and the operators ${\ensuremath{\hat{\psi}^\dagger}}(x)$ are bounded (*i.e.* not for bosons). Most results for cMPS in the remainder of this chapter can be derived from this correspondence with MPS, but we attempt to derive these results directly in the continuum as much as possible. The correspondence with MPS is useful for concluding that the entanglement of one half of the chain with the other half (in the case of open boundary conditions) is limited by the upper bound $\log D$. By restricting to MPS within a single Fock space in the thermodynamic limit, we avoid the orthogonality catastrophe. The infrared orthogonality catastrophe of MPS in the thermodynamic limit would turn into an ultraviolet catastrophe if this infinitely sized lattice ${\ensuremath{\mathcal{L}}}$ were to correspond to the continuum limit of a finitely sized continuum ${\ensuremath{\mathcal{R}}}$. Physically, the ultraviolet catastrophe is avoided because the finite number of particles induces a physical cutoff $a_{\text{phys}}$ that is given not by the lattice spacing $a\to 0$ but by $a_{\text{phys}}=\rho^{-1}$ with $\rho=\braket{{\ensuremath{\hat{N}}}}/L$ the particle density[^1]. The presence of a physical length scale can be detected from the physical dimensions of $Q$ and $R_{\alpha}$, which are given by $[Q]=\ell^{-1}$ and $[R]=\ell^{-1/2}$ with $\ell$ a generic length dimension. The nature of the physical cutoff $a_{\text{phys}}$ and its relation to $Q$ and $R_{\alpha}$ is discussed in Section \[s:ti\] for the translation invariant case, where it can unambiguously be defined. Shifting the cutoff from the lattice spacing $a$ to a physical value $a_{\text{phys}}$ is a very important step in the definition of cMPS.
MPS with finite bond dimension $D$ have a finite amount of entanglement, to which there corresponds in general a finite range of fluctuations $\xi/a$, where $\xi$ denotes the correlation length. Hence, they have in general a finite dimensionless correlation length $\tilde{\xi}=\xi/a$. As $a$ is scaled to zero while $\tilde{\xi}$ remains finite, the physical correlation length $\xi$ would also scale to zero. It is because the physical cutoff is shifted to a finite value $a_{\text{phys}}$ (with thus $a_{\text{phys}}/a\to \infty$) that cMPS are able to combine a finite amount of entanglement with a finite physical correlation length $\xi$ (with thus $\xi/a\to \infty$ but with $\xi/a_{\text{phys}}$ finite). The physical correlation length $\xi$ is also computed in Section \[s:ti\] for the translation invariant case. Alternative construction through continuous measurement {#ss:def:continuousmeasurement} ------------------------------------------------------- Rather than trying to construct a cMPS as the continuum limit of an MPS, we could also try to directly define the continuum limit of the processes that define MPS. Unfortunately, the process of sequential Schmidt decompositions has no straightforward generalization to the continuum and neither has the definition of valence bond solids. One can however define a continuum version of the sequential generation process that creates MPS[@2005PhRvL..95k0503S], based on the paradigm of continuous measurement [@Caves:1987aa]. The resulting process for creating cMPS is described in Ref. , and is summarised here for the sake of completeness. As in the discrete case, let the ancilla start in a state $\bm{v}_{\text{R}}\in{\ensuremath{{\ensuremath{\mathbb{H}}}}}_{\text{ancilla}}=\mathbb{C}^{D}$. This ancilla can be interpreted as a resonating cavity with $D$ internal levels, in which there is a particle source that creates particles of type $\alpha$ ($\alpha=1,\ldots,q$). These particles gradually leave the cavity due to cavity losses.
Since particles leaving the cavity at different times occupy different positions in space at a given time (since they travel at a certain speed which we set equal to one), the resulting configuration of particles can be interpreted as a static spatially distributed quantum state. For a compact cavity (*i.e.* a zero-dimensional system), the resulting quantum state is one-dimensional. As an abstraction of this physical process, a $(d-1)$-dimensional cavity can be used to encode a $d$-dimensional holographic quantum state. We refer to Ref.  for the general case, and henceforth restrict to the $d=1$ case that produces cMPS. Between two particle emissions, the cavity evolves according to a Hamiltonian $K\in\operatorname{\mathbb{L}}(\mathbb{C}^D)$ (a Hermitian $D\times D$ matrix), whereas the physical state outside the cavity does not evolve. By observing the particles that are emitted from the cavity, we are continuously measuring the state of the cavity (*i.e.* the ancilla). The state of the cavity at time $t$ is encoded in the particle distribution at position $x=-t$. It was shown that the resulting configuration of particles outside the cavity is given by $$\bm{v}_{\mathrm{L}}^{\dagger}{\ensuremath{\mathscr{P}\exp}}\left(-{\ensuremath{\mathrm{i}}}\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\, K(x)\otimes{\ensuremath{\hat{{\openone}}}} + \sum_{\alpha=1}^{q}{\ensuremath{\mathrm{i}}}R_{\alpha}(x)\otimes {\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(x)-{\ensuremath{\mathrm{i}}}R_{\alpha}(x)^{\dagger}\otimes {\ensuremath{\hat{\psi}}}_{\alpha}(x)\right) \bm{v}_{\mathrm{R}} \ket{\Omega},\label{eq:defcontmeasurement}$$ where the ancilla is projected onto the state $\bm{v}_{\mathrm{L}}$ at the end of the measurement, in order to decouple it from the physical state. The resulting expression does not yet correspond exactly to Eq. 
but it can easily be brought into the required form by using the Baker-Campbell-Hausdorff formula on every infinitesimal patch of the path ordered exponential. We then obtain that the state in Eq.  is contained within ${\ensuremath{\mathcal{V}}}_{\mathrm{cMPS}}$, as it is equal to $\ket{\Psi[Q,\{R_{\alpha}\}]}$ for the specific choice $$\begin{aligned} Q(x)=-{\ensuremath{\mathrm{i}}}K(x) -\frac{1}{2}\sum_{\alpha=1}^{q} R_{\alpha}(x)^{\dagger}R_{\alpha}(x).\label{eq:qunitary}\end{aligned}$$ We recall that $K(x)$ is a Hermitian matrix. Generic cMPS can be brought into this form by using the gauge invariance of the cMPS representation, as discussed in Section \[s:gauge\]. This construction allows us to introduce a unitary operator ${\ensuremath{\hat{U}}}(y,z)\in\operatorname{\mathbb{L}}(\mathbb{C}^{D}\otimes {\ensuremath{{\ensuremath{\mathbb{H}}}}})$ $${\ensuremath{\hat{U}}}(y,z)={\ensuremath{\mathscr{P}\exp}}\left(-{\ensuremath{\mathrm{i}}}\int_{z}^{y}{\ensuremath{\mathrm{d}}}x\, K(x)\otimes{\ensuremath{\hat{{\openone}}}} + \sum_{\alpha=1}^{q}{\ensuremath{\mathrm{i}}}R_{\alpha}(x)\otimes {\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(x)-{\ensuremath{\mathrm{i}}}R_{\alpha}(x)^{\dagger}\otimes {\ensuremath{\hat{\psi}}}_{\alpha}(x)\right).\label{eq:defUalternative}$$ Being a unitary operator, it conserves the norm of $\bm{v}_{\mathrm{R}}\otimes\ket{\Omega}$. This does not imply that the cMPS $\ket{\Psi[Q,\{R_{\alpha}\}]}$ with $Q$ given by Eq.  is automatically normalized to unity, because the definition also involves a projection to $\bm{v}_{\mathrm{L}}$. But the unitarity of ${\ensuremath{\hat{U}}}(y,z)$ in Eq.  does guarantee that $\ket{\Psi[Q,\{R_{\alpha}\}]}$ can easily be normalized, and that its norm neither diverges nor vanishes in the large volume limit. From a physical perspective, this construction is important as it clearly sketches the holographic properties of the cMPS.
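With this choice of $Q$, its anti-Hermitian part is fixed by $K$ and its Hermitian part by $-\tfrac{1}{2}\sum_{\alpha}R_{\alpha}^{\dagger}R_{\alpha}$, so that $Q+Q^{\dagger}+\sum_{\alpha}R_{\alpha}^{\dagger}R_{\alpha}=0$ holds identically. A minimal NumPy check for a single species (the random matrices are arbitrary illustrative input):

```python
import numpy as np

D = 4
rng = np.random.default_rng(2)
H = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
K = (H + H.conj().T) / 2            # Hermitian generator of the ancilla evolution
R = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))

Q = -1j * K - 0.5 * (R.conj().T @ R)   # the choice of Q above, single species

# Anti-Hermitian part from K, Hermitian part from -R^dag R / 2, so the
# combination Q + Q^dag + R^dag R vanishes identically:
assert np.allclose(Q + Q.conj().T + R.conj().T @ R, 0)
```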
The physical state of a one-dimensional system is described by a zero-dimensional boundary theory. The spatial coordinate of the physical system acts as a time coordinate in the boundary theory. The physical state is created because the boundary theory interacts with the physical system, where the position of the interaction shifts linearly in time. This interaction results in the boundary theory not being at equilibrium. Instead, the boundary theory is subject to dissipative dynamics, as will become clear in the following section. This holographic property is of course strongly related to the intrinsic area law for entanglement that is present in cMPS. Path integral representation {#ss:def:pathintegral} ---------------------------- Recently, it has also been illustrated that we can break up the path ordered exponential in the definition of $\ket{\Psi[Q,\{R_\alpha\}]}$ and insert resolutions of the identity in order to obtain a path integral description of the same state[@Brockt:fk]. The easiest way to insert an identity is by first introducing a second quantized version of the ancilla by making the substitution $$\begin{aligned} Q(x)& \mapsto \hat{Q}(x)=Q^{j,k}(x) \hat{b}_j^\dagger \hat{b}_k,&R_{\alpha}(x) &\mapsto \hat{R}_{\alpha}(x)=R_{\alpha}^{j,k}(x) \hat{b}_j^\dagger \hat{b}_k,\end{aligned}$$ with $\hat{b}_j$ and $\hat{b}^\dagger_j$ annihilation and creation operators for bosonic or fermionic particles in level $j=1,\ldots,D$ of the ancilla. The resolution of the identity can now be expressed in terms of coherent states. However, the ancilla Hilbert space is now an infinite-dimensional Fock space, whereas the original ancilla space was only $\mathbb{C}^D$ and corresponds to the single-particle sector of this Fock space.
Because the operators $\hat{Q}(x)$ and $\hat{R}_{\alpha}(x)$ are particle-number preserving with respect to the ancilla, we can restrict the whole path integral to the single-particle sector by choosing appropriate boundary conditions. If $\ket{\omega}$ denotes the ancilla zero-particle state, then a restriction to the single-particle sector is obtained by identifying $$\begin{aligned} B&\mapsto \hat{B}=B^{j,k} b^\dagger_j \ket{\omega}\bra{\omega} b_k.\end{aligned}$$ If we introduce the coherent states $$\ket{\phi}=\exp\left(\sum_{j=1}^{D} \phi_j \hat{b}^{\dagger}_j - \phi^\ast_j \hat{b}_j\right)\ket{\omega}$$ then we can write the identity as $$\hat{{\openone}}=\frac{1}{\pi^D} \int \prod_{j=1}^D {\ensuremath{\mathrm{d}}}\phi_j{\ensuremath{\mathrm{d}}}\phi_j^\ast \, \ket{\phi}\bra{\phi}.$$ Following the standard recipe, we can then obtain the path integral description of $\ket{\Psi[Q,\{R_{\alpha}\}]}$ as $$\begin{gathered} \ket{\Psi[Q,\{R_{\alpha}\}]}=\\ \int \mathscr{D} \phi \mathscr{D}\phi^{\ast} \left(\phi(+L/2)^\dagger B \phi(-L/2)\right) {\ensuremath{\mathrm{e}}}^{-\frac{\lvert \phi(-L/2)\rvert^2}{2}-\frac{\lvert \phi(L/2)\rvert^2}{2}}\qquad\qquad\qquad\qquad\qquad\qquad\\ \times \exp\bigg[\int_{-L/2}^{+L/2} \Big\{\frac{1}{2}\phi^\dagger(x)\frac{{\ensuremath{\mathrm{d}}}\phi}{{\ensuremath{\mathrm{d}}}x}(x) -\frac{1}{2} \frac{{\ensuremath{\mathrm{d}}}\phi^{\dagger}}{{\ensuremath{\mathrm{d}}}x}(x) \phi(x) + \phi^\dagger(x)Q(x)\phi(x)\\ + \sum_{\alpha=1}^{q} \left(\phi^\dagger(x) R_{\alpha}(x)\phi(x)\right) {\ensuremath{\hat{\psi}^\dagger}}_\alpha(x)\Big\}\, {\ensuremath{\mathrm{d}}}x \bigg]\ket{\Omega},\label{eq:pathintegralrepresentation}\end{gathered}$$ where $\phi(x)$ is a $D$-dimensional vector function with components $\phi_j(x)$, $j=1,\ldots,D$.
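The coherent-state resolution of the identity used above can be verified numerically in a truncated single-mode Fock space. A sketch (assuming NumPy; we take $D=1$, so a single bosonic ancilla level, and the polar integration grid is an arbitrary discretization choice): the matrix elements $\frac{1}{\pi}\int{\ensuremath{\mathrm{d}}}^2\phi\,\braket{m|\phi}\braket{\phi|n}$ reproduce $\delta_{mn}$.

```python
import numpy as np
from math import factorial

def number_overlap(m, phi):
    """<m|phi> for a coherent state |phi>: exp(-|phi|^2/2) phi^m / sqrt(m!)."""
    return np.exp(-np.abs(phi)**2 / 2) * phi**m / np.sqrt(factorial(m))

# Midpoint grid in polar coordinates for (1/pi) int d^2phi, d^2phi = r dr dtheta:
dr, dth = 0.005, 2.0 * np.pi / 256
r = (np.arange(1600) + 0.5) * dr            # radial cutoff at r = 8
theta = np.arange(256) * dth
R_, T_ = np.meshgrid(r, theta, indexing="ij")
phi = R_ * np.exp(1j * T_)

# <m| (1/pi) int d^2phi |phi><phi| |n> should reproduce the identity, delta_mn:
for m in range(4):
    for n in range(4):
        integrand = number_overlap(m, phi) * np.conj(number_overlap(n, phi)) * R_
        val = integrand.sum() * dr * dth / np.pi
        assert abs(val - (1.0 if m == n else 0.0)) < 1e-3
```

The Gaussian weight $\mathrm{e}^{-|\phi|^2}$ makes the radial cutoff harmless, and the uniform angular grid kills all $m\neq n$ elements exactly.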
This path integral representation can serve as a useful starting point for generalizations of the cMPS, *e.g.* by replacing the second quantized auxiliary system by a true field theory, so that this becomes the cMPS analogue of the construction in Ref. . If this field theory is a conformal field theory, it is then very close in spirit to some model states for quantum Hall systems[@Moore1991362; @Dubail:fk]. Regularity conditions {#s:regularity} ===================== In Eq.  we have defined the $N$-particle wave functions $\phi_{\alpha_{1},\ldots,\alpha_{N}}(x_{1},\ldots,x_{N})$. For $x_{1}\leq \cdots \leq x_{N}$ these are completely specified by Eq. . However, for general choices of the matrix functions $Q$ and $R_{\alpha}$, the extension of Eq.  to all orders of its arguments does not automatically possess the properties that a physical $N$-particle wave function should satisfy. For example, the $N$-particle wave functions should be differentiable in each of their arguments if the state is to produce a finite non-relativistic kinetic energy. However, there is no need to work with the Fock space expansion of Eq. . We can check the regularity of the $N$-particle wave functions by immediately evaluating the kinetic energy in second quantization. For further reference, we first define $$\begin{aligned} {\ensuremath{\hat{U}}}(x,y)=\mathscr{P} \exp\left[\int_{x}^{y}{\ensuremath{\mathrm{d}}}z\, \left\{Q(z)\otimes {\ensuremath{\hat{{\openone}}}} + \sum_{\alpha=1}^{q}R_{\alpha}(z)\otimes {\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(z)\right\}\right]\label{eq:defU},\end{aligned}$$ where ${\ensuremath{\hat{U}}}(x,y)\in\operatorname{\mathbb{L}}({\ensuremath{{\ensuremath{\mathbb{H}}}}}\otimes \mathbb{C}^{D})$ with $\mathbb{C}^{D}$ the ancilla space, *i.e.* it is a $D\times D$ matrix of operators. Unlike the operator ${\ensuremath{\hat{U}}}(y,z)$ defined in Subsection \[ss:def:continuousmeasurement\], the operator in Eq.  is not unitary.
It only equals the unitary version when acting on $\ket{\Omega}$ and if $Q(z)$ is given by Eq. . In addition, we define a closely related set of operators ${\ensuremath{\hat{U}}}_\alpha(x,y)$ ($\alpha=1,\ldots,q$) as $${\ensuremath{\hat{U}}}_{\alpha}(x,y)=\mathscr{P} \exp\left[\int_{x}^{y}{\ensuremath{\mathrm{d}}}z\,\left\{ Q(z)\otimes {\ensuremath{\hat{{\openone}}}} + \sum_{\beta=1}^{q}\eta_{\alpha,\beta}R_{\beta}(z)\otimes {\ensuremath{\hat{\psi}^\dagger}}_{\beta}(z)\right\}\right]\label{eq:defUalpha}.$$ In order to compute any expectation value, which is the topic of the next section, we need to be able to act with the field annihilation operators ${\ensuremath{\hat{\psi}}}_{\alpha}(x)$ on the state $\ket{\Psi[Q,\{R_{\alpha}\}]}$. If we are able to drag ${\ensuremath{\hat{\psi}}}_{\alpha}(x)$ through the path-ordered exponential, it then acts on $\ket{\Omega}$, which is annihilated by any field operator. We can now use Eq.  as derived in Appendix \[a:formula\], where ${\ensuremath{\hat{B}}}={\ensuremath{\hat{\psi}}}_{\alpha}(x)$, ${\ensuremath{\hat{A}}}_1(z)$ contains both $Q(z)\otimes {\ensuremath{\hat{{\openone}}}}$ and any term $R_{\beta}(z)\otimes {\ensuremath{\hat{\psi}^\dagger}}_{\beta}(z)$ for which $\eta_{\alpha,\beta}=1$, and ${\ensuremath{\hat{A}}}_2(z)$ contains the terms $R_{\beta}(z)\otimes {\ensuremath{\hat{\psi}^\dagger}}_{\beta}(z)$ for which $\eta_{\alpha,\beta}=-1$. 
We then obtain $${\ensuremath{\hat{\psi}}}_{\alpha}(x){\ensuremath{\hat{U}}}(-L/2,+L/2)-{\ensuremath{\hat{U}}}_{\alpha}(-L/2,+L/2){\ensuremath{\hat{\psi}}}_{\alpha}(x)={\ensuremath{\hat{U}}}_{\alpha}(-L/2,x) R_{\alpha}(x) {\ensuremath{\hat{U}}}(x,+L/2)$$ which immediately results in $${\ensuremath{\hat{\psi}}}_{\alpha}(x) \ket{\Psi[Q,\{R_{\beta}\}]}=\operatorname{tr}\left[B {\ensuremath{\hat{U}}}_{\alpha}(-L/2,x) R_{\alpha}(x) {\ensuremath{\hat{U}}}(x,+L/2)\right]\ket{\Omega}.\label{eq:psiPsi}$$ Hence, acting with an annihilation operator of type $\alpha$ at position $x$ not only brings down a matrix $R_{\alpha}(x)$, but also transforms the path ordered exponential ${\ensuremath{\hat{U}}}(-L/2,x)$ into ${\ensuremath{\hat{U}}}_{\alpha}(-L/2,x)$, because we had to take the particle statistics into account for bringing ${\ensuremath{\hat{\psi}}}_{\alpha}(x)$ to the position where it could bring down $R_{\alpha}(x)$. The non-relativistic kinetic energy operator ${\ensuremath{\hat{T}}}$ is given by $${\ensuremath{\hat{T}}}=\int_{-L/2}^{+L/2}{\ensuremath{\hat{t}}}(x)\,{\ensuremath{\mathrm{d}}}x,$$ where the kinetic energy density ${\ensuremath{\hat{t}}}(x)$ at position $x$ is given by $${\ensuremath{\hat{t}}}(x)=\sum_{\alpha=1}^{q} \frac{1}{2 m_{\alpha}} \left(\frac{{\ensuremath{\mathrm{d}}}{\ensuremath{\hat{\psi}^\dagger}}_{\alpha}}{{\ensuremath{\mathrm{d}}}x}(x)\right) \left(\frac{{\ensuremath{\mathrm{d}}}{\ensuremath{\hat{\psi}}}_{\alpha}}{{\ensuremath{\mathrm{d}}}x}(x)\right).$$ Hence, a finite kinetic energy expectation value $\braket{\Psi[\overline{Q},\{\overline{R}_{\alpha}\}]|{\ensuremath{\hat{T}}}|\Psi[Q,\{R_{\alpha}\}]}$ requires that the state $\frac{{\ensuremath{\mathrm{d}}}{\ensuremath{\hat{\psi}}}_{\alpha}}{{\ensuremath{\mathrm{d}}}x}(x)\ket{\Psi[Q,\{R_{\alpha}\}]}$ has a finite norm. Differentiating Eq.  and using Eq. 
, we obtain $$\begin{aligned} \frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x}{\ensuremath{\hat{\psi}}}_{\alpha}(x)& \ket{\Psi[Q,\{R_{\beta}\}]}\nonumber\\ =&\operatorname{tr}\Bigg[B {\ensuremath{\hat{U}}}_{\alpha}(-L/2,x) \bigg([Q(x),R_{\alpha}(x)]+\frac{{\ensuremath{\mathrm{d}}}R_{\alpha}}{{\ensuremath{\mathrm{d}}}x}(x)\bigg){\ensuremath{\hat{U}}}(x,+L/2)\Bigg]\ket{\Omega}\nonumber\\ &+\operatorname{tr}\Bigg[B {\ensuremath{\hat{U}}}_{\alpha}(-L/2,x) \bigg(\sum_{\beta=1}^{q}\big[\eta_{\alpha,\beta} R_{\beta}(x)R_{\alpha}(x)\nonumber\\ &\qquad\qquad\qquad\qquad\qquad- R_{\alpha}(x)R_{\beta}(x)\big]\otimes{\ensuremath{\hat{\psi}^\dagger}}_{\beta}(x)\bigg){\ensuremath{\hat{U}}}(x,+L/2)\Bigg]\ket{\Omega}.\label{eq:diffpsiPsi}\end{aligned}$$ The term on the first line can be shown to have a finite norm (see next section), provided of course that $R_\alpha(x)$ is a differentiable function with a well-behaved derivative ${\ensuremath{\mathrm{d}}}R_\alpha(x)/{\ensuremath{\mathrm{d}}}x$ at any $x\in{\ensuremath{\mathcal{R}}}$. Since the term on the second line of Eq.  has particles of any species $\beta=1,\ldots,q$ being created at the fixed position $x$, this term is not normalizable. Put differently, $\lVert ({\ensuremath{\mathrm{d}}}{\ensuremath{\hat{\psi}}}(x)/{\ensuremath{\mathrm{d}}}x)\ket{\Psi[Q,\{R_{\alpha}\}]}\rVert^{2}$ contains a divergent contribution $\delta(0)$ (in position space), unless we impose the *regularity condition* $$\begin{aligned} \eta_{\alpha,\beta} R_{\beta}(x)R_{\alpha}(x) -R_{\alpha}(x) R_{\beta}(x)=0, \quad \forall x\in {\ensuremath{\mathcal{R}}}.\label{eq:regcondition}\end{aligned}$$ Hence the matrices $R_{\alpha}$ should have the same statistics as the particle creation operators to which they couple. For systems with a single species of bosons, the condition in Eq.  is automatically fulfilled. For systems with multiple species of bosons, it requires that any two matrices $R_{\alpha}(x)$ and $R_{\beta}(x)$ at the same spatial point $x$ commute.
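The regularity condition thus demands commuting $R$ matrices for two bosonic species ($\eta_{\alpha,\beta}=1$) and, for a fermionic species with itself ($\eta_{\alpha,\alpha}=-1$), a matrix squaring to zero. A minimal numerical illustration (NumPy; the particular matrices are arbitrary examples satisfying the condition, not a canonical choice):

```python
import numpy as np

# Two bosonic species: the R matrices must commute at equal x.
Ra = np.diag([1.0, 2.0, 3.0])
Rb = np.diag([0.5, -1.0, 2.0])          # simultaneously diagonal => commuting
assert np.allclose(Ra @ Rb - Rb @ Ra, 0)

# A fermionic species with itself (eta = -1): R must square to zero.
Rf = np.array([[0.0, 1.0],
               [0.0, 0.0]])             # nilpotent of order 2
assert np.allclose(Rf @ Rf, 0)
```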
If $\alpha$ is a fermionic particle species, the corresponding matrix $R_{\alpha}(x)$ has to satisfy $R_{\alpha}(x)^{2}=0$, $\forall x\in{\ensuremath{\mathcal{R}}}$. When two particles of fermionic type $\alpha$ approach each other, there is a corresponding factor $R_{\alpha}(y) {\ensuremath{\mathscr{P}\exp}}(\int_{y}^{z}{\ensuremath{\mathrm{d}}}x\, Q(x)) R_{\alpha}(z)$ in the $N$-particle wave function $\phi_{\alpha_{1},\ldots,\alpha,\alpha,\ldots \alpha_{N}}(x_{1},\ldots,y,z,\ldots,x_{N})$. For $y\to z$, the exponential factor continuously evolves towards ${\openone}_{D}$, so that the $N$-particle wave function continuously becomes zero. Hence, the finiteness of the kinetic energy requires that two fermionic particles of the same type cannot come arbitrarily close together and thus imposes Pauli’s principle. Differentiability of the wave function is sufficient for a finite kinetic energy, which is by far the most important physical requirement of the wave function. We can also impose higher regularity constraints on the $N$-particle wave functions. Since these do not in general arise from physical considerations, we postpone this discussion to Appendix \[a:higherorderregularity\]. While the resulting conditions are interesting from an algebraic point of view, they are in general hard to satisfy with finite-dimensional matrices. For practical applications, satisfying the original condition in Eq. , as imposed by the finiteness of the kinetic energy, should be sufficient. We conclude this subsection by investigating what else can be learned from the physical considerations concerning particle statistics. The regularity conditions \[Eq. \] already require that the matrices $R_{\alpha}$ behave as the corresponding operators ${\ensuremath{\hat{\psi}}}_{\alpha}$ in terms of commutation and anticommutation relations.
In a physical system, we should not have fermionic condensates, *i.e.* $\braket{\Psi|{\ensuremath{\hat{\psi}}}_{\alpha}(x)|\Psi}=0$ if particle species $\alpha$ is fermionic. This is a consequence of the invariance of a physical Hamiltonian ${\ensuremath{{\ensuremath{\hat{H}}}}}$ under the action of the parity operator ${\ensuremath{\hat{P}}}$, which flips the sign of any fermionic operator (${\ensuremath{\hat{P}}}{\ensuremath{\hat{\psi}}}_{\alpha}(x){\ensuremath{\hat{P}}}=\eta_{\alpha,\alpha}{\ensuremath{\hat{\psi}}}_{\alpha}(x)$) and squares to the identity (${\ensuremath{\hat{P}}}={\ensuremath{\hat{P}}}^{-1}={\ensuremath{\hat{P}}}^{\dagger}$). We can construct ${\ensuremath{\hat{P}}}$ as $${\ensuremath{\hat{P}}}=\exp\left[{\ensuremath{\mathrm{i}}}\pi \sum_{\alpha\ \text{fermionic}} {\ensuremath{\hat{N}}}_{\alpha}\right]=\exp\left[{\ensuremath{\mathrm{i}}}\pi \sum_{\alpha\ \text{fermionic}} \int_{{\ensuremath{\mathcal{R}}}} {\ensuremath{\mathrm{d}}}x\, {\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(x){\ensuremath{\hat{\psi}}}_{\alpha}(x)\right].$$ Physical states satisfy ${\ensuremath{\hat{P}}}\ket{\Psi}={\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathrm{i}}}\phi} \ket{\Psi}$, where ${\ensuremath{\hat{P}}}^{2}={\ensuremath{\hat{{\openone}}}}$ requires $\phi=0$ or $\phi=\pi$. Physical states are thus superpositions of states that all have either an even or an odd number of fermions. Imposing this same property for cMPS requires one to explicitly incorporate the $\mathbb{Z}_{2}$ symmetry (with group elements $\{{\ensuremath{\hat{{\openone}}}},{\ensuremath{\hat{P}}}\}$) in the matrix structure of $R_{\alpha}$ and $Q$. Since ${\ensuremath{\hat{P}}}\ket{\Psi[Q,\{R_{\alpha}\}]}=\ket{\Psi[Q,\{\eta_{\alpha,\alpha} R_{\alpha}\}]}$, we should also be able to define a virtual operator $P\in\operatorname{\mathbb{L}}(\mathbb{C}^{D})$ such that $P Q P^{-1}=Q$ and $P R_{\alpha} P^{-1} =\eta_{\alpha,\alpha} R_{\alpha}$.
This operator can in principle be $x$-dependent, but we should then be able to apply a local gauge transformation (see Section \[s:gauge\]) in order to make $P$ space-independent. In addition, it is clear from the definition that $P$ is an involution ($P=P^{-1}$). If we can assume that $P$ is diagonalizable, then $P$ divides the ancilla space $\mathbb{C}^{D}$ into a sector with positive parity (eigenspace of eigenvalue $+1$) and a sector with negative parity (eigenspace of $-1$). A global gauge transformation brings $P$ into the diagonal form $$P=\begin{bmatrix} {\openone}_{D^{(+)}} & 0_{D^{(+)}\times D^{(-)}} \\ 0_{D^{(-)}\times D^{(+)}} & -{\openone}_{D^{(-)}}\end{bmatrix}$$ with $D^{(+)}+D^{(-)}=D$. The required transformation behavior of $Q$ and $R_{\alpha}$ then imposes the following decomposition $$\begin{aligned} Q&=\begin{bmatrix} Q^{(+)} & 0_{D^{(+)}\times D^{(-)}} \\ 0_{D^{(-)}\times D^{(+)}} & Q^{(-)}\end{bmatrix},\\ R_{\alpha}&=\begin{bmatrix} R_{\alpha}^{(+)} & 0_{D^{(+)}\times D^{(-)}} \\ 0_{D^{(-)}\times D^{(+)}} & R_{\alpha}^{(-)} \end{bmatrix}\qquad \text{(particle species $\alpha$ is bosonic)},\\ R_{\alpha}&=\begin{bmatrix} 0_{D^{(+)}\times D^{(+)}} & R_{\alpha}^{(+-)} \\ R_{\alpha}^{(-+)} & 0_{D^{(-)}\times D^{(-)}}\end{bmatrix}\qquad \text{(particle species $\alpha$ is fermionic)}.\end{aligned}$$ In the cMPS $\ket{\Psi[Q,\{R_{\alpha}\}]}$, all contributions with either an even or an odd number of fermions in Eq.  drop out, depending on the boundary matrices $B$.
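The block structure can be checked directly in a small example (a NumPy sketch; the bond dimensions $D^{(+)}=2$, $D^{(-)}=1$ and the random blocks are illustrative choices): a block-diagonal $Q$ commutes with the diagonal parity matrix $P$, while a block-off-diagonal fermionic $R$ anticommutes with it.

```python
import numpy as np

Dp, Dm = 2, 1                            # D^(+) and D^(-); total D = Dp + Dm = 3
P = np.diag([1.0] * Dp + [-1.0] * Dm)    # parity on the ancilla, P = P^{-1}
rng = np.random.default_rng(3)

Q = np.zeros((3, 3))                     # block diagonal: even under parity
Q[:Dp, :Dp] = rng.standard_normal((Dp, Dp))
Q[Dp:, Dp:] = rng.standard_normal((Dm, Dm))

R = np.zeros((3, 3))                     # block off-diagonal: odd under parity
R[:Dp, Dp:] = rng.standard_normal((Dp, Dm))
R[Dp:, :Dp] = rng.standard_normal((Dm, Dp))

assert np.allclose(P @ Q @ P, Q)     # P Q P^{-1} = Q   (P is its own inverse)
assert np.allclose(P @ R @ P, -R)    # P R P^{-1} = -R  (fermionic species)
```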
If only states with an even number of fermions are allowed, $B$ should have a decomposition as $$\begin{aligned} B&=\begin{bmatrix} B^{(+)} & 0_{D^{(+)}\times D^{(-)}} \\ 0_{D^{(-)}\times D^{(+)}} & B^{(-)}\end{bmatrix},\end{aligned}$$ whereas a decomposition of the form $$\begin{aligned} B&=\begin{bmatrix} 0_{D^{(+)}\times D^{(+)}} & B^{(+-)} \\ B^{(-+)} & 0_{D^{(-)}\times D^{(-)}}\end{bmatrix}\end{aligned}$$ is required to select only states with an odd number of fermions. Boundary conditions {#s:bc} =================== We have already mentioned in Section \[s:def\] that the type of boundary conditions —open or periodic— is encoded in the rank of the boundary matrix $B$. For a system with periodic boundary conditions, $B$ has full rank and is typically chosen to be the identity ($B={\openone}_{D}$). Since periodic boundary conditions identify the points $x=-L/2$ and $x=+L/2$, it is natural to assume that the matrix functions $Q$ and $R_{\alpha}$ are also single-valued, *i.e.* $Q(-L/2)=Q(+L/2)$ and $R_{\alpha}(-L/2)=R_{\alpha}(+L/2)$ for all $\alpha=1,\ldots,q$. For a system with open boundary conditions, it is suitable to work with a boundary matrix of the form $B=\bm{v}_{\mathrm{R}}\bm{v}_{\mathrm{L}}^{\dagger}$, *i.e.* the rank of $B$ is one. However, in the case of open boundary conditions, physical requirements impose additional conditions on the $N$-particle wave functions of Eq. . Typically, a finite system is interpreted as being embedded in an infinite system and having an infinitely strong potential energy outside of the interval ${\ensuremath{\mathcal{R}}}$, *i.e.* $v(x)=+\infty$ for $x<-L/2$ and $x>+L/2$. The single particle wave functions that build up the Fock space are zero outside ${\ensuremath{\mathcal{R}}}$. A finite kinetic energy imposes continuity, and thus requires that the single particle wave functions are zero at $x=\pm L/2$.
Consequently, the resulting $N$-particle wave functions have to produce zero as soon as one of the arguments $x_i$ is equal to $\pm L/2$. Since this has to be true for any configuration of the remaining $N-1$ particles, we find that we have to impose $$\begin{aligned} \bm{v}_{\mathrm{L}}^\dagger R_{\alpha}(-L/2)&=0 &\text{and}&&R_{\alpha}(+L/2)\bm{v}_{\mathrm{R}}=0,\qquad \alpha=1,\ldots,q.\label{eq:qropenbc}\end{aligned}$$ A more detailed discussion of these conditions is presented in Ref. [@qgp], where a partial differential equation for the evolution of $Q$ and $R_{\alpha}$ under real or imaginary time dynamics is derived. In order to solve this partial differential equation, it needs to be complemented by the proper boundary conditions as given above. Throughout the remainder of this manuscript, we assume that we are working with cMPS where the matrix functions $Q$ and $R_{\alpha}$ satisfy the required conditions. We now also have to discuss whether we can completely fix the boundary matrix $B$, or whether its entries should be included within the set of variational parameters. While $B={\openone}_{D}$ represents a fixed choice that is well-suited for the case of periodic boundary conditions, we will see in Section \[s:gauge\] that it is beneficial to include one of the two boundary vectors $\bm{v}_{\mathrm{L}}$ or $\bm{v}_{\mathrm{R}}$ in the set of variational parameters in the case of open boundary conditions. In order to have a uniform notation, we do not explicitly denote this dependence in the notation for the state $\ket{\Psi[Q,\{R_\alpha\}]}$. Note that it is impossible to absorb the boundary vectors into the matrices $Q(-L/2)$, $R_{\alpha}(-L/2)$ and $Q(L/2)$, $R_{\alpha}(L/2)$ in the case of open boundary conditions. More generally, unlike in the case of generic MPS on finite lattices, it is impossible for cMPS to use a space-dependent bond dimension $D(x)$, since the required continuity of $D$ in combination with its discrete character enforces a constant value.
Computation of expectation values {#s:expectval} ================================= This section is concerned with the computation of expectation values of normally ordered operators. We have already illustrated how to act with annihilation operators and derivatives thereof in Section \[s:regularity\]. With an MPS, the computation of expectation values boils down to a contraction of the physical indices in the network. In the continuum, however, the intuitive notion of physical indices is a bit lost. We therefore start by computing the overlap of two cMPS $\ket{\Psi[Q,\{R_{\alpha}\}]}$, $\ket{\Psi[Q',\{R_{\alpha}'\}]}$, which are given as an expansion in Fock space \[Eq. \]. It is clear that the basis states ${\ensuremath{\hat{\psi}^\dagger}}_{\alpha_1}(x_1)\cdots {\ensuremath{\hat{\psi}^\dagger}}_{\alpha_N}(x_N)\ket{\Omega}$ are automatically orthogonal for different $N$, and further that $$\begin{gathered} \braket{\Omega|{\ensuremath{\hat{\psi}}}_{\beta_N}(y_N)\cdots {\ensuremath{\hat{\psi}}}_{\beta_1}(y_1){\ensuremath{\hat{\psi}^\dagger}}_{\alpha_1}(x_1)\cdots {\ensuremath{\hat{\psi}^\dagger}}_{\alpha_N}(x_N)|\Omega}=\\ \delta_{\alpha_1,\beta_1}\cdots \delta_{\alpha_N,\beta_N} \delta(x_1-y_1)\cdots \delta(x_N-y_N),\end{gathered}$$ due to the ordering of the arguments $x_1\leq \cdots \leq x_N$ and $y_1\leq \cdots \leq y_N$.
We thus obtain $$\begin{gathered} \braket{\Psi[Q',\{R'_{\alpha}\}]|\Psi[Q,\{R_{\alpha}\}]}=\sum_{N=0}^{+\infty}\sum_{\{\alpha_1,\ldots,\alpha_N\}=1}^{q} \int_{-L/2\leq x_1\leq \cdots \leq x_N\leq +L/2} {\ensuremath{\mathrm{d}}}x_1\cdots {\ensuremath{\mathrm{d}}}x_N\\ \operatorname{tr}\left[B {\ensuremath{\mathscr{P}\exp}}\left(\int_{-L/2}^{x_1} Q(z)\,{\ensuremath{\mathrm{d}}}z\right) R_{\alpha_1}(x_1)\cdots R_{\alpha_N}(x_N) {\ensuremath{\mathscr{P}\exp}}\left(\int_{x_N}^{+L/2} Q(z)\,{\ensuremath{\mathrm{d}}}z\right)\right]\\ \times \operatorname{tr}\left[\overline{B} {\ensuremath{\mathscr{P}\exp}}\left(\int_{-L/2}^{x_1} \overline{Q'(z)}\,{\ensuremath{\mathrm{d}}}z\right) \overline{R'_{\alpha_1}(x_1)}\cdots \overline{R'_{\alpha_N}(x_N)} {\ensuremath{\mathscr{P}\exp}}\left(\int_{x_N}^{+L/2} \overline{Q'(z)}\,{\ensuremath{\mathrm{d}}}z\right)\right].\end{gathered}$$ Using standard direct product identities such as $\operatorname{tr}[A]\operatorname{tr}[B]=\operatorname{tr}[A\otimes B]$, $(AB)\otimes (CD)=(A\otimes C)(B\otimes D)$ and $\exp(A)\otimes \exp(B)=\exp(A\otimes {\openone}_D+ {\openone}_D \otimes B)$ for $D\times D$ matrices, the previous expression can be rewritten as $$\begin{gathered} \braket{\Psi[Q',\{R'_{\alpha}\}]|\Psi[Q,\{R_{\alpha}\}]}=\sum_{N=0}^{+\infty}\sum_{\{\alpha_1,\ldots,\alpha_N\}=1}^{q} \int_{-L/2\leq x_1\leq \cdots \leq x_N\leq +L/2} {\ensuremath{\mathrm{d}}}x_1\cdots {\ensuremath{\mathrm{d}}}x_N\\ \operatorname{tr}\Bigg[(B\otimes \overline{B}) {\ensuremath{\mathscr{P}\exp}}\left(\int_{-L/2}^{x_1} [Q(z)\otimes {\openone}_D+{\openone}_D \otimes \overline{Q'(z)}]\,{\ensuremath{\mathrm{d}}}z\right) (R_{\alpha_1}(x_1)\otimes \overline{R'_{\alpha_1}(x_1)})\cdots \\ (R_{\alpha_N}(x_N)\otimes \overline{R'_{\alpha_N}(x_N)}){\ensuremath{\mathscr{P}\exp}}\left(\int_{x_N}^{+L/2} [Q(z)\otimes {\openone}_D+{\openone}_D\otimes \overline{Q'(z)}]\,{\ensuremath{\mathrm{d}}}z\right)\Bigg].\end{gathered}$$ Reverting the expansion of the path
ordered exponential that led to Eq.  results in $$\begin{gathered} \braket{\Psi[Q',\{R'_{\alpha}\}]|\Psi[Q,\{R_{\alpha}\}]}=\\ \operatorname{tr}\Bigg[(B\otimes \overline{B}) {\ensuremath{\mathscr{P}\exp}}\left(\int_{-L/2}^{+L/2} [Q(x)\otimes {\openone}_D+{\openone}_D \otimes \overline{Q'(x)}+\sum_{\alpha=1}^{q} R_{\alpha}(x)\otimes \overline{R'_{\alpha}(x)}]\,{\ensuremath{\mathrm{d}}}x\right) \Bigg].\end{gathered}$$ From the expression above, we can deduce that in the computation of expectation values ($Q'=Q$, $R_\alpha'=R_\alpha$) a central role is played by the local transfer matrix ${\ensuremath{\mathbb{T}}}(x)$ defined as $${\ensuremath{\mathbb{T}}}(x)=Q(x)\otimes {\openone}_{D}+{\openone}_{D}\otimes \overline{Q(x)} + \sum_{\alpha=1}^{q} R_{\alpha}(x)\otimes \overline{R_{\alpha}(x)}.\label{eq:transferoperator}$$ To this transfer matrix, we can also associate linear maps $\mathscr{T}^{(x)}:\operatorname{\mathbb{L}}(\mathbb{C}^{D})\to \operatorname{\mathbb{L}}(\mathbb{C}^{D})$ and $\widetilde{\mathscr{T}}^{(x)}:\operatorname{\mathbb{L}}(\mathbb{C}^{D})\to \operatorname{\mathbb{L}}(\mathbb{C}^{D})$ that map virtual operators $f$ ($D\times D$ matrices) to $$\begin{aligned} \mathscr{T}^{(x)}(f) &= Q(x) f + f Q(x)^{\dagger}+ \sum_{\alpha=1}^{q} R_{\alpha}(x) f R_{\alpha}(x)^{\dagger},\\ \widetilde{\mathscr{T}}^{(x)}(f) &= f Q(x) + Q(x)^{\dagger}f+ \sum_{\alpha=1}^{q} R_{\alpha}(x)^{\dagger} f R_{\alpha}(x).\end{aligned}$$ The transfer matrix ${\ensuremath{\mathbb{T}}}(z)$ is of course strongly related to the transfer matrix ${\ensuremath{\mathbb{E}}}(n)=\sum_{s} A^s(n)\otimes \overline{A}^s(n)$ that features in expectation values with respect to MPS on the lattice. Indeed, if $\ket{\Psi[A]}$ is the MPS with matrices $A$ as in Eq.
, then the transfer operator ${\ensuremath{\mathbb{T}}}(x)$ is related to the transfer operator ${\ensuremath{\mathbb{E}}}(n)$ of the MPS $\ket{\Psi[A]}$ by ${\ensuremath{\mathbb{E}}}(n)={\ensuremath{\mathbb{{\openone}}}}+a {\ensuremath{\mathbb{T}}}(na)+\operatorname{\mathscr{O}}(a^{2})$. The expectation value of any normally ordered operator ${\ensuremath{\hat{O}}}=:O[\{{\ensuremath{\hat{\psi}^\dagger}}_{\alpha}\},\{{\ensuremath{\hat{\psi}}}_{\beta}\}]:$ can now be computed by first acting with all annihilation operators ${\ensuremath{\hat{\psi}}}_{\alpha}(x)$ on the ket $\ket{\Psi[Q,\{R_{\beta}\}]}$ as in Section \[s:regularity\], and similarly acting with the creation operators on the bra. The result of this is the insertion of some operators acting on the virtual system at the corresponding positions, with operators ${\ensuremath{\hat{U}}}(x,y)$, ${\ensuremath{\hat{U}}}_{\alpha}(x,y)$ or ${\ensuremath{\hat{U}}}_{\alpha,\beta}(x,y)$ connecting them. The expectation value is obtained by “contracting the physical indices”, which results in the inserted virtual operators in the ket combining with those in the bra at the same position[^2], whereas the contraction of the part in between the local insertions results in a path ordered exponential of the transfer matrix. However, to incorporate the particle statistics, we also need to define generalized transfer operators as $$\begin{aligned} {\ensuremath{\mathbb{T}}}_{\alpha}(x)&=Q(x)\otimes {\openone}_{D}+{\openone}_{D}\otimes \overline{Q(x)} + \sum_{\beta=1}^{q} \eta_{\alpha,\beta} R_{\beta}(x)\otimes \overline{R_{\beta}(x)},\\ {\ensuremath{\mathbb{T}}}_{\alpha,\beta}(x)&=Q(x)\otimes {\openone}_{D}+{\openone}_{D}\otimes \overline{Q(x)} + \sum_{\gamma=1}^{q} \eta_{\alpha,\gamma}\eta_{\beta,\gamma} R_{\gamma}(x)\otimes \overline{R_{\gamma}(x)}.\end{aligned}$$ Note that ${\ensuremath{\mathbb{T}}}_{\alpha,\alpha}(x)={\ensuremath{\mathbb{T}}}(x)$ since $\eta_{\alpha,\beta}^{2}=1$.
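As a minimal numerical illustration (not part of the formal development), the transfer matrix of Eq. \[eq:transferoperator\] and the associated map $\mathscr{T}^{(x)}$ can be checked against each other. The sketch below assumes a single bosonic species, random matrices at a fixed point $x$, and NumPy's row-major vectorization, for which $\operatorname{vec}(AfB)=(A\otimes B^{T})\operatorname{vec}(f)$:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # bond dimension (illustrative value)

# random Q and a single bosonic R (q = 1) at some fixed point x
Q = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
R = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
I = np.eye(D)

# transfer matrix T = Q (x) 1 + 1 (x) conj(Q) + R (x) conj(R)
T = np.kron(Q, I) + np.kron(I, Q.conj()) + np.kron(R, R.conj())

def map_T(f):
    """The map T^(x): f -> Q f + f Q^dag + R f R^dag."""
    return Q @ f + f @ Q.conj().T + R @ f @ R.conj().T

# with row-major vectorization, vec(A f B) = (A kron B^T) vec(f),
# so T acting on vec(f) reproduces the map on any virtual operator f
f = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
assert np.allclose((T @ f.reshape(-1)).reshape(D, D), map_T(f))
```

The same identification is used in the sketches further below; note that other vectorization conventions would interchange the roles of $R\otimes\overline{R}$ and $\overline{R}\otimes R$.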
Given this recipe we can, for example, evaluate the correlation function $$\begin{gathered} G^{\alpha,\beta}(x,y)=\braket{\Psi[\overline{Q},\{\overline{R}_{\alpha}\}]|{\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(x){\ensuremath{\hat{\psi}}}_{\beta}(y)|\Psi[Q,\{R_{\alpha}\}]}\\ =\theta(x-y)\operatorname{tr}\bigg[\big(B\otimes \overline{B}\big)\mathscr{P}{\ensuremath{\mathrm{e}}}^{\int_{-L/2}^{y} {\ensuremath{\mathbb{T}}}_{\alpha,\beta}(z)\,{\ensuremath{\mathrm{d}}}z} \big(R_{\beta}(y)\otimes{\openone}_{D}\big) \mathscr{P}{\ensuremath{\mathrm{e}}}^{\int_{y}^{x}{\ensuremath{\mathbb{T}}}_{\alpha}(z)\,{\ensuremath{\mathrm{d}}}z}\\ \shoveright{\times\big({\openone}_{D}\otimes \overline{R_{\alpha}(x)}\big)\mathscr{P}{\ensuremath{\mathrm{e}}}^{\int_{x}^{+L/2}{\ensuremath{\mathbb{T}}}(z)\,{\ensuremath{\mathrm{d}}}z}\bigg]\ }\\ +\theta(y-x)\operatorname{tr}\bigg[\big(B\otimes \overline{B}\big)\mathscr{P}{\ensuremath{\mathrm{e}}}^{\int_{-L/2}^{+x} {\ensuremath{\mathbb{T}}}_{\alpha,\beta}(z)\,{\ensuremath{\mathrm{d}}}z} \big({\openone}_{D}\otimes \overline{R_{\alpha}(x)}\big) \mathscr{P}{\ensuremath{\mathrm{e}}}^{\int_{x}^{y}{\ensuremath{\mathbb{T}}}_{\beta}(z)\,{\ensuremath{\mathrm{d}}}z}\\ \times\big(R_{\beta}(y)\otimes{\openone}_{D}\big)\mathscr{P}{\ensuremath{\mathrm{e}}}^{\int_{y}^{+L/2}{\ensuremath{\mathbb{T}}}(z)\,{\ensuremath{\mathrm{d}}}z}\bigg].\label{eq:corrfungeneric}\end{gathered}$$ If we could store and manipulate variables with a fully continuous $x$-dependence, all quantities in this expression would be $D^{2}\times D^{2}$ matrices. Since such matrices need to be multiplied, this is an operation with computational complexity of $\operatorname{\mathscr{O}}(D^{6})$, or $\operatorname{\mathscr{O}}(D^{5})$ if we exploit the tensor-product structure. For physical systems, we can further simplify Eq. .
When only bosonic particle species are present, all $\eta_{\alpha,\beta}=1$ and ${\ensuremath{\mathbb{T}}}={\ensuremath{\mathbb{T}}}_{\alpha}={\ensuremath{\mathbb{T}}}_{\alpha,\beta}$. If fermionic particle species are present, we should incorporate the $\mathbb{Z}_{2}$ parity symmetry discussed in Section \[s:regularity\]. We can then define an idempotent parity superoperator ${\ensuremath{\mathbb{P}}}=P\otimes \overline{P}$ and we obtain ${\ensuremath{\mathbb{P}}}{\ensuremath{\mathbb{T}}}{\ensuremath{\mathbb{P}}}={\ensuremath{\mathbb{T}}}$, as well as ${\ensuremath{\mathbb{P}}}{\ensuremath{\mathbb{T}}}_{\alpha}{\ensuremath{\mathbb{P}}}={\ensuremath{\mathbb{T}}}_{\alpha}$ and ${\ensuremath{\mathbb{P}}}{\ensuremath{\mathbb{T}}}_{\alpha,\beta}{\ensuremath{\mathbb{P}}}={\ensuremath{\mathbb{T}}}_{\alpha,\beta}$. This allows us to conclude that $\braket{\Psi[\overline{Q},\{\overline{R}_{\alpha}\}]|{\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(x){\ensuremath{\hat{\psi}}}_{\beta}(y)|\Psi[Q,\{R_{\alpha}\}]}=0$ whenever the particle species $\alpha$ and $\beta$ have different statistics. When $\alpha$ and $\beta$ are both bosonic or both fermionic, it is clear that ${\ensuremath{\mathbb{T}}}_{\alpha,\beta}={\ensuremath{\mathbb{T}}}$ and ${\ensuremath{\mathbb{T}}}_{\alpha}={\ensuremath{\mathbb{T}}}_{\beta}$.
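The superoperator identity ${\ensuremath{\mathbb{P}}}{\ensuremath{\mathbb{T}}}{\ensuremath{\mathbb{P}}}={\ensuremath{\mathbb{T}}}$ is easy to verify numerically. The sketch below (our own illustration, with hypothetical random block matrices) assumes, as in Section \[s:regularity\], a graded ancilla with $P Q P = Q$ ($Q$ even) and $P R P = -R$ ($R$ odd for a fermionic species):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2                       # D = 2d, graded as C^d (even) + C^d (odd)
P = np.diag([1] * d + [-1] * d).astype(complex)

Z = np.zeros((d, d))
blk = lambda: rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

# Q even (block diagonal, P Q P = Q); R odd (block off-diagonal, P R P = -R)
Q = np.block([[blk(), Z], [Z, blk()]])
R = np.block([[Z, blk()], [blk(), Z]])
assert np.allclose(P @ Q @ P, Q) and np.allclose(P @ R @ P, -R)

I = np.eye(2 * d)
T = np.kron(Q, I) + np.kron(I, Q.conj()) + np.kron(R, R.conj())
PP = np.kron(P, P.conj())   # the parity superoperator P = P (x) conj(P)

assert np.allclose(PP @ PP, np.eye((2 * d) ** 2))   # idempotent
assert np.allclose(PP @ T @ PP, T)                  # P T P = T
```

The sign of the odd block cancels between ket and bra, $(-R)\otimes(-\overline{R})=R\otimes\overline{R}$, which is exactly why the identity holds.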
In the case of open boundary conditions, we can introduce virtual density matrices $l(x),r(x)\in\operatorname{\mathbb{L}}(\mathbb{C}^{D})$, defined through the initial conditions $l(-L/2)=\bm{v}_{\mathrm{L}}\bm{v}_{\mathrm{L}}^{\dagger}$ and $r(+L/2)=\bm{v}_{\mathrm{R}}\bm{v}_{\mathrm{R}}^{\dagger}$ and the first order differential equations $$\begin{aligned} \frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x}l(x) &=\widetilde{\mathscr{T}}^{(x)}\big(l(x)\big),&\text{and}&&\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x}r(x) &=-\mathscr{T}^{(x)}\big(r(x)\big).\label{eq:virtualdensitymatrix}\end{aligned}$$ To these density matrices $l(x)$ and $r(x)$ we associate vectors ${\ensuremath{|l(x))}},{\ensuremath{|r(x))}}\in\mathbb{C}^{D}\otimes\overline{\mathbb{C}^{D}}$ in the ancilla product space. Formally, the solution is given by $$\begin{aligned} {\ensuremath{(l(x)|}}&={\ensuremath{(l(-L/2)|}}\mathscr{P}{\ensuremath{\mathrm{e}}}^{\int_{-L/2}^{x}{\ensuremath{\mathbb{T}}}(y)\,{\ensuremath{\mathrm{d}}}y},\\ {\ensuremath{|r(x))}}&=\mathscr{P}{\ensuremath{\mathrm{e}}}^{\int_{x}^{+L/2}{\ensuremath{\mathbb{T}}}(y)\,{\ensuremath{\mathrm{d}}}y}{\ensuremath{|r(+L/2))}}.\end{aligned}$$ We can then write $$\begin{aligned} \braket{\Psi[\overline{Q},\{\overline{R}_{\alpha}\}]|\Psi[Q,\{R_{\alpha}\}]}&={\ensuremath{\left(l(-L/2)\middle\vert {\ensuremath{\mathscr{P}\exp}}\left[\int_{-L/2}^{+L/2} {\ensuremath{\mathbb{T}}}(x)\,{\ensuremath{\mathrm{d}}}x\right]\middle\vert r(+L/2)\right)}}\nonumber\\ &={\ensuremath{\left(l(x)|r(x)\right)}}=\operatorname{tr}\left[l(x) r(x)\right], \quad \forall x \in {\ensuremath{\mathcal{R}}}.\end{aligned}$$ From the correspondence with completely positive maps, it can be shown that the solutions $l(x)$ and $r(x)$ of Eq.  starting from positive semidefinite initial conditions $l(-L/2)$ and $r(+L/2)$ remain positive for any $x\in\mathcal{R}$ (see Theorem 3 in Ref. ). The norm is thus guaranteed to be positive.
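Both the propagation of $l(x)$ and $r(x)$ and the $x$-independence of $\operatorname{tr}[l(x)r(x)]$ can be made explicit numerically. The sketch below is our own illustration: it assumes $x$-independent $Q$ and $R$ (so that the path-ordered exponentials reduce to ordinary matrix exponentials) and random boundary vectors, and uses the row-major vectorization in which ${\ensuremath{(l(x)|}}$ corresponds to $\operatorname{vec}(l)^{\dagger}$, so that ${\ensuremath{\mathrm{d}}}\operatorname{vec}(l)/{\ensuremath{\mathrm{d}}}x={\ensuremath{\mathbb{T}}}^{\dagger}\operatorname{vec}(l)$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
D, L = 3, 1.0
s = 0.4  # overall scale, keeps the matrix exponentials well-behaved

Q = s * (rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D)))
R = s * (rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D)))
vL = rng.standard_normal(D) + 1j * rng.standard_normal(D)
vR = rng.standard_normal(D) + 1j * rng.standard_normal(D)

I = np.eye(D)
T = np.kron(Q, I) + np.kron(I, Q.conj()) + np.kron(R, R.conj())

l0 = np.outer(vL, vL.conj())   # l(-L/2) = v_L v_L^dag
r1 = np.outer(vR, vR.conj())   # r(+L/2) = v_R v_R^dag

def l_of(x):   # dl/dx = T~(l)  <->  vec(l(x)) = e^{T^dag (x + L/2)} vec(l(-L/2))
    return (expm(T.conj().T * (x + L / 2)) @ l0.reshape(-1)).reshape(D, D)

def r_of(x):   # dr/dx = -T(r)  <->  vec(r(x)) = e^{T (L/2 - x)} vec(r(+L/2))
    return (expm(T * (L / 2 - x)) @ r1.reshape(-1)).reshape(D, D)

# the overlap tr[l(x) r(x)] is independent of x ...
norms = [np.trace(l_of(x) @ r_of(x)) for x in (-0.5, -0.2, 0.1, 0.5)]
assert np.allclose(norms, norms[0])

# ... and r(x) stays Hermitian and positive semidefinite, as claimed
for x in (-0.3, 0.0, 0.4):
    r = r_of(x)
    assert np.allclose(r, r.conj().T)
    assert np.linalg.eigvalsh(r).min() > -1e-10
```

For genuinely $x$-dependent $Q$ and $R$ one would instead integrate Eq. \[eq:virtualdensitymatrix\] with a standard ODE solver.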
Note that, for the special parameterization of $Q(x)$ in the continuous measurement interpretation \[Eq. \], we can write the determining differential equation for $r(x)$ as $$\begin{gathered} \frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x}r(x)=-\mathscr{T}^{(x)}\big(r(x)\big)=\\ -{\ensuremath{\mathrm{i}}}[K(x), r(x)] -\frac{1}{2}\sum_{\alpha=1}^{N} \{R_{\alpha}(x)^{\dagger}R_{\alpha}(x),r(x)\} +\sum_{\alpha=1}^{N}R_{\alpha}(x) r(x) R_{\alpha}(x)^{\dagger}.\end{gathered}$$ This is a master equation in Lindblad form [@1976CMaPh..48..119L] describing the non-equilibrium Markov dynamics of the ancilla (*i.e.* the cavity). Starting from a pure state $r(L/2)=\bm{v}_{\mathrm{R}}\bm{v}_{\mathrm{R}}^{\dagger}$ at $t=-x=-L/2$, it evolves through interaction with the physical system (via the interaction operators $R_{\alpha}$). At a general time $t=-x$, the density matrix $r(x)$ is no longer pure: non-equilibrium evolution is a dissipative process. Note that the evolution is trace preserving, since tracing the equation above results in ${\ensuremath{\mathrm{d}}}\operatorname{tr}[r(x)] /{\ensuremath{\mathrm{d}}}x=0$. In addition, the corresponding map $\widetilde{\mathscr{T}}^{(x)}$ satisfies $\widetilde{\mathscr{T}}^{(x)}({\openone}_{D})=0$. In systems which only contain bosons, all $\eta_{\alpha,\beta}=1$ and there is no need to introduce ${\ensuremath{\mathbb{T}}}_{\alpha}(x)$, ${\ensuremath{\mathbb{T}}}_{\alpha,\beta}(x)$, etc. As an alternative to the general recipe described above, we can then also deduce all expectation values of normally ordered operators ${\ensuremath{\hat{O}}}=:O[\{{\ensuremath{\hat{\psi}^\dagger}}_{\alpha}\},\{{\ensuremath{\hat{\psi}}}_{\alpha}\}]:$ from a generating functional $Z[\{\overline{J}_{\alpha}\},\{J_{\alpha}\}]$ as (see Ref. 
) $$\begin{gathered} \braket{\Psi[\overline{Q},\{\overline{R}_{\alpha}\}]|:O[\{{\ensuremath{\hat{\psi}^\dagger}}_{\beta}\},\{{\ensuremath{\hat{\psi}}}_{\beta}\}]: |\Psi[Q,\{R_{\alpha}\}]}=\\ O\left[\bigg\{\frac{\delta\ }{\delta \overline{J}_{\beta}}\bigg\},\bigg\{\frac{\delta\ }{\delta J_{\beta}}\bigg\}\right]Z[\{\overline{J}_{\alpha}\},\{J_{\alpha}\}]\bigg|_{\overline{J}_{\alpha},J_{\alpha}=0}\label{eq:expecrule}\end{gathered}$$ with $\delta\ /\delta J_{\alpha}$ the functional derivative with respect to $J_{\alpha}$, and $$\begin{gathered} Z[\{\overline{J}_{\alpha}\},\{J_{\alpha}\}]=\operatorname{tr}\Bigg[\big(B\otimes\overline{B}\big) {\ensuremath{\mathscr{P}\exp}}\bigg\{\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\,{\ensuremath{\mathbb{T}}}(x)\\ + \sum_{\alpha=1}^{N}J_{\alpha}(x)[R_{\alpha}(x)\otimes 1_{D}] +\overline{J}_{\alpha}(x) [1_{D}\otimes \overline{R_{\alpha}(x)}] \bigg\}\Bigg],\label{eq:genfunc}\end{gathered}$$ which for a system with open boundary conditions results in $$\begin{gathered} Z[\{\overline{J}_{\alpha}\},\{J_{\alpha}\}]=\Bigg(l(-L/2)\Bigg\vert{\ensuremath{\mathscr{P}\exp}}\bigg\{\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\,{\ensuremath{\mathbb{T}}}(x) \\ + \sum_{\alpha=1}^{N}J_{\alpha}(x)[R_{\alpha}(x)\otimes 1_{D}] +\overline{J}_{\alpha}(x) [1_{D}\otimes \overline{R_{\alpha}(x)}] \bigg\}\Bigg\vert r(+L/2)\Bigg).\label{eq:genfuncopen}\end{gathered}$$ Let us now illustrate this approach by defining a generic Hamiltonian for a single-boson system with open boundary conditions[^3] $$\begin{gathered} {\ensuremath{\hat{H}}}={\ensuremath{\hat{T}}}+{\ensuremath{\hat{V}}}+{\ensuremath{\hat{W}}}=\\ \int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\,\frac{1}{2m} \left(\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x} {\ensuremath{\hat{\psi}^\dagger}}(x)\right)\left(\frac{{\ensuremath{\mathrm{d}}}\ 
}{{\ensuremath{\mathrm{d}}}x}{\ensuremath{\hat{\psi}}}(x)\right)+\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\,v(x){\ensuremath{\hat{\psi}^\dagger}}(x){\ensuremath{\hat{\psi}}}(x)\\ +\frac{1}{2}\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}y\,w(x,y) {\ensuremath{\hat{\psi}^\dagger}}(x){\ensuremath{\hat{\psi}^\dagger}}(y){\ensuremath{\hat{\psi}}}(y){\ensuremath{\hat{\psi}}}(x) \label{eq:generichamiltonian}\end{gathered}$$ describing particles with mass $m$ that interact with an external potential $v(x)$ and with each other through two-particle interaction $w(x,y)$. Using Eq.  we find (henceforth omitting the arguments $Q$ and $R$ in the state $\ket{\Psi}$) $$\braket{\Psi|{\ensuremath{\hat{\psi}^\dagger}}(x){\ensuremath{\hat{\psi}}}(x)| \Psi}={\ensuremath{(l(x)|R(x)\otimes \overline{R}(x)|r(x))}},$$ and $$\begin{gathered} \braket{\Psi|{\ensuremath{\hat{\psi}^\dagger}}(x){\ensuremath{\hat{\psi}^\dagger}}(y){\ensuremath{\hat{\psi}}}(y) {\ensuremath{\hat{\psi}}}(x)| \Psi}=\\ \theta(y-x){\ensuremath{(l(x)|R(x)\otimes \overline{R(x)} \mathscr{P}\mathrm{e}^{\int_{x}^{y}{\ensuremath{\mathrm{d}}}z\, {\ensuremath{\mathbb{T}}}(z)} R(y)\otimes\overline{R(y)}|r(y))}}\\ +\theta(x-y){\ensuremath{(l(y)|R(y)\otimes \overline{R(y)} \mathscr{P}\mathrm{e}^{\int_{y}^{x}{\ensuremath{\mathrm{d}}}z\, {\ensuremath{\mathbb{T}}}(z)}R(x)\otimes\overline{R(x)}|r(x))}}.\end{gathered}$$ Defining $R^{(l)}_{x}(x)=R(x)^{\dagger} l(x) R(x)$ for every $x\in[-L/2,+L/2]$ and solving $$\begin{aligned} \frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}y} {\ensuremath{(R^{(l)}_{x}(y)|}}={\ensuremath{(R^{(l)}_{x}(y)|}}{\ensuremath{\mathbb{T}}}(y)\label{eq:defrl}\end{aligned}$$ for every $y\in [x,L/2]$, we can write the expectation value of the potential and interaction energy as $$\begin{aligned} \braket{\Psi|{\ensuremath{\hat{V}}}|\Psi}&= \int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\, v(x) 
{\ensuremath{(l(x)|R(x)\otimes\overline{R(x)}|r(x))}},\\\braket{\Psi|{\ensuremath{\hat{W}}}|\Psi}&= \int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\int_{x}^{+L/2}{\ensuremath{\mathrm{d}}}y\, w(x,y) {\ensuremath{(R^{(l)}_{x}(y)|R(y)\otimes\overline{R(y)}|r(y))}}.\end{aligned}$$ To evaluate the expectation value of the kinetic energy, we compute $$\begin{gathered} \braket{\Psi|\left(\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x}{\ensuremath{\hat{\psi}^\dagger}}(x)\right)\left(\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x}{\ensuremath{\hat{\psi}}}(x)\right)|\Psi}=\lim_{x\to y} \frac{{\ensuremath{\mathrm{d}}}^{2}\ }{{\ensuremath{\mathrm{d}}}x{\ensuremath{\mathrm{d}}}y}\braket{\Psi|{\ensuremath{\hat{\psi}^\dagger}}(x){\ensuremath{\hat{\psi}}}(y)|\Psi}\\ \shoveleft{\quad=\lim_{x\to y} \frac{{\ensuremath{\mathrm{d}}}^{2}\ }{{\ensuremath{\mathrm{d}}}x{\ensuremath{\mathrm{d}}}y}\bigg[\theta(y-x){\ensuremath{(l(x)|(1_{D}\otimes \overline{R(x)})\mathscr{P}\mathrm{e}^{\int_{x}^{y}{\ensuremath{\mathrm{d}}}z\, {\ensuremath{\mathbb{T}}}(z)}(R(y)\otimes 1_{D})|r(y))}}}\\ \shoveright{+ \theta(x-y){\ensuremath{(l(y)|(R(y)\otimes 1_{D})\mathscr{P}\mathrm{e}^{\int_{y}^{x}{\ensuremath{\mathrm{d}}}z\, {\ensuremath{\mathbb{T}}}(z)}(1_{D}\otimes \overline{R(x)})|r(x))}}\bigg]}\\ \shoveleft{\quad=\lim_{x\to y} \frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x}\Bigg[\theta(y-x)\big(l(x)\big|\big(1_{D}\otimes \overline{R(x)}\big)\mathscr{P}\mathrm{e}^{\int_{x}^{y}{\ensuremath{\mathrm{d}}}z\, {\ensuremath{\mathbb{T}}}(z)}}\\ \shoveright{\times\bigg\{ \big[{\ensuremath{\mathbb{T}}}(y) ,R(y)\otimes 1_{D}\big] + \big({\ensuremath{\mathrm{d}}}R(y)/{\ensuremath{\mathrm{d}}}y \otimes 1_{D}\big) \bigg\}\big\vert r(y)\big)\quad}\\ \qquad+ \theta(x-y)\big(l(y)\big\vert \bigg\{ \big[{\ensuremath{\mathbb{T}}}(y),R(y)\otimes 1_{D}\big]+\big({\ensuremath{\mathrm{d}}}R(y)/{\ensuremath{\mathrm{d}}}y\otimes 1_{D}\big) \bigg\}\\
\times\mathscr{P}\mathrm{e}^{\int_{y}^{x}{\ensuremath{\mathrm{d}}}z\, {\ensuremath{\mathbb{T}}}(z)}\big(1_{D}\otimes \overline{R(x)}\big)\big|r(x)\big)\Bigg].\end{gathered}$$ We have used the defining equations \[Eq. \] in the computation of ${\ensuremath{\mathrm{d}}}{\ensuremath{(l(y)|}}/{\ensuremath{\mathrm{d}}}y={\ensuremath{(l(y)|}}{\ensuremath{\mathbb{T}}}(y)$ and ${\ensuremath{\mathrm{d}}}{\ensuremath{|r(y))}}/{\ensuremath{\mathrm{d}}}y=-{\ensuremath{\mathbb{T}}}(y){\ensuremath{|r(y))}}$. Since ${\ensuremath{\mathbb{T}}}(y)=Q(y)\otimes 1_{D}+1_{D}\otimes \overline{Q(y)}+R(y)\otimes \overline{R(y)}$, we obtain $[{\ensuremath{\mathbb{T}}}(y),R(y)\otimes 1_{D}]=[Q(y),R(y)]\otimes 1_{D}$ and thus $$\begin{gathered} \braket{\Psi|\bigg(\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x}{\ensuremath{\hat{\psi}^\dagger}}(x)\bigg)\bigg(\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x}{\ensuremath{\hat{\psi}}}(x)\bigg)|\Psi}=\\ \shoveleft{\quad\lim_{x\to y} \bigg[\theta(y-x)\big(l(x)\big\vert1_{D}\otimes \big([\overline{Q(x)},\overline{R(x)}]+{\ensuremath{\mathrm{d}}}\overline{R(x)}/{\ensuremath{\mathrm{d}}}x\big) \mathscr{P}\mathrm{e}^{\int_{x}^{y}{\ensuremath{\mathrm{d}}}z\, {\ensuremath{\mathbb{T}}}(z)}}\\ \shoveright{\times\big( [Q(y) ,R(y)] + {\ensuremath{\mathrm{d}}}R(y)/{\ensuremath{\mathrm{d}}}y\big) \otimes 1_{D}\big\vert r(y)\big)\quad}\\ + \theta(x-y)\big(l(y)\big\vert \big( [Q(y),R(y)]+{\ensuremath{\mathrm{d}}}R(y)/{\ensuremath{\mathrm{d}}}y\big)\otimes 1_{D}\,\mathscr{P}\mathrm{e}^{\int_{y}^{x}{\ensuremath{\mathrm{d}}}z\, {\ensuremath{\mathbb{T}}}(z)}\\ \times 1_{D}\otimes \big([\overline{Q(x)},\overline{R(x)}]+{\ensuremath{\mathrm{d}}}\overline{R(x)}/{\ensuremath{\mathrm{d}}}x\big)\big\vert r(x)\big)\bigg],\end{gathered}$$ where we used the analogous identity $[{\ensuremath{\mathbb{T}}}(x),1_{D}\otimes \overline{R(x)}]=1_{D}\otimes[\overline{Q(x)},\overline{R(x)}]$ on the bra side.
Note that derivatives with respect to the Heaviside functions (which would produce a diverging contribution $\delta(x-y)$) nicely cancel for both derivatives with respect to $y$ and to $x$. As noted in Section \[s:regularity\], the regularity condition Eq.  is automatically fulfilled for the case of a single boson. We thus obtain $$\begin{gathered} \braket{\Psi|{\ensuremath{\hat{T}}}|\Psi}= \frac{1}{2m}\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\, \big(l(x)\big\vert\big([Q(x),R(x)]+{\ensuremath{\mathrm{d}}}R(x)/{\ensuremath{\mathrm{d}}}x\big)\\ \otimes\big([\overline{Q(x)},\overline{R(x)}]+{\ensuremath{\mathrm{d}}}\overline{R(x)}/{\ensuremath{\mathrm{d}}}x\big)\big\vert r(x)\big).\end{gathered}$$ Note that this result could also be obtained by the general strategy outlined at the beginning of this section, *i.e.* by acting directly on the cMPS with the operators ${\ensuremath{\hat{\psi}}}(x)$ and ${\ensuremath{\mathrm{d}}}{\ensuremath{\hat{\psi}}}(x) / {\ensuremath{\mathrm{d}}}x$ and only afterwards computing the expectation values. However, the generating functional approach is very general and relates nicely to the standard approach that is used to compute expectation values in quantum field theory. As for the definition of the state itself, we can also write the generating functional using a path integral, which can be useful for analytic computations or Monte Carlo based evaluation strategies. Gauge invariance {#s:gauge} ================ As with an MPS, the map $\Psi$ associating a physical state $\ket{\Psi[Q,\{R_{\alpha}\}]}\in {\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{(\mathrm{F})}$ to the matrix functions $Q:{\ensuremath{\mathcal{R}}}\to \mathbb{C}^{D\times D}$ and $R_{\alpha}:{\ensuremath{\mathcal{R}}}\to\mathbb{C}^{D\times D}$ is not injective, *i.e.* the representation is not unique. For MPS, this so-called *gauge invariance* was rigorously discussed in terms of principal fibre bundles in Ref. .
Such a rigorous treatment for the case of cMPS is severely complicated by the fact that both the domain and the codomain of the map $\Psi$ are now infinite dimensional. Therefore, it is beyond the scope of the current manuscript, as noted in the introduction. We thus proceed in an intuitive way. We do expect the existence of a local gauge transformation $g:{\ensuremath{\mathcal{R}}}\to\mathsf{GL}(D,\mathbb{C})$, *i.e.* a position-dependent invertible matrix $g(x)$, that acts on the matrices $Q(x)$ and $R_{\alpha}(x)$ while leaving the physical state $\ket{\Psi[Q,\{R_{\alpha}\}]}$ invariant. While it is hard to extract the correct transformation formulas for $Q$ and $R_{\alpha}$ from the original cMPS definition in Eq. , people with a background in Yang-Mills gauge theories might recognize $Q$ as the connection that generates parallel transport, by comparing the $N$-particle wave functions of the Fock space embedding \[Eq. \] to Wilson lines with insertions of charges transforming according to the adjoint representation, or by recognizing the action of the path integral formulation \[Eq. \] as a Yang-Mills action with a covariant derivative $\frac{\mathrm{d}\ }{\mathrm{d} x} + Q(x)$. The gauge transformation for a cMPS is thus given by $$\begin{aligned} \tilde{Q}(x)&=g(x)^{-1} Q(x) g(x)+ g(x)^{-1} \frac{{\ensuremath{\mathrm{d}}}g}{{\ensuremath{\mathrm{d}}}x}(x),&\tilde{R}(x)&=g^{-1}(x) R(x) g(x).\label{eq:gaugetransform}\end{aligned}$$ While we prefer the continuum derivation, these transformation formulas can also be obtained by using the correspondence with MPS \[Eq.
\] and the well-known gauge transformations for MPS [@Haegeman:fk] $$\begin{aligned} \tilde{A}^{0}(n)&=g((n-1)a)^{-1} A^{0}(n) g(na)\\ &=g((n-1)a)^{-1}g(n a)+a g((n-1)a)^{-1}Q(na)g(na)\\ &={\openone}_{D}+a\left[-\frac{{\ensuremath{\mathrm{d}}}g^{-1}}{{\ensuremath{\mathrm{d}}}x}(na) g(n a) + g(na)^{-1} Q(na) g(na)\right]+\operatorname{\mathscr{O}}(a^{2}),\\ \tilde{A}^{\alpha}(n) &= g((n-1)a)^{-1} A^{\alpha}(n) g(na)\\ &=\sqrt{a} g(na)^{-1} R_{\alpha}(n a)g(na)+\operatorname{\mathscr{O}}(a^{3/2}),\\ \tilde{A}^{(\alpha,\beta)}(n) &= g((n-1)a)^{-1} A^{(\alpha,\beta)}g(na)\\ &=\begin{cases} \frac{a}{2} [ \tilde{R}_{\alpha}(n a) \tilde{R}_{\beta}(n a)+\eta_{\alpha,\beta} \tilde{R}_{\beta}(n a) \tilde{R}_{\alpha}(n a)]+\operatorname{\mathscr{O}}(a^{2}),& \alpha\neq \beta\\ \frac{a}{2} \tilde{R}_{\alpha}(n a)^{2}+\operatorname{\mathscr{O}}(a^{2}),&\alpha=\beta \end{cases}\\ &\ldots\nonumber\end{aligned}$$ Indeed, using ${\ensuremath{\mathrm{d}}}g^{-1}(x) /{\ensuremath{\mathrm{d}}}x g(x) = - g^{-1}(x) {\ensuremath{\mathrm{d}}}g(x)/ {\ensuremath{\mathrm{d}}}x$, we reproduce the transformation formulas of Eq. . To have an invariant physical state $\ket{\Psi[Q,\{R_{\alpha}\}]}=\ket{\Psi[\tilde{Q},\{R_{\alpha}\}]}$, we also need to transform the boundary matrix as $\tilde{B}=g(L/2)^{-1} B g(-L/2)$. When $B$ is fixed, we need to restrict to gauge transformations that satisfy the boundary condition $g(L/2)^{-1} B g(-L/2)=B$ (*e.g.* $g(L/2)=g(-L/2)$ for $B={\openone}_D$). In addition, we also require the function $g:{\ensuremath{\mathcal{R}}}\to\mathsf{GL}(D,\mathbb{C})$ to be second order differentiable in order to have new matrix functions $\tilde{Q}(x)$ and $\tilde{R}_{\alpha}(x)$ which have a well-defined first order derivative. The regularity condition of Eq.  is not modified by the gauge transformation and puts no further constraints on the set of allowed gauge transformations. 
Since this condition follows from physical considerations which are left invariant by gauge transformations, it would be strange if we obtained a different result. As for MPS, we can use the gauge fixing conditions to impose a certain canonical form on the matrices $Q(x)$ and $R_{\alpha}(x)$. Suppose we want to impose a gauge fixing condition such that $\tilde{Q}(x)$ is of the form in Eq. , corresponding to the cMPS construction from continuous measurement. This is equivalent to the *left orthonormalization condition* of MPS and boils down to imposing $$\tilde{Q}(x)+\tilde{Q}(x)^\dagger +\sum_{\alpha=1}^{q} \tilde{R}_{\alpha}(x)^\dagger \tilde{R}_{\alpha}(x)=0$$ for every $x\in\mathcal{R}$. Inserting the explicit form of $\tilde{Q}(x)$ and $\tilde{R}_{\alpha}(x)$ in terms of the original $Q(x)$, $R_{\alpha}(x)$ and $g(x)$ \[Eq. \], we find that $g(x)$ must be a solution of the differential equation $$\begin{gathered} \begin{split} \frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x} \left[ \left(g^{-1}(x)\right)^\dagger g^{-1}(x)\right]&= \left(g^{-1}(x)\right)^\dagger g^{-1}(x) Q(x) + Q(x)^\dagger \left(g^{-1}(x)\right)^\dagger g^{-1}(x)\\ &\qquad\qquad+\sum_{\alpha=1}^{q} R_{\alpha}(x)^\dagger \left(g^{-1}(x)\right)^\dagger g^{-1}(x) R_{\alpha}(x)\\ &=\widetilde{\mathscr{T}}^{(x)}\left[\left(g^{-1}(x)\right)^\dagger g^{-1}(x)\right]. \end{split}\end{gathered}$$ Clearly, this differential equation only determines $g(x)$ up to a unitary prefactor. Put differently, for any solution $g(x)$ of this equation, $g'(x)=u(x) g(x)$ with $u(x)$ a unitary matrix is an equally valid solution. We can use the remaining gauge freedom $u(x)\in\mathsf{U}(D)$ to diagonalize $r(x)$ at every point $x$, hence obtaining the *left-canonical form*. However, at this point it becomes important to discuss the boundary conditions that should be satisfied by solutions $g(x)$. If the boundary matrix $B$ is fixed, we need to impose $g^{-1}(+L/2) B g(-L/2)=B$.
This is a highly non-trivial condition and it is not certain that such solutions exist. For periodic boundary conditions with $B={\openone}_{D}$, it reduces to $g(+L/2)=g(-L/2)$. Translation-invariant states with periodic boundary conditions can be subjected to the same treatment as the translation-invariant states in the thermodynamic limit, which are discussed in the next section. Henceforth, we restrict to the case of open boundary conditions with $B=\bm{v}_{\mathrm{R}}\bm{v}_{\mathrm{L}}^{\dagger}$. From this, we can derive the conditions $$\begin{aligned} \bm{v}_{\mathrm{L}}^{\dagger} g(-L/2) &= \alpha \bm{v}_{\mathrm{L}}^{\dagger} &g^{-1}(+L/2)\bm{v}_{\mathrm{R}}&=\frac{1}{\alpha} \bm{v}_{\mathrm{R}}\end{aligned}$$ for some non-zero $\alpha\in\mathbb{C}$. However, we can easily fix $\alpha=1$ by substituting $g(x)\leftarrow g'(x)=g(x)/\alpha$, since the constant gauge transformation $\alpha {\openone}_{D}$ acts trivially on $Q$ and $R$, *i.e.* it is within the kernel of the gauge group action. Nevertheless, the resulting boundary conditions are still highly non-trivial and it is not assured by the standard theory of differential equations that there exist solutions satisfying both conditions simultaneously. Hence, it is better to restrict to a single boundary condition such as $g(-L/2)={\openone}_{D}$ and to impose no condition on $g(+L/2)$. The value of $g(+L/2)$ is then completely determined by the differential equation (up to the unitary prefactor). Consequently, we then also have to transform the right boundary vector as $\tilde{\bm{v}}_{\mathrm{R}}=g^{-1}(+L/2) \bm{v}_{\mathrm{R}}$. This implies that $\bm{v}_{\mathrm{R}}$ is part of the variational degrees of freedom, and should also be included in *e.g.* the variational optimization for finding ground states.
Note that the boundary conditions for $g(x)$ are inherently imposed by the representation of the state, and are not related to or influenced by the physical conditions that need to be satisfied by $Q$ and $R$, as discussed in Section \[s:bc\]. Alternatively, we can also impose the *right orthonormalization condition*, which boils down to $$\tilde{Q}(x)+\tilde{Q}(x)^{\dagger}+\sum_{\alpha=1}^{q}\tilde{R}_{\alpha}(x)\tilde{R}_{\alpha}(x)^{\dagger}=0$$ and implies that $$\tilde{Q}(x)=-{\ensuremath{\mathrm{i}}}K(x) -\frac{1}{2}\sum_{\alpha=1}^{q}\tilde{R}_{\alpha}(x)\tilde{R}_{\alpha}(x)^{\dagger}$$ with $K(x)$ a Hermitian matrix. Starting from an arbitrary cMPS with matrices $Q(x)$ and $R_{\alpha}(x)$, we obtain new matrices $\tilde{Q}(x)$ and $\tilde{R}_{\alpha}(x)$ according to Eq. , which satisfy the above relations if $g(x)$ is a solution of $$\begin{gathered} \begin{split} \frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x} \left[ g(x) g(x)^{\dagger} \right]&= -Q(x) g(x) g(x)^{\dagger} - g(x) g(x)^{\dagger} Q(x)^\dagger-\sum_{\alpha=1}^{q} R_{\alpha}(x)g(x)g(x)^{\dagger}R_{\alpha}(x)^{\dagger} \\ &=-\mathscr{T}^{(x)}\left[g(x) g(x)^\dagger\right]. \end{split}\end{gathered}$$ Clearly, for any solution $g(x)$, we obtain a family of solutions $g'(x)=g(x) u(x)$ with $u(x)\in\mathsf{U}(D)$. This unitary freedom can be fixed by diagonalizing $l(x)$, resulting in the *right-canonical form*. As for the left-canonical form, one has to pay careful attention to the boundary conditions that need to be satisfied by $g$. For a system with open boundary conditions, the easiest solution is again to include one of the boundary vectors in the set of the variational parameters and also transform it under the action of the gauge transform.
Note that we can also define a gauge transformation $g(x)$ for the cMPS $\ket{\Psi[Q,\{R_{\alpha}\}]}\in{\ensuremath{{\ensuremath{\mathcal{M}}}}}_{\text{cMPS}}$ so that $$\tilde{Q}(x)=g(x)^{-1} Q(x) g(x)+g(x)^{-1} \frac{{\ensuremath{\mathrm{d}}}g}{{\ensuremath{\mathrm{d}}}x}(x)=0.$$ It is sufficient to choose $$g(x)=\mathscr{P}\!\exp\left[\int^{+L/2}_{x} Q(y)\,{\ensuremath{\mathrm{d}}}y\right] g_0$$ with $g_{0}$ some arbitrary integration factor that is fixed by the boundary conditions. For example, if we require $g(-L/2)={\openone}_{D}$ then $g_0=\left(\mathscr{P}\!\exp\left[\int^{+L/2}_{-L/2} Q(y)\,{\ensuremath{\mathrm{d}}}y\right]\right)^{-1}$ and we also need to transform $\bm{v}_{\mathrm{R}}\leftarrow \bm{\tilde{v}}_{\mathrm{R}}= g(+L/2)^{-1}\bm{v}_{\mathrm{R}}=g_0^{-1}\bm{v}_{\mathrm{R}}$. Hence, the cMPS can now be written as $$\ket{\Psi[\{\tilde{R}_{\alpha}\}]}=\bm{v}_{\mathrm{L}}^{\dagger} \mathscr{P}\!\exp\left[\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\, \sum_{\alpha=1}^{N}\tilde{R}_{\alpha}(x) \otimes {\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(x) \right]\bm{\tilde{v}}_{\mathrm{R}}\ket{\Omega}.\label{eq:formulationlinkwithmeanfield}$$ This formulation is close in spirit to the bosonic mean field ansatz $$\ket{\varphi}=\exp\left(\int_{-L/2}^{+L/2}\varphi(x) {\ensuremath{\hat{\psi}^\dagger}}(x)\,{\ensuremath{\mathrm{d}}}x\right)\ket{\Omega}$$ with $\varphi$ a scalar (complex-valued) function, since it identifies the mean field ansatz with a cMPS with bond dimension $D=1$. This mean field ansatz lies at the basis of the Gross-Pitaevskii equation [@Gross:1961aa; @Pitaevskii:1961aa], that is still used today with great success. All variational degrees of freedom are now contained in the matrices $\tilde{R}_{\alpha}(x)$ (and $\bm{\tilde{v}}_{\mathrm{R}}$), and all gauge degrees of freedom have been eliminated. However, we do not employ this particular choice of gauge in the remainder of this manuscript as it also has some downsides. 
For example, translation-invariant states $\ket{\Psi[Q,R_{\alpha}]}$ can be obtained by choosing the matrices $Q$ and $R_{\alpha}$ $x$-independent (see next subsection). However, this particular gauge transformation maps the $x$-independent matrices $R_{\alpha}$ to $x$-dependent matrices $\tilde{R}_{\alpha}(x)={\ensuremath{\mathrm{e}}}^{+Q x} R_{\alpha}{\ensuremath{\mathrm{e}}}^{-Q x}$, so that translation invariance is less easily recognized. Translation invariance and the thermodynamic limit {#s:ti} ================================================== When using cMPS to approximate ground states of translation invariant Hamiltonians, we can restrict to the subclass of uniform cMPS $\ket{\Psi(Q,\{R_{\alpha}\})}$, which are obtained by taking $Q(x)=Q$ and $R_{\alpha}(x)=R_{\alpha}$ constant ($x$-independent) $D\times D$ matrices in $\ket{\Psi[Q,\{R_{\alpha}\}]}$. This approach is valid either for a finite system with periodic boundary conditions ($B={\openone}_{D}$) or for a system in the thermodynamic limit ($\lvert{\ensuremath{\mathcal{R}}}\rvert=L\to \infty$, *i.e.* ${\ensuremath{\mathcal{R}}}\to \mathbb{R}$), where the precise value of the boundary matrix $B$ should be irrelevant and should not appear in any normalised expectation value. We henceforth restrict to the latter case. The transfer operator ${\ensuremath{\mathbb{T}}}=Q\otimes 1_{D}+1_{D}\otimes\overline{Q}+\sum_{\alpha=1}^{q} R_{\alpha}\otimes\overline{R}_{\alpha}$ also becomes translation invariant and ${\ensuremath{\mathscr{P}\exp}}[\int_{y}^{z}{\ensuremath{\mathrm{d}}}x\, {\ensuremath{\mathbb{T}}}]=\exp[{\ensuremath{\mathbb{T}}}(z-y)]$. The normalization of the state $\ket{\Psi(Q,\{R_{\alpha}\})}$ is given by $\lim_{L\to\infty}\operatorname{tr}\big[(B\otimes\overline{B})\exp({\ensuremath{\mathbb{T}}} L)\big]$.
If $\mu=\max_{\lambda\in\sigma({\ensuremath{\mathbb{T}}})}\{\Re(\lambda)\}$, where $\sigma({\ensuremath{\mathbb{T}}})$ denotes the spectrum of ${\ensuremath{\mathbb{T}}}$ and $\Re$ the real part, then $\braket{\Psi(\overline{Q},\{\overline{R}_{\alpha}\})|\Psi(Q,\{R_{\alpha}\})}\sim \lim_{L\to\infty} \exp(\mu L)$. Normalizing this state by multiplying it by $\exp(-\mu L)$ results in $Q\leftarrow Q-(\mu/2) {\openone}_{D}$ and ${\ensuremath{\mathbb{T}}}\leftarrow {\ensuremath{\mathbb{T}}}-\mu {\ensuremath{\mathbb{{\openone}}}}$, so that the new transfer operator ${\ensuremath{\mathbb{T}}}$ has at least one eigenvalue for which the real part is zero and no eigenvalue has a positive real part. Let us assume that the eigenvalue $\lambda$ with $\Re \lambda=0$ is unique. If ${\ensuremath{|r)}}$ is the corresponding right eigenvector, then we can write the eigenvalue equation as $\mathscr{T}(r)=\lambda r$ with $r$ the associated virtual density matrix. Hermitian conjugation shows that $\mathscr{T}(r^{\dagger})=\overline{\lambda} r^{\dagger}$, so that the uniqueness of the eigenvalue with $\Re \lambda=0$ implies that $\lambda=\overline{\lambda}=0$ and $r^{\dagger}={\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathrm{i}}}\phi} r$, where we can choose the phase of the eigenvector so that $r$ is Hermitian. Similarly, the virtual density matrix $l$ associated to the left eigenvector ${\ensuremath{|l)}}$ can also be chosen Hermitian. Having a unique eigenvalue zero and $\Re(\lambda)<0$ for all other eigenvalues $\lambda$ corresponds to the generic case, as can be better appreciated by referring to the well-known results for MPS[@1992CMaPh.144..443F; @2006quant.ph..8197P; @Haegeman:fk].
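The normalization shift described above is straightforward to verify numerically. The sketch below (an illustration with random matrices, not taken from the original text) uses the row-major vectorization in which $A\otimes\overline{B}$ acts on a virtual density matrix as $x\mapsto A x B^{\dagger}$; the substitution $Q\leftarrow Q-(\mu/2){\openone}_{D}$ then shifts every eigenvalue of ${\ensuremath{\mathbb{T}}}$ by $-\mu$:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 4
I = np.eye(D)
Q = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
R = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))

# T = Q (x) 1 + 1 (x) conj(Q) + R (x) conj(R); with row-major vectorization
# this matrix implements the map r -> Q r + r Q^dag + R r R^dag.
def transfer(Q, R):
    return np.kron(Q, I) + np.kron(I, Q.conj()) + np.kron(R, R.conj())

mu = np.linalg.eigvals(transfer(Q, R)).real.max()
Qn = Q - 0.5 * mu * I                              # Q <- Q - (mu/2) 1
mu_new = np.linalg.eigvals(transfer(Qn, R)).real.max()
print(mu, mu_new)                                  # mu_new ~ 0
```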
Indeed, a full categorisation of the eigenvalue structure of ${\ensuremath{\mathbb{T}}}$ can be obtained by identifying[^4] $${\ensuremath{\mathbb{T}}}=\lim_{a\to 0} \frac{1}{a} \ln {\ensuremath{\mathbb{E}}}$$ with ${\ensuremath{\mathbb{E}}}$ the corresponding transfer operator of the uniform MPS $\ket{\Psi(A)}$ with $A$ related to $Q$ and $R_{\alpha}$ as in Eq. . The set of MPS with a well-defined thermodynamic limit corresponds to the injective or pure MPS, for which the transfer operator ${\ensuremath{\mathbb{E}}}$ has a single eigenvalue $1$ that maps to the eigenvalue zero of ${\ensuremath{\mathbb{T}}}$. The corresponding left and right eigenvectors ${\ensuremath{(l|}}$ and ${\ensuremath{|r)}}$ correspond to strictly positive Hermitian operators $l$ and $r$ (*i.e.* they have full rank). All other eigenvalues of ${\ensuremath{\mathbb{E}}}$ lie strictly within the unit circle and map to eigenvalues of ${\ensuremath{\mathbb{T}}}$ with strictly negative real part. If the left and right eigenvectors corresponding to eigenvalue $0$ are normalized such that ${\ensuremath{(l|r)}}=1$, then $\lim_{L\to\infty} \exp({\ensuremath{\mathbb{T}}} L)={\ensuremath{|r)}}{\ensuremath{(l|}}$ and we obtain $$\braket{\Psi(\overline{Q},\{\overline{R}_{\alpha}\})|\Psi(Q,\{R_{\alpha}\})}={\ensuremath{(l|B\otimes \overline{B}|r)}}.$$ In expectation values of local operators, this overall factor always appears, but the rest of the expression will not depend on $B$. Hence, the $B$-dependence is cancelled by considering normalized expectation values, or by henceforth choosing $B$ such that $\braket{\Psi(Q,\{R_{\alpha}\})|\Psi(Q,\{R_{\alpha}\})}={\ensuremath{(l|B\otimes \overline{B}|r)}}=1$. For uniform cMPS, the gauge invariance is restricted to global transformations $Q\leftarrow\tilde{Q}=g Q g^{-1}$ and $R_{\alpha}\leftarrow \tilde{R}_{\alpha}=g R_{\alpha} g^{-1}$ with $g\in{\ensuremath{\mathsf{GL}}}(\mathbb{C},D)$.
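The identification ${\ensuremath{\mathbb{T}}}=\lim_{a\to 0}\frac{1}{a}\ln{\ensuremath{\mathbb{E}}}$ can also be checked numerically. The sketch below assumes the standard discretization $A_{0}={\openone}_{D}+aQ$, $A_{1}=\sqrt{a}\,R$ (the precise form of the elided equation is an assumption here), so it should be read as an illustration under that identification rather than a definitive statement:

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(2)
D, a = 3, 1e-4
I = np.eye(D)
Q = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
R = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))

# Assumed discretization: A_0 = 1 + a Q, A_1 = sqrt(a) R.
A0, A1 = I + a * Q, np.sqrt(a) * R
E = np.kron(A0, A0.conj()) + np.kron(A1, A1.conj())  # MPS transfer matrix
T = np.kron(Q, I) + np.kron(I, Q.conj()) + np.kron(R, R.conj())

# E = 1 + a T + O(a^2), so log(E)/a approaches T linearly in a.
err = np.linalg.norm(logm(E) / a - T) / np.linalg.norm(T)
print(err)  # O(a)
```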
This gauge transformation can be used to impose the left or right orthonormalization conditions. Left orthonormalization boils down to fixing the left eigenvector $l$ of eigenvalue $0$ to $l={\openone}_{D}$, which results in $Q=-{\ensuremath{\mathrm{i}}}K-1/2 \sum_{\alpha=1}^{q} R_{\alpha}^{\dagger}R_{\alpha}$ with $K$ a Hermitian matrix. The remaining unitary gauge freedom can be used to diagonalize $r$, bringing $Q$ and $R_{\alpha}$ in the left-canonical form. The right-canonical form is obtained analogously. In principle, an exact computation of the left and right eigenvectors $l$ and $r$ corresponding to the eigenvalue with largest real part $\lambda$ of the transfer operator ${\ensuremath{\mathbb{T}}}$ is a computationally costly operation \[$\operatorname{\mathscr{O}}(D^{6})$\]. By using an explicit parameterization of the left-canonical form in terms of $R_{\alpha}$ and the Hermitian matrix $K$, we know exactly that $\lambda=0$ and $l={\openone}_{D}$. It is then possible to obtain $r$ with an iterative solver at a computational cost of $\operatorname{\mathscr{O}}(D^{3})$. By imposing the physical requirements discussed at the end of Section \[s:regularity\], we can define the parity superoperator ${\ensuremath{\mathbb{P}}}$ as in Section \[s:expectval\]. Since ${\ensuremath{\mathbb{P}}}{\ensuremath{\mathbb{T}}}{\ensuremath{\mathbb{P}}}={\ensuremath{\mathbb{T}}}$, we can expect that the left and right eigenvectors ${\ensuremath{|l)}}$ and ${\ensuremath{|r)}}$ corresponding to the zero eigenvalue satisfy ${\ensuremath{(l|}}{\ensuremath{\mathbb{P}}}={\ensuremath{(l|}}$ and ${\ensuremath{\mathbb{P}}}{\ensuremath{|r)}}={\ensuremath{|r)}}$, or thus $P^{\dagger} l P = l$ and $P r P^{\dagger}=r$. Note that we can always choose the gauge such that $P$ is Hermitian.
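The left-canonical parameterization can be illustrated concretely for a single particle species. In the sketch below (random matrices, bosonic case), $Q=-{\ensuremath{\mathrm{i}}}K-\frac{1}{2}R^{\dagger}R$ with $K$ Hermitian makes $l={\openone}_{D}$ an exact left fixed point, and $r$ is then extracted from ${\ensuremath{\mathbb{T}}}$; dense diagonalization is used for clarity, whereas an iterative solver would realize the $\operatorname{\mathscr{O}}(D^{3})$ scaling mentioned above:

```python
import numpy as np

rng = np.random.default_rng(3)
D = 4
I = np.eye(D)

# Left-canonical parameterization: Q = -i K - (1/2) R^dag R with K Hermitian
# guarantees that l = 1 is a left eigenvector of T with eigenvalue 0.
R = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
K = rng.standard_normal((D, D))
K = K + K.T                        # real symmetric, hence Hermitian
Q = -1j * K - 0.5 * R.conj().T @ R
print(np.linalg.norm(Q + Q.conj().T + R.conj().T @ R))  # left fixed-point eq., ~0

# Right fixed point r of T (row-major vectorization, T(r) = Qr + rQ^dag + RrR^dag).
T = np.kron(Q, I) + np.kron(I, Q.conj()) + np.kron(R, R.conj())
w, V = np.linalg.eig(T)
r = V[:, w.real.argmax()].reshape(D, D)
r = r * np.exp(-1j * np.angle(np.trace(r)))  # choose the phase making r Hermitian
r = (r + r.conj().T) / 2
r = r / np.trace(r).real                     # normalization (l|r) = tr(r) = 1
print(np.linalg.eigvalsh(r))                 # nonnegative: r is a density matrix
```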
In addition, it is easy to prove that ${\ensuremath{\mathbb{T}}}_{\alpha}$ also has an eigenvalue zero even if $\alpha$ refers to a fermionic particle species so that ${\ensuremath{\mathbb{T}}}_{\alpha}\neq {\ensuremath{\mathbb{T}}}$. The corresponding left and right eigenvectors are in that case given by $l_{\alpha}=l P=P^{\dagger} l$ and $r_{\alpha} =P r=r P^{\dagger}$, whereas they equal $l$ and $r$ if $\alpha$ is a bosonic particle. We can now evaluate correlation functions as $$\begin{gathered} C_{\alpha,\beta}(x,y)=\braket{\Psi(\overline{Q},\{\overline{R}_{\alpha}\})|{\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(x){\ensuremath{\hat{\psi}}}_{\beta}(y)|\Psi(Q,\{R_{\alpha}\})}\\ =\theta(x-y){\ensuremath{(l|[R_{\beta}\otimes{\openone}_{D}]{\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathbb{T}}}_{\alpha}(x-y)}[{\openone}_{D}\otimes \overline{R_{\alpha}}]|r)}}\\ +\theta(y-x){\ensuremath{(l|[{\openone}_{D}\otimes \overline{R_{\alpha}}]{\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathbb{T}}}_{\alpha}(y-x)}[R_{\beta}\otimes{\openone}_{D}]|r)}},\label{eq:corrfunti}\end{gathered}$$ where we have used the physical requirement ${\ensuremath{\mathbb{T}}}_{\alpha,\beta}={\ensuremath{\mathbb{T}}}$ and ${\ensuremath{\mathbb{T}}}_{\alpha}={\ensuremath{\mathbb{T}}}_{\beta}$ for non-vanishing correlation functions (see Section \[s:expectval\]). The correlation function $C_{\alpha,\beta}(x,y)$ is translation invariant and we define $C_{\alpha,\beta}(x,y)=C_{\alpha,\beta}(y-x)$. When $\alpha$ is bosonic and $\beta$ fermionic, we automatically have $C_{\alpha,\beta}(x)=0$ if the parity considerations from Section \[s:regularity\] are correctly built in. In the long-range limit, we obtain $\lim_{\lvert x\rvert \to \infty}C_{\alpha,\beta}(x)={\ensuremath{(l|R_{\beta}\otimes{\openone}_{D}|r_{\alpha})}}{\ensuremath{(l_{\alpha}|{\openone}_{D}\otimes \overline{R_{\alpha}}|r)}}$. 
When both $\alpha$ and $\beta$ refer to fermionic particle species, this limiting value is automatically zero (also under the assumption that parity is correctly built into the matrices). When both indices refer to bosonic particles, a non-zero value is possible in the case of Bose-Einstein condensation. We should then define a connected correlation function $\tilde{C}_{\alpha,\beta}(x)$, which decays exponentially, $\tilde{C}_{\alpha,\beta}(x)=\operatorname{\mathscr{O}}(\exp[-\lvert x\rvert/\xi_{\text{c}}])$ for $\lvert x \rvert\to \infty$, with $\xi_{\text{c}}=\lvert\Re \lambda_{1}\rvert^{-1}$, where $\lambda_{1}$ is the eigenvalue of ${\ensuremath{\mathbb{T}}}_{\alpha}$ with second largest real part (*i.e.* skipping eigenvalue $\lambda_{0}=0$). Clearly, $C_{\alpha,\beta}(x)$ is continuous at $x=0$. We can then compute the first derivative, which is only continuous at $x=0$ if we impose the regularity conditions in Eq. . This is another way to derive these conditions. If Eq.  is satisfied, then the second derivative of $C_{\alpha,\beta}(x)$ at $x=0$ (which gives the expectation value of the kinetic energy density ${\ensuremath{\hat{t}}}$ up to a factor $-1/2m$) is finite and automatically continuous. The third derivative is then finite but will not be continuous in general, without imposing further conditions as discussed in Appendix \[a:higherorderregularity\].
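The correlation function of Eq.  and its long-range (condensate) limit can be evaluated numerically for a single bosonic species. The following sketch (random left-canonical $(Q,R)$, an illustration rather than part of the original text) computes $C(x)={\ensuremath{(l|}}[{\openone}_{D}\otimes\overline{R}]\,{\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathbb{T}}}x}\,[R\otimes{\openone}_{D}]{\ensuremath{|r)}}$ for $x\geq 0$ and checks that it approaches $\lvert\operatorname{tr}(Rr)\rvert^{2}$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
D = 3
I = np.eye(D)
R = 0.5 * (rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D)))
K = rng.standard_normal((D, D)); K = K + K.T
Q = -1j * K - 0.5 * R.conj().T @ R           # left-canonical gauge: l = 1

T = np.kron(Q, I) + np.kron(I, Q.conj()) + np.kron(R, R.conj())
w, V = np.linalg.eig(T)
r = V[:, w.real.argmax()].reshape(D, D)
r = r * np.exp(-1j * np.angle(np.trace(r)))
r = (r + r.conj().T) / 2
r = r / np.trace(r).real
vl, vr = I.reshape(-1), r.reshape(-1)

# Single bosonic species, x >= 0:  C(x) = (l|[1 (x) conj(R)] e^{T x} [R (x) 1]|r).
def C(x):
    return vl.conj() @ np.kron(I, R.conj()) @ expm(T * x) @ np.kron(R, I) @ vr

rho = C(0.0).real                      # density <psi^dag psi> = tr(R r R^dag)
Cinf = abs(np.trace(R @ r)) ** 2       # long-range (condensate) limit
gap = -np.sort(w.real)[-2]             # second largest real part of spec(T)
print(rho, C(40.0 / gap).real, Cinf)   # C(x) -> Cinf for large x
```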
We define the Fourier transformed correlation function $$n_{\alpha,\beta}(p,p')=\int_{-\infty}^{+\infty} \frac{{\ensuremath{\mathrm{d}}}x}{2\pi} \int_{-\infty}^{+\infty}\frac{{\ensuremath{\mathrm{d}}}y}{2\pi}\, C_{\alpha,\beta}(x,y){\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathrm{i}}}p x - {\ensuremath{\mathrm{i}}}p' y}= \delta (p'-p) n_{\alpha,\beta}(p)$$ with $$n_{\alpha,\beta}(p)=\int_{-\infty}^{+\infty} \frac{{\ensuremath{\mathrm{d}}}x}{2\pi} C_{\alpha,\beta}(x) {\ensuremath{\mathrm{e}}}^{-{\ensuremath{\mathrm{i}}}p x}.$$ In order to evaluate $n_{\alpha,\beta}(p)$, it is important to separate $\exp({\ensuremath{\mathbb{T}}}_{\alpha}x)$ into two parts. The first part is given by ${\ensuremath{\mathbb{S}}}_{\alpha}={\ensuremath{|r_{\alpha})}}{\ensuremath{(l_{\alpha}|}}$, the projector onto the eigenspace corresponding to eigenvalue $0$ of ${\ensuremath{\mathbb{T}}}_{\alpha}$, and yields a singular contribution to the integral. If we define the complementary projector ${\ensuremath{\mathbb{Q}}}_{\alpha}={\openone}-{\ensuremath{\mathbb{S}}}_{\alpha}$, then the remaining part $$\exp({\ensuremath{\mathbb{T}}}_{\alpha}x)-{\ensuremath{\mathbb{S}}}_{\alpha}={\ensuremath{\mathbb{Q}}}_{\alpha}\exp({\ensuremath{\mathbb{T}}}_{\alpha}x) {\ensuremath{\mathbb{Q}}}_{\alpha}={\ensuremath{\mathbb{Q}}}_{\alpha}\exp({\ensuremath{\mathbb{Q}}}_{\alpha}{\ensuremath{\mathbb{T}}}_{\alpha}{\ensuremath{\mathbb{Q}}}_{\alpha}x) {\ensuremath{\mathbb{Q}}}_{\alpha}\label{eq:singulardecompositionT}$$ is well behaved in the Fourier transform, since all of its contributions decay exponentially in $x$.
If we then introduce the notation ${\ensuremath{\mathbb{Q}}}_{\alpha}(-{\ensuremath{\mathbb{T}}}_{\alpha}\pm{\ensuremath{\mathrm{i}}}p)^{-1}{\ensuremath{\mathbb{Q}}}_{\alpha}=(-{\ensuremath{\mathbb{T}}}_{\alpha}\pm{\ensuremath{\mathrm{i}}}p)^{\mathsf{P}}$, which is well defined even at $p=0$ because the zero eigensector of ${\ensuremath{\mathbb{T}}}_{\alpha}$ is projected out, we can rewrite $n_{\alpha,\beta}(p)$ as $$\begin{gathered} n_{\alpha,\beta}(p)=2\pi \delta(p) {\ensuremath{(l|{\openone}_{D}\otimes \overline{R_{\alpha}}|r_{\alpha})}}{\ensuremath{(l_{\alpha}|R_{\beta}\otimes{\openone}_{D}|r)}}\\ +{\ensuremath{(l|[{\openone}_{D}\otimes \overline{R_{\alpha}}] (-{\ensuremath{\mathbb{T}}}_{\alpha}+{\ensuremath{\mathrm{i}}}p)^{\mathsf{P}} [R_{\beta}\otimes{\openone}_{D}]|r)}}\\ +{\ensuremath{(l|[R_{\beta}\otimes{\openone}_{D}] (-{\ensuremath{\mathbb{T}}}_{\alpha}-{\ensuremath{\mathrm{i}}}p)^{\mathsf{P}} [{\openone}_{D}\otimes \overline{R_{\alpha}}]|r)}}. \label{eq:cmpsmomentumoccupation}\end{gathered}$$ The first term is only present for bosonic particles that have condensed. It would also disappear in the Fourier transformation of the connected correlation function $\tilde{C}(x,y)$. If we define Fourier transformed field operators ${\ensuremath{\hat{\varPsi}}}(p)$ —no confusion between the state $\ket{\Psi}$ and the momentum-space operator ${\ensuremath{\hat{\varPsi}}}$ should arise— as $${\ensuremath{\hat{\varPsi}}}(p)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}{\ensuremath{\mathrm{d}}}x\,{\ensuremath{\hat{\psi}}}(x){\ensuremath{\mathrm{e}}}^{-{\ensuremath{\mathrm{i}}}p x},$$ then it is easy to see why we have used the suggestive notation $n_{\alpha,\beta}$ for the Fourier transform of $C_{\alpha,\beta}$. 
We obtain $$\braket{\Psi(\overline{Q},\{\overline{R}_{\alpha}\})|{\ensuremath{\hat{\varPsi}^\dagger}}_{\alpha}(p){\ensuremath{\hat{\varPsi}}}_{\beta}(p')|\Psi(Q,\{R_{\alpha}\})}=\delta(p-p')n_{\alpha,\beta}(p).\label{eq:defmomoccnum}$$ Hence, $n_{\alpha,\beta}(p)$ describes the occupation number of momentum levels. The large-$p$ behavior of $n_{\alpha,\beta}(p)$ follows from the regularity of $C_{\alpha,\beta}(x)$. At first sight, Eq.  might seem to decay as $\operatorname{\mathscr{O}}(p^{-1})$. However, if the regularity conditions in Eq.  are satisfied, then the momentum occupation number $n_{\alpha,\beta}(p)$ has to decay as $\operatorname{\mathscr{O}}(p^{-4})$ for large values of $p$. We can show this explicitly. For $\lvert p\rvert$ larger than the largest absolute value of the eigenvalues of ${\ensuremath{\mathbb{T}}}_{\alpha}$, we can expand $(-{\ensuremath{\mathbb{T}}}_{\alpha}\pm {\ensuremath{\mathrm{i}}}p)^{\mathsf{P}}$ as $$(-{\ensuremath{\mathbb{T}}}_{\alpha}\pm {\ensuremath{\mathrm{i}}}p)^{\mathsf{P}}=\mp {\ensuremath{\mathrm{i}}}\frac{{\ensuremath{\mathbb{Q}}}_{\alpha}}{p}\sum_{n=0}^{+\infty} \left(\pm {\ensuremath{\mathrm{i}}}\frac{{\ensuremath{\mathbb{T}}}_{\alpha}}{p}\right)^n=\mp {\ensuremath{\mathrm{i}}}\frac{{\ensuremath{\mathbb{Q}}}_{\alpha}}{p} +\frac{{\ensuremath{\mathbb{T}}}_{\alpha}}{p^2}\pm {\ensuremath{\mathrm{i}}}\frac{{\ensuremath{\mathbb{T}}}_{\alpha}^2}{p^3}-\frac{{\ensuremath{\mathbb{T}}}_{\alpha}^3}{p^4}+\operatorname{\mathscr{O}}(p^{-5}).$$ We now have to show that by plugging this expansion into Eq. , the first three terms vanish. The first term is trivial if particle type $\alpha$ is bosonic, so that ${\ensuremath{\mathbb{Q}}}_{\alpha}={\ensuremath{\mathbb{{\openone}}}}-{\ensuremath{|r)}}{\ensuremath{(l|}}$. For the fermionic case, one has to employ the parity conservation. Using the regularity conditions of Eq. 
and $\eta_{\alpha,\gamma}=\eta_{\beta,\gamma}$ for non-vanishing correlation functions —*i.e.* $\alpha$ and $\beta$ are both bosonic or both fermionic— we can show that $$\begin{aligned} {\ensuremath{\mathbb{T}}}_{\alpha} [R_{\beta}\otimes {\openone}_{D}]{\ensuremath{|r)}}=[R_{\beta}\otimes{\openone}_{D}]{\ensuremath{\mathbb{T}}}{\ensuremath{|r)}}+[Q,R_{\beta}]\otimes{\openone}_{D}{\ensuremath{|r)}}=[Q,R_{\beta}]\otimes{\openone}_{D}{\ensuremath{|r)}}\end{aligned}$$ and similarly $$\begin{aligned} {\ensuremath{\mathbb{T}}}_{\alpha} [{\openone}_{D}\otimes \overline{R_{\alpha}}]{\ensuremath{|r)}}&={\openone}_{D}\otimes[\overline{Q},\overline{R_{\alpha}}]{\ensuremath{|r)}},\\ {\ensuremath{(l|}}[R_{\beta}\otimes{\openone}_{D}]{\ensuremath{\mathbb{T}}}_{\alpha} &={\ensuremath{(l|}}[R_{\beta},Q]\otimes{\openone}_{D},\\ {\ensuremath{(l|}}[{\openone}_{D}\otimes \overline{R_{\alpha}}]{\ensuremath{\mathbb{T}}}_{\alpha} &={\ensuremath{(l|}}{\openone}_{D}\otimes [\overline{R_{\alpha}},\overline{Q}].\end{aligned}$$ These results can be used to show that both the second and third term in the expansion vanish when they are plugged into Eq. . The first non-vanishing term is thus of order $p^{-4}$. Because $n_{\alpha,\beta}(p)$ is a dimensionless quantity, this asymptotic behavior allows us to introduce a momentum cutoff $\Lambda$ as $$\Lambda^4=\lim_{p\to\infty} \lvert p^4 n_{\alpha,\beta}(p)\rvert=\lvert {\ensuremath{(l|[{\openone}_{D}\otimes \overline{R_{\alpha}}] {\ensuremath{\mathbb{T}}}_{\alpha}^3 [R_{\beta}\otimes{\openone}_{D}]|r)}}+{\ensuremath{(l|[R_{\beta}\otimes{\openone}_{D}] {\ensuremath{\mathbb{T}}}_{\alpha}^3 [{\openone}_{D}\otimes \overline{R_{\alpha}}]|r)}}\rvert,$$ where the absolute value is not required if we use $\beta=\alpha$. The eigenvalue spectrum of ${\ensuremath{\mathbb{T}}}_{\alpha}$ thus provides a definition for an ultraviolet cutoff scale $a=\Lambda^{-1}$.
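The projected-resolvent evaluation of the momentum occupation number can be validated numerically. The sketch below (a single bosonic species with random left-canonical $(Q,R)$; an illustration, not part of the original text) evaluates the two resolvent terms of Eq.  and compares them against a direct numerical Fourier transform $\int{\ensuremath{\mathrm{d}}}u\,\tilde{C}(u){\ensuremath{\mathrm{e}}}^{-{\ensuremath{\mathrm{i}}}pu}$ of the connected correlator, which is the quantity those two terms reproduce exactly (the $\delta(p)$ condensate term drops out of the connected correlator):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
D = 3
I = np.eye(D); Id2 = np.eye(D * D)
R = 0.6 * (rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D)))
K = rng.standard_normal((D, D)); K = K + K.T
Q = -1j * K - 0.5 * R.conj().T @ R
T = np.kron(Q, I) + np.kron(I, Q.conj()) + np.kron(R, R.conj())

w, V = np.linalg.eig(T)
r = V[:, w.real.argmax()].reshape(D, D)
r = r * np.exp(-1j * np.angle(np.trace(r)))
r = (r + r.conj().T) / 2
r = r / np.trace(r).real
vl, vr = I.reshape(-1), r.reshape(-1)

S = np.outer(vr, vl.conj())                  # |r)(l|
P = Id2 - S                                  # complementary projector
A, B = np.kron(R, I), np.kron(I, R.conj())   # [R (x) 1] and [1 (x) conj(R)]

def resolventP(s):                           # (-T + s)^P, with s = +/- i p
    return P @ np.linalg.solve(-T + s * Id2, P)

p = 0.8
n_res = (vl.conj() @ B @ resolventP(1j * p) @ A @ vr
         + vl.conj() @ A @ resolventP(-1j * p) @ B @ vr)

# Direct check: int du Ct(u) e^{-i p u} with Ct the connected correlator,
# folded onto u >= 0 and integrated with the trapezoidal rule.
gap = -np.sort(w.real)[-2]
h = 0.005
N = int(40.0 / gap / h)
E = expm(T * h)
M = Id2
f = np.empty(N + 1, dtype=complex)
for k in range(N + 1):
    core = P @ M @ P                         # Q e^{T u} Q at u = k h
    f[k] = ((vl.conj() @ B @ core @ A @ vr) * np.exp(-1j * p * k * h)
            + (vl.conj() @ A @ core @ B @ vr) * np.exp(1j * p * k * h))
    M = M @ E
n_ft = h * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)
print(n_res, n_ft)   # should agree up to quadrature error
```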
Rather than defining the ultraviolet cutoff scale $a=\Lambda^{-1}$ through the total particle density $$\rho_{\alpha,\beta}=\int_{-\infty}^{+\infty}\frac{{\ensuremath{\mathrm{d}}}p}{2\pi}\, n_{\alpha,\beta}(p),$$ we have now defined a UV cutoff scale $\Lambda$ based on the large momentum behavior of the momentum occupation number $n_{\alpha,\beta}(p)$. For two pure uniform cMPS $\ket{\Psi(Q,\{R_{\alpha}\})}$ and $\ket{\Psi(Q',\{R_{\alpha}'\})}$ we can define a superoperator ${\ensuremath{\mathbb{T}}}_{\text{mixed}}=Q'\otimes{\openone}_{D}+{\openone}_{D}\otimes \overline{Q}+\sum_{\alpha=1}^{N}R_{\alpha}'\otimes \overline{R_{\alpha}}$ so that the overlap $\braket{\Psi(Q,\{R_{\alpha}\})|\Psi(Q',\{R_{\alpha}'\})}$ decays as $\lim_{L\to+\infty}\exp(\lambda L)$, with $\lambda$ the eigenvalue with largest real part of ${\ensuremath{\mathbb{T}}}_{\text{mixed}}$. If the two uniform cMPS are inequivalent, $\Re(\lambda) < 0$ and there is an infrared orthogonality catastrophe. If $\Re(\lambda)=0$, then we can define a phase $\phi=\Im(\lambda)$ and a gauge transformation $g\in\mathsf{GL}(D;\mathbb{C})$ such that $Q'=g Q g^{-1} +{\ensuremath{\mathrm{i}}}\phi{\openone}_{D}$ and $R'_{\alpha}=g R_{\alpha} g^{-1}$. With $f$ being the right eigenvector corresponding to eigenvalue $\lambda={\ensuremath{\mathrm{i}}}\phi$ of ${\ensuremath{\mathbb{T}}}_{\text{mixed}}$, $g$ can be obtained as $g=f r^{-1}$. Let us also illustrate how to compute the expectation value of a translation invariant Hamiltonian. The generic Hamiltonian in Eq.  becomes translation invariant for $v(x)=v$ and $w(x,y)=w(y-x)$ with $w(x)=w(-x)$. Since the uniform cMPS is extensive, expectation values are proportional to the volume and it makes more sense to compute the expectation values of the kinetic, potential and interaction energy densities ${\ensuremath{\hat{t}}}$, ${\ensuremath{\hat{v}}}$ and ${\ensuremath{\hat{w}}}$.
We obtain (restricting to a single particle species for notational simplicity) $$\begin{aligned} \braket{\Psi(\overline{Q},\{\overline{R}_{\alpha}\})|{\ensuremath{\hat{t}}}|\Psi(Q,\{R_{\alpha}\})}&=\frac{1}{2m}{\ensuremath{(l|[Q,R]\otimes [\overline{Q},\overline{R}]|r)}},\\ \braket{\Psi(\overline{Q},\{\overline{R}_{\alpha}\})|{\ensuremath{\hat{v}}}|\Psi(Q,\{R_{\alpha}\})}&=v{\ensuremath{(l|R\otimes \overline{R}|r)}},\end{aligned}$$ $$\begin{aligned} \braket{\Psi(\overline{Q},\{\overline{R}_{\alpha}\})|{\ensuremath{\hat{w}}}|\Psi(Q,\{R_{\alpha}\})}&=\int_{0}^{+\infty}{\ensuremath{\mathrm{d}}}z\,w(z){\ensuremath{(l|R\otimes \overline{R} \mathrm{e}^{{\ensuremath{\mathbb{T}}} z} R\otimes\overline{R}|r)}}.\end{aligned}$$ If $w(z)$ has a Laplace transform $\mathscr{L}[w](\sigma)=\int_{0}^{+\infty}{\ensuremath{\mathrm{d}}}z\, w(z) \exp(-\sigma z)$ that is defined for $\Re \sigma \geq 0$, we obtain $$\begin{aligned} \braket{\Psi|{\ensuremath{\hat{w}}}|\Psi}&={\ensuremath{(l|R\otimes \overline{R}\ \mathscr{L}[w](-{\ensuremath{\mathbb{T}}}) R\otimes\overline{R}|r)}}.\end{aligned}$$ Note that translation invariance has allowed us to parameterize a field with a continuum of degrees of freedom by a finite number of parameters. Having $l$ and $r$, the computational cost is $\operatorname{\mathscr{O}}(D^{6})$ when long-range interactions are present, since we then have to compute an arbitrary function $\mathscr{L}[w]$ of the transfer operator ${\ensuremath{\mathbb{T}}}$, unless $w$ is such that there is an exact or approximate (iterative) strategy for evaluating the action of $\mathscr{L}[w](-{\ensuremath{\mathbb{T}}})$ on a vector efficiently. One particular example is the case of strictly local interactions $w(x-y)\sim \delta(x-y)$. The interaction energy (density) can then be computed with a computational complexity of $\operatorname{\mathscr{O}}(D^{3})$ just like the potential and the kinetic energy density.
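These energy densities reduce to a handful of traces once $l$ and $r$ are known. The sketch below (random left-canonical $(Q,R)$; the values $m$, $v$ and $c$ are hypothetical parameters chosen for illustration, with a contact interaction $w(x-y)=c\,\delta(x-y)$ for which the interaction density becomes ${\ensuremath{(l|R^{2}\otimes\overline{R}^{2}|r)}}$) evaluates them in the gauge $l={\openone}_{D}$:

```python
import numpy as np

rng = np.random.default_rng(6)
D = 4
I = np.eye(D)
R = 0.5 * (rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D)))
K = rng.standard_normal((D, D)); K = K + K.T
Q = -1j * K - 0.5 * R.conj().T @ R              # left-canonical: l = 1

T = np.kron(Q, I) + np.kron(I, Q.conj()) + np.kron(R, R.conj())
w, V = np.linalg.eig(T)
r = V[:, w.real.argmax()].reshape(D, D)
r = r * np.exp(-1j * np.angle(np.trace(r)))
r = (r + r.conj().T) / 2
r = r / np.trace(r).real

def expval(X, Y):            # (l| X (x) conj(Y) |r) = tr(X r Y^dag) for l = 1
    return np.trace(X @ r @ Y.conj().T)

m, v, c = 0.5, -1.0, 1.0     # hypothetical mass, potential, contact coupling
QR = Q @ R - R @ Q           # the commutator [Q, R]
t_dens = expval(QR, QR).real / (2 * m)      # kinetic energy density
v_dens = v * expval(R, R).real              # potential energy density
w_dens = c * expval(R @ R, R @ R).real      # contact-interaction density
print(t_dens, v_dens, w_dens)
```

Since $r$ is positive semidefinite, the kinetic and contact-interaction densities are manifestly nonnegative.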
Tangent vectors of continuous matrix product states {#s:tangent} =================================================== Generic case ------------ For MPS, a new algorithm for time evolution and variational optimization (via imaginary time evolution) was recently constructed using the time-dependent variational principle[@2011arXiv1103.0936H]. An essential ingredient of this algorithm is the study of (infinitesimally) small variations of MPS, *i.e.* the set of MPS tangent vectors. Indeed, it was rigorously proven that the set of MPS can be given the structure of a variational manifold with a well-defined tangent space[@Haegeman:fk] by eliminating some singular points or regions. While we do expect the same theorems to hold for cMPS, the infinite dimensionality of the parameter space and Hilbert space might require a different proof strategy, especially in the absence of translation invariance. As noted before, such a proof is beyond the scope of this paper. Given the practical use of tangent vectors, we nevertheless proceed, albeit in a more intuitive manner. Let us assume that we do have an open subset of cMPS with fixed bond dimension $D$ that constitute a (complex) manifold ${\ensuremath{\mathcal{M}}}_{\mathrm{cMPS}}\subset {\ensuremath{{\ensuremath{\mathbb{H}}}}}$. At any base point $\ket{\Psi[Q,\{R_{\alpha}\}]}\in {\ensuremath{\mathcal{M}}}_{\mathrm{cMPS}}$, we can construct a (holomorphic) tangent space $T_{\ket{\Psi[Q,\{R_{\alpha}\}]}} {\ensuremath{\mathcal{M}}}_{\mathrm{cMPS}} \subset {\ensuremath{{\ensuremath{\mathbb{H}}}}}$.
If the collective index $i=1,\ldots,D^2$ is used to combine both virtual (matrix) indices $(\alpha,\beta)$ and we use the summation convention with respect to this index, a general tangent vector $\ket{\Phi[V,\{W_{\alpha}\};Q,\{R_{\alpha}\}]}$ in $T_{\ket{\Psi[Q,\{R_{\alpha}\}]}} {\ensuremath{\mathcal{M}}}_{\mathrm{cMPS}}$ can be defined as $$\begin{split} &\ket{\Phi[V,\{W_{\alpha}\};Q,\{R_{\alpha}\}]}=\ket{\Phi^{[Q,\{R_{\alpha}\}]}[V,\{W_{\alpha}\}]}\\ &\quad=\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\,\left(V^{i}(x) \frac{\delta\ }{\delta Q^{i}(x)}+\sum_{\beta=1}^{q}W_{\beta}^{i}(x) \frac{\delta\ }{\delta R_{\beta}^{i}(x)}\right) \ket{\Psi[Q,\{R_{\alpha}\}]}\\ &\quad=\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\,\operatorname{tr}\left[B{\ensuremath{\hat{U}}}(-L/2,x) \left(V(x)\otimes {\ensuremath{\hat{{\openone}}}}+\sum_{\beta=1}^{q}W_{\beta}(x)\otimes {\ensuremath{\hat{\psi}^\dagger}}_{\beta}(x)\right){\ensuremath{\hat{U}}}(x,L/2)\right]\ket{\Omega}. \end{split}\label{eq:deftangentgeneric}$$ Because of the gauge invariance discussed in Section \[s:gauge\], not all variations in $Q$ and $R_{\alpha}$ result in changes of the physical state. Consequently, not all linearly independent choices of the matrix functions $V$ and $W_{\alpha}$ result in linearly independent tangent vectors $\ket{\Phi[V,\{W_{\alpha}\};Q,\{R_{\alpha}\}]}$. Let $Q(\eta)$ and $R_{\alpha}(\eta)$ ($\forall \alpha=1,\ldots,q$) be a one-parameter family of matrix functions, so that $Q(\eta):{\ensuremath{\mathcal{R}}}\mapsto\mathbb{C}^{D\times D}:x\mapsto Q(x;\eta)$ and similarly for $R_{\alpha}(\eta)$. 
If we define $Q(0)=Q:x\mapsto Q(x)$, $R_{\alpha}(0)=R_{\alpha}:x\mapsto R_{\alpha}(x)$ together with ${\ensuremath{\mathrm{d}}}Q/{\ensuremath{\mathrm{d}}}\eta(0)=V:x\mapsto V(x)$ and ${\ensuremath{\mathrm{d}}}R_{\alpha}/{\ensuremath{\mathrm{d}}}\eta(0)=W_{\alpha}:x\mapsto W_{\alpha}(x)$, then we can write $$\left.\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}\eta} \ket{\Psi[Q(\eta),{R_{\alpha}(\eta)}]}\right|_{\eta=0}=\ket{\Phi[V,\{W_{\alpha}\};Q,\{R_{\alpha}\}]}.$$ If we now choose a one-parameter family of gauge equivalent states, so that $Q(x;\eta)=g(x; \eta)^{-1}Q(x)g(x;\eta) +g(x,\eta)^{-1} \frac{\partial g(x; \eta)}{\partial x}$ and $R_{\alpha}(x;\eta)=g(x;\eta)^{-1} R_{\alpha}(x) g(x;\eta)$, where the one-parameter family of gauge transforms is given by $g(x;\eta)=\exp(\eta h(x))$ and $h(x)\in {\ensuremath{\mathfrak{gl}}}(\mathbb{C},D)\equiv\mathbb{C}^{D\times D}$, $\forall x\in{\ensuremath{\mathcal{R}}}$, then we can use the gauge invariance of the cMPS representation to obtain $\ket{\Psi[Q(x;\eta),R_{\alpha}(x;\eta)]}=\ket{\Psi[Q(x),R_{\alpha}(x)]}$ and thus $$\begin{aligned} \ket{\Phi[\mathscr{M}_{\Phi}^{[Q]}[h],\{\mathscr{N}_{\alpha,\Phi}^{[R_{\alpha}]}[h]\};Q,\{R_{\alpha}\}]}=0,\end{aligned}$$ where the maps $\mathscr{M}_{\Phi}^{[Q]}$ and $\mathscr{N}_{\alpha,\Phi}^{[R_{\alpha}]}$ ($\forall \alpha=1,\ldots,N$) are given by $$\begin{aligned} \mathscr{M}_{\Phi}^{[Q]}[h](x)&=[Q(x),h(x)]+\frac{{\ensuremath{\mathrm{d}}}h}{{\ensuremath{\mathrm{d}}}x}(x),&\mathscr{N}^{[R_{\alpha}]}_{\alpha,\Phi}[h](x)&=[R_{\alpha}(x),h(x)].\end{aligned}$$ The maps $\mathscr{M}_{\Phi}^{[Q]}$ and $\mathscr{N}_{\alpha,\Phi}^{[R_{\alpha}]}$ thus establish a linear homomorphism from functions $h:\mathcal{R}\to {\ensuremath{\mathfrak{gl}}}(\mathbb{C},D)\equiv\mathbb{C}^{D\times D}$ to the kernel of the map $[V,\{W_{\alpha}\}]\mapsto\ket{\Phi[V,\{W_{\alpha}\};Q,\{R_{\alpha}\}]}$ that represents the tangent space $T_{\ket{\Psi[Q,\{R_{\alpha}\}]}} {\ensuremath{\mathcal{M}}}_{\mathrm{cMPS}}$ at the base point $\ket{\Psi[Q,\{R_{\alpha}\}]}$.
Put differently, the representation of cMPS tangent vectors has a gauge invariance under the additive transformation law $V\leftarrow V+\mathscr{M}_{\Phi}^{[Q]}[h]$ and $W_{\alpha}\leftarrow W_{\alpha}+\mathscr{N}_{\alpha,\Phi}^{[R_{\alpha}]}[h]$. In all of the above, we have considered $B$ fixed. The gauge transformation $g(x)$ then has to satisfy the boundary condition $g(+L/2) B g(-L/2)^{-1}=B$, which also imposes a boundary condition on the set of allowed functions $h(x)$, namely $$h(+L/2) B - B h(-L/2) = 0.$$ In particular, for periodic boundary conditions with $B={\openone}_{D}$, we obtain that the generator $h:{\ensuremath{\mathcal{R}}}\to \mathfrak{gl}(D,\mathbb{C})$ should satisfy periodic boundary conditions $h(+L/2)=h(-L/2)$. We now restrict to the case of open boundary conditions and discard the explicit reference to the base point $\ket{\Psi[Q,\{R_{\alpha}\}]}$ in the notation of tangent vectors. To take full advantage of the gauge freedom, we noted in Section \[s:gauge\] that it is better to include one of the boundary vectors in the set of variational parameters. We thus generalize our definition of tangent vectors by also including variations with respect to *e.g.* the right boundary vector $\bm{v}_{\text{R}}$.
We write $$\begin{split} &\ket{\Phi[V,\{W_{\alpha}\},\bm{w}_{\mathrm{R}}]}\\ &\qquad=\bm{w}_{\mathrm{R}}\cdot \bm{\nabla}_{\bm{v}_{\mathrm{R}}}\ket{\Psi[Q,\{R_{\alpha}\}]}\\ &\qquad\qquad+\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\,\left(V^{i}(x) \frac{\delta\ }{\delta Q^{i}(x)}+\sum_{\beta=1}^{N}W_{\beta}^{i}(x) \frac{\delta\ }{\delta R_{\beta}^{i}(x)}\right) \ket{\Psi[Q,\{R_{\alpha}\}]}\\ &\qquad=\bm{v}_{\mathrm{L}}^\dagger {\ensuremath{\hat{U}}}(-L/2,+L/2) \bm{w}_{\mathrm{R}}\ket{\Omega}\\ &\qquad\qquad+\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\,\bm{v}_{\mathrm{L}}^\dagger {\ensuremath{\hat{U}}}(-L/2,x) \left(V(x)\otimes {\ensuremath{\hat{{\openone}}}}+\sum_{\beta=1}^{N}W_{\beta}(x)\otimes {\ensuremath{\hat{\psi}^\dagger}}_{\beta}(x)\right){\ensuremath{\hat{U}}}(x,L/2)\bm{v}_{\mathrm{R}}\ket{\Omega}. \end{split}\label{eq:deftangentgeneric2}$$ Let us revisit the gauge freedom for the new tangent vectors of Eq. . The state $\ket{\Phi[V,\{W_{\alpha}\},\bm{w}_{\mathrm{R}}]}$ is invariant under the additive gauge transformation $V\leftarrow V+\mathscr{M}_{\Phi}[h]$, $W_{\alpha}\leftarrow W_{\alpha}+\mathscr{N}_{\alpha,\Phi}[h]$ and $\bm{w}_{\mathrm{R}}\leftarrow \bm{w}_{\mathrm{R}} + \bm{m}_{\Phi}[h]$ with $$\bm{m}_{\Phi}[h]=-h(+L/2)\bm{v}_{\mathrm{R}}.$$ Since $\bm{v}_{\mathrm{L}}$ is still fixed, the gauge transformation has to satisfy the boundary condition $g(-L/2)={\openone}_{D}$, so that its generator $h(x)$ satisfies $h(-L/2)=0$. 
The overlap between two tangent vectors is given by $$\begin{split} &\braket{\Phi[\overline{V},\{\overline{W}_{\alpha}\},\overline{\bm{w}_{\mathrm{R}}}]|\Phi[V',\{W'_{\alpha}\},\bm{w'}_{\mathrm{R}}]}=\bm{w}_{\mathrm{R}}^{\dagger} l(L/2) \bm{w'}_{\mathrm{R}}\\ &\qquad+\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\, {\ensuremath{(l(x)|\sum_{\alpha=1}^{q} W'_{\alpha}(x) \otimes \overline{W_{\alpha}(x)} | r(x))}}\\ &\qquad +\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\int_{x}^{+L/2}{\ensuremath{\mathrm{d}}}y\, \big(l(x)\big\vert\big[V'(x)\otimes 1_{D}+\sum_{\alpha=1}^{q} W'_{\alpha}(x)\otimes \overline{R_{\alpha}(x)}\big] \mathscr{P}\mathrm{e}^{\int_{x}^{y}{\ensuremath{\mathrm{d}}}z\, {\ensuremath{\mathbb{T}}}(z)}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\times \big[1_{D}\otimes \overline{V(y)}+\sum_{\alpha=1}^{q}R_{\alpha}(y)\otimes \overline{W_{\alpha}(y)}\big]|r(y)\big)\\ &\qquad+\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\int_{-L/2}^{x}{\ensuremath{\mathrm{d}}}y\, \big(l(y)\big\vert\big[1_{D}\otimes \overline{V(y)}+\sum_{\alpha=1}^{q}R_{\alpha}(y)\otimes \overline{W_{\alpha}(y)}\big] \mathscr{P}\mathrm{e}^{\int_{y}^{x}{\ensuremath{\mathrm{d}}}z\, {\ensuremath{\mathbb{T}}}(z)}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\times \big[V'(x)\otimes 1_{D}+\sum_{\alpha=1}^{q}W'_{\alpha}(x)\otimes \overline{R_{\alpha}(x)}\big]\big\vert r(x)\big). \end{split}\label{eq:phiphioverlap}$$ It defines a metric for the manifold ${\ensuremath{{\ensuremath{\mathcal{M}}}}}_{\mathrm{cMPS}}$ and features in any coordinate-invariant expression involving cMPS tangent vectors. We can use the gauge freedom in the representation of tangent vectors to simplify the expression above significantly. The counting argument for the gauge degrees of freedom is now less rigorous than in the discrete case. In general, we have $D^{2}$ parameters in $h(x)$ to eliminate $D^{2}$ degrees of freedom from $\{V(x),W_{1}(x),\ldots,W_{q}(x)\}$ at every point $x$.
However, this is only correct if all linearly independent algebra-valued functions $h:{\ensuremath{\mathcal{R}}}\to{\ensuremath{\mathfrak{gl}}}(\mathbb{C},D)$ map to linearly independent matrix functions $[\mathscr{M}_{\Phi}^{[Q]},\{\mathscr{N}_{\alpha,\Phi}^{[R_{\alpha}]}\}]$. Let us show that by substituting $V(x)\leftarrow \tilde{V}(x)=V(x)+\mathscr{M}_{\Phi}[h](x)$ and $W_{\alpha}(x)\leftarrow \tilde{W}_{\alpha}(x)=W_{\alpha}(x)+\mathscr{N}_{\alpha,\Phi}[h](x)$ ($\forall \alpha=1,\ldots,q$), we can indeed impose $D^2$ conditions, such as the *left gauge fixing condition*: $${\ensuremath{(l(x)|}}\left[\tilde{V}(x)\otimes {\openone}_{D} + \sum_{\alpha=1}^{q} \tilde{W}_{\alpha}(x)\otimes \overline{R_{\alpha}(x)}\right]=0.\label{eq:leftgaugefix}$$ This requires that $h$ is a solution of $$\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x}\big[l(x)h(x)\big]=\tilde{\mathscr{T}}^{(x)}\big[l(x)h(x)\big]-\left[l(x)V(x)+\sum_{\alpha=1}^{q} R_{\alpha}(x)^{\dagger} l(x) W_{\alpha}(x)\right]$$ which together with the boundary condition $h(-L/2)=0$ results in the solution $${\ensuremath{(l(x)h(x)|}}=-\int_{-L/2}^{x}{\ensuremath{\mathrm{d}}}y\, {\ensuremath{(l(y)|}}\left[V(y)\otimes{\openone}_{D}+\sum_{\alpha=1}^{q} W_{\alpha}(y)\otimes \overline{R}_{\alpha}(y)\right]\mathscr{P}\exp\left[\int_{y}^{x}{\ensuremath{\mathbb{T}}}(z)\,{\ensuremath{\mathrm{d}}}z\right].$$ This equation gives a solution for $l(x)h(x)$. We can extract $h(x)$ by multiplying with $l(x)^{-1}$ to the left. The left density matrix $l(x)$ should be positive definite and hence invertible for every $x>-L/2$. However, at $x=-L/2$ it equals $l(-L/2)=\bm{v}_{\mathrm{L}}\bm{v}_{\mathrm{L}}^{\dagger}$ and thus becomes singular. Nevertheless, the limit $\lim_{x\to-L/2} h(x)$ should be well defined since the right hand side of the equation above, which is being multiplied with $l(x)^{-1}$, will have a similar scaling.
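In the uniform, left-canonical case ($l={\openone}_{D}$ constant) the left gauge fixing condition becomes purely algebraic: it is solved by $V=-\sum_{\alpha}R_{\alpha}^{\dagger}W_{\alpha}$. The following minimal sketch (single species, random matrices; an illustration not taken from the original text) verifies this:

```python
import numpy as np

rng = np.random.default_rng(7)
D = 4
I = np.eye(D)
R = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
W = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))

# Uniform, left-canonical case (l = 1): the left gauge-fixing condition
# (l|[V (x) 1 + W (x) conj(R)] = 0 is solved by V = -R^dag W.
V = -R.conj().T @ W
vl = I.reshape(-1)
row = vl.conj() @ (np.kron(V, I) + np.kron(W, R.conj()))
print(np.linalg.norm(row))  # ~0
```

As a row vector acting on vectorized density matrices, the condition amounts to $V^{\dagger}l+\sum_{\alpha}W_{\alpha}^{\dagger}lR_{\alpha}=0$, which for $l={\openone}_{D}$ reduces to the stated solution.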
Alternatively, we can also impose a *right gauge fixing condition* $$\left[V(x)\otimes {\openone}_{D} + \sum_{\alpha=1}^{N} W_{\alpha}(x)\otimes \overline{R_{\alpha}(x)}\right]{\ensuremath{|r(x))}}=0.\label{eq:rightgaugefix}$$ Finally, we remark that the tangent space $T_{\ket{\Psi[Q,\{R_{\alpha}\}]}} {\ensuremath{\mathcal{M}}}_{\mathrm{cMPS}}$ spanned by the states of Eq.  contains the original cMPS $\ket{\Psi[Q,\{R_{\alpha}\}]}$, *e.g.* by choosing $V=1/L$, $W_{\alpha}=0$ and $\bm{w}_{\mathrm{R}}=0$ or by choosing $V=W_{\alpha}=0$ and $\bm{w}_{\mathrm{R}}=\bm{v}_{\mathrm{R}}$. Both choices are related by a gauge transform with $h(x)=(x/L+1/2){\openone}_{D}$. For a general tangent vector $\ket{\Phi[V,\{W_{\alpha}\},\bm{w}_{\mathrm{R}}]}$, we obtain $$\begin{split} &\braket{\Psi[\overline{Q},\{\overline{R}_{\alpha}\}]|\Phi[V,\{W_{\alpha}\},\bm{w}_{\mathrm{R}}]}=\bm{v}_{\mathrm{R}}^{\dagger}l(L/2) \bm{w}_{\mathrm{R}}\\ &\qquad\qquad\qquad+\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\,{\ensuremath{(l(x)|V(x)\otimes {\openone}_{D}+\sum_{\alpha=1}^{N} W_{\alpha}(x)\otimes \overline{R_{\alpha}(x)}|r(x))}}. \end{split}\label{eq:overlappsiphi}$$ If we fix the gauge according to either the left or right gauge fixing prescription, the second term cancels. We can restrict to the orthogonal complement of $\ket{\Psi[Q,\{R_{\alpha}\}]}$ in $T_{\ket{\Psi[Q,\{R_{\alpha}\}]}} {\ensuremath{\mathcal{M}}}_{\mathrm{cMPS}}$, which is denoted as $T_{\ket{\Psi[Q,\{R_{\alpha}\}]}} {\ensuremath{\mathcal{M}}}_{\mathrm{cMPS}}^\perp$, by further imposing $$\bm{v}_{\mathrm{R}}^{\dagger}l(L/2) \bm{w}_{\mathrm{R}}=0.$$ Uniform case ------------ We specialize again to the case of translation invariant systems in the thermodynamic limit. While the parameter space is now finite dimensional, it is fruitful to still consider the full tangent space to the manifold of all (translation non-invariant) cMPS at the special uniform point $\ket{\Psi(Q,\{R_{\alpha}\})}$. 
This boils down to allowing space-dependent matrix functions $V(x)$ and $W_{\alpha}(x)$ in the definition of the tangent vectors. We can then decompose the full tangent space into sectors ${\ensuremath{{\ensuremath{\mathbb{T}}}}}_{\Phi_{p}}$ of momentum $p\in\mathbb{R}$ by introducing Fourier modes $V(x)=V {\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathrm{i}}}p x}$ and $W_{\alpha}(x)=W_{\alpha}{\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathrm{i}}}p x}$, resulting in $$\begin{gathered} \ket{\Phi_{p}(V,\{W_{\alpha}\};Q,\{R_{\alpha}\})}=\ket{\Phi_{p}^{(Q,\{R_{\alpha}\})}(V,\{W_{\alpha}\})}=\\ \int_{-\infty}^{+\infty}{\ensuremath{\mathrm{d}}}x\,{\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathrm{i}}}p x} \bm{v}_{\mathrm{L}}^{\dagger}{\ensuremath{\hat{U}}}(-\infty,x) \left(V\otimes {\ensuremath{\hat{{\openone}}}}+\sum_{\alpha=1}^{N}W_{\alpha}\otimes {\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(x)\right){\ensuremath{\hat{U}}}(x,+\infty)\bm{v}_{\mathrm{R}}\ket{\Omega}.\end{gathered}$$ Note that the boundary vectors $\bm{v}_{\mathrm{L},\mathrm{R}}$ are irrelevant for the bulk properties of these states, and they are therefore not included in the set of variational parameters in the thermodynamic limit. Consequently, we also do not need to differentiate with respect to one of them in order to define the tangent space. 
We can also compute the overlap between two of these tangent vectors and obtain $$\begin{split} &\braket{\Phi_p(\overline{V},\{\overline{W}_{\alpha}\})|\Phi_{p'}(V',\{W'_{\alpha}\})}=\int_{-\infty}^{+\infty}{\ensuremath{\mathrm{d}}}x\, {\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathrm{i}}}(p'-p) x}{\ensuremath{(l|\sum_{\alpha=1}^{q} W'_{\alpha} \otimes \overline{W_{\alpha}} | r)}}\\ &\qquad +\int_{-\infty}^{+\infty}{\ensuremath{\mathrm{d}}}x\int_{x}^{+\infty}{\ensuremath{\mathrm{d}}}y\, {\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathrm{i}}}(p'x - py)}\big(l\big\vert\big[V'\otimes 1_{D}+\sum_{\alpha=1}^{q} W'_{\alpha}\otimes \overline{R_{\alpha}}\big] \mathrm{e}^{(y-x){\ensuremath{\mathbb{T}}}}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\times \big[1_{D}\otimes \overline{V}+\sum_{\alpha=1}^{q}R_{\alpha}\otimes \overline{W_{\alpha}}\big]|r\big)\\ &\qquad+\int_{-\infty}^{+\infty}{\ensuremath{\mathrm{d}}}x\int_{-\infty}^{x}{\ensuremath{\mathrm{d}}}y\,{\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathrm{i}}}(p'y-px)} \big(l\big\vert\big[1_{D}\otimes \overline{V}+\sum_{\alpha=1}^{q}R_{\alpha}\otimes \overline{W_{\alpha}}\big] \mathrm{e}^{(x-y){\ensuremath{\mathbb{T}}}}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\times \big[V'\otimes 1_{D}+\sum_{\alpha=1}^{q}W'_{\alpha}\otimes \overline{R_{\alpha}}\big]\big\vert r\big). \end{split}$$ If we again resort to the decomposition of Eq. 
, we can further evaluate this to $$\begin{split} &\braket{\Phi_p(\overline{V},\{\overline{W}_{\alpha}\})|\Phi_{p'}(V',\{W'_{\alpha}\})}=\\ &\qquad 2\pi\delta(p'-p)\Big[{\ensuremath{(l|\sum_{\alpha=1}^{q} W'_{\alpha} \otimes \overline{W_{\alpha}} | r)}}\\ &\qquad\qquad\qquad +\big(l\big\vert\big[V'\otimes 1_{D}+\sum_{\alpha=1}^{q} W'_{\alpha}\otimes \overline{R_{\alpha}}\big](-{\ensuremath{\mathbb{T}}}+{\ensuremath{\mathrm{i}}}p)^{\mathsf{P}}\big[1_{D}\otimes \overline{V}+\sum_{\alpha=1}^{q}R_{\alpha}\otimes \overline{W_{\alpha}}\big]|r\big)\\ &\qquad\qquad\qquad +\big(l\big\vert\big[1_{D}\otimes \overline{V}+\sum_{\alpha=1}^{q}R_{\alpha}\otimes \overline{W_{\alpha}}\big] (-{\ensuremath{\mathbb{T}}}-{\ensuremath{\mathrm{i}}}p)^{\mathsf{P}} \big[V'\otimes 1_{D}+\sum_{\alpha=1}^{q}W'_{\alpha}\otimes \overline{R_{\alpha}}\big]\big\vert r\big)\Big]\\ &\qquad+(2\pi)^2 \delta(p) \delta(p')\big(l\big\vert\big[V'\otimes 1_{D}+\sum_{\alpha=1}^{q} W'_{\alpha}\otimes \overline{R_{\alpha}}\big]\big\vert r\big)\big(l\big\vert\big[1_{D}\otimes \overline{V}+\sum_{\alpha=1}^{q}R_{\alpha}\otimes \overline{W_{\alpha}}\big]|r\big). \end{split}\label{eq:phipoverlap}$$ The momentum eigenstates $\ket{\Phi_{p}(V,\{W_{\alpha}\})}$ cannot be normalized to unity in the thermodynamic limit, but rather satisfy a $\delta$-normalization. For $p=p'=0$, there is an additional divergence which is stronger than the $\delta$-normalization. 
It can be related to the overlap between the $\ket{\Phi_{p}(V,\{W_{\alpha}\})}$ and the original cMPS $\ket{\Psi(Q,\{R_{\alpha}\})}$, which is given by $$\braket{\Psi(\overline{Q},\{\overline{R}_{\alpha}\})|\Phi_p(V,\{W_{\alpha}\})}=2\pi\delta(p) \big(l\big\vert\big[V\otimes 1_{D}+\sum_{\alpha=1}^{q} W_{\alpha}\otimes \overline{R_{\alpha}}\big]\big\vert r\big).\label{eq:psiphipoverlap}$$ As before, a one-parameter family of local gauge transformations $g(x;s)=\exp(sh(x))$ with $h(x)\in{\ensuremath{\mathfrak{gl}}}(D;\mathbb{C})$ induces a map to the kernel of the representation $\Phi_{p}$ of ${\ensuremath{{\ensuremath{\mathbb{T}}}}}_{\Phi_{p}}$ by setting $h(x)=h{\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathrm{i}}}p x}$, so that $$\ket{\Phi_{p}(\mathscr{M}_{\Phi_{p}}^{(Q)}(h),\{\mathscr{N}_{\alpha,\Phi_{p}}^{(R_{\alpha})}(h)\};Q,\{R_{\alpha}\})}=0,$$ with $$\begin{aligned} \mathscr{M}_{\Phi_{p}}^{(Q)}(h)&=[Q,h]+{\ensuremath{\mathrm{i}}}p h&&\text{and}&\mathscr{N}_{\alpha,\Phi_{p}}^{(R_{\alpha})}(h)=[R_{\alpha},h].\end{aligned}$$ We henceforth omit the superscript notation of $Q$ and $R_{\alpha}$. The kernel of the map $\Phi_{p}$ is thus $D^{2}$-dimensional, except at $p=0$. This can easily be proven, since for every non-zero $h\in{\ensuremath{\mathfrak{gl}}}(D;\mathbb{C})$, $\mathscr{M}_{\Phi_{p}}(h)\neq 0$ or $\mathscr{N}_{\alpha,\Phi_{p}}(h)\neq 0$ for at least one $\alpha=1,\ldots,N$. Indeed, suppose that $\mathscr{M}_{\Phi_{p}}(h)= 0$ and $\mathscr{N}_{\alpha,\Phi_{p}}(h)=0$ for all $\alpha$. Imposing that $$\mathscr{M}_{\Phi_{p}}(h) r+\sum_{\alpha=1}^{N} \mathscr{N}_{\alpha,\Phi_{p}}(h) r R_{\alpha}^{\dagger} =0$$ results in ${\ensuremath{\mathbb{T}}} {\ensuremath{|h r)}}={\ensuremath{\mathrm{i}}}p {\ensuremath{|h r)}}$, which has no non-trivial solution except at $p=0$, where we find $h=c{\openone}_{D}$ with $c\in\mathbb{C}$. At nonzero momenta, we can use a gauge fixing condition to reduce the number of parameters by $D^{2}$. 
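The absence of non-trivial solutions of ${\ensuremath{\mathbb{T}}}{\ensuremath{|hr)}}={\ensuremath{\mathrm{i}}}p{\ensuremath{|hr)}}$ at $p\neq 0$ can be checked numerically for a generic uniform cMPS. The sketch below (a single particle species, with the illustrative parametrization $Q=-{\ensuremath{\mathrm{i}}}H-\tfrac{1}{2}R^{\dagger}R$ so that the transfer operator has the identity as a left fixed point) verifies that the spectrum of ${\ensuremath{\mathbb{T}}}$ lies in the closed left half plane, contains a single zero eigenvalue, and misses ${\ensuremath{\mathrm{i}}}p$ for a generic nonzero $p$:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 3
R = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
H = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
H = 0.5 * (H + H.conj().T)            # Hermitian part only
Q = -1j * H - 0.5 * R.conj().T @ R    # makes l = identity a left fixed point of T

# Vectorized transfer operator: T(x) = Q x + x Q^dagger + R x R^dagger
I = np.eye(D)
T = np.kron(I, Q) + np.kron(Q.conj(), I) + np.kron(R.conj(), R)
eigs = np.linalg.eigvals(T)

assert np.max(eigs.real) < 1e-8          # spectrum in the closed left half plane
assert np.min(np.abs(eigs)) < 1e-8       # one zero eigenvalue: the fixed point...
assert np.sort(np.abs(eigs))[1] > 1e-4   # ...and generically only one
p = 0.7                                  # a generic nonzero momentum
assert np.min(np.abs(eigs - 1j * p)) > 1e-6   # i p is not in the spectrum
```

The choice of $p$ and the random seed are arbitrary; the point is only that, generically, ${\ensuremath{\mathrm{i}}}p$ with $p\neq0$ is never an eigenvalue of ${\ensuremath{\mathbb{T}}}$.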
At $p=0$, we can only reduce the number of parameters by $D^{2}-1$ through gauge fixing, but imposing orthogonality to $\ket{\Psi(Q,\{R_{\alpha}\})}$ manually at $p=0$ allows us to discard one additional parameter. For any momentum $p$, we can uniquely fix the gauge of any tangent vector in ${\ensuremath{{\ensuremath{\mathbb{T}}}}}_{\Phi_{p}}^{\perp}$ by setting ${\ensuremath{(l|}}\big[V\otimes 1_{D} + \sum_{\alpha=1}^{q}W_{\alpha}\otimes \overline{R_{\alpha}}\big]=0$ or $\big[V\otimes 1_{D} + \sum_{\alpha=1}^{q}W_{\alpha}\otimes \overline{R_{\alpha}}\big]{\ensuremath{|r)}}=0$, corresponding to the left and right gauge fixing conditions respectively. It can indeed be checked that with either one of these conditions being satisfied, the overlap $\braket{\Psi(\overline{Q},\{\overline{R}_{\alpha}\})|\Phi_p(V,\{W_{\alpha}\})}$ given in Eq.  vanishes even for $p=0$. In addition, if either gauge fixing condition is satisfied, the overlap between two tangent vectors simplifies significantly, as only the local term survives. Also note the difference with the approach for translation non-invariant systems in the previous subsection. There we could impose the left or right gauge fixing condition for any $x$, without this automatically implying that $\ket{\Phi[V,\{W_{\alpha}\},\bm{w}_{\mathrm{R}}]}\perp \ket{\Psi[Q,\{R_{\alpha}\}]}$, since a non-zero overlap between the tangent vector and the original cMPS could be encoded in the changing boundary vector $\bm{w}_{\mathrm{R}}$. Conclusion and outlook ====================== This manuscript provides a detailed description of a variational class of wave functions for one-dimensional quantum field theories, which goes by the name of “continuous matrix product states”. We reviewed different alternative constructions that produce the same class of states and have their own merits, *e.g.* in offering clear hints on how to generalize this class to different settings such as open quantum systems or higher-dimensional theories. 
We illustrated how to formulate the cMPS ansatz for the most general class of theories including an arbitrary number of bosonic and fermionic particles, and were naturally led to a set of constraints that the variational parameters needed to satisfy in order to produce a finite kinetic energy density. We also discussed other physical constraints such as fermion parity. We then proceeded by explaining in detail how to compute expectation values, in particular for the case of systems with open boundary conditions. We provided some additional details for the case of systems with translation invariance, where we can use the expectation value of a correlation function to define an ultraviolet cutoff within the cMPS state. We also discussed the important topic of gauge invariance in the cMPS representation. Finally, we introduced the concept of cMPS tangent vectors, and discussed how the gauge invariance allows us to represent them in such a way that the metric of the cMPS manifold simplifies tremendously. While we have not introduced any practical algorithms or recipes for finding cMPS approximations of ground states or for describing other physical phenomena, we have introduced all necessary definitions and concepts in order to work comfortably with cMPS. This set of definitions can now be used in follow-up papers that will focus on new algorithms. As such, the current paper provides a stepping stone that will hopefully spur more research in the context of variational methods for quantum field theories in one dimension and beyond. JH acknowledges fruitful discussions with Michaël Mariën. This work was supported by the EU grants QUERG and QFTCMPS, by the FWF SFB grants FoQuS and ViCoM, by the DFG cluster of excellence NIM and by the cluster of excellence EXC 201 Quantum Engineering and Space-Time Research. 
A useful formula {#a:formula} ================ Consider an operator ${\ensuremath{\hat{U}}}(x,y)$ defined as $${\ensuremath{\hat{U}}}(x,y)={\ensuremath{\mathscr{P}\exp}}\left[\int_x^y {\ensuremath{\hat{A}}}(z)\,{\ensuremath{\mathrm{d}}}z\right],$$ where ${\ensuremath{\hat{A}}}$ is not necessarily antihermitian. This operator satisfies $$\begin{aligned} \frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x} {\ensuremath{\hat{U}}}(x,y)&=-{\ensuremath{\hat{A}}}(x) {\ensuremath{\hat{U}}}(x,y),& \frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}y} {\ensuremath{\hat{U}}}(x,y)&=+{\ensuremath{\hat{U}}}(x,y) {\ensuremath{\hat{A}}}(y).\label{eq:diffU}\end{aligned}$$ For the derivatives of the inverse operator ${\ensuremath{\hat{U}}}(x,y)^{-1}$ we can use the general result $$\begin{aligned} \frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x} {\ensuremath{\hat{U}}}(x,y)^{-1} &= - {\ensuremath{\hat{U}}}(x,y)^{-1} \left(\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x} {\ensuremath{\hat{U}}}(x,y)\right) {\ensuremath{\hat{U}}}(x,y)^{-1}=+{\ensuremath{\hat{U}}}(x,y)^{-1} {\ensuremath{\hat{A}}}(x),\\ \frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}y} {\ensuremath{\hat{U}}}(x,y)^{-1} &= - {\ensuremath{\hat{U}}}(x,y)^{-1} \left(\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}y} {\ensuremath{\hat{U}}}(x,y)\right) {\ensuremath{\hat{U}}}(x,y)^{-1}=- {\ensuremath{\hat{A}}}(y){\ensuremath{\hat{U}}}(x,y)^{-1},\\\end{aligned}$$ Now define the following operator quantity depending on an arbitrary operator ${\ensuremath{\hat{B}}}$ $${\ensuremath{\hat{C}}}(x,y)={\ensuremath{\hat{U}}}(x,y){\ensuremath{\hat{B}}} {\ensuremath{\hat{U}}}(x,y)^{-1}.$$ By taking the derivative with respect to $y$, we obtain $$\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}y}{\ensuremath{\hat{C}}}(x,y)={\ensuremath{\hat{U}}}(x,y)\left[{\ensuremath{\hat{A}}}(y),{\ensuremath{\hat{B}}}\right] {\ensuremath{\hat{U}}}(x,y)^{-1}.$$ Integrating 
${\ensuremath{\mathrm{d}}}{\ensuremath{\hat{C}}}(x,z) /{\ensuremath{\mathrm{d}}}z$ for $z$ from $x$ to $y$ and making use of the initial value ${\ensuremath{\hat{C}}}(x,x)={\ensuremath{\hat{B}}}$ results in $${\ensuremath{\hat{C}}}(x,y)={\ensuremath{\hat{B}}}+\int_x^y {\ensuremath{\hat{U}}}(x,z) \left[{\ensuremath{\hat{A}}}(z),{\ensuremath{\hat{B}}}\right]{\ensuremath{\hat{U}}}(x,z)^{-1}\, {\ensuremath{\mathrm{d}}}z.$$ We then multiply this equality with ${\ensuremath{\hat{U}}}(x,y)$ to the right and make use of the obvious identity ${\ensuremath{\hat{U}}}(x,y)={\ensuremath{\hat{U}}}(x,z) {\ensuremath{\hat{U}}}(z,y)$ for any $x<z<y$ in the integral of the right hand side in order to obtain our final result $$\left[{\ensuremath{\hat{U}}}(x,y),{\ensuremath{\hat{B}}}\right]=\int_x^y {\ensuremath{\hat{U}}}(x,z) \left[{\ensuremath{\hat{A}}}(z),{\ensuremath{\hat{B}}}\right]{\ensuremath{\hat{U}}}(z,y)\,{\ensuremath{\mathrm{d}}}z.\label{eq:commutatorequality}$$ We can further generalize this result. Suppose we have two operators ${\ensuremath{\hat{U}}}_{\pm}(x,y)$ defined as $${\ensuremath{\hat{U}}}_{\pm}(x,y)={\ensuremath{\mathscr{P}\exp}}\left[\int_x^y \left\{{\ensuremath{\hat{A}}}_1(z) \pm {\ensuremath{\hat{A}}}_2(z)\right\}\,{\ensuremath{\mathrm{d}}}z\right],$$ for arbitrary ${\ensuremath{\hat{A}}}_{1,2}(z)$. If we consider the quantity $${\ensuremath{\hat{C}}}(x,y)={\ensuremath{\hat{U}}}_{-}(x,y){\ensuremath{\hat{B}}} {\ensuremath{\hat{U}}}_{+}(x,y)^{-1},$$ then we obtain $$\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}y}{\ensuremath{\hat{C}}}(x,y)={\ensuremath{\hat{U}}}_{-}(x,y)\left(\left[{\ensuremath{\hat{A}}}_1(y),{\ensuremath{\hat{B}}}\right]-\left\{{\ensuremath{\hat{A}}}_{2}(y),{\ensuremath{\hat{B}}}\right\}\right) {\ensuremath{\hat{U}}}_{+}(x,y)^{-1},$$ using a similar derivation. 
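The commutator identity just derived is easy to sanity-check numerically in the special case of a constant generator ${\ensuremath{\hat{A}}}(z)=A$, where the path-ordered exponential reduces to an ordinary matrix exponential and the identity reads $[\mathrm{e}^{TA},B]=\int_0^T \mathrm{e}^{sA}[A,B]\,\mathrm{e}^{(T-s)A}\,{\ensuremath{\mathrm{d}}}s$. A small sketch in plain NumPy (the Taylor-series exponential and the midpoint quadrature are illustrative choices, not part of any assumed library API):

```python
import numpy as np

def expm(M, terms=60):
    # Taylor-series matrix exponential; adequate for the small norms used here
    out = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(0)
D = 4
A = 0.5 * (rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D)))
B = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
T = 1.0  # interval length y - x

# Left-hand side: [U(x,y), B] with U(x,y) = exp(T A) for constant A
U = expm(T * A)
lhs = U @ B - B @ U

# Right-hand side: midpoint-rule discretization of the integral
n = 1000
comm_AB = A @ B - B @ A
rhs = np.zeros((D, D), dtype=complex)
for k in range(n):
    s = (k + 0.5) * T / n
    rhs += expm(s * A) @ comm_AB @ expm((T - s) * A)
rhs *= T / n

assert np.allclose(lhs, rhs, atol=1e-3)
```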
Continuing along the same lines results in $${\ensuremath{\hat{B}}}{\ensuremath{\hat{U}}}_{+}(x,y) -{\ensuremath{\hat{U}}}_{-}(x,y){\ensuremath{\hat{B}}} = \int_x^y {\ensuremath{\hat{U}}}_{-}(x,z)\left(\left[{\ensuremath{\hat{B}}},{\ensuremath{\hat{A}}}_1(z)\right]+\left\{{\ensuremath{\hat{B}}},{\ensuremath{\hat{A}}}_2(z)\right\}\right){\ensuremath{\hat{U}}}_{+}(z,y)\,{\ensuremath{\mathrm{d}}}z.\label{eq:commutatorequalitygeneralized}$$ Higher order regularity conditions {#a:higherorderregularity} ================================== In this appendix we derive additional regularity conditions by considering higher derivatives of the field operators acting on the ground state. Throughout this appendix, we assume that Eq.  is fulfilled and $R_{\alpha}(x)$ has well-behaved higher order derivatives. We now consider the state $({\ensuremath{\mathrm{d}}}^{2}{\ensuremath{\hat{\psi}}}_{\alpha}(x)/ {\ensuremath{\mathrm{d}}}x^{2}) \ket{\Psi[Q,\{R_{\beta}\}]}$, which contains a contribution with infinite norm unless $$\left[\frac{{\ensuremath{\mathrm{d}}}R_\alpha}{{\ensuremath{\mathrm{d}}}x}(x) +[Q(x),R_\alpha(x)], R_{\beta}(x)\right]_{\mp}=0,\label{eq:regcondition2}$$ where $[\cdot,\cdot]_{\mp}$ is a commutator ($-$) or anticommutator ($+$) for $\eta_{\alpha,\beta}=\pm 1$. 
If $Q$ and $R_{\alpha}$ obey all equations to have a ‘well defined’ derivative up to order $n$, so that the state $({\ensuremath{\mathrm{d}}}^{n}{\ensuremath{\hat{\psi}}}(x)/{\ensuremath{\mathrm{d}}}x^{n})\ket{\Psi[Q,\{R_{\beta}\}]}$ is normalizable, the sufficient condition to eliminate all harmful contributions from $({\ensuremath{\mathrm{d}}}^{n+1}{\ensuremath{\hat{\psi}}}(x)/{\ensuremath{\mathrm{d}}}x^{n+1})\ket{\Psi[Q,\{R_{\beta}\}]}$ is $$\begin{gathered} \bigg[\frac{{\ensuremath{\mathrm{d}}}^{n}\ }{{\ensuremath{\mathrm{d}}}x^{n}}R_\alpha(x) +\frac{\mathrm{d}^{n-1}\ }{\mathrm{d} x^{n-1}}[Q(x),R_\alpha(x)]+\frac{\mathrm{d}^{n-2}\ }{\mathrm{d} x^{n-2}}[Q(x),[Q(x),R_\alpha(x)]]\\ + \ldots + [Q(x),[\ldots,[Q(x),R(x)]] \ldots ] , R_{\beta}(x)\bigg]_{\mp}=0.\label{eq:regconditionn}\end{gathered}$$ We can also impose regularity of the mixed derivatives of the $N$-particle wave function, by first evaluating ${\ensuremath{\hat{\psi}}}_{\alpha}(x){\ensuremath{\hat{\psi}}}_{\beta}(y)\ket{\Psi[Q,\{R_{\gamma}\}]}$ $$\begin{gathered} {\ensuremath{\hat{\psi}}}_{\alpha}(x){\ensuremath{\hat{\psi}}}_{\beta}(y)\ket{\Psi[Q,\{R_{\gamma}\}]}=\\ \theta(y-x)\operatorname{tr}\left[B {\ensuremath{\hat{U}}}_{\alpha,\beta}(-L/2,x) \eta_{\beta,\alpha}R_{\alpha}(x) {\ensuremath{\hat{U}}}_{\beta}(x,y)R_{\beta}(y){\ensuremath{\hat{U}}}(y,+L/2)\right]\ket{\Omega}\\ +\theta(x-y)\operatorname{tr}\left[B {\ensuremath{\hat{U}}}_{\alpha,\beta}(-L/2,y) R_{\beta}(y) {\ensuremath{\hat{U}}}_{\alpha}(y,x)R_{\alpha}(x){\ensuremath{\hat{U}}}(x,+L/2)\right]\ket{\Omega}\end{gathered}$$ where a new set of operators ${\ensuremath{\hat{U}}}_{\alpha,\beta}(x,y)$ ($\alpha,\beta=1,\ldots,q$) was introduced as $${\ensuremath{\hat{U}}}_{\alpha,\beta}(x,y)=\mathscr{P} \exp\left[\int_{x}^{y}{\ensuremath{\mathrm{d}}}z\, \left\{Q(z)\otimes {\ensuremath{\hat{{\openone}}}} + \sum_{\gamma=1}^{q}\eta_{\alpha,\gamma}\eta_{\beta,\gamma}R_{\gamma}(z)\otimes 
{\ensuremath{\hat{\psi}^\dagger}}_{\gamma}(z)\right\}\right]\label{eq:defUalphabeta}.$$ Note that the regularity condition in Eq.  is sufficient for the annihilation of two particles ${\ensuremath{\hat{\psi}}}_{\alpha}(x){\ensuremath{\hat{\psi}}}_{\beta}(y)\ket{\Psi[Q,\{R_{\gamma}\}]}$ to be continuous at $x=y$. By first differentiating with respect to $x$, we obtain $$\begin{gathered} \left(\frac{{\ensuremath{\mathrm{d}}}{\ensuremath{\hat{\psi}}}_{\alpha}}{{\ensuremath{\mathrm{d}}}x}(x)\right){\ensuremath{\hat{\psi}}}_{\beta}(y)\ket{\Psi[Q,\{R_{\gamma}\}]}\\ \shoveleft{\quad=\theta(y-x)\operatorname{tr}\Bigg[B {\ensuremath{\hat{U}}}_{\alpha,\beta}(-L/2,x) \eta_{\beta,\alpha}\bigg(\frac{{\ensuremath{\mathrm{d}}}R_{\alpha}}{{\ensuremath{\mathrm{d}}}x}(x) +\big[Q(x),R_{\alpha}(x)\big]\bigg)}\\ \shoveright{\times{\ensuremath{\hat{U}}}_{\beta}(x,y)R_{\beta}(y){\ensuremath{\hat{U}}}(y,+L/2)\Bigg]\ket{\Omega}\ \ }\\ \shoveleft{\quad\quad+\theta(x-y)\operatorname{tr}\Bigg[B {\ensuremath{\hat{U}}}_{\alpha,\beta}(-L/2,y) R_{\beta}(y) {\ensuremath{\hat{U}}}_{\alpha}(y,x)}\\ \times\bigg(\frac{{\ensuremath{\mathrm{d}}}R_{\alpha}}{{\ensuremath{\mathrm{d}}}x}(x) +[Q(x),R_{\alpha}(x)]\bigg){\ensuremath{\hat{U}}}(x,+L/2)\Bigg]\ket{\Omega},\end{gathered}$$ where we have assumed the regularity condition in Eq.  to hold. This allows one to eliminate the fixed insertion of particles at position $x$ as well as the terms obtained from differentiating the Heaviside functions (*i.e.* the terms proportional to $\delta(x-y)$). Such terms would indeed arise if ${\ensuremath{\hat{\psi}}}_{\alpha}(x){\ensuremath{\hat{\psi}}}_{\beta}(y)\ket{\Psi[Q,\{R_{\gamma}\}]}$ were not continuous at $x=y$. 
If we now also differentiate with respect to $y$, we obtain a divergent contribution $$-\delta(x-y)\operatorname{tr}\left[B {\ensuremath{\hat{U}}}_{\alpha,\beta}(-L/2,x) \left[R_{\beta}(x),\frac{{\ensuremath{\mathrm{d}}}R_{\alpha}}{{\ensuremath{\mathrm{d}}}x}(x) +[Q(x),R_{\alpha}(x)]\right]_{\mp}{\ensuremath{\hat{U}}}(x,+L/2)\right]\ket{\Omega}.$$ If we instead differentiate with respect to $y$ first, and then with respect to $x$, the divergent contribution is $$\delta(x-y)\operatorname{tr}\left[B {\ensuremath{\hat{U}}}_{\alpha,\beta}(-L/2,x) \left[\frac{{\ensuremath{\mathrm{d}}}R_{\beta}}{{\ensuremath{\mathrm{d}}}x}(x) +[Q(x),R_{\beta}(x)],R_{\alpha}(x)\right]_{\mp}{\ensuremath{\hat{U}}}(x,+L/2)\right]\ket{\Omega}.$$ Since we are working under the assumption of the regularity condition $[R_{\beta}(x),R_{\alpha}(x)]_{\mp}=0$ \[Eq. \], it is easy to show that $[R_{\beta}(x),{\ensuremath{\mathrm{d}}}R_{\alpha}(x)/{\ensuremath{\mathrm{d}}}x]_{\mp}=-[{\ensuremath{\mathrm{d}}}R_{\beta}(x)/{\ensuremath{\mathrm{d}}}x,R_{\alpha}(x)]_{\mp}$ and also $[R_{\beta}(x),[Q(x),R_{\alpha}(x)]]_{\mp}=-[[Q(x),R_{\beta}(x)],R_{\alpha}(x)]_{\mp}$, so that both diverging contributions are equal. By imposing $$\left[\frac{{\ensuremath{\mathrm{d}}}R_{\beta}}{{\ensuremath{\mathrm{d}}}x}(x) +[Q(x),R_{\beta}(x)],R_{\alpha}(x)\right]_{\mp}=-\left[R_{\beta}(x),\frac{{\ensuremath{\mathrm{d}}}R_{\alpha}}{{\ensuremath{\mathrm{d}}}x}(x) +[Q(x),R_{\alpha}(x)]\right]_{\mp}=0\label{eq:regmixed}$$ the mixed derivative $({\ensuremath{\mathrm{d}}}{\ensuremath{\hat{\psi}}}_{\alpha}(x)/{\ensuremath{\mathrm{d}}}x)({\ensuremath{\mathrm{d}}}{\ensuremath{\hat{\psi}}}_{\beta}(y)/{\ensuremath{\mathrm{d}}}y)\ket{\Psi[Q,\{R_{\gamma}\}]}$ is well defined and normalizable. Note that Eq.  is identical to Eq. , so that regularity of the mixed product of two first order derivatives is guaranteed if the second order derivative is regular, or vice versa. 
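The sign flips used to equate the two divergent contributions follow from the Jacobi identity once $[R_{\alpha}(x),R_{\beta}(x)]_{\mp}=0$ holds. For the bosonic (commutator) case this is easy to confirm numerically; in the sketch below (illustrative, not from the text) the two matrices commute by construction because they are polynomials in the same random matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 5
M = rng.standard_normal((D, D))
# Polynomials in the same matrix commute, mimicking [R_alpha, R_beta] = 0
R_a = M @ M + 2.0 * M
R_b = 3.0 * M @ M @ M - M
Q = rng.standard_normal((D, D))

def comm(X, Y):
    return X @ Y - Y @ X

assert np.allclose(comm(R_a, R_b), 0.0, atol=1e-7)
# The two divergent terms carry opposite signs, here for the [Q, R] pieces:
# [R_b, [Q, R_a]] = -[[Q, R_b], R_a] whenever [R_a, R_b] = 0 (Jacobi identity)
assert np.allclose(comm(R_b, comm(Q, R_a)), -comm(comm(Q, R_b), R_a))
```

The anticommutator (fermionic) case works analogously, using the graded version of the same identity.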
The higher order regularity conditions derived in this appendix put very strong constraints on $Q$ and $R_{\alpha}$ that might be hard to satisfy with finite-dimensional matrices. As mentioned in the main text, satisfying the original condition in Eq. , as imposed by the finiteness of the kinetic energy, should be sufficient for most practical applications. [^1]: cMPS are still subject to the infrared orthogonality catastrophe when formulated in the thermodynamic limit (see Section \[s:ti\]). [^2]: If there is no insertion at the same position, we can always insert a unit operator ${\openone}_D$. [^3]: While we mentioned in Section \[s:bc\] that we always assume the matrix functions $Q$ and $R_{\alpha}$ to satisfy the proper boundary conditions, we do not have to use the condition in Eq.  at any point in deriving the expectation value of the Hamiltonian ${\ensuremath{\hat{H}}}$ in Eq. . [^4]: While we take a standard matrix logarithm, it also makes sense to define the linear maps $\mathscr{T}$, $\tilde{\mathscr{T}}$ as the logarithm of (or the generator for) the completely positive maps $\mathscr{E}$ and $\tilde{\mathscr{E}}$ associated to the left or right action of ${\ensuremath{\mathbb{E}}}$. However, not all completely positive maps have a natural logarithm associated with them, as was shown in Ref. .
--- abstract: 'Dynamic neural network toolkits such as PyTorch, DyNet, and Chainer offer more flexibility for implementing models that cope with data of varying dimensions and structure, relative to toolkits that operate on statically declared computations (e.g., TensorFlow, CNTK, and Theano). However, existing toolkits—both static and dynamic—require that the developer organize the computations into the batches necessary for exploiting high-performance algorithms and hardware. This batching task is generally difficult, but it becomes a major hurdle as architectures become complex. In this paper, we present an algorithm, and its implementation in the DyNet toolkit, for automatically batching operations. Developers simply write minibatch computations as aggregations of single instance computations, and the batching algorithm seamlessly executes them, on the fly, using computationally efficient batched operations. On a variety of tasks, we obtain throughput similar to that obtained with manual batches, as well as comparable speedups over single-instance learning on architectures that are impractical to batch manually.[^1]' author: - | Graham Neubig[^2]\ Language Technologies Institute\ Carnegie Mellon University\ `gneubig@cs.cmu.edu` Yoav Goldberg$^*$\ Computer Science Department\ Bar-Ilan University\ `yogo@cs.biu.ac.il`\ Chris Dyer\ DeepMind\ `cdyer@google.com`\ bibliography: - 'confnames.bib' - 'autobatch-paper.bib' title: | On-the-fly Operation Batching\ in Dynamic Computation Graphs --- Introduction ============ Modern CPUs and GPUs evaluate batches of arithmetic operations significantly faster than the sequential evaluation of the same operations. 
For example, performing elementwise operations takes nearly the same amount of time on the GPU whether operating on tens or on thousands of elements, and multiplying a few hundred different vectors by the same matrix is significantly slower than executing a single (equivalent) matrix–matrix product using an optimized GEMM implementation on either a GPU or a CPU. Thus, careful grouping of operations into batches that can execute efficiently in parallel is crucial for making the most of available hardware resources. Today, developers who write code to train neural networks are responsible for crafting most of this batch handling by hand. In some cases this is easy: when inputs and outputs are naturally represented as fixed sized tensors (e.g., images of a fixed size such as those in the MNIST and CIFAR datasets, or regression problems on fixed sized vector inputs), and the computations required to process each instance are instance-invariant and expressible as standard operations on tensors (e.g., a series of matrix multiplications, convolutions, and elementwise nonlinearities), a suitably flexible tensor library that provides efficient implementations of higher-order generalizations of low-order operations makes manual batching straightforward. For example, by adding a leading or trailing dimension to the tensors representing inputs and outputs, multiple instances can be straightforwardly represented in a single data structure. In other words: in this scenario, the developer conceives of and writes code for the computation on an individual instance, packs several instances into a tensor as a “minibatch”, and the library handles executing these efficiently in parallel. Unfortunately, this idealized scenario breaks down when working with more complex architectures. Deep learning is increasingly being applied to problems whose inputs, outputs and intermediate representations do not fit easily into fixed sized tensors. 
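The gap between many small operations and one batched operation is easy to reproduce outside any neural-network framework. A minimal NumPy sketch (sizes are arbitrary; absolute timings depend on the machine and the BLAS build) compares a loop of matrix-vector products against the single equivalent matrix-matrix product:

```python
import time
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))
xs = [rng.standard_normal(512) for _ in range(300)]

# Sequential: one matrix-vector product per input vector
t0 = time.perf_counter()
ys_loop = [W @ x for x in xs]
t_loop = time.perf_counter() - t0

# Batched: stack the vectors as columns and issue a single GEMM
X = np.stack(xs, axis=1)          # shape (512, 300)
t0 = time.perf_counter()
Y = W @ X
t_batch = time.perf_counter() - t0

# Both compute the same values; the batched call is typically much faster
assert np.allclose(np.stack(ys_loop, axis=1), Y)
print(f"loop: {t_loop:.4f}s  batched: {t_batch:.4f}s")
```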
For example, images vary in size and sequences in length; data may be structured as trees [@socher11recursivenn] or graphs [@liang:2016], or the model may select its own computation conditional on the input [@li:2017; @shazeer:2017; @yogatama:2017]. In all these cases, while the desired computation is easy enough to write for a single instance, organizing the computational operations so that they make optimally efficient use of the hardware is nontrivial. Indeed, many papers that operate on data structures more complicated than sequences have avoided batching entirely [@dyer2015stacklstm; @reed:2016; @ladhak:2016]. In fact, until last year [@bowman2016spinn; @louppe2017qcd], *all* published work on recursive (i.e., tree-structured) neural networks appears to have used single instance training. ![Two computation graphs for computing the loss on a minibatch of three training instances consisting of a sequence of input vectors paired with a fixed sized output vector. On the left is a “conceptual” computation graph which shows the operations associated with computing the losses individually for each sequence and then aggregating them. The same computation is executed by the right-hand (“batched”) computation graph: it aggregates the inputs in order to make better use of modern processors. This comes with a price in complexity—the variable length of the sequences requires padding and masking operations. Our aim is for the user to specify the conceptual computation on the left, and let the framework take care of its efficient execution.\[fig:twographs\]](unbatched-vs-batched.pdf){width="\textwidth"} The premise of this work is that operation batching should not be the responsibility of the user, but instead should be a service provided by the framework. 
The user should only be responsible for specifying a large enough computation so that batching is possible (i.e., summing the losses of several instances, such as one sees in the left side of Figure \[fig:twographs\]), and the framework should take care of the lower-level details of operation batching, much like optimizing compilers or JIT optimizers in interpreted languages do. We take a large step towards this goal by introducing an efficient algorithm—and a corresponding implementation—for automatic batching in dynamically declared computation graphs.[^3] Our method relies on separating the graph construction from its execution, using operator overloading and lazy evaluation (§\[sec:batching\]). Once this separation is in place, we propose a fast batching heuristic that can be performed in real time, for each training instance (or minibatch), between the graph construction and its execution (§\[sec:algorithm\]). We extend the DyNet toolkit [@dynet] with this capability. From the end-user’s perspective, the result is a simple mechanism for exploiting efficient data-parallel algorithms in networks that would be cumbersome to batch by hand. The user simply defines the computation independently for each instance in the batch (using standard Python or C++ language constructs), and the framework takes care of the rest. Experiments show that our algorithm compares favorably to manually batched code, that significant speed improvements are possible on architectures with no straightforward manual batching design, and that we obtain better performance than TensorFlow Fold [@looks2017dynamic], an alternative framework built to simulate dynamic graph definition and automatic batching on top of TensorFlow (§\[sec:experiments\]). Batching: Conception vs. 
Efficient Implementation {#sec:batching} ================================================= To illustrate the challenges with batching, consider the problem of predicting a real-valued vector conditional on a sequence of input vectors (this example is chosen for its simplicity; experiments are conducted on more standard tasks). We assume that an input sequence of vectors is read sequentially by an RNN, and then the final state is used to make a prediction; the training loss is the Euclidean distance between the prediction and target. We compare two algorithms for computing this code: a naïve, but developer-friendly one (whose computation graph is shown in the left part of Figure \[fig:twographs\]), which reflects how one conceives of what a batch loss computation is; and a computationally efficient—but more conceptually complex—version that batches up the computations so they are executed in parallel across the sequences (the right part of Figure \[fig:twographs\]). #### Naïve (developer-friendly) batched implementation {#sec:naive} The left part of Figure \[fig:twographs\] shows the computations that must be executed to compute the losses associated with three ($b=3$) training instances, implemented naïvely. Pseudo-code for constructing the graph for each of the RNNs on the left using a dynamic declaration framework is as follows: $\mathbf{h}_0 = \mathbf{0}$ Initial state of the RNN; $\mathbf{h}_t \in \mathbb{R}^{d}$. $\mathbf{h}_t = \tanh (\mathbf{W}[\mathbf{h}_{t-1};\mathbf{x}_t] + \mathbf{b})$ $\hat{\mathbf{y}} = \mathbf{Uh}_n + \mathbf{c}$ $\mathcal{L} = ||\hat{\mathbf{y}} - \mathbf{y}||_2^2$ $\mathcal{L}$ Note that the code does not compute any value, but constructs a symbolic graph describing the computation. This can then be integrated into a batched training procedure: $\textsc{New-Graph}()$ Naïvely loop over elements of batch. 
$\mathcal{L}^{(i)} = \textsc{RNN-Regression-Loss}(\mathbf{x}^{(i)}_{1:n^{(i)}},\mathbf{y}^{(i)}; \boldsymbol{\theta}$) Single instance loss. $\mathcal{L} = \sum_i \mathcal{L}^{(i)}$ Aggregate losses for all elements in batch. <span style="font-variant:small-caps;">Forward</span>$(\mathcal{L})$ $\frac{\partial \mathcal{L}}{\partial \boldsymbol{\theta}} = \textsc{Backward}(\mathcal{L})$ $\boldsymbol{\theta} = \boldsymbol{\theta} - \eta \frac{\partial \mathcal{L}}{\partial \boldsymbol{\theta}}$ This code is simple to understand, uses basic flow control present in any programming language and simple mathematical operations. Unfortunately, executing it will generally be quite inefficient, since in the resulting computation graph each operation is performed sequentially without exploiting the fact that similar operations are being performed across the training instances. #### Efficient manually batched implementation {#sec:manual} To make good use of efficient data-parallel algorithms and hardware, it is necessary to batch up the operations so that the sequences are processed in parallel. The standard way to achieve this is by aggregating the inputs and outputs, altering the code as follows: $\mathbf{M} = \mathbf{0}$ Build loss mask; $\mathbf{M} \in \mathbb{R}^{b\times n_{\max}}$. $\mathbf{M}_{[i,n^{(i)}]} = 1$ Position where the final symbol in sequence $i$ occurs. $\mathbf{H}_0 = \mathbf{0}$ Initial states of the RNN (one per instance); $\mathbf{H}_t \in \mathbb{R}^{d \times b}$. $\mathbf{H}_t = \tanh (\mathbf{W}[\mathbf{H}_{t-1};\mathbf{X}_t] + \mathbf{b})$ Addition broadcasts $\mathbf{b}$ over columns. $\hat{\mathbf{Y}}_t = \mathbf{UH}_t + \mathbf{c}$ Addition broadcasts $\mathbf{c}$ over columns. $\mathcal{L}_t = ||(\hat{\mathbf{Y}}_t - \mathbf{Y})(\mathbf{m}_t\mathbf{1}^{\top})||_{\mathcal{F}}^2$ Compute masked losses ($\mathbf{m}_t$ is the $t$th column of $\mathbf{M}$). 
$\mathcal{L} = \sum_t \mathcal{L}_t$ $\mathcal{L}$ $n_{\max} = \max_i n^{(i)}$ Build sequence of batch input matrices. $\mathbf{X}_t = \mathbf{0} \in \mathbb{R}^{d \times b}$ $\mathbf{X}_{t,[\cdot,i]} = \mathbf{x}^{(i)}_t \ \textbf{if}\ t \le n^{(i)}\ \textbf{otherwise} \ \mathbf{0}$ The $i$th column of $\mathbf{X}_t$. $\mathbf{Y} = [\mathbf{y}^{(1)} \ \mathbf{y}^{(2)} \ \cdots \ \mathbf{y}^{(b)}]$ Build batch of output targets. $\textsc{New-Graph}()$ Now that inputs are constructed, create graph, evaluate loss and gradient. $\mathcal{L} = \textsc{RNN-Regression-Batch-Loss}(\mathbf{X}_{1:n_{\max}},\mathbf{Y},n^{(1:b)}; \boldsymbol{\theta}$) $\textsc{Forward}(\mathcal{L})$ $\frac{\partial \mathcal{L}}{\partial \boldsymbol{\theta}} = \textsc{Backward}(\mathcal{L})$ $\boldsymbol{\theta} = \boldsymbol{\theta} - \eta \frac{\partial \mathcal{L}}{\partial \boldsymbol{\theta}}$ This code computes the same value as the naïve implementation, it does so more efficiently, and it is significantly more complicated. Because the sequences processed by RNNs will generally be of different lengths (which is precisely why RNNs are useful!), it is necessary to pad the input representation with dummy values, and also to mask out the resulting losses at the right times. While these techniques are part of the inventory of skills that a good ML engineer has, they increase the difficulty of implementation and probability that bugs will be present in the code. #### Implementation comparison The naïve algorithm has two advantages over manual batching. First, it is easy to implement: the way we conceive of a model is the way it is implemented, and errors with padding, masking, and batching are avoided. Second, the naïve algorithm aggregates *any* single instance loss, whereas manual batching efforts are generally problem specific. For these reasons, one should strongly prefer the first algorithm; however, for efficiency reasons, batching matters. 
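The padding-and-masking recipe above can be sketched in NumPy (a simplified illustration with our own names and dimensions, not the actual benchmark code); the masked batched loss agrees exactly with the sum of per-instance losses:

```python
import numpy as np

def batched_rnn_regression_loss(seqs, ys, W, b, U, c):
    """Manually batched loss: pad inputs to n_max, run one RNN update
    per time step over the whole batch, and use a mask so each
    instance's loss is counted only at its own final time step."""
    b_sz, d = len(seqs), W.shape[0]
    lengths = [len(s) for s in seqs]
    n_max, xdim = max(lengths), seqs[0][0].shape[0]
    X = np.zeros((n_max, xdim, b_sz))        # column i of X_t: x_t of instance i
    for i, s in enumerate(seqs):
        for t, x in enumerate(s):
            X[t, :, i] = x                   # zero padding past sequence end
    Y = np.stack(ys, axis=1)                 # one target per column
    M = np.zeros((b_sz, n_max))              # loss mask
    for i, n in enumerate(lengths):
        M[i, n - 1] = 1.0                    # final step of instance i
    H = np.zeros((d, b_sz))                  # H_0 = 0 for every instance
    loss = 0.0
    for t in range(n_max):
        H = np.tanh(W @ np.vstack([H, X[t]]) + b[:, None])
        Y_hat = U @ H + c[:, None]
        loss += np.sum(((Y_hat - Y) * M[:, t]) ** 2)   # masked squared error
    return loss
```

Updates past an instance's end operate on zero padding, but the mask zeroes their contribution to the loss, so the result matches the naïve computation.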
In the next section we turn to the problem of how to efficiently execute naïve computation graphs so that they can take advantage of efficient batched implementations of operations. This provides the best of both worlds to developers: code is easy to write, but execution is fast. An Algorithm for On-the-fly Batching {#sec:algorithm} ==================================== Manual batching, discussed in the previous section, mostly operates by *aggregating input instances* and feeding them through a network. In RNNs, this means aggregating inputs that share a time step. This often requires padding and masking, as input sizes may differ. It also restricts the kinds of operations that can be batched. In contrast, our method *identifies and aggregates computation graph nodes* that can be executed in a batched fashion for a given graph. This reduces the need for workarounds such as padding and masking, allows for seamless, efficient execution even in architectures that are hard to conceptualize in the input-centric paradigm, and allows for the identification of batching opportunities that may not be apparent from an input-centric view. Our batching procedure operates in three steps: (1) graph definition, (2) operation batching, and (3) computation. Here, steps (1) and (3) are shared with standard execution of computation graphs, while (2) corresponds to our proposed method. Graph Definition ---------------- First, we define the graph that represents the computation that we want to perform. From the user’s perspective, this is done simply by performing the computation they are interested in, such as that defined in the <span style="font-variant:small-caps;">Rnn-Regression-Loss</span> function from the previous example.
While it is common for dynamic graph frameworks to interleave the graph definition and its forward execution, we separate these parts by using *lazy evaluation*: we only perform forward evaluation when a resulting value is requested by the user through the calling of the <span style="font-variant:small-caps;">Forward</span> function. The graph can be further extended after a call to <span style="font-variant:small-caps;">Forward</span>, and further calls will lazily evaluate the delta of the computation. This allows the accumulation of large graph chunks before executing forward computations, providing ample opportunities for operation batching. Operation Batching ------------------ Next, given a computation graph, such as the one on the left side of Figure \[fig:twographs\], our proposed algorithm converts it into a graph where operations that can be executed together are batched together. This is done in the two step process described below. #### Computing compatibility groups We first partition the nodes into compatibility groups, where nodes in the same group have the potential for batching. This is done by associating each node with a signature such that nodes that share the same signature are guaranteed to be able to be executed in a single operation if their inputs are ready. Signatures vary depending on the operation the node represents. For example, in nodes representing element-wise operations, all nodes with the same operation can be batched together, so the signature is simply the operation name (`tanh`, `log`, ...). In nodes where dimensions or other information is also relevant to whether the operations can be batched, this information is also included in the signature. For example, a node that picks a slice of the input matrix will also be dependent on the matrix size and range to slice, so the signature will look something like `slice-400x500-100:200`. In some other cases (e.g. 
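As an illustration of the signature scheme (the field names and exact signature strings below are our own simplification, not DyNet's internal representation):

```python
def node_signature(node):
    """Map a graph node to a batching signature: nodes with equal
    signatures can be executed together once their inputs are ready."""
    op = node["op"]
    if op in ("tanh", "log", "exp"):            # element-wise: op name only
        return op
    if op == "slice":                           # dimension-sensitive
        rows, cols = node["input_dim"]
        lo, hi = node["range"]
        return f"slice-{rows}x{cols}-{lo}:{hi}"
    if op == "matmul":                          # share the parameter node id,
        rows, cols = node["rhs_dim"]            # generalize over the rhs
        return f"matmul-node{node['param_id']}-{rows}x{cols}"
    return f"unbatchable-{node['id']}"          # unique signature: never batched

# Two matrix multiplies sharing parameter node 123 fall into the same
# compatibility group; the tanh node forms its own group.
nodes = [
    {"id": 0, "op": "matmul", "param_id": 123, "rhs_dim": (400, 1)},
    {"id": 1, "op": "matmul", "param_id": 123, "rhs_dim": (400, 1)},
    {"id": 2, "op": "tanh"},
]
groups = {}
for n in nodes:
    groups.setdefault(node_signature(n), []).append(n["id"])
```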
a parameterized matrix multiply) we may remember the specific node ID of one of the inputs (e.g. `node123` representing the matrix multiply parameters) while generalizing across other inputs (e.g. data or hidden state vectors on the right-hand side), resulting in a signature that would look something like `matmul-node123-400x1`. A more thorough discussion is given in Appendix \[sec:nodesignatures\]. #### Determining execution order A computation graph is essentially a job dependency graph where each node depends on its inputs (and by proxy the inputs of other preceding nodes on the path to its inputs). Our goal is to select an execution order in which (1) each node is executed after its dependencies; and (2) nodes that have the same signature and do not depend on each other are scheduled for execution on the same step (and will be executed in a single batched operation). Finding an optimal execution order that maximizes the amount of batching in the general case is NP hard [@potts:2000]. We discuss two heuristic strategies for identifying execution orders that satisfy these requirements. *Depth-based batching* is used as a method for automatic batching in TensorFlow Fold [@looks2017dynamic]. This is done by calculating the depth of each node in the original computation graph, defined as the maximum length from a leaf node to the node itself, and batching together nodes that have an identical depth and signature. By construction, nodes of the same depth are not dependent on each other, as all nodes will have a higher depth than their inputs, and thus this batching strategy is guaranteed to satisfy condition (1) above. However, this strategy will also miss some good batching opportunities. For example, the loss function calculations in Figure \[fig:twographs\] are of different depths due to the sequences' different lengths, and similar problems will occur in recurrent neural network language models, tree-structured neural networks, and a myriad of other situations.
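A minimal sketch of depth-based batching, using our own toy graph representation (a dict mapping each node to its list of inputs, plus a signature per node). The example mirrors the failure mode just described: loss nodes at the ends of chains of different lengths land at different depths, so they are never batched together:

```python
from collections import defaultdict

def depth_batches(inputs, sig):
    """Group nodes by (depth, signature); depth = 0 for leaves, else
    1 + max depth over inputs. Batches run in order of increasing
    depth, so every node executes after its dependencies."""
    depth = {}
    def d(n):
        if n not in depth:
            depth[n] = 0 if not inputs[n] else 1 + max(d(i) for i in inputs[n])
        return depth[n]
    batches = defaultdict(list)
    for n in inputs:
        batches[(d(n), sig[n])].append(n)
    return [batches[k] for k in sorted(batches)]

# Two chains of different lengths, each ending in a loss node.
inputs = {"a1": [], "a2": ["a1"], "loss_a": ["a2"],
          "b1": [], "loss_b": ["b1"]}
sig = {"a1": "tanh", "a2": "tanh", "loss_a": "loss",
       "b1": "tanh", "loss_b": "loss"}
order = depth_batches(inputs, sig)
```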
*Agenda-based batching* is a method we propose that does not depend solely on depth. The core of this method is an agenda that tracks “available” nodes that have no unresolved dependencies. For each node, a count of its unresolved dependencies is maintained; this is initialized to be the number of inputs to the node. The agenda is initialized by adding nodes that have no incoming inputs (and thus no unresolved dependencies). At each iteration, we select a node from the agenda together with all of the available nodes in the same signature, and group them into a single batch operation. These nodes are then removed from the agenda, and the dependency counter of all of their successors are decremented. Any new zero-dependency nodes are added to the agenda. This process is repeated until all nodes have been processed. How do we prioritize between multiple available nodes in the agenda? Intuitively, we want to avoid prematurely executing nodes if there is a potential for more nodes of the same signature to be added to the agenda at a later point, resulting in better batching. A good example of this from our running example in Figure \[fig:twographs\] is the loss-calculating nodes, which will be added to the agenda at different points due to becoming calculable after different numbers of RNN time steps. To capture this intuition, we introduce a heuristic method for prioritizing nodes based on the *average depth* of all nodes with their signature, such that nodes with a lower average depth will be executed earlier. In general (with some exceptions), this tends to prioritize nodes that occur in earlier parts of the graph, which will result in the nodes in the later parts of the graph, such as these loss calculations, being executed later and hopefully batched together.[^4] Finally, this non-trivial batching procedure must be executed quickly so that overhead due to batch scheduling calculations doesn’t cancel out the efficiency gains from operation batching. 
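The agenda-based procedure can be sketched as follows (a simplified version, using a toy graph representation in which a dict maps each node to its inputs, and omitting the cheap-vs-heavy tie-breaking; all names are our own). On a two-chain example that defeats depth-based grouping, the average-depth priority delays the first available loss node until both losses can run as one batch:

```python
from collections import defaultdict

def agenda_batches(inputs, sig):
    """Agenda-based batching: keep zero-dependency nodes on an agenda;
    repeatedly pick the signature with the lowest average depth and run
    all currently available nodes with that signature as one batch."""
    succ = defaultdict(list)
    ndeps = {n: len(ins) for n, ins in inputs.items()}
    for n, ins in inputs.items():
        for i in ins:
            succ[i].append(n)
    depth = {}
    def d(n):
        if n not in depth:
            depth[n] = 0 if not inputs[n] else 1 + max(d(i) for i in inputs[n])
        return depth[n]
    by_sig = defaultdict(list)
    for n in inputs:
        by_sig[sig[n]].append(d(n))
    avg_depth = {s: sum(v) / len(v) for s, v in by_sig.items()}
    agenda = [n for n, k in ndeps.items() if k == 0]
    order = []
    while agenda:
        s = min({sig[n] for n in agenda}, key=avg_depth.get)
        batch = [n for n in agenda if sig[n] == s]
        order.append(batch)
        agenda = [n for n in agenda if sig[n] != s]
        for n in batch:                  # resolve dependencies of successors
            for m in succ[n]:
                ndeps[m] -= 1
                if ndeps[m] == 0:
                    agenda.append(m)
    return order

inputs = {"a1": [], "a2": ["a1"], "loss_a": ["a2"],
          "b1": [], "loss_b": ["b1"]}
sig = {"a1": "tanh", "a2": "tanh", "loss_a": "loss",
       "b1": "tanh", "loss_b": "loss"}
order = agenda_batches(inputs, sig)
```

Because the tanh signature has a lower average depth than the loss signature, `loss_b` waits on the agenda while `a2` executes, and the two loss nodes are then batched together.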
To ensure this, we perform a number of optimizations in the implementation, which we detail in Appendix \[sec:fastcalculation\]. Forward-backward Graph Execution and Update ------------------------------------------- Once we have determined an execution order (including batching decisions), we perform calculations of the values themselves. In standard computation graphs, forward computation is done in topological order to calculate the function itself, and backward calculation is done in reverse topological order to calculate gradients. In our automatically batched evaluation, the calculation is largely similar with two exceptions: #### Single$\rightarrow$batch node conversion First, it is necessary to convert single nodes into a batched node, which also requires modification of the underlying operations such as converting multiple matrix-vector operations $\mathbf{W}\mathbf{h}_i$ to a single matrix-matrix operation $\mathbf{W}\mathbf{H}$. This is done internally in the library, while the user-facing API maintains the original unbatched computation graph structure, making this process invisible to the user. #### Ensuring contiguous memory To ensure that operations can be executed as a batch, the inputs to the operations (e.g. the various vectors $\mathbf{h}^{(i)}_t$) must be arranged in contiguous memory (e.g. a matrix $\mathbf{H}_t$). In some cases, it is necessary to perform a memory copy to arrange these inputs into contiguous memory, but in other cases the inputs are already contiguous and in the correct order, and in these cases we can omit the memory copy and use the inputs as-is. Experiments {#sec:experiments} =========== In this section we describe our experiments, designed to answer three main questions: (1) in situations where manual batching is easy, how close can the proposed method approach the efficiency of a program that uses hand-crafted manual batching, and how do the depth-based and agenda-based approaches compare (§\[sec:experiments:synthetic\])? 
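The single-to-batch conversion and the memory-layout concern can be illustrated with NumPy (an illustrative sketch, not DyNet's internal implementation):

```python
import numpy as np

# k matrix-vector products sharing W become one matrix-matrix product,
# once the right-hand-side vectors are gathered into contiguous memory.
rng = np.random.default_rng(0)
W = rng.standard_normal((5, 4))
hs = [rng.standard_normal(4) for _ in range(3)]   # scattered h_i vectors

unbatched = [W @ h for h in hs]                   # three separate GEMV calls

H = np.stack(hs, axis=1)                          # copy into one (4, 3) block
assert H.flags["C_CONTIGUOUS"] or H.flags["F_CONTIGUOUS"]
batched = W @ H                                   # a single GEMM call
```

When the inputs already sit contiguously and in the right order (for example, outputs of an earlier batched operation), the copy step can be skipped and the memory used as-is.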
(2) in situations where manual batching is less easy, is the proposed method capable of obtaining significant improvements in efficiency (§\[sec:experiments:real\])? (3) how does the proposed method compare to TensorFlow Fold, an existing method for batching variably structured networks within a static declaration framework (§\[sec:experiments:tensorflow\])? Synthetic Experiments {#sec:experiments:synthetic} --------------------- Our first experiments stress-test our proposed algorithm in an ideal case for manual batching. Specifically, we train a model on a bi-directional LSTM sequence labeler [@huang2015bidirectional; @plank16tagging], on synthetic data where every sequence to be labeled is the same length (40). Because of this, manual batching is easy because we don’t have to do any padding or adjustment for sentences of different lengths. The network takes as input a size 200 embedding vector from a vocabulary of size 1000, has 2 layers of 256 hidden node LSTMs in either direction, then predicts a label from one of 300 classes. The batch size is 64.[^5] ![Computation time for forward/backward graph construction or computation, as well as parameter update for a BiLSTM tagger without or with manual batching, and without, with depth-based, or with agenda-based automatic batching.\[fig:synthetic\]](synthetic.pdf){width="\textwidth"} Within this setting we test various batching settings: Without or with manual mini-batching where we explicitly batch the word vector lookup, LSTM update, and loss calculation for each time step. Without on-the-fly batching (<span style="font-variant:small-caps;">NoAuto</span>), with depth-based autobatching (<span style="font-variant:small-caps;">ByDepth</span>), or with agenda-based autobatching (<span style="font-variant:small-caps;">ByAgenda</span>). 
We measure the speed of each method in milliseconds per sentence, and also break down the percentage of computation time spent in (1) forward graph creation/on-the-fly batching, (2) forward computation, (3) backward graph creation, (4) backward computation, and (5) parameter update. The results can be found in Figure \[fig:synthetic\]. First, comparing the first row with the second two, we can see that the proposed on-the-fly batching strategy drastically reduces computation time per sentence, with <span style="font-variant:small-caps;">ByAgenda</span> reducing per-sentence computation time from 193ms to 16.9ms on CPU and 54.6ms to 5.03ms on GPU, resulting in an approximately 11-fold increase in sentences processed per second (5.17$\rightarrow$59.3 on CPU and 18.3$\rightarrow$198 on GPU). <span style="font-variant:small-caps;">ByAgenda</span> is faster than <span style="font-variant:small-caps;">ByDepth</span> by about 15–30%, demonstrating that our more sophisticated agenda-based strategy is indeed more effective at batching together operations. Next, compared to manual batching without automatic batching (the fourth row), we can see that fully automatic batching with no manual batching is competitive, but slightly slower. The speed decrease is attributable to the increased overhead for computation graph construction and batch scheduling. However, even in this extremely idealized scenario where manual batching will be most competitive, the difference is relatively small (1.27$\times$ on CPU and 1.76$\times$ on GPU) compared to the extreme difference between this and the case of using no batching at all. Given that automatic batching has other major advantages, such as ease of implementation, it may be an attractive alternative even in situations where manual batching is relatively easy.
Finally, if we compare the fourth and fifth/sixth rows, we can see that on GPU, even with manual batching, automatic batching still provides gains in computational efficiency, processing sentences up to 1.1 times faster than without automatic batching. The reason for this can be attributed to the fact that our BiLSTM implementation performs manual batching across sentences, but not across time steps within a sentence. In contrast, the auto-batching procedure was able to batch the word embedding lookup and softmax operations across time steps as well, reducing the number of GPU calls and increasing speed. This was not the case on CPU, as there is less to be gained from batching these less expensive operations. Experiments on Difficult-to-batch Tasks {#sec:experiments:real} --------------------------------------- Next, we extend our experiments to cases that are increasingly more difficult to batch manually. We use realistic dimension sizes for the corresponding tasks, and batches of size $b=64$. Exact dimensions and further details on training settings are in Appendix \[sec:trainingsettings\]. BiLSTM: : This is similar to the ideal case in the previous section, but trained on actual variable-length sequences. BiLSTM w/char: : This is the same as the BiLSTM tagger above, except that we use an additional BiLSTM over characters to calculate the embeddings of rare words. These sorts of character-based embeddings have been shown to allow the model to generalize better [@ling2015functioninform], but they also make batching operations more difficult, as we now have a variable-length encoding step that may or may not occur for each of the words in the input. Tree-structured LSTMs: : This is the Tree-LSTM model of [@tai15treelstm]. Here, each instance is a tree rather than a sequence, and the network structure follows the tree structure. As discussed in the introduction, this architecture is notoriously hard to batch manually.
Transition-based Dependency Parsing: : The most challenging case we evaluate is that of a transition-based system, such as a transition-based parser with LSTM-based feature extraction [@dyer2015stacklstm; @dyer2016rnng; @kiperwasser2016eftreelstm] and exploration-based training [@ballesteros16exploration; @goldberg13dynamic; @bengio15scheduled]. Here, a sequence is encoded using an LSTM (or a bi-LSTM), followed by a series of predictions. Each prediction is based on a subset of the encoded vectors, and the vectors that participate in each prediction, as well as the loss, are determined by the outcomes of the previous predictions. Here, batching is harder yet, as the nature of the computation interleaves sampling from the model and training, and requires calling <span style="font-variant:small-caps;">Forward</span> at each step, leaving the automatic batcher very little room to play with. However, with only a small change to the computation, we can run $b$ different parsers “in parallel”, and potentially share the computation across the different systems in a given time step. Concretely, we use a modified version of the <span style="font-variant:small-caps;">Bist</span> parser [@kiperwasser2016bilstmparser].

  Task                  CPU NoAuto   CPU ByDepth   CPU ByAgenda   GPU NoAuto   GPU ByDepth   GPU ByAgenda
  -------------------- ------------ ------------- -------------- ------------ ------------- --------------
  BiLSTM                   16.8          139          **156**        56.2          337          **367**
  BiLSTM w/ char           15.7          93.8         **132**        43.2          183          **275**
  TreeLSTM                 50.2          348          **357**        76.5        **672**           661
  Transition-Parsing       16.8          61.0         **61.2**       33.0          89.5         **90.1**

  : Sentences/second on various training tasks for increasingly challenging batching scenarios.[]{data-label="tab:real"}

From the results in Table \[tab:real\], we can see that in all cases automatic batching gives healthy improvements in computation time: 3.6–9.2$\times$ on CPU, and 2.7–8.6$\times$ on GPU. Furthermore, the agenda-based heuristic is generally more effective than the depth-based one.
[r]{}[0.5]{} Comparison to TensorFlow Fold {#sec:experiments:tensorflow} ----------------------------- We compare the TensorFlow Fold reference implementation of the Stanford Sentiment Treebank regression task [@socher:2013], using the same TreeLSTM architecture [@tai15treelstm]. Figure \[fig:tffvsdy\] shows how many trees are processed per second by TensorFlow Fold (excluding both evaluation of the dev set and static graph construction/optimization) on GPU and CPU, relative to the performance of the <span style="font-variant:small-caps;">ByAgenda</span> algorithm in DyNet (including graph construction time). The DyNet performance is better across the board when comparing on the same hardware type. Furthermore, DyNet has greater throughput on CPU than TensorFlow Fold on GPU until batch sizes exceed 64. Additionally, we find that with single-instance training, DyNet’s sequential evaluation processes 46.7 trees/second on CPU, whereas autobatching processes 93.6 trees/second. This demonstrates that in complex architectures like TreeLSTMs, there are opportunities to batch up operations inside a single training instance, which are exploited by our batching algorithm. In addition, it should be noted that the DyNet implementation has the advantage that it is much more straightforward, relying on simple Python data structures and flow control to represent and traverse the trees, while the Fold implementation requires implementing the traversal and composition logic in a domain-specific functional programming language (described in Section 3 of Looks et al. [@looks2017dynamic]). Related Work ============ Optimization of static algorithms is widely studied, and plays an important role in numerical libraries used in machine learning. Our work is rather different, since the code/workload (as represented by the computation graph) is dynamically specified and must be executed rapidly, which precludes sophisticated static analysis. However, we review some of the important related work here.
Automatic graph optimization and selection of kernels for static computation graphs is used in a variety of toolkits, including TensorFlow [@abadi2016tensorflow] and Theano [@bergstra2010theano]. Dynamic creation of optimally sized minibatches that make good use of hardware resources (similar to our strategy, except that the computation graph is assumed to be static) has also been proposed for optimizing convolutional architectures [@hadjis:2015]. The static nature of the computation makes these tools closer to optimizing compilers than to the efficient interpreters that are required to cope with the dynamic workloads encountered when dealing with dynamically structured computations. Related to this is the general technique of automatic vectorization, which is a mainstay of optimizing compilers. Recent work has begun to explore vectorization in the context of interpreted code which cannot be compiled ahead of time [@rohou:2013]. Our autobatching variant of DyNet similarly provides vectorized primitives that can be selected dynamically. Further afield, the problem of scheduling with batching decisions has been widely studied in operations research since at least the 1950s (for a recent survey, see [@potts:2000]). Although the OR work deals with similar problems (e.g., scheduling work on machines that can process a ‘family’ of related items with minimal marginal cost over a single item), the standard algorithms from this field (which are often based on polynomial-time dynamic programs or approximations to NP-hard search problems) are too computationally demanding to execute in the inner loop of a learning algorithm. Conclusion {#sec:conclusion} ========== Deep learning research relies on empirical exploration of architectures. The rapid pace of innovation we have seen in the last several years has been enabled largely by tools that have automated the error-prone aspects of engineering, such as writing code that computes gradients.
However, our contention is that operation batching is increasingly becoming another aspect of model coding that is error prone and amenable to automation. Our solution is a framework that lets programmers express computations naturally and relies on a smart yet lightweight interpreter to figure out how to execute the operations efficiently. Our hope is that this will facilitate the creation of new classes of models that better cope with the complexities of real-world data. Details on Node Signatures {#sec:nodesignatures} ========================== Each node has a “signature,” such that nodes with identical signatures can be batched together. With few exceptions, nodes can only be batched together if they perform the same operation, so the identity of the operation the node performs will be a necessary part of the signature. In addition, there may be additional constraints on what nodes can be batched together based on the nature of the operation to be performed. We demonstrate the signatures for a few illustrative classes of operations below: Component-wise operations : such as “tanh” or “log” will perform exactly the same work regardless of the shape of the tensors involved. For these simple operations, the signature is simply the identity of the operation (e.g. `tanh` or `log`) with no additional constraints. This is also true for component-wise operations that take multiple arguments such as sums or component-wise multiplications, as long as they do not involve broadcasting, which will be discussed in the following items. Dimension-sensitive operations : require additional restrictions. For example, matrix multiplies can generally only be performed on inputs where the dimensions match, so if we have several $\mathbf{W}_i\mathbf{h}_i$ operations we will only be able to batch them together if $\mathbf{W}_i$ and $\mathbf{h}_i$ are the same dimension across all elements $i$. In these cases, we explicitly specify the necessary dimensions in the signature (e.g. 
`mult-256\times256-256` if $\mathbf{W}_i$ was a 256$\times$256 matrix and $\mathbf{h}_i$ was a length-256 vector), preventing inputs with incompatible dimensions from being processed together. Operations with shared elements : such as a matrix–vector multiply $\mathbf{W}\mathbf{h}_i$ where the same matrix is applied to all the vectors, are both common and the source of most potential gains from operation batching. The reason why these operations are important is that we can perform explicit optimizations such as concatenating all of the $\mathbf{h}_i$ vectors into a matrix $\mathbf{H}$ and performing a single matrix–matrix multiplication $\mathbf{W}\mathbf{H}$. To take advantage of this, if $\mathbf{W}$ is represented as node $n_{\mathbf{W}}$, we can define a signature `mult-n_{\mathbf{W}}-256`, where operations that share their left side but may have different right sides are grouped together. Matrix multiplication can have either a shared or un-shared signature. Unbatchable operations : are operations that either cannot be batched together trivially, or would not benefit significantly from batching. One thing that should be noted is that for some nodes, like the matrix multiplies $\mathbf{A}\mathbf{x}$ in the example or affine transforms $\mathbf{A}\mathbf{x} + \mathbf{y}$, it is not clear which signature to use. If some of the elements are shared parameters, it would be preferable to use a signature that shares these parameters to take advantage of efficient implementations such as the one mentioned above. However, if all of the elements of the multiply or affine transform are unique, then it is better to use the simpler dimension-sensitive operations.
In our implementation, we use a simple heuristic: because multiplies and affine transforms in neural networks tend to have the parameters in the positions of $\mathbf{A}$ and $\mathbf{y}$, and the elements in the $\mathbf{x}$ position tend to be input-dependent, we use signatures that share the elements in the $\mathbf{A}$ and $\mathbf{y}$ positions but do not share the elements in the $\mathbf{x}$ position. Optimizations for Fast Graph Calculation {#sec:fastcalculation} ======================================== In order to ensure that the increased complexity of automatic batching does not introduce unacceptable overhead in our calculation, we took care to efficiently implement the different parts of the algorithm using sophisticated but fairly standard optimization techniques. These include:

- Minimizing the number of memory allocations and preferring stack allocation of fixed-size memory to heap allocation of variable-sized memory.

- Implementing specialized linked-list-style data structures in contiguous memory to avoid expensive-to-construct vectors of vectors.

- Computing node signatures as integer hash values instead of strings.

- Implementing optimized GPU kernels to perform sparse-to-dense and dense-to-sparse memory copies for use when copying operation results to/from contiguous memory for use in batched nodes.

Details of all of these optimizations can be found in the open source implementation of DyNet.[^6] Experimental Settings {#sec:trainingsettings} ===================== The first three experiments are based on implementations in the DyNet benchmark repository.[^7] BiLSTM (`bilstm-tagger-bulk`): : As our tagging data, we use data from a named entity recognition task: models were trained and tested on the WikiNER English Corpus [@nothman2012wikiner], and all words with frequency less than five were treated as unknowns. The network was single-layer, with word embeddings and LSTMs in either direction containing 256 nodes each.
BiLSTM w/char (`bilstm-tagger-withchar-bulk`): : The settings for the BiLSTM tagger with character embeddings are the same as above, but with the addition of character-based LSTMs for unknown words. The character embeddings are of size 64, and the character LSTMs are of size 128 in both directions. Tree-structured LSTMs (`treenn-bulk`): : Tree LSTMs are trained on the Stanford Sentiment Treebank regression task [@socher:2013]. These similarly use word embedding and node sizes of 256. The models are trained to predict the labels at each node in the tree. For our final experiment, we modified a version of the publicly available transition-based version (`barchybrid`) of the <span style="font-variant:small-caps;">BistParser</span>[^8] [@kiperwasser2016bilstmparser]. Our modified code is available in the DyNet benchmark repository. Transition-based Dependency Parsing: : The parser was modified to perform aggregate batching by running several parsers in parallel and aggregating decisions in a given time step across the different parsers. In contrast to the other benchmarks in this paper, which are implemented in C++, this is a Python-based implementation. We measure the training time of one iteration over the training set of the publicly available English Universal Dependencies Treebank,[^9] containing 12K sentences. We use the default settings of the parser (100 dim word embeddings, 25 dim POS embeddings, 25 dim relation embeddings, 200 dim LSTM layers, and a 100 dim hidden layer in the prediction MLP), as well as the flags `--userlmost --userl --bibi-lstm`.
[^3]: Computation graphs (often represented in a form called a Wengert list) are the data structures used to structure the evaluation of expressions and use reverse mode automatic differentiation to compute their derivatives [@mbb:2000]. Broadly, learning frameworks use two strategies to construct these: static and dynamic. In static toolkits (e.g., Theano [@bergstra2010theano], Tensorflow  [@abadi2016tensorflow]) the computation graph is defined once and compiled, and then examples are fed into the same graph. In contrast, dynamic toolkits (e.g., DyNet [@dynet], Chainer [@tokui2015chainer], PyTorch \[<http://pytorch.org>\]) construct the computation graph for each training instance (or minibatch) as the forward computation is executed. While dynamic declaration means that each minibatch can have its own computational architecture, the user is still responsible for batching operations themselves. [^4]: Even given this prioritization method it is still possible to have ties, in which case we break ties by calculating “cheap” operations (e.g. $\tanh$ and other elementwise ops) before “heavy” ones (e.g. matrix multiplies). [^5]: Experiments were run on a single Tesla K80 GPU or Intel Xeon 2.30GHz E5-2686v4 CPU. To control for variance in execution time, we perform three runs and report the fastest. We do not report accuracy numbers, as the functions calculated and thus accuracies are the same regardless of batching strategy. [^6]: <http://github.com/clab/dynet> [^7]: <https://github.com/neulab/dynet-benchmark> [^8]: <http://www.github.com/elikip/bist-parser/> [^9]: <https://github.com/UniversalDependencies/UD_English>
--- abstract: 'The recently studied material FeCrAs exhibits a surprising combination of experimental signatures, with a metallic, Fermi-liquid-like specific heat but a resistivity showing strong non-metallic character. The $\rm Cr$ sublattice possesses local magnetic moments, in the form of stacked (distorted) Kagome lattices. Despite the high degree of magnetic frustration, anti-ferromagnetic order develops below $T_N \sim 125K$, suggesting the non-magnetic $\rm Fe$ sublattice may play a role in stabilizing the ordering. From the material properties we propose a microscopic Hamiltonian for the low energy degrees of freedom, including the non-magnetic $\rm Fe$ sublattice, and study its properties using slave-rotor mean field theory. Using this approach we find a spin liquid phase on the $\rm Fe$ sublattice, which survives even in the presence of the magnetic $\rm Cr$ sublattice. Finally, we suggest that the features of FeCrAs can be qualitatively explained by critical fluctuations in the non-magnetic ${\rm Fe}$ sublattice due to proximity to a metal-insulator transition.' author: - 'Jeffrey G. Rau' - 'Hae-Young Kee' bibliography: - 'fecras-paper.bib' title: 'Hidden spin liquid in an antiferromagnet: Applications to ${{\rm FeCrAs}}$' --- Introduction ============ The ubiquity of Landau’s Fermi liquid is a testament to universality in the solid state. As such, departures from these classic experimental signatures in metallic systems act as a guide to novel and interesting physics. Non-Fermi liquid behaviour appears in many strongly correlated materials such as unconventional superconductors[@cuprate1; @feas1; @organics1], heavy fermion materials[@stewart1; @stewart2] and near quantum phase transitions[@qcp1; @qcp2]. Some routes to realize this behaviour include coupling itinerant electronic systems to localized magnetic moments and introducing intermediate to strong electron-electron interactions.
These mechanisms can give rise to characteristics and experimental signatures that do not fit neatly in the Fermi liquid paradigm. A recently re-examined compound, ${{\rm FeCrAs}}$[@fruchart1; @fruchart2; @julian], provides a direct example of a material that does not fit completely within Fermi liquid theory and combines aspects of the mechanisms discussed above. The unit cell of ${{\rm FeCrAs}}$, shown in Fig. \[unit-cell\], reveals that $\rm Cr$ and $\rm Fe$ form alternating two dimensional lattices along what we will denote the $c$ axis. $\rm Cr$ forms layers with the structure of a distorted Kagome lattice where the $\rm Cr-Cr$ distances are approximately constant. The $\rm Fe$ layers have a more complicated structure, forming a triangular lattice of three atom units which we will call trimers, as shown in Fig. \[trimer-lattice\]. The $\rm As$ is interspersed throughout both the $\rm Fe$ and $\rm Cr$ layers, as well as in between. Most of the known experimental data is nicely presented in Wu et al.[@julian], which we summarize below. The specific heat exhibits Fermi liquid behaviour at low temperatures, i.e. $C \sim \gamma T$ where the slope $\gamma$ is sample dependent[@julian-synth]. The measured linear range is roughly $T \sim 3 K - 10 K$. Resistivity measurements show insulating behaviour at low and high temperatures. In plane ($ab$) and out of plane ($c$) resistivities are of the same order over the entire temperature range considered. The resistivity monotonically decreases (that is, $d\rho/dT<0$) as $T$ is raised from $5K$ up to $\sim 800K$, except for a small peak in the $c$ axis resistivity around $T \sim 125 K$. A low temperature power law $\rho \sim \rho_0 - A T^{\alpha}$ is observed for $T \sim 80mK - 5 K$ in both the $ab$ plane and $c$ axis resistivities, with $\alpha \sim 0.6-0.7$. There is a peak in the susceptibility at $T_N \sim 125 K$ indicating a magnetic transition, with a lack of hysteresis pointing to antiferromagnetic ordering.
Below $T_N$ the susceptibility is anisotropic, differing between the $ab$ plane and the $c$ axis. Elastic neutron scattering[@julian-neutron] done deep in the magnetic phase, at $T = 2.8 K$, is consistent with the anti-ferromagnetic order inferred from the susceptibility, revealing an ordering vector $\vec{Q} = (\frac{1}{3},\frac{1}{3},0)$, which indicates ferromagnetic (stacked) order along the $c$ axis but a tripled unit cell in the $ab$ plane. While measurements of the specific heat give a result consistent with a metallic Fermi liquid, transport is unusual and deviates strongly from the classic Fermi liquid result for metals, while being distinct from the expected result for strong insulators. Furthermore, this material has a frustrated magnetic sublattice that nonetheless orders at low temperatures, while the remaining sublattices show either small or no magnetic moment[@julian; @julian-neutron]. The magnetic sublattice takes the form of a distorted Kagome lattice, where even classical Heisenberg models fail to order magnetically for both the stacked[@stacked-kagome-1; @stacked-kagome-2] and purely two dimensional cases[@kagome-1; @kagome-2; @kagome-3; @kagome-4; @kagome-5]. Very few experiments have been carried out on ${{\rm FeCrAs}}$, so theoretical models are only loosely constrained. Regardless, there are a number of questions that need to be addressed, such as the nature of the stabilization of magnetic order, the cause of the very different thermodynamic and transport signals and the role of the non-magnetic sublattice. The nature of the magnetic order has been recently addressed by Redpath et al.[@hopkinson], where a minimal model was proposed which suffices to explain the experimentally observed stabilization of a particular magnetic ordering vector. However, transport and thermodynamic behavior remains to be explained, along with the role of the non-magnetic ${\rm Fe}$ sublattice.
In this paper, we develop a microscopic route to an effective model for the compound ${{\rm FeCrAs}}$, taking into account the ${\rm Fe}$ sublattice, and present a scenario to address the incongruities between the conflicting metallic, Fermi-liquid specific heat and insulating-like transport signals. This model consists of interacting electrons of the non-magnetic sublattice coupled to magnetic moments of the magnetic, ${\rm Cr}$ sublattice. Here we do not address the detailed nature of the moments themselves, treating them classically[@hopkinson], with the non-Fermi liquid physics arising from strong charge fluctuations occurring at intermediate Hubbard coupling in the ${\rm Fe}$ sublattice. In this picture we are excluding any Kondo physics, a view supported by the experimental results. Our emphasis is on the interplay between strong charge fluctuations near the metal-insulator transition on the $\rm Fe$ sublattice and the magnetic order of the $\rm Cr$ sublattice. For this we turn to the slave-rotor method, which allows access to the intermediate coupling regime and the metal-insulator transition. The structure of the paper is as follows: in Section \[effham\] we present an argument to pass from the atomic limit through to an effective model of the electronic degrees of freedom in ${{\rm FeCrAs}}$. In Section \[model\] we discuss the localized moments and magnetic interactions and we present the effective Hamiltonian relevant for ${{\rm FeCrAs}}$. We proceed to review the slave-rotor method in Section \[slave-rotors\] and the assumptions and implementation of our mean field theory in Section \[mean-field-theory\]. In Section \[discussion\] we comment on the application of our results to ${{\rm FeCrAs}}$ and summarize conclusions in Section \[conclusions\].
Effective Hamiltonian {#effham} ===================== Local environments and spin states ---------------------------------- Considering the common oxidation states of $\rm Fe$ we will take (in the atomic limit) $\rm Fe^{3+}$ as a starting point. This leaves the valence configuration being ${\rm 3d}^5$ for ${\rm Fe}^{3+}$. Following the experimental data, we assume that the Fe atoms are in a low spin state, due to the lack of a detectable magnetic moment[@julian-neutron]. The tetrahedral arrangement of the As atoms about each Fe atom, shown in Fig. \[local-geometry\] (a), implies a crystal field producing a splitting of the Fe $d$ levels, shown in Fig. \[fe-cf\] (a), with a pair of low-lying $e$ levels and a three-fold degenerate set of $t_{2}$ levels, separated by the crystal field gap. Assuming the low spin case is relevant, the two $e$ states are filled and there is a single electron in the $t_{2}$ triplet, shown in Fig. \[fe-cf\] (b). The case of $\rm Cr$ is more complicated, as the experiments do not single out a more probable spin state. For $\rm Cr$ we will take an ionic charge of ${\rm Cr}^{2+}$, giving a valence configuration of $3d^4$. A distorted octahedral environment, shown in Fig. \[local-geometry\] (b), reduces the symmetry about this site to $D_4$, with crystal field splittings shown in Fig. \[cr-cf\] (a). The low spin state (shown in Fig. \[cr-cf\] (b)) can still yield a $S=1$ spin moment, while the high spin state has an $S=2$ moment. In the low spin case hopping onto the Cr$^{2+}$ will be suppressed by the orbital repulsion, while for the high spin case the crystal field energy will give a further suppression. We would like to emphasize that the arguments and models presented here are under-constrained by both experimental results and first principles electronic structure calculations[@abinitio].
Due to this limitation the precise details could fail quantitatively, but the subsequent effective Hamiltonian appears to be robust to a variety of ionic configuration changes in the underlying model. For example, the specific oxidation state we use for the $\rm Cr$ will be irrelevant to our final discussion, as only the localized character is needed to capture the gross magnetic features[@hopkinson]. With this in mind, below we present a possible route from the microscopic Hamiltonian to the effective model. Iron-Chromium interactions -------------------------- Since the Fe-Cr distance is considerably smaller than the direct Fe-Fe distances outside the trimers and the indirect Fe-As-Cr distance, we will consider interactions induced by Fe-Cr-Fe hopping paths. First we define a simple model for an isolated Cr atom, assuming a low spin state as shown in Fig. \[cr-cf\] (b). The local Hamiltonian for the $\rm Cr$ $e$ doublet assumes the natural form $$H_{{\rm Cr}} = \Delta \sum_{j\alpha} n_{j\alpha} + U \sum_{j\alpha} n_{j\alpha \uparrow} n_{j\alpha \downarrow} - J\sum_j\left(\sum_{\alpha} \vec{S}_{j\alpha}\right)^2,$$ where $n_{j\alpha \sigma} = {{e}^{\dagger}}_{j\alpha \sigma} e_{j\alpha \sigma}$ is the number operator for spin $\sigma$ in the doublet state $\alpha$. We have denoted the atomic potential as $\Delta$, the intra-orbital repulsion as $U$ and the Hund’s coupling as $J$. Allowing for hopping between $\rm Fe$ and $\rm Cr$, we include the term $$H_{{\rm Cr}-{\rm Fe}} = -\sum_{ij \sigma \alpha} t^{\alpha}_{ij} {{d}^{\dagger}}_{i \sigma} e_{j\alpha \sigma} + {\rm h.c.},$$ where $i$ runs over the orbitals on sites connected to the Cr atom at site $j$ and $d$ is the annihilation operator on the Fe site $i$. Integrating out all of the high energy degrees of freedom on the Cr atom, the ground state for sufficiently large $J$ is given by the doubly occupied, $S=1$ triplet states, which we denote by ${\left|t_{ja} \right>}$ with $a=0,\pm$.
Noting that $H_{\rm Cr-Fe}$ does not connect triplet to triplet states, as it changes the electron number on the Cr atom, we can formulate the effective Hamiltonian following $$\begin{aligned} H_{\rm eff} &=& P_t H_{\rm Cr-Fe} \left(E_0 - H_{\rm Cr}\right)^{-1} {{H}^{\dagger}}_{\rm Cr-Fe} P_t,\end{aligned}$$ where $E_0$ is the triplet energy and $P_t =\sum_{ja} {\left|t_{ja} \right>}{\left< t_{ja}\right|}$ projects onto the triplet subspace. Carrying out the expansion of the effective Hamiltonian, denoting the spin operators on the $\rm Cr$ and $\rm Fe$ atoms as $\vec{S}_j$ and $\frac{1}{2}\vec{\sigma}_i$ respectively, our full effective Hamiltonian is given by $$H_{\rm eff} = -\sum_{ijl} \frac{1}{W}\left(\sum_\alpha t^{\alpha}_{ij} {{(t^{\alpha}_{lj})}^{*}}\right) {{d}^{\dagger}}_{i}\left(1-\vec{\sigma} \cdot \vec{S}_j\right) d_{l},$$ where $$\frac{1}{W} = \frac{ U+\frac{5 J}{2}}{\Delta^2 + \left(\frac{5 J}{4}\right)^2 + U(\frac{5J}{4}-\Delta)} > 0.$$ For $i=l$ this represents an anti-ferromagnetic exchange between the $\rm Fe$ and $\rm Cr$ atoms, while $i \neq l$ gives a spin-dependent hopping term, non-zero for both intra-trimer and inter-trimer hopping paths. Based on overlap of the orbital wavefunctions, we assume that the hopping $\sum_\alpha t^{\alpha}_{ij} {{(t^{\alpha}_{lj})}^{*}}$ is dominant for $i=l$ and independent of the orbitals, leading to the effective exchange Hamiltonian $$H^{\rm exch}_{\rm eff} = \sum_{ij} \frac{1}{W}\left(\sum_\alpha |t^\alpha_{ij}|^2\right) {{d}^{\dagger}}_i \left( \vec{\sigma} \cdot \vec{S}_j \right) d_i,$$ up to a shift of the chemical potential. For future use, we will denote the effective exchange as $$J_K = \frac{1}{W} \sum_\alpha |t^\alpha_{ij}|^2 > 0.$$ Trimer Approximation for $\rm Fe$ --------------------------------- From the inter-atomic distances we expect that the Fe atoms in the trimer structure are tightly coupled.
We can take advantage of this by grouping the degrees of freedom in the trimer into a single unit and formulating the rest of our theory in terms of these variables. This process is similar to the treatment of pairs of molecules as the effective degrees of freedom, frequently employed in studies of organic superconductors[@organics1]. This assumes the energy scales for interactions and inter-trimer hopping matrix elements are small relative to the intra-trimer hoppings. In addition, we need the tetrahedral crystal field splitting to be much larger than our intra-orbital interactions (this is the low spin assumption) and both the intra- and inter-trimer hopping elements. With this in mind, let us find the low energy degrees of freedom of a trimer, keeping only the three site cluster. Considering the $C_3$ symmetry, there are four possibilities for the degeneracies of the two lowest levels, leading to either a half or quarter filled band as the relevant states (assuming all gaps are large compared to the relevant energy scales). Motivated by the Slater-Koster argument in Appendix A, we start with a model for the $xy$, $xz$ and $yz$ orbitals of the trimer: $$\begin{aligned} H_{\rm trimer} &=& -\sum_{{\langle ij \rangle}} {{d}^{\dagger}}_i \left( \begin{tabular}{ccc} $0$ & $t_{xz,yz}$ & 0 \\ $t_{xz,yz}$ & $0$ & 0 \\ $0$ & $0$ & $t_{xy,xy}$ \end{tabular} \right) d_j,\end{aligned}$$ where ${{d}^{\dagger}}_i = \left( {{d}^{\dagger}}_{i,xz}\ {{d}^{\dagger}}_{i,yz}\ {{d}^{\dagger}}_{i,xy}\right)$ and $i,j=1,2,3$ are the sites in the trimer. The $xy$ part is just a simple three site ring with a ground state at $-2t_{xy,xy}$ and a pair of excited levels at $t_{xy,xy}$.
For the $xz$ and $yz$ orbitals we can diagonalize $H_{\rm trimer}$ by changing the basis: $$\begin{aligned} d_{i,+} &=& \frac{1}{\sqrt{2}}\left( d_{xz} + d_{yz}\right), \\ d_{i,-} &=& \frac{1}{\sqrt{2}}\left( d_{xz} - d_{yz}\right).\end{aligned}$$ This gives a pair of decoupled three site rings with hoppings $\pm t_{xz,yz}$ and thus energy levels of $\pm 2 t_{xz,yz}$ and $\pm t_{xz,yz}$. When $\frac{1}{2}t_{xy,xy} < t_{xz,yz} < 2 t_{xy,xy}$ a single half-filled level in the trimer, symmetric under permutations of the three sites, is realized. Ignoring all of the other matrix elements in the complete hopping matrix, we find (see Appendix A) $$\begin{aligned} t_{xz,yz} & \sim &\frac{1}{32}\left(7t_{\delta} - 16t_{\pi} + 9 t_{\sigma}\right), \\ t_{xy,xy} & \sim &\frac{1}{64}\left(49 t_{\delta} - 12 t_{\pi} + 3 t_{\sigma}\right).\end{aligned}$$ A more complete model would include mixing between all three orbitals, but the qualitative picture should remain the same. The naive choice of $t_{\delta}>t_{\pi}>t_{\sigma}$ seems to select the case of $t_{xy,xy}>t_{xz,yz}$, and thus the singly occupied band has $d_{+}$ character (shown in Fig. \[trimer-spectrum-half\]). Note the gap between the occupied level and the half-filled level is given by $$\Delta_{\rm trimer} = 2|t_{xy,xy}-t_{xz,yz}|=\frac{5}{32}\left|7t_{\delta}+4t_{\pi} - 3 t_{\sigma}\right|,$$ and must be fairly large to ensure that the trimer approximation is valid. We emphasize that regardless of the exact details of these hoppings the qualitative picture will remain, as long as this half-filling is found, such as for ${\rm Fe^{3+}}$. Other oxidation states will give a different relevant molecular orbital, possibly with extra orbital degeneracy, and could change the picture presented here.
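The single-orbital level structure quoted above is easy to verify numerically. The following minimal sketch (our own illustration, not part of the paper's calculation) assumes, as the $C_3$ symmetry implies, that the $\langle ij \rangle$ sum runs over all three bonds of the triangle, and uses an illustrative hopping $t=1$:

```python
import numpy as np

t = 1.0  # illustrative hopping amplitude

# Nearest-neighbour hopping on the three-site trimer: a triangle, so
# every pair <ij> is a bond; the -t convention matches H_trimer.
H_ring = -t * (np.ones((3, 3)) - np.eye(3))

evals = np.sort(np.linalg.eigvalsh(H_ring))
# Ground state at -2t with a doubly degenerate excited pair at +t.
print(evals)  # [-2.  1.  1.]
```

The same spectrum, with $t$ replaced by $\pm t_{xz,yz}$ or $t_{xy,xy}$, reproduces the levels quoted for the $d_{\pm}$ and $xy$ sectors.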
Effective Hamiltonian and magnetic interactions {#model} ----------------------------------------------- The gross magnetic structure of ${{\rm FeCrAs}}$ is captured by a classical model of localized moments interacting with the $\rm Fe$ sublattice via a simple exchange[@hopkinson]. The utility of the classical model for the moments is also supported by the fact that Kondo-like signatures are absent in the experimental data. Furthermore, due to the uniform character of the relevant trimer states and the assumption of equal exchanges between $\rm Cr$ and different $\rm Fe$ orbitals, the effective exchange between a trimer and each of the $\rm Cr$ atoms in the surrounding hexagons, in the layers above and below, will be equal. We thus take our Hamiltonian to have the form $$\begin{aligned} H = -t \sum_{{\langle ij \rangle}\in {ab}}\sum_\sigma {{c}^{\dagger}}_{i\sigma} c_{j\sigma} -t' \sum_{{\langle ij \rangle}\in c}\sum_{ \sigma} {{c}^{\dagger}}_{i\sigma} c_{j\sigma}+ U \sum_{i} n_{i\uparrow}n_{i\downarrow} +\frac{J_K}{2}\sum_{i,a \in \hexagon_i} \left({{c}^{\dagger}}_i \vec{\sigma} c_i\right) \cdot \vec{S}_a + J_H \sum_{{\langle ab \rangle}} \vec{S}_a \cdot \vec{S}_b \label{full-model},\end{aligned}$$ where the low energy degrees of freedom on the trimers are denoted using the operators ${{c}^{\dagger}}_{i\sigma}$, $c_{i\sigma}$ and the classical $\rm Cr$ spins are denoted as $\vec{S}_a$. The trimer sublattice is afforded hoppings $t$ and $t'$ which originate from direct overlaps between trimers (or indirectly via $\rm Cr$ or $\rm As$) in the $ab$ planes and along the $c$ axis respectively. Here we denote sites on the $\rm Fe$ sublattice as $i$ and $j$, while sites on the $\rm Cr$ sublattice are denoted as $a$ and $b$. The notation $a\in \hexagon_i$ indicates that the sum on the $\rm Cr$ sublattice is over sites in the hexagon that surround the trimer at $i$ on the $\rm Fe$ sublattice.
Furthermore ${\langle ij \rangle} \in ab,c$ denotes bonds in the $ab$ planes or along the $c$ axis, respectively. The interactions are given as follows: $U$ is the intra-trimer repulsion, inherited from the $\rm Fe$ atoms, $J_K$ is the $\rm Fe-Cr$ exchange and $J_H$ is the $\rm Cr-Cr$ exchange. In this Hamiltonian the $\rm Cr$ spins appear as an effective magnetic field for the trimers, which we will denote as $\vec{h}_i = \sum_{a \in \hexagon_i} \vec{S}_a$. In the large $U$ limit, where $U \gg t$ and $t^2/U,(t')^2/U \ll J_K,J_H$, this model should reproduce the model studied in Ref \[\], and thus their classical results provide a useful point of comparison for our large $U/t$ behaviour. To attack the intermediate $U/t$ regime we will use a slave-rotor approach, reviewed in the following section. Slave Rotors ============ For completeness and standardization of notation, we review the general properties of two dimensional rotors and follow with a discussion of the slave particle representation that bears their name[@rotor-florens-mott]. A rotor in two dimensions is an object that possesses only angular momentum, prototypically of the form $H \propto L^2$ where $L$ is the angular momentum operator about some axis, say the $\hat{z}$ axis. We classify states by the eigenstates of the $L$ operator, $L {\left|n \right>} = n {\left|n \right>}$ where $n$ is an integer. Raising and lowering operators are defined as $${{U}^{\dagger}} {\left|n \right>} = {\left|n+1 \right>}, \hspace{30pt} U {\left|n \right>} = {\left|n-1 \right>},$$ where $U$ is a unitary operator.
From this definition it is simple to show that $L$ and $U$ satisfy the commutation relations, $$\left[ L,U \right] = -U, \hspace{30pt} \left[ L,{{U}^{\dagger}} \right] = {{U}^{\dagger}}.$$ Since $U$ is unitary it can be written as $U = \exp(-i\theta)$ where ${{\theta}^{\dagger}} = \theta$, and one can show that this implies the canonical commutation relation $[\theta,L] = i$, showing that $L$ and $\theta$ are canonically conjugate variables. To use these rotors as a slave-particle we associate the local electron basis with the product of the states of a slave fermion and the states of an $O(2)$ rotor, $$\begin{aligned} {\left|0 \right>} & = & {\left|0 \right>}_f {\left|+1 \right>}_{\theta} ,\\ {\left|{\uparrow}\right>} & = & {\left|{\uparrow}\right>}_f{\left|0 \right>}_{\theta} ,\\ {\left|{\downarrow}\right>} & = & {\left|{\downarrow}\right>}_f{\left|0 \right>}_{\theta} , \\ {\left|{\uparrow}{\downarrow}\right>} & = & {\left|{\uparrow}{\downarrow}\right>}_f{\left|-1 \right>}_{\theta}.\end{aligned}$$ The slave fermion is called a spinon and spinon states are denoted by an $f$ subscript. The slave-rotor will be referred to as a rotor and a $\theta$ subscript will be used to denote rotor states. The natural interpretation is to have the spinon be neutral and the rotor carry the charge of the electron, thus explicitly separating the spin and charge degrees of freedom. Having expanded our local Hilbert space, a constraint is required to remove the unphysical states. The four physical states above are characterized by $$L_i + \sum_{\sigma} n^f_{i\sigma} = 1,$$ where $L_i$ is the rotor angular momentum operator and $n^f_{i\sigma}$ is the spinon number operator. This is the Hilbert space constraint. The electron operators can therefore be expressed as $$c_{i \sigma} = f_{i \sigma} e^{i \theta_i} ,$$ where $\exp\left(i \theta_i\right)$ is the rotor raising operator and $f_{i\sigma}$, ${{f}^{\dagger}}_{i\sigma}$ are the fermionic spinon operators.
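The rotor algebra can be checked directly in a truncated angular momentum basis. The sketch below (our own illustration; the truncation $N$ is an arbitrary choice) builds $L$ and the shift operators from the definitions ${{U}^{\dagger}}{\left|n \right>} = {\left|n+1 \right>}$, $U{\left|n \right>} = {\left|n-1 \right>}$ and verifies their commutators with $L$:

```python
import numpy as np

N = 6                      # truncation |n| <= N (illustrative choice)
dim = 2 * N + 1

# L|n> = n|n>; Udag|n> = |n+1> (raising), U|n> = |n-1> (lowering).
L = np.diag(np.arange(-N, N + 1).astype(float))
Udag = np.diag(np.ones(dim - 1), k=-1)   # matrix element (n+1, n) = 1
U = Udag.T

# Since L is diagonal, [L, A] has entries (n_row - n_col) A_{row,col},
# so the shift operators are exact eigenoperators of [L, .] even in
# the truncated space.
assert np.allclose(L @ Udag - Udag @ L, Udag)   # [L, Udag] = +Udag
assert np.allclose(L @ U - U @ L, -U)           # [L, U]    = -U
```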
Using this representation the electronic Hamiltonian for the trimers (\[full-model\]) is written as $$\begin{aligned} H & = & - t\sum_{{\langle ij \rangle}\in ab}\sum_\sigma {{f}^{\dagger}}_{i\sigma}f_{j\sigma} e^{-i (\theta_i -\theta_j)} \\ & & - t'\sum_{{\langle ij \rangle}\in c}\sum_\sigma {{f}^{\dagger}}_{i\sigma}f_{j\sigma} e^{-i (\theta_i -\theta_j)} \\ & &+ \frac{U}{2} \sum_i L_i (L_i -1)+ \frac{J_K}{2} \sum_{i, a\in \hexagon_i} \left({{f}^{\dagger}}_i\vec{\sigma}f_i \right)\cdot \vec{S}_a.\end{aligned}$$ The Hubbard term is now a kinetic term for the rotors, so the complexity of the Hubbard interaction has been moved to the hopping term and the constraint. We note that the $\rm Fe-Cr$ coupling term only involves the spinon degrees of freedom. Mean Field Theory ================= We approach this problem using mean field theory. For simplicity we take the perspective of Florens and Georges,[@rotor-florens-mott] first decoupling the hopping term into $${{f}^{\dagger}}_{i\sigma}f_{j\sigma} e^{-i (\theta_i -\theta_j)} \approx \chi_{ij} e^{-i (\theta_i - \theta_j)} + B_{ij} {{f}^{\dagger}}_{i\sigma}f_{j\sigma} - \chi_{ij} B_{ij},$$ where we have introduced the mean fields $\chi_{ij} = \frac{1}{2} \sum_{\sigma}{\langle {{f}^{\dagger}}_{i\sigma}f_{j\sigma} \rangle}$ and $B_{ij} = {\langle e^{-i (\theta_i-\theta_j)} \rangle}$. Note that we have assumed that $\chi_{ij}$ is independent of spin. To handle the constraint we treat it both on average in space and on average in our states. For the case of half-filling this leads to the two conditions, $$\sum_i {\langle L_i \rangle} = 0, \hspace{30pt} \sum_{i\sigma} {\langle n^{f}_{i\sigma} \rangle} = 1.$$ Note that applying this on average in space prohibits us from considering charge ordering in our calculations. To enforce these constraints we introduce chemical potentials for the rotors and for the spinons, $\mu_L$ and $\mu_f$ respectively. 
This leads to two independent Hamiltonians, coupled to each other only through the mean fields $\chi_{ij}$ and $B_{ij}$, $$\begin{aligned} \label{spinon-ham} H_{f} & = & - \sum_{{\langle ij \rangle}\sigma} (t_{ij}B_{ij} + \mu_f \delta_{ij}) {{f}^{\dagger}}_{i\sigma}f_{j\sigma} \\ &&+ \frac{J_K}{2} \sum_{i, a\in \hexagon_i} \left({{f}^{\dagger}}_i\vec{\sigma}f_i \right)\cdot \vec{S}_a,\\ \label{rotor-ham} H_{L} & = & -2t \sum_{{\langle ij \rangle}} \chi_{ij} e^{-i \theta_i} e^{i \theta_j} + \frac{U}{2} \sum_i( L_i^2 - \mu_L L_i),\end{aligned}$$ where we have introduced $t_{ij}$, which is equal to $t$ on the in-plane triangular bonds and to $t'$ on the out-of-plane bonds. While the spinon Hamiltonian can be treated using mean field theory, the rotor Hamiltonian needs a different strategy. Two approaches for the rotor Hamiltonian have been used in the literature: a self-consistent cluster approach[@rotor-zhao] and a bosonic approach[@rotor-florens-mott]. Taking the bosonic approach, which is most simply tackled using a path-integral formulation, the imaginary time action takes the form $$\begin{aligned} \nonumber S(\theta,L) = \int_0^{\beta} & d\tau & \Big[ \sum_i \left(i L_i {\partial}_\tau \theta_i + \frac{U}{2} L_i^2 \right) \\ &-& 2\sum_{ij} t_{ij} \chi_{ij} e^{-i\theta_i} e^{i \theta_j} \Big],\end{aligned}$$ where $\mu_L$ is chosen to eliminate the linear terms in $S$. Due to the symmetry of the action this guarantees that ${\langle L_i \rangle}=0$.
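To make the structure of this self-consistency concrete, here is a deliberately simplified toy iteration (our own sketch, not the actual three-dimensional calculation of this paper): the spinon sector is replaced by a half-filled one-dimensional chain, whose nearest-neighbour bond average $\chi = 1/\pi$ is independent of $B$, and the rotor sector is treated in a single-site Weiss approximation in which neighbouring rotors are replaced by their average $B$, giving an effective coupling $K = 2 t z \chi B$ with coordination number $z$. The function names and all parameter values are illustrative:

```python
import numpy as np

def rotor_B(U, K, N=10):
    """Ground-state <e^{i theta}> of the single-site rotor Hamiltonian
    H = (U/2) L^2 - K (Udag + U), on the truncated space |n| <= N."""
    n = np.arange(-N, N + 1).astype(float)
    shift = np.diag(np.ones(2 * N), k=-1)       # raising operator
    H = 0.5 * U * np.diag(n ** 2) - K * (shift + shift.T)
    _, v = np.linalg.eigh(H)
    g = v[:, 0]                                  # ground state (real)
    return float(g @ (shift @ g))                # <raise> = <lower> here

def self_consistent_B(U, t=1.0, chi=1.0 / np.pi, z=2, iters=200):
    """Weiss-type iteration B -> <e^{i theta}>_{K(B)}, K = 2 t z chi B.
    chi = 1/pi is the half-filled 1D chain bond average, held fixed."""
    B = 1.0
    for _ in range(iters):
        B = rotor_B(U, 2.0 * t * z * chi * B)
    return B

B_metal = self_consistent_B(U=1.0)    # weak coupling: B stays finite
B_mott = self_consistent_B(U=40.0)    # strong coupling: B flows to ~0
```

Even this crude version reproduces the qualitative competition described in the text: at weak $U/t$ the rotor condenses ($B \neq 0$, metallic), while at strong $U/t$ the iteration flows to $B = 0$, the gapped-charge regime.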
Next, we integrate out $L$ to get the following action, $$\label{quantum-xy} S(\theta) = \int_0^{\beta} d\tau \left[ \frac{1}{2U}\sum_i({\partial}_\tau\theta_i)^2- 2 \sum_{ij} t_{ij} \chi_{ij} e^{-i\theta_i} e^{i \theta_j} \right].$$ We write this using a bosonic variable $\phi_i = e^{i\theta_i}$ subject to the constraint that $|\phi_i|^2 = 1$, giving $$\begin{aligned} \nonumber \label{boson-action} \nonumber S({\bar{\phi}},\phi;\lambda) & = &\int_0^{\beta} d\tau \Big[ \frac{1}{2U}\sum_i \left|{\partial}_\tau \phi_i \right|^2 - i \sum_i \lambda_i \\ + & &\sum_{ij}\left(i\lambda_i \delta_{ij} - 2 t_{ij} \chi_{ij}\right) {\bar{\phi}}_i \phi_j \Big],\end{aligned}$$ where $\lambda$ is an auxiliary field introduced to enforce the constraint. Treating this new constraint in the saddle point approximation, the solution of the bosonic part of the Hamiltonian is reduced to solving this saddle point equation and a free bosonic problem. These saddle point equations simply fix the boson number at each site to one. We assume a uniform state on the $\rm Fe$ sublattice by considering $\chi_{ij} \equiv \chi$ and $B_{ij} \equiv B$ in plane, with $\chi_{ij} = \alpha \chi$ and $B_{ij} = \alpha B$ out of plane, where $\alpha$ is chosen so that we smoothly match the non-interacting limit. [^1] For the $\rm Cr$ spins it is natural to assume that the periodicity is that found by experiments[@julian; @julian-neutron] and previous classical calculations[@hopkinson], with wavevector $(1/3,1/3,0)$. Within this space of magnetic states, we consider only canted classical ground states, that is, states where the in-plane components in each triangle are at $120^{\circ}$ but are tilted out of plane by a canting angle $\psi$. These states interpolate between a subset of the ground states for $J_H \gg J_K$ at $\psi = 0$ and the ferrimagnetic state valid for $J_K \gg J_H$ with $\psi = \pi/2$[@hopkinson].
Among these states one can see that only those with a finite moment, as one sums around a hexagon in the Kagome lattice, will be favoured due to the $J_K$ interaction. With these constraints we only have a single set of magnetic states to consider, parametrized by the canting angle $\psi$. Under these assumptions the trimer feels the following effective magnetic field (defined as $\vec{h}_i = \sum_{a \in \hexagon_i} \vec{S}_a$) after summing around the hexagon $$\vec{h}_i = 6 S \cos{\psi}\left[ \cos{(\vec{Q}\cdot \vec{r}_i)}\hat{x}+ \sin{(\vec{Q}\cdot \vec{r}_i)}\hat{y}\right]+ 12 S \sin{\psi} \hat{z},$$ where $\vec{Q} = (1/3,1/3,0)$ and $S$ is the magnitude of the $\rm Cr$ moment. Note the factor of $2$ between the in-plane and out-of-plane components, due to the partial cancellation as we sum around the hexagon. The phase diagram for $J_H/t = 0.4$ is shown in Fig. \[phase4\], and demonstrates the full range of phases. There are three distinct phases shown in these phase diagrams. At low $J_K/t$ and $U/t$ we find a metallic phase on the $\rm Fe$ sublattice with zero canting angle (i.e. in plane, 120$^\circ$ ordering) on the $\rm Cr$ sublattice. This metallic state is characterized by condensation of the bosonic degree of freedom (${\langle \phi \rangle} \neq 0$) and a uniform, real $\chi_{ij} = |\chi|$, implying the existence of an electron Fermi surface and gapless charge excitations. Furthermore, the $\rm Fe$ trimers have an induced moment (anti-ferromagnetically) following the $\rm Cr$ moment. This moment scales with $J_K/t$, and will thus be small so long as $J_K/t$ is. As we increase $U/t$ and keep $J_K/t$ small, we find a metal-insulator transition near $U_c/t \approx 3.5$ into a uniform, $U(1)$ spin liquid (SL) phase. The metal-insulator boundary is very flat, as seen in Fig. \[phase4\].
This flatness arises because, under the mean field ansatz we have employed, the critical $U$ is only a function of $\chi$, which changes only by a small amount as we increase $J_K$ (below the jump into the paramagnetic phase). This SL phase is also characterized by a uniform, real $\chi_{ij}$ but gapped bosons (${\langle \phi \rangle}=0$), meaning the existence of a spinon Fermi surface but gapped charge excitations. This insulating phase carries the induced moment as in the metallic phase, with the magnitude also being proportional to $J_K/t$. In both the metallic and insulating phases, as $J_K/t$ is increased (but remains small) the $\chi$ order parameter decreases towards zero. ![ \[phase4\] (Color Online) Phase diagram as a function of $J_K/t$ and $U/t$ for the value $J_H/t = 0.4$. The inset diagrams show a sample of the $\rm Cr$ spin configuration on one of the Kagome triangles. $\chi$ and $B$ are the slave-rotor mean field parameters, ${\langle \phi \rangle}$ is the rotor condensate and $\psi$ is the $\rm Cr$ canting angle (see text). ](fig4.pdf) From the spin-only model of Ref. \[\] (accounting for the differing normalizations) one expects that for large $U/t$ the canting angle will become non-zero at $J_K = 2 J_H$ and saturate at $J_K = 4 J_H$, as one sees in Fig. \[phase4\]. We find that the uniform spin liquid does not support a non-zero canting angle. By this we mean that as $J_K/t$ is increased the canting angle remains zero until some critical $J_K/t$, wherein the spin liquid is replaced by the canted AF with $\chi=B=0$. The converse is not true, as can be seen in Fig. \[phase4\]. We see that the spin liquid phase can be destroyed before the onset of the canted magnetic state, leaving a window where we have a simple insulating, in-plane antiferromagnet. Once inside this canted AF phase with $\chi=B=0$, the canting angle increases with $J_K/t$ until it becomes saturated and we enter a ferrimagnetic phase, with a net magnetic moment.
The key feature of the phase diagrams we want to emphasize is that for a variety of values of $J_H$ and for small $J_K$ this spin liquid state is stable and does not coexist with a finite canting. Discussion ========== To discuss the applications of this to ${{\rm FeCrAs}}$ the effects of fluctuations must be taken into account near the metal-insulator transition. These include both charge and gauge degrees of freedom and are treated in detail in the work of Podolsky et al[@podolsky]. In this work it is found that there are two relevant temperature scales ${{T}^{*}}$ and ${T}^{**}$ that determine the qualitative features of the thermodynamic and transport properties. These temperature scales vanish as one approaches the critical point separating the metal and insulator. At temperatures above these scales the specific heat has weak logarithmic corrections $$C \sim T \ln\ln(1/T),$$ while the conductivity has a strong temperature dependence. Specifically, writing $\sigma = \sigma_f + \sigma_b$ where $\sigma_f$ is the spinon conductivity and $\sigma_b$ is the boson conductivity, under the assumption of weak disorder we have $\sigma_f \sim \sigma^{\rm imp}_f$ due to impurity scattering and $$\sigma_b \sim T \ln^2(1/T).$$ This implies that the effects of the fluctuations on the specific heat are much more difficult to discern experimentally than the effect on the resistivity, which will be a monotonically decreasing function of $T$, as seen in the experiments on ${{\rm FeCrAs}}$. This is a possible scenario for ${{\rm FeCrAs}}$, where a nearly linear specific heat is observed, as these logarithmic corrections would only be visible over large temperature ranges, where other contributions would begin to dominate and wash out the signature. This is also consistent with the magneto-resistance measurements[@julian], where no change is seen in the low temperature resistivity under magnetic fields up to $8$T, as the magnetic field would only couple weakly to the rotor fluctuations.
A method to test this hypothesis experimentally would be a study of the pressure dependence of the transport and thermodynamic properties. Naively one expects that the application of pressure should drive the material through the metal-insulator transition, into the quantum critical metal and eventually into a Fermi liquid phase. Under our scenario, this could in principle be visible as the development of a maximum in the resistivity at low temperatures as the pressure is increased. Furthermore, the specific heat should be relatively unaffected, still only being renormalized by a logarithmic term. The limitations of the current study deserve some discussion as they lead to future directions. Due to the natural ansatz used in the slave-rotor study, a number of non-trivial spin-liquid states have been excluded from the analysis, such as those with a non-trivial phase structure or that break translational symmetry[@rau-prl]. A full exploration of the possible phases in this model could yield useful insights for ${{\rm FeCrAs}}$. Another aspect of this problem that requires future work is the inclusion of quantum effects in the description of $\rm Cr$, leading to a Kondo-Heisenberg model with strong interactions for the conduction electrons. While the addition of frustrating interactions for the Kondo spins has attracted attention recently[@coleman1; @si1], the inclusion of electronic interactions is largely unaddressed (particularly with frustration on the conduction electrons) and is an interesting, but highly non-trivial, question for future study. Summary {#conclusions} ======= In this paper we have presented and motivated a minimal model for the low energy degrees of freedom in the compound ${{\rm FeCrAs}}$.
Starting from the crystal structure and using the experimental facts, we have argued that the magnetic degrees of freedom are well described by a set of classical, localized moments and the electronic degrees of freedom take the form of a half-filled Hubbard model on the trimer sublattice. The coupling between these two subsystems stabilizes a definite magnetic order on the localized moments despite the high degree of frustration. To explain the thermodynamic and transport properties of this material at low temperatures we propose that the electrons residing on the $\rm Fe$ trimers could be close to a quantum critical point separating metallic and insulating phases. The charge fluctuations associated with the critical point strongly renormalize the transport properties but provide only small corrections to the thermodynamics, qualitatively consistent with the experimental results on ${{\rm FeCrAs}}$. Finally, we have discussed unexplored experimental consequences of this proposal and future directions for theoretical work. We thank J. Hopkinson, S. Julian, Y.B. Kim and Y.J. Kim for useful discussions. This research was supported by NSERC of Canada and the Canada Research Chair program. Hopping integrals ================= We first consider direct hopping between the Fe atoms in a trimer. The $t_2$ level is composed of $xz$, $yz$ and $xy$ orbitals with respect to the canonical choice of tetrahedron axes. Rotating these into the axes of a tetrahedron in a trimer, the orbitals are oriented as shown in Fig. \[local-orb\]. From Fig. \[local-orb\] one can estimate that the only significant hoppings would be between ${xy-xy}$, ${xz-yz}$ and ${yz-xz}$. One can approach this more quantitatively using the ideas of Slater-Koster[@slater-koster] theory to compute the orbital overlaps in terms of rotation matrices and irreducible overlaps.
We identify the vector connecting two sites in a trimer $\hat{a}$ as $(\hat{x}+\sqrt{3}\hat{y})/2$, giving the (Euler) representation $R=R_{\hat{z}}(0) R_{\hat{y}}(-\frac{\pi}{2}) R_{\hat{z}} (-\frac{\pi}{3})$. The rotation that takes our tetrahedron into the proper axes is given as $R_T=R_{\hat{z}}(-\frac{\pi}{2}) R_{\hat{y}}(\frac{\pi}{2}) R_{\hat{z}} (\frac{\pi}{4})$ giving the transformation to local axes $R_T$ and $R_{\hat{z}}(\frac{2\pi}{3}) R_T$ for the neighbouring tetrahedron in the trimer. This reduces the number of parameters to three, given by overlaps of $l=2$, $m=0,\pm 1,\pm 2$ orbitals displaced along the $\hat{z}$ direction which we will denote as $t_{\sigma}$, $t_{\pi}$ and $t_{\delta}$ respectively. The hopping matrix in the basis of $xz$,$yz$ and $xy$ is then given by $$\nonumber \frac{1}{32} \left[ \begin{tabular}{ccc} $\scriptstyle t_{\delta} -8t_{\pi} - 9t_{\sigma}$ & $\scriptstyle 7t_{\delta} -16t_{\pi} + 9t_{\sigma}$ & $\scriptstyle \sqrt{\frac{3}{2}}\left(-7t_{\delta} -4t_{\pi} +3t_{\sigma}\right)$ \\ $\scriptstyle 7t_{\delta} -16t_{\pi} + 9t_{\sigma}$ & $\scriptstyle t_{\delta} -8t_{\pi} - 9t_{\sigma}$ & $\scriptstyle \sqrt{\frac{3}{2}}\left(-7t_{\delta} +4t_{\pi} -3t_{\sigma}\right)$ \\ $\scriptstyle \sqrt{\frac{3}{2}}\left(7t_{\delta} +4t_{\pi} -3t_{\sigma}\right)$ & $\scriptstyle \sqrt{\frac{3}{2}}\left(-7t_{\delta} -4t_{\pi} +3t_{\sigma}\right)$ & $\scriptstyle \frac{1}{2}\left(49t_{\delta} -12t_{\pi} +3t_{\sigma}\right)$ \\ \end{tabular} \right].$$ Considering the orbitals at atomic separations, we expect $t_{\sigma}$ and $t_{\delta}$ are positive with $t_{\pi}$ negative. The simplest ansatz to try is $ t_{\sigma} = t_{\delta} = -t_{\pi}\equiv t$. This gives: $$t \left( \begin{tabular}{ccc} $0$ & $1$ & $0$ \\ $1$ & $0$ & $0$ \\ $0$ & $0$ & $1$ \end{tabular} \right),$$ as one might guess by looking at the orbital overlaps in the rotated axes. 
Varying the numerical values for the irreducible hopping parameters around this point gives qualitatively the same picture as this simple case, justifying our naive guess. [^1]: More general ansatzes could be employed while maintaining uniformity, such as varying the phases of $\chi$ and $B$ over the tripled unit cell or including spin-dependent $\chi$. For simplicity we leave these considerations for future work.
--- abstract: 'In the paper, the authors introduce a notion “$(\alpha,m)$-GA-convex functions” and establish some integral inequalities of Hermite-Hadamard type for $(\alpha,m)$-GA-convex functions.' address: - 'College of Mathematics, Inner Mongolia University for Nationalities, Tongliao City, Inner Mongolia Autonomous Region, 028043, China' - 'College of Mathematics, Inner Mongolia University for Nationalities, Tongliao City, Inner Mongolia Autonomous Region, 028043, China' - 'School of Mathematics and Informatics, Henan Polytechnic University, Jiaozuo City, Henan Province, 454010, China; Department of Mathematics, School of Science, Tianjin Polytechnic University, Tianjin City, 300387, China' author: - 'Ai-Ping Ji' - 'Tian-Yu Zhang' - Feng Qi title: 'Integral inequalities of Hermite-Hadamard type for $\boldsymbol{(\alpha,m)}$-GA-convex functions' --- [^1] [^2] Introduction ============ In [@Mihesan-1993-Romania; @Toader-Proc-1985-338], the concepts of $m$-convex functions and $(\alpha,m)$-convex functions were introduced as follows. A function $f:[0,b]\to \mathbb{R}$ is said to be $m$-convex for $m\in(0,1]$ if the inequality $$f(\alpha x+m(1-\alpha)y)\le \alpha f(x)+m(1-\alpha)f(y)$$ holds for all $x,y\in [0,b]$ and $\alpha\in[0,1].$ For $f:[0,b]\to\mathbb{R}$ and $(\alpha,m)\in(0,1]^2$, if $$f(tx+m(1-t)y)\le t^\alpha f(x)+m(1-t^\alpha)f(y)$$ is valid for all $x,y\in[0,b]$ and $t\in[0,1]$, then we say that $f(x)$ is an $(\alpha,m)$-convex function on $[0,b]$. Thereafter, a number of inequalities of Hermite-Hadamard type for $m$-convex and $(\alpha,m)$-convex functions were established; some of them can be recited as the following theorems. \[Klaric-Ozdemir-Pecaric-08-1032\] Let $I\supset\mathbb{R}_0=[0,\infty)$ be an open interval and let $f:I\to\mathbb{R}$ be a differentiable function on $I$ such that $f'\in L([a,b])$ for $0\le a<b<\infty$, where $L([a,b])$ denotes the set of all Lebesgue integrable functions on $[a,b]$.
If $|f'(x)|^{q}$ is $m$-convex on $[a,b]$ for some given numbers $m\in(0,1]$ and $q\ge1$, then $$\begin{gathered} \biggl|f\biggl(\frac{a+b}{2}\biggr)-\frac{1}{b-a} \int_a^bf(x)\operatorname{d\mspace{-2mu}}x\biggr| \\ \le \frac{b-a}{4} \min \biggl\{\biggl[\frac{|f'(a)|^{q}+m|f'(b/m)|^{q}}{2}\biggr]^{1/q}, \biggl[\frac{m|f'(a/m)|^{q}+|f'(b)|^{q}}{2}\biggr]^{1/q} \biggr\}.\end{gathered}$$ Let $I\supset[0,\infty)$ be an open interval and let $f:I\to(-\infty, \infty)$ be a differentiable function on $I$ such that $f'\in L([a,b])$ for $0\le a<b<\infty$. If $|f'(x)|^{q} $ is $(\alpha,m)$-convex on $[a,b]$ for some given numbers $m, \alpha\in(0,1]$, and $q\ge1$, then $$\begin{gathered} \biggl|\frac{f(a)+f(b)}{2}-\frac{1}{b-a} \int_a^bf(x)\operatorname{d\mspace{-2mu}}x\biggr| \le \frac{b-a}{2}\biggl(\frac{1}{2}\biggr)^{1-1/q}\\* \times\min\biggl\{\biggl[v_1|f'(a)|^{q}+v_{2}m \biggl|f'\biggl(\frac{b}{m}\biggr)\biggr|^{q}\biggr]^{1/q}, \biggl[v_{2}m\biggl|f'\biggl(\frac{a}{m}\biggr)\biggr|^{q} +v_1|f'(b)|^{q}\biggr]^{1/q} \biggr\},\end{gathered}$$ where $$v_1=\frac{1}{(\alpha+1)(\alpha+2)}\biggl(\alpha+\frac{1}{2^\alpha}\biggr)$$ and $$v_{2}=\frac{1}{(\alpha+1)(\alpha+2)}\biggl(\frac{\alpha^{2} +\alpha+2}{2}-\frac{1}{2^\alpha}\biggr).$$ For more information on Hermite-Hadamard type inequalities for various kinds of convex functions, please refer to the monograph [@Dragomir-selected-Topic], the recently published papers [@Hadramard-Convex-Xi-Filomat.tex; @H-H-Bai-Wang-Qi-2012.tex; @Dragomir-Agarwal-AML-98-95; @Kirmaci-AMC-04-146; @Kir-Bak-Ozd-Pec-AMC-26-35; @Wang-Qi-Xi-Hadamard-IJOPCM.tex; @Xi-Bai-Qi-Hadamard-2011-AEQMath.tex], and closely related references therein. In this paper, we will introduce a new concept “$(\alpha,m)$-geometric-arithmetically convex function” (simply speaking, $(\alpha,m)$-GA-convex function) and establish some integral inequalities of Hermite-Hadamard type for $(\alpha,m)$-GA-convex functions. 
A definition and a lemma ======================== Now we introduce the so-called $(\alpha,m)$-GA-convex functions. \[(a,m)-GA-convex-dfn\] Let $f:[0,b] \to \mathbb{R}$ and $(\alpha,m)\in [0,1]^2$. If $$\label{(a,m)-GA-convex-dfn-eq} f\bigl(x^\lambda y^{m(1-\lambda)}\bigr)\le \lambda ^\alpha f(x)+ m(1-\lambda^\alpha )f(y)$$ for all $x,y\in [0,b]$ and $\lambda\in [0,1]$, then $f(x)$ is said to be an $(\alpha,m)$-geometric-arithmetically convex function or, simply speaking, an $(\alpha,m)$-GA-convex function. If the inequality  is reversed, then $f(x)$ is said to be an $(\alpha,m)$-geometric-arithmetically concave function or, simply speaking, an $(\alpha,m)$-GA-concave function. When $m=\alpha=1$, the $(\alpha,m)$-GA-convex (concave) function defined in Definition \[(a,m)-GA-convex-dfn\] becomes a GA-convex (concave) function defined in [@Niculescu-MIA-00-155; @Niculescu-MIA-03-571]. To establish some new Hermite-Hadamard type inequalities for $(\alpha,m)$-GA-convex functions, we need the following lemma. \[lem1-2012-GA-Ji\] Let $f:I \subseteq \mathbb{R}_+=(0,\infty)\to\mathbb{R}$ be a differentiable function and $a,b \in I$ with $a < b$. If $f'(x) \in L([a,b])$, then $$\frac{b^2f(b)-a^2f(a)}{2}-\int_a^bxf(x)\operatorname{d\mspace{-2mu}}x = \frac{\ln b-\ln a}{2}\int_0^1 a^{3(1-t)}b^{3t}f'\bigl(a^{1-t}b^t\bigr)\operatorname{d\mspace{-2mu}}t.$$ Let $x=a^{1-t}b^t$ for $0\le t\le 1$. Then $$\begin{gathered} (\ln b-\ln a)\int_0^1a^{3(1-t)}b^{3t}f'\bigl(a^{1-t}b^t\bigr)\operatorname{d\mspace{-2mu}}t = \int_a^b x^2f'(x)\operatorname{d\mspace{-2mu}}x\\ =b^2f(b)-a^2f(a) -2\int_a^bxf(x)\operatorname{d\mspace{-2mu}}x.\end{gathered}$$ Lemma \[lem1-2012-GA-Ji\] is thus proved. Inequalities of Hermite-Hadamard type ===================================== Now we turn our attention to establishing inequalities of Hermite-Hadamard type for $(\alpha,m)$-GA-convex functions. \[thm1-2012-GA-Ji\] Let $f:\mathbb{R}_0=[0,\infty)\to\mathbb{R}$ be a differentiable function and $f'\in L([a,b])$ for $0<a<b<\infty $.
If $|f'|^q$ is $(\alpha,m)$-GA-convex on $\bigl[0,\max \{a^{1/m},b\}\bigr]$ for $(\alpha,m)\in (0,1]^2$ and $q\ge1$, then $$\begin{gathered} \label{thm1-2012-GA-Ji-1} \biggl|\frac{b^2f(b)-a^2f(a)}{2}-\int_a^bxf(x)\operatorname{d\mspace{-2mu}}x\biggr| \le \frac{\ln b-\ln a}{2}\bigl[L\bigl(a^3,b^3\bigr)\bigr]^{1-1/q} \\ \times \bigl\{ {m\bigl[L\bigl(a^3,b^3\bigr) -G(\alpha,3)\bigr]\bigl|f'\bigl(a^{1/m}\bigr)\bigr|^q + G(\alpha,3)|f'(b)|^q} \bigr\}^{1/q},\end{gathered}$$ where $$\label{2012-GA-Ji} G(\alpha,\ell) = \int_0^1 {t^\alpha a^{\ell(1-t)}}b^{\ell t} \operatorname{d\mspace{-2mu}}t$$ for $\ell \ge 0$ and $$\label{log-mean-dfn-eq} L(x,y)=\frac{y-x}{\ln y-\ln x}$$ for $x,y>0$ with $x\ne y$. Making use of the $(\alpha,m)$-GA-convexity of $|f'(x)|^q$ on $\bigl[0,\max\bigl\{a^{1/m},b\bigr\}\bigr]$, Lemma \[lem1-2012-GA-Ji\], and Hölder inequality yields $$\begin{aligned} &\quad\biggl|\frac{b^2f(b)-a^2f(a)}{2}-\int_a^bxf(x)\operatorname{d\mspace{-2mu}}x\biggr| \le \frac{\ln b-\ln a}{2}\int_0^1a^{3(1-t)}b^{3t}\bigl|f'\bigl(a^{1-t}b^t\bigr)\bigr|\operatorname{d\mspace{-2mu}}t \\ &\le \frac{\ln b-\ln a}{2}\biggl[\int_0^1a^{3(1-t)}b^{3t}\operatorname{d\mspace{-2mu}}t\biggr]^{1-1/q} \biggl[\int_0^1a^{3(1-t)}b^{3t}\Bigl|f'\Bigl(\bigl(a^{1/m}\bigr)^{m(1-t)}b^t\Bigr)\Bigr|^q\operatorname{d\mspace{-2mu}}t\biggr]^{1/q}\\ &\le \frac{\ln b-\ln a}{2}\biggl(\frac{b^3-a^3}{\ln b^3-\ln a^3}\biggr)^{1-1/ q}\\ &\quad\times\biggl[\int_0^1a^{3(1-t)}b^{3t}\bigl(t^\alpha|f'(b)|^q +m(1-t^\alpha)\bigl|f'\bigl(a^{1/m}\bigr)\bigr|^q\bigr)\operatorname{d\mspace{-2mu}}t\biggr]^{1/q}\\ &=\frac{\ln b-\ln a}{2}\bigl[L\bigl(a^3,b^3\bigr)\bigr]^{1-1/q}\\ &\quad\times\bigl\{m\bigl[L\bigl(a^3,b^3\bigr)-G(\alpha,3)\bigr] \bigl|f'\bigl(a^{1/m}\bigr)\bigr|^q+G(\alpha,3)|f'(b)|^q\bigr\}^{1/q}.\end{aligned}$$ As a result, the inequality  follows. The proof of Theorem \[thm1-2012-GA-Ji\] is complete. 
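As a numerical sanity check (not part of the original argument), Lemma \[lem1-2012-GA-Ji\] and the bound of Theorem \[thm1-2012-GA-Ji\] can be evaluated on illustrative data of our own choosing, namely $f(x)=x^2$ on $[1,2]$ with $q=2$ and $\alpha=m=1$, for which $|f'|^q=4x^2$ is GA-convex because $4e^{2u}$ is convex in $u$. A minimal sketch:

```python
import numpy as np

def trap(y, x):
    """Composite trapezoidal rule (avoids np.trapz/np.trapezoid naming differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Illustrative choice (ours, not the paper's): f(x) = x^2 on [a, b] = [1, 2],
# q = 2, alpha = m = 1, so |f'|^q = 4x^2 is GA-convex.
a, b, q, alpha, m = 1.0, 2.0, 2.0, 1.0, 1.0
f  = lambda x: x**2
fp = lambda x: 2.0 * x

x = np.linspace(a, b, 200001)
t = np.linspace(0.0, 1.0, 200001)

# Lemma: both sides should agree; each equals (b^4 - a^4)/4 = 3.75 here.
lhs = (b**2 * f(b) - a**2 * f(a)) / 2.0 - trap(x * f(x), x)
rhs_lemma = (np.log(b / a) / 2.0
             * trap(a**(3 * (1 - t)) * b**(3 * t) * fp(a**(1 - t) * b**t), t))
assert abs(lhs - rhs_lemma) < 1e-6

# Theorem: upper bound built from L(a^3, b^3) and G(alpha, 3).
L3 = (b**3 - a**3) / np.log(b**3 / a**3)                 # logarithmic mean
G3 = trap(t**alpha * a**(3 * (1 - t)) * b**(3 * t), t)   # G(alpha, 3) by quadrature
bound = (np.log(b / a) / 2.0 * L3**(1 - 1/q)
         * (m * (L3 - G3) * abs(fp(a**(1/m)))**q + G3 * abs(fp(b))**q)**(1/q))
assert abs(lhs) <= bound
```

For these data the lemma holds with equality (both sides equal $3.75$), while the theorem gives a strict upper bound of roughly $4.03$.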
\[cor-3.1-1-2012-GA-Ji\] Under the conditions of Theorem \[thm1-2012-GA-Ji\], if $q=1$, then $$\begin{gathered} \biggl|\frac{b^2f(b)-a^2f(a)}2-\int_a^bxf(x)\operatorname{d\mspace{-2mu}}x\biggr|\\ \le \frac{\ln b-\ln a}{2}\bigl\{m\bigl[L\bigl(a^3,b^3\bigr)-G(\alpha,3)\bigr] \bigl|f'\bigl(a^{1/m}\bigr)\bigr| + G(\alpha,3)|f'(b)|\bigr\}.\end{gathered}$$ \[cor-3.1-2-2012-GA-Ji\] Under the conditions of Theorem \[thm1-2012-GA-Ji\], if $\alpha=1$, then $$\begin{gathered} \label{cor-3.1-2-2012-GA-Ji-1} \biggl|\frac{b^2f(b)-a^2f(a)}2-\int_a^bxf(x)\operatorname{d\mspace{-2mu}}x\biggr|\le\frac{\bigl(b^3-a^3\bigr)^{1-1/q}}{6} \\ \times\Bigl\{m\bigl[L\bigl(a^3,b^3\bigr)-a^3\bigr] \bigl|f'\bigl(a^{1/m}\bigr)\bigr|^q + \bigl[b^3-L\bigl(a^3,b^3\bigr)\bigr]|f'(b)|^q\Bigr\}^{1/q}.\end{gathered}$$ This follows from the fact that $$G(1,3)=\int_0^1ta^{3(1-t)}b^{3t}\operatorname{d\mspace{-2mu}}t =\frac{b^3-L\bigl(a^3,b^3\bigr)}{3(\ln b-\ln a)}.$$ The proof of Corollary \[cor-3.1-2-2012-GA-Ji\] is complete. \[cor-3.1-3-2012-GA-Ji\] Under the conditions of Theorem \[thm1-2012-GA-Ji\], we have $$\begin{gathered} \label{cor-3.1-3pineq1} \biggl|\frac{b^2f(b)-a^2f(a)}2-\int_a^bxf(x)\operatorname{d\mspace{-2mu}}x\biggr| \le \frac{\ln b-\ln a}{2}\bigl[L\bigl(a^3,b^3\bigr)\bigr]^{1-1/q} \\ \times\biggl(\frac{1}{\alpha+1}\biggr)^{1/q} \bigl\{m\bigl[(\alpha+1)L\bigl(a^3,b^3\bigr)-b^3\bigr] \bigl|f'\bigl(a^{1/m}\bigr)\bigr|^q+b^3|f'(b)|^q\bigr\}^{1/q}\end{gathered}$$ and $$\label{cor-3.1-3pineq2} \biggl|\frac{b^2f(b)-a^2f(a)}2-\int_a^bxf(x)\operatorname{d\mspace{-2mu}}x\biggr| \le \frac{\ln b-\ln a}{2}L\bigl(a^3,b^3\bigr)|f'(b)|.$$ Using $\bigl(\frac{b}a\bigl)^{3t}\le\bigl(\frac{b}a\bigl)^3$ for $t\in[0,1]$ in  gives $$G(\alpha,3)=a^3\int_0^1t^\alpha\biggl(\frac{b}a\biggr)^{3t}\operatorname{d\mspace{-2mu}}t\le\frac{b^3}{\alpha+1}.$$ Substituting this inequality into  yields . 
Utilizing $t^\alpha\le1$ for $t\in[0,1]$ in  reveals $$G(\alpha,3)\le \int_0^1a^{3(1-t)}b^{3t}\operatorname{d\mspace{-2mu}}t=L\bigl(a^3,b^3\bigr).$$ Combining this inequality with  yields . Corollary \[cor-3.1-3-2012-GA-Ji\] is thus proved. \[thm2-2012-GA-Ji\] Let $f:\mathbb{R}_0\to\mathbb{R}$ be a differentiable function and $f'\in L([a,b])$ with $0<a<b<\infty $. If $|f'|^q$ is $(\alpha,m)$-GA-convex on $\bigl[0,\max\bigl\{a^{1/m},b\bigr\}\bigr]$ for $(\alpha,m)\in (0,1]^2$ and $q>1$, then $$\begin{gathered} \biggl|\frac{b^2f(b)-a^2f(a)}{2}-\int_a^bxf(x)\operatorname{d\mspace{-2mu}}x\biggr| \le \frac{\ln b-\ln a}{2}\biggl(\frac{1}{\alpha+1}\biggr)^{1/q}\\ \times\bigl[L\bigl(a^{3q/(q-1)},b^{3q/(q-1)}\bigr)\bigr]^{1-1/q} \bigl[|f'(b)|^q+\alpha m\bigl|f'\bigl(a^{1/m}\bigr)\bigr|^q\bigr]^{1/q},\end{gathered}$$ where $L$ is defined by . Since $|f'(x)|^q$ is $(\alpha,m)$-GA-convex on $\bigl[0,\max\bigl\{a^{1/m},b\bigr\}\bigr]$, from Lemma \[lem1-2012-GA-Ji\] and Hölder inequality, we have $$\begin{aligned} &\quad\biggl|\frac{b^2f(b)-a^2f(a)}{2}-\int_a^bxf(x)\operatorname{d\mspace{-2mu}}x\biggr| \le \frac{\ln b-\ln a}{2}\int_0^1a^{3(1-t)}b^{3t}\bigl|f'\bigl(a^{1-t}b^t\bigr)\bigr|\operatorname{d\mspace{-2mu}}t \\ &\le \frac{\ln b-\ln a}{2}\biggl[\int_0^1a^{3q/(q-1)(1-t)}b^{3q/(q-1)t}\operatorname{d\mspace{-2mu}}t\biggr]^{1-1/q} \biggl[\int_0^1\Bigl|f'\Bigl(\bigl(a^{1/m}\bigr)^{m(1-t)}b^t\Bigr)\Bigr|^q\operatorname{d\mspace{-2mu}}t\biggr]^{1/q} \\ &\le \frac{\ln b-\ln a}{2}\biggl[\frac{b^{3q/(q-1)}-a^{3q/(q-1)}}{\ln b^{3q/(q-1)} -\ln a^{3q/(q-1)}}\biggr]^{1-1/q}\\ &\quad\times\biggl[\int_0^1\bigl(t^\alpha |f'(b)|^q+m(1-t^\alpha)\bigl|f'\bigl(a^{1/m}\bigr)\bigr|^q\bigr)\operatorname{d\mspace{-2mu}}t\biggr]^{1/q} \\ &= \frac{\ln b-\ln a}{2}\bigl[L\bigl(a^{3q/(q-1)},b^{3q/(q-1)}\bigr)\bigr]^{1-1/q} \biggl[\frac{1}{\alpha + 1}|f'(b)|^q + \frac{\alpha m}{\alpha+1}\bigl|f'\bigl(a^{1/m}\bigr)\bigr|^q\biggr]^{1/q}.\end{aligned}$$ The proof of Theorem \[thm2-2012-GA-Ji\] is
complete. \[cor-3.2-2012-GA-Ji\] Under the conditions of Theorem \[thm2-2012-GA-Ji\], if $\alpha=1$, then $$\begin{gathered} \biggl|\frac{b^2f(b)-a^2f(a)}2-\int_a^bxf(x)\operatorname{d\mspace{-2mu}}x\biggr| \le \frac{\ln b-\ln a} {2^{1+1/q}}\\ \times\bigl[L\bigl(a^{3q/(q-1)}, b^{3q/(q-1)}\bigr)\bigr]^{1-1/q}\bigl[|f'(b)|^q +m\bigl|f'\bigl(a^{1/m}\bigr)\bigr|^q\bigr]^{1/q}.\end{gathered}$$ \[thm3-1-2012-GA-Ji\] Let $f:\mathbb{R}_0\to\mathbb{R}$ be a differentiable function and $f'\in L([a,b])$ for $0<a<b<\infty $. If $|f'|^q$ is $(\alpha,m)$-GA-convex on $\bigl[0,\max\bigl\{a^{1/m},b\bigr\}\bigr]$ for $q>1$ and $(\alpha,m)\in (0,1]^2$, then $$\begin{gathered} \biggl|\frac{b^2f(b)-a^2f(a)}{2}-\int_a^bxf(x)\operatorname{d\mspace{-2mu}}x\biggr| \le \frac{\ln b-\ln a}{2} \\ \times\bigl\{m\bigl[L\bigl(a^{3q},b^{3q}\bigr)-G(\alpha,3q)\bigr] \bigl|f'\bigl(a^{1/m}\bigr)\bigr|^q+G(\alpha,3q)|f'(b)|^q\bigr\}^{1/q},\end{gathered}$$ where $G$ and $L$ are respectively defined by  and . Since $|f'(x)|^q$ is $(\alpha,m)$-GA-convex on $\bigl[0,\max\bigl\{a^{1/m},b\bigr\}\bigr]$, from Lemma \[lem1-2012-GA-Ji\] and Hölder inequality, we have $$\begin{aligned} &\quad\biggl|\frac{b^2f(b)-a^2f(a)}{2}-\int_a^bxf(x)\operatorname{d\mspace{-2mu}}x\biggr| \\ &\le\frac{\ln b-\ln a}{2}\biggl(\int_0^11\operatorname{d\mspace{-2mu}}t\biggr)^{1-1/q} \biggl[\int_0^1a^{3q(1-t)}b^{3qt}\Bigl|f'\Bigl(\bigl(a^{1/m}\bigr)^{m(1-t)}b^t\Bigr)\Bigr|^q\operatorname{d\mspace{-2mu}}t\biggr]^{1/q} \\ &\le\frac{\ln b-\ln a}{2}\bigl[mL\bigl(a^{3q},b^{3q}\bigr) \bigl|f'\bigl(a^{1/m}\bigr)\bigr|^q+G(\alpha,3q) \bigl(|f'(b)|^q-m\bigl|f'\bigl(a^{1/m}\bigr)\bigr|^q\bigr)\bigr]^{1/q}.\end{aligned}$$ The proof of Theorem \[thm3-1-2012-GA-Ji\] is complete. 
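With the same illustrative data as before ($f(x)=x^2$ on $[1,2]$, $q=2$, $\alpha=m=1$; our choice, not taken from the paper), the bounds of Theorems \[thm2-2012-GA-Ji\] and \[thm3-1-2012-GA-Ji\] can be spot-checked numerically:

```python
import numpy as np

def trap(y, x):
    # composite trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Illustrative choice (ours): f(x) = x^2 on [1, 2], q = 2, alpha = m = 1.
a, b, q, alpha, m = 1.0, 2.0, 2.0, 1.0, 1.0
f  = lambda x: x**2
fp = lambda x: 2.0 * x

def L(u, v):                                 # logarithmic mean
    return (v - u) / (np.log(v) - np.log(u))

t = np.linspace(0.0, 1.0, 100001)
x = np.linspace(a, b, 100001)
lhs = abs((b**2 * f(b) - a**2 * f(a)) / 2.0 - trap(x * f(x), x))   # = 3.75

# Theorem [thm2]: Hoelder exponent 3q/(q-1) = 6 here.
r = 3 * q / (q - 1)
rhs2 = (np.log(b / a) / 2.0 * (1 / (alpha + 1))**(1 / q)
        * L(a**r, b**r)**(1 - 1 / q)
        * (abs(fp(b))**q + alpha * m * abs(fp(a**(1 / m)))**q)**(1 / q))

# Theorem [thm3-1]: the weight a^{3q(1-t)} b^{3qt} stays under the q-th root.
G3q = trap(t**alpha * a**(3 * q * (1 - t)) * b**(3 * q * t), t)
rhs3 = (np.log(b / a) / 2.0
        * (m * (L(a**(3 * q), b**(3 * q)) - G3q) * abs(fp(a**(1 / m)))**q
           + G3q * abs(fp(b))**q)**(1 / q))

assert lhs <= rhs2 and lhs <= rhs3
```

Both bounds hold comfortably here (roughly $3.75 \le 4.27$ and $3.75 \le 4.92$); different weight placements in the Hölder step trade sharpness for simplicity.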
\[cor-3-1.3-2012-GA-Ji\] Under the conditions of Theorem \[thm3-1-2012-GA-Ji\], if $\alpha=1$, then $$\begin{gathered} \biggl|\frac{b^2f(b)-a^2f(a)}{2}-\int_a^bxf(x)\operatorname{d\mspace{-2mu}}x\biggr| \le\frac{(\ln b-\ln a)^{1-1/q}}{2}\biggl(\frac{1}{3q}\biggr)^{1/q} \\ \times \bigl\{m\bigl[L\bigl(a^{3q},b^{3q}\bigr)-a^{3q}\bigr]\bigl|f'\bigl(a^{1/m}\bigr)\bigr|^q + \bigl[b^{3q}-L\bigl(a^{3q},b^{3q}\bigr)\bigr]|f'(b)|^q\bigr\}^{1/q}.\end{gathered}$$ From $$G(1,3q) = \int_0^1 ta^{3q(1-t)}b^{3qt} \operatorname{d\mspace{-2mu}}t = \frac{b^{3q}-L\bigl(a^{3q},b^{3q}\bigr)}{\ln b^{3q}-\ln a^{3q}},$$ Corollary \[cor-3-1.3-2012-GA-Ji\] follows. \[thm3-2012-GA-Ji\] Let $f:\mathbb{R}_0\to\mathbb{R}$ be a differentiable function and $f'\in L([a,b])$ for $0<a<b<\infty $. If $|f'|^q$ is $(\alpha,m)$-GA-convex on $\bigl[0,\max\bigl\{a^{1/m},b\bigr\}\bigr]$ for $q>1$, $q> p>0$, and $(\alpha,m)\in (0,1]^2$, then $$\begin{gathered} \biggl|\frac{b^2f(b)-a^2f(a)}{2}-\int_a^bxf(x)\operatorname{d\mspace{-2mu}}x\biggr| \le \frac{\ln b-\ln a}{2}\bigl[L\bigl(a^{3(q-p)/(q-1)},b^{3(q-p)/(q-1)}\bigr)\bigr]^{1-1/q} \\ \times \bigl\{m\bigl[L\bigl(a^{3p},b^{3p}\bigr)-G(\alpha,3p)\bigr]\bigl|f'\bigl(a^{1/m}\bigr)\bigr|^q +G(\alpha,3p)|f'(b)|^q\bigr\}^{1/q},\end{gathered}$$ where $G$ and $L$ are respectively defined by  and . 
Since $|f'(x)|^q$ is $(\alpha,m)$-GA-convex on $\bigl[0,\max\bigl\{a^{1/m},b\bigr\}\bigr]$, from Lemma \[lem1-2012-GA-Ji\] and Hölder inequality, we have $$\begin{aligned} &\quad\biggl|\frac{b^2f(b)-a^2f(a)}{2}-\int_a^bxf(x)\operatorname{d\mspace{-2mu}}x\biggr| \\ &\le \frac{\ln b-\ln a}{2}\biggl[\int_0^1a^{3(q-p)/(q-1)(1-t)}b^{3(q-p)/(q-1)t}\operatorname{d\mspace{-2mu}}t\biggr]^{1-1/q} \\ &\quad\times\biggl[\int_0^1a^{3p(1-t)}b^{3pt} \Bigl|f'\Bigl(\bigl(a^{1/m}\bigr)^{m(1-t)}b^t\Bigr)\Bigr|^q\operatorname{d\mspace{-2mu}}t\biggr]^{1/q} \\ &\le \frac{\ln b-\ln a}{2}\biggl[\frac{b^{3(q-p)/(q-1)}-a^{3(q-p)/(q-1)}}{\ln b^{3(q-p)/(q-1)} -\ln a^{3(q-p)/(q-1)}}\biggr]^{1-1/q} \\ &\quad\times\biggl[\int_0^1a^{3p(1-t)}b^{3pt}\bigl(t^\alpha |f'(b)|^q +m(1-t^\alpha )\bigl|f'\bigl(a^{1/m}\bigr)\bigr|^q\bigr)\operatorname{d\mspace{-2mu}}t\biggr]^{1/q} \\ &=\frac{\ln b-\ln a}{2}\bigl[L\bigl(a^{3(q-p)/(q-1)},b^{3(q-p)/(q-1)}\bigr)\bigr]^{1-1/q} \\ &\quad\times \bigl[mL\bigl(a^{3p},b^{3p}\bigr)\bigl|f'\bigl(a^{1/m}\bigr)\bigr|^q+G(\alpha,3p) \bigl(|f'(b)|^q-m\bigl|f'\bigl(a^{1/m}\bigr)\bigr|^q\bigr)\bigr]^{1/q}.\end{aligned}$$ The proof of Theorem \[thm3-2012-GA-Ji\] is complete. \[cor-3.3-2012-GA-Ji\] Under the conditions of Theorem \[thm3-2012-GA-Ji\], if $\alpha=1$, then $$\begin{gathered} \biggl|\frac{b^2f(b)-a^2f(a)}{2}-\int_a^bxf(x)\operatorname{d\mspace{-2mu}}x\biggr| \le \frac{(\ln b-\ln a)^{1-1/q}}{2}\biggl(\frac{1}{3p}\biggr)^{1/q}\\ \times\bigl[L\bigl(a^{3(q-p)/(q-1)},b^{3(q-p)/(q-1)}\bigr)\bigr]^{1-1/q} \\ \times \bigl\{m\bigl[L\bigl(a^{3p},b^{3p}\bigr)-a^{3p}\bigr]\bigl|f'\bigl(a^{1/m}\bigr)\bigr|^q + \bigl[b^{3p}-L\bigl(a^{3p},b^{3p}\bigr)\bigr]|f'(b)|^q\bigr\}^{1/q}.\end{gathered}$$ By $$G(1,3p) = \int_0^1 ta^{3p(1-t)}b^{3pt} \operatorname{d\mspace{-2mu}}t = \frac{b^{3p}-L\bigl(a^{3p},b^{3p}\bigr)}{\ln b^{3p}-\ln a^{3p}},$$ Corollary \[cor-3.3-2012-GA-Ji\] can be proved easily. 
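Theorem \[thm3-2012-GA-Ji\] interpolates between the two preceding bounds through the parameter $p$. A numerical spot-check with the illustrative values $p=3/2$, $q=2$, $\alpha=m=1$ and $f(x)=x^2$ on $[1,2]$ (our choice, satisfying $q>p>0$):

```python
import numpy as np

def trap(y, x):
    # composite trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Illustrative parameters (ours): f(x) = x^2 on [1, 2], q = 2, p = 1.5,
# alpha = m = 1; then |f'|^q = 4x^2 is GA-convex and q > p > 0 holds.
a, b, q, p, alpha, m = 1.0, 2.0, 2.0, 1.5, 1.0, 1.0
f  = lambda x: x**2
fp = lambda x: 2.0 * x

def L(u, v):                       # logarithmic mean
    return (v - u) / (np.log(v) - np.log(u))

t = np.linspace(0.0, 1.0, 100001)

def G(al, ell):                    # G(alpha, ell) by quadrature
    return trap(t**al * a**(ell * (1 - t)) * b**(ell * t), t)

x = np.linspace(a, b, 100001)
lhs = abs((b**2 * f(b) - a**2 * f(a)) / 2.0 - trap(x * f(x), x))

r = 3 * (q - p) / (q - 1)          # exponent carried by the Hoelder factor
rhs = (np.log(b / a) / 2.0 * L(a**r, b**r)**(1 - 1/q)
       * (m * (L(a**(3*p), b**(3*p)) - G(alpha, 3*p)) * abs(fp(a**(1/m)))**q
          + G(alpha, 3*p) * abs(fp(b))**q)**(1/q))

assert lhs <= rhs
```

For $p=1$ the bound reduces to that of Theorem \[thm1-2012-GA-Ji\]; here it evaluates to roughly $3.75 \le 4.31$.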
\[thm4-2012-GA-Ji\] Let $f, g:\mathbb{R}_0\to\mathbb{R}_0$ and $fg\in L([a,b])$ for $0<a<b<\infty $. If $f^q(x)$ is $(\alpha_1 ,m_1)$-GA-convex on $\bigl[0,\max\bigl\{a^{1/m_1},b\bigr\}\bigr]$ and $g^q(x)$ is $(\alpha_2,m_2)$-GA-convex on $\bigl[0,\max\bigl\{a^{1/m_2},b\bigr\}\bigr]$ for $q\ge 1$, $(\alpha_1,m_1)$, and $(\alpha_2,m_2)\in (0,1]^2$, then $$\begin{gathered} \int_a^bf(x)g(x)\operatorname{d\mspace{-2mu}}x \le (\ln b-\ln a)[L(a,b)]^{1-1/q}\bigl\{m_1m_2[L(a,b) -G(\alpha_1,1)-G(\alpha_2,1)\\ +G(\alpha_1+\alpha_2,1)]f^q \bigl(a^{1/m_1}\bigr)g^q \bigl(a^{1/m_2}\bigr) +m_1[G(\alpha_2,1)-G(\alpha_1+\alpha_2,1)]f^q\bigl(a^{1/m_1}\bigr)g^q(b)\\ + m_2 [G(\alpha_1,1)-G(\alpha_1+\alpha_2 ,1)]f^q(b)g^q \bigl(a^{1/m_2}\bigr) +G(\alpha_1+\alpha_2,1)f^q(b)g^q(b)\bigr\}^{1/q},\end{gathered}$$ where $G$ and $L$ are respectively defined by  and . Using the $(\alpha_1 ,m_1)$-GA-convexity of $f^q(x)$ and the $(\alpha_2,m_2)$-GA-convexity of $g^q(x)$, we have $$f^q\bigl(a^{1-t}b^t\bigr)\le t^{\alpha_1}f^q(b)+m_1(1-t^{\alpha_1})f^q\bigl(a^{1/m_1}\bigr)$$ and $$g^q\bigl(a^{1-t}b^t\bigr)\le t^{\alpha_2}g^q(b)+m_2(1-t^{\alpha_2})g^q\bigl(a^{1/m_2}\bigr)$$ for $0\le t\le 1$. 
Letting $x=a^{1-t}b^t$ for $0\le t\le 1$ and using Hölder’s inequality yields $$\begin{aligned} &\quad\int_a^bf(x)g(x)\operatorname{d\mspace{-2mu}}x =(\ln b-\ln a)\int_0^1a^{1-t}b^tf\bigl(a^{1-t}b^t\bigr)g\bigl(a^{1-t}b^t\bigr)\operatorname{d\mspace{-2mu}}t \\ &\le (\ln b-\ln a)\biggl(\int_0^1a^{1-t}b^t\operatorname{d\mspace{-2mu}}t\biggr)^{1-1/q} \biggl\{\int_0^1a^{1-t}b^t\bigl[f\bigl(a^{1-t}b^t\bigr)g\bigl(a^{1-t}b^t\bigr)\bigr]^q\operatorname{d\mspace{-2mu}}t\biggr\}^{1/q}\\ &\le (\ln b-\ln a)\biggl(\int_0^1a^{1-t}b^t\operatorname{d\mspace{-2mu}}t\biggr)^{1-1/q}\biggl\{\int_0^1a^{1-t}b^t\bigl[t^{\alpha _1}f^q(b)\\ &\quad+m_1(1-t^{\alpha_1})f^q\bigl(a^{1/m_1}\bigr)\bigr] \bigl[t^{\alpha_2}g^q(b)+m_2(1-t^{\alpha_2})g^q\bigl(a^{1/m_2}\bigr)\bigr]\operatorname{d\mspace{-2mu}}t\biggr\}^{1/q}\\ &=(\ln b-\ln a)[L(a,b)]^{1- 1/q}\biggl\{\int_0^1a^{1-t}b^t\bigl[t^{\alpha_1+\alpha_2}f^q(b)g^q(b)\\ &\quad+m_1 t^{\alpha_2 }(1-t^{\alpha_1 })f^q\bigl(a^{1/m_1}\bigr)g^q(b) + m_2t^{\alpha_1} (1-t^{\alpha _2 })f^q(b)g^q\bigl(a^{1/m_2}\bigr)\\ &\quad+ m_1m_2(1-t^{\alpha_1})(1-t^{\alpha _2})f^q\bigl(a^{1/m_1}\bigr)g^q\bigl(a^{1/m_2}\bigr)\bigr]\operatorname{d\mspace{-2mu}}t \biggr\}^{1/q} \\ &=(\ln b-\ln a)[L(a,b)]^{1-1/q}\bigl\{m_1m_2[L(a,b)-G(\alpha_1,1) \\ &\quad-G(\alpha_2,1)+G(\alpha_1+\alpha_2,1)]f^q\bigl(a^{1/m_1}\bigr)g^q\bigl(a^{1/m_2}\bigr)\\ &\quad +m_1[G(\alpha_2,1)-G(\alpha_1+\alpha_2,1)]f^q\bigl(a^{1/m_1}\bigr)g^q(b)\\ &\quad+ m_2 [G(\alpha_1,1)-G(\alpha_1+\alpha_2 ,1)]f^q(b)g^q\bigl(a^{1/m_2}\bigr) +G(\alpha_1+\alpha_2,1)f^q(b)g^q(b)\bigr\}^{1/q}.\end{aligned}$$ The proof of Theorem \[thm4-2012-GA-Ji\] is complete. \[cor-3.4-1-2012-GA-Ji-cor\] Under the conditions of Theorem \[thm4-2012-GA-Ji\], 1.
if $q=1$, then $$\begin{gathered} \int_a^bf(x)g(x)\operatorname{d\mspace{-2mu}}x \le(\ln b-\ln a)\bigl\{m_1m_2[L(a,b)-G(\alpha_1,1)-G(\alpha_2,1)\\ +G(\alpha_1+\alpha_2,1)]f\bigl(a^{1/m_1}\bigr)g\bigl(a^{1/m_2}\bigr) +m_1[G(\alpha_2,1)-G(\alpha_1+\alpha_2,1)]f\bigl(a^{1/m_1}\bigr)g(b) \\ + m_2 [G(\alpha_1,1)-G(\alpha_1+\alpha_2 ,1)]f(b)g\bigl(a^{1/m_2}\bigr) +G(\alpha_1+\alpha_2,1)f(b)g(b) \bigr\},\end{gathered}$$ 2. if $q=1$ and $\alpha_1=\alpha_2=m_1=m_2=1$, then $$\begin{gathered} \int_a^bf(x)g(x)\operatorname{d\mspace{-2mu}}x\le\frac{1}{\ln b-\ln a}\{[2L(a,b)-a(\ln b-\ln a)-2a]f(a)g(a)+[a+b\\ - 2L(a,b)][f(a)g(b)+f(b)g(a)]+[2L(a,b)+b(\ln b-\ln a)-2b]f(b)g(b)\},\end{gathered}$$ 3. if $\alpha_1=\alpha_2=m_1=m_2=1$, then $$\begin{gathered} \int_a^bf(x)g(x)\operatorname{d\mspace{-2mu}}x\le \frac{[L(a,b)]^{1-1/q}}{(\ln b-\ln a)^{2/q-1}}\bigl\{[2L(a,b)-a(\ln b-\ln a)-2a]f^q(a)g^q(a)\\ +[a+b- 2L(a,b)][f^q(a)g^q(b)+f^q(b)g^q(a)]\\ +[2L(a,b)+b(\ln b-\ln a)-2b]f^q(b)g^q(b)\bigr\}^{1/q}.\end{gathered}$$ \[thm5-2012-GA-Ji\] Let $f, g:\mathbb{R}_0\to\mathbb{R}_0$ and $fg\in L([a,b])$ for $0<a<b<\infty $. If $f^q(x)$ is $(\alpha_1 ,m_1)$-GA-convex on $\bigl[0,\max\bigl\{a^{1/m_1},b\bigr\}\bigr]$ and $g^{q/(q-1)}(x)$ is $(\alpha_2,m_2)$-GA-convex on $\bigl[0,\max\bigl\{a^{1/m_2},b\bigr\}\bigr]$ for $q>1$, $(\alpha_1,m_1)$, and $(\alpha_2,m_2)\in (0,1]^2$, then $$\begin{gathered} \label{thm5-2012-GA-Ji-1-eq} \int_a^bf(x)g(x)\operatorname{d\mspace{-2mu}}x \le (\ln b-\ln a)\bigl\{m_1f^q\bigl(a^{1/m_1}\bigr)L(a,b)\\ + G(\alpha_1,1)\bigl[f^q(b)-m_1f^q\bigl(a^{1/ m_1}\bigr)\bigr]\bigr\}^{1 /q}\bigl\{m_2g^{q/(q-1)}\bigl(a^{1/m_2}\bigr)L(a,b)\\ + G(\alpha _2 ,1)\bigl[g^{q/(q-1)}(b) -m_2 g^{q / (q-1)}\bigl(a^{1/m_2}\bigr)\bigr]\bigr\}^{1-1/q},\end{gathered}$$ where $G$ and $L$ are respectively defined by  and .
By the $(\alpha_1,m_1)$-GA-convexity of $f^q(x)$ and the $(\alpha_2,m_2)$-GA-convexity of $g^{q/(q-1)}(x)$, we have $$f^q\bigl(a^{1-t}b^t\bigr)\le t^{\alpha_1}f^q(b)+m_1(1-t^{\alpha_1})f^q\bigl(a^{1/m_1}\bigr)$$ and $$g^{q/(q-1)}\bigl(a^{1-t}b^t\bigr)\le t^{\alpha_2}g^{q/(q-1)}(b)+m_2(1-t^{\alpha_2})g^{q/(q-1)}\bigl(a^{1/m_2}\bigr)$$ for $t\in [0,1]$. Letting $x=a^{1-t}b^t$ for $0\le t\le 1$ and employing Hölder’s inequality yield $$\begin{aligned} &\quad\int_a^bf(x)g(x)\operatorname{d\mspace{-2mu}}x\le \biggl[\int_a^bf^q(x)\operatorname{d\mspace{-2mu}}x\biggr]^{1/q} \biggl[\int_a^b g^{q/(q-1)}(x)\operatorname{d\mspace{-2mu}}x \biggr]^{1-1/q}\\ &=(\ln b-\ln a)\biggl[\int_0^1a^{1-t}b^tf^q\bigl(a^{1-t}b^t\bigr)\operatorname{d\mspace{-2mu}}t \biggr]^{1 /q} \biggl[\int_0^1a^{1-t}b^tg^{q/(q-1)}\bigl(a^{1-t}b^t\bigr)\operatorname{d\mspace{-2mu}}t\biggr]^{1-1/q} \\ &\le (\ln b-\ln a)\biggl[\int_0^1 a^{1-t}b^t\bigl[t^{\alpha_1} f^q(b)+ m_1(1-t^{\alpha_1} )f^q\bigl(a^{1/m_1}\bigr)\bigr]\operatorname{d\mspace{-2mu}}t\biggr]^{1/q} \\ &\quad\times \biggl[\int_0^1a^{1-t}b^t\bigl[t^{\alpha_2}g^{q/(q-1)}(b)+ m_2(1-t^{\alpha_2})g^{q/(q-1)}\bigl(a^{1/m_2}\bigr)\bigr]\operatorname{d\mspace{-2mu}}t \biggr]^{1-1/q} \\ &=(\ln b-\ln a)\bigl\{m_1f^q\bigl(a^{1/m_1}\bigr)L(a,b)+G(\alpha_1,1)\bigl[f^q(b)- m_1f^q\bigl(a^{1/m_1}\bigr)\bigr]\bigr\}^{1/q}\\ &\quad\times \bigl\{m_2 g^{q/(q-1)}\bigl(a^{1/m_2}\bigr)L(a,b)\\ &\quad+G(\alpha _2 ,1) \bigl[g^{q/(q-1)}(b)-m_2g^{q/(q-1)}\bigl(a^{1/m_2}\bigr)\bigr]\bigr\}^{1- 1/q}.\end{aligned}$$ The proof of Theorem \[thm5-2012-GA-Ji\] is complete. 
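The product bound of Theorem \[thm5-2012-GA-Ji\] can also be verified on sample data of our own choosing: taking $f(x)=g(x)=x$ on $[1,2]$ with $q=2$, both $f^q=x^2$ and $g^{q/(q-1)}=x^2$ are GA-convex with $\alpha_1=\alpha_2=m_1=m_2=1$. A sketch:

```python
import numpy as np

def trap(y, x):
    # composite trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Illustrative choice (ours): f(x) = g(x) = x on [1, 2] with q = 2, so
# f^q = x^2 and g^{q/(q-1)} = x^2 are both GA-convex (alpha_i = m_i = 1).
a, b, q = 1.0, 2.0, 2.0
al1 = al2 = m1 = m2 = 1.0
f = g = lambda x: x

x = np.linspace(a, b, 100001)
lhs = trap(f(x) * g(x), x)                       # = 7/3

t = np.linspace(0.0, 1.0, 100001)
Lab = (b - a) / np.log(b / a)                    # L(a, b)
G1 = trap(t**al1 * a**(1 - t) * b**t, t)         # G(alpha_1, 1)
G2 = trap(t**al2 * a**(1 - t) * b**t, t)         # G(alpha_2, 1)

term_f = m1 * f(a**(1/m1))**q * Lab + G1 * (f(b)**q - m1 * f(a**(1/m1))**q)
term_g = (m2 * g(a**(1/m2))**(q/(q-1)) * Lab
          + G2 * (g(b)**(q/(q-1)) - m2 * g(a**(1/m2))**(q/(q-1))))

rhs = np.log(b / a) * term_f**(1/q) * term_g**(1 - 1/q)
assert lhs <= rhs
```

Here the left-hand side is $7/3 \approx 2.33$ against a bound of roughly $2.67$; the same numbers are reproduced by the $\alpha_i=m_i=1$ corollary below Theorem \[thm5-2012-GA-Ji\].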
\[cor-3.5-2012-GA-Ji\] Under the conditions of Theorem \[thm5-2012-GA-Ji\], if $\alpha_1=\alpha_2=m_1=m_2=1$, then $$\begin{gathered} \int_a^bf(x)g(x)\operatorname{d\mspace{-2mu}}x\le \{f^q(a)[L(a,b)-a]+[b-L(a,b)]f^q(b)\}^{1/q} \\ \times\bigl\{g^{q/(q-1)}(a)[L(a,b)-a]+[b-L(a,b)]g^{q/(q-1)}(b)\bigr\}^{1-1/q}.\end{gathered}$$ \[thm6-2012-GA-Ji\] Let $f, g:\mathbb{R}_0\to\mathbb{R}_0$ and $fg\in L([a,b])$ for $0<a<b<\infty $. If $f(x)$ is $(\alpha_1 ,m_1)$-GA-concave on $\bigl[0,\max\bigl\{a^{1/m_1},b\bigr\}\bigr]$ and $g(x)$ is $(\alpha_2,m_2)$-GA-concave on $\bigl[0,\max\bigl\{a^{1/m_2},b\bigr\}\bigr]$ for $(\alpha_1,m_1)\in (0,1]^2$ and $(\alpha_2,m_2)\in (0,1]^2$, then $$\begin{gathered} \label{thm5-2012-GA-Ji-2} \int_a^bf(x)g(x)\operatorname{d\mspace{-2mu}}x \ge(\ln b-\ln a)\bigl\{m_1m_2[L(a,b)-G(\alpha_1 ,1)-G(\alpha_2,1) \\ +G(\alpha_1+\alpha_2 ,1)]f\bigl(a^{1/m_1}\bigr)g\bigl(a^{1/m_2}\bigr) +m_1[G(\alpha_2,1)-G(\alpha_1+\alpha_2,1)]f\bigl(a^{1/m_1}\bigr)g(b) \\ +m_2[G(\alpha_1,1)-G(\alpha_1+\alpha_2,1)]f(b)g\bigl(a^{1/m_2}\bigr)+G(\alpha_1+\alpha_2,1)f(b)g(b)\bigr\},\end{gathered}$$ where $G$ and $L$ are respectively defined by  and . Since $f(x)$ is $(\alpha_1 ,m_1)$-GA-concave on $\bigl[0,\max\bigl\{a^{1/m_1},b\bigr\}\bigr]$ and $g(x)$ is $(\alpha_2,m_2)$-GA-concave on $\bigl[0,\max\bigl\{a^{1/m_2},b\bigr\}\bigr]$, we have $$\begin{aligned} &f\bigl(a^{1-t}b^t\bigr)\ge t^{\alpha_1}f(b)+m_1(1-t^{\alpha_1})f\bigl(a^{1/m_1}\bigr)\end{aligned}$$ and $$\begin{aligned} &g\bigl(a^{1-t}b^t\bigr)\ge t^{\alpha_2}g(b)+m_2(1-t^{\alpha_2})g\bigl(a^{1/m_2}\bigr)\end{aligned}$$ for $t\in [0,1]$. 
Further letting $x=a^{1-t}b^t$ for $0\le t\le 1$ and utilizing the two estimates above reveal $$\begin{aligned} &\quad\int_a^bf(x)g(x)\operatorname{d\mspace{-2mu}}x=(\ln b-\ln a) \int_0^1a^{1-t}b^tf\bigl(a^{1-t}b^t\bigr)g\bigl(a^{1-t}b^t\bigr)\operatorname{d\mspace{-2mu}}t \\ & \ge (\ln b-\ln a)\biggl\{\int_0^1a^{1-t}b^t\bigl[t^{\alpha _1}f(b) + m_1(1-t^{\alpha_1} ) f\bigl(a^{1/m_1}\bigr)\bigr]\\ &\quad\times\bigl[t^{\alpha_2 }g(b) + m_2(1-t^{\alpha_2 })g\bigl(a^{1/m_2}\bigr)\bigr]\operatorname{d\mspace{-2mu}}t\biggr\}\\ &=(\ln b-\ln a)\int_0^1a^{1-t}b^t\bigl[t^{\alpha_1+\alpha _2} f(b)g(b)+m_1(1-t^{\alpha_1})t^{\alpha_2}f\bigl(a^{1/m_1}\bigr)g(b) \\ &\quad+ m_2 t^{\alpha_1}(1-t^{\alpha _2})f(b)g\bigl(a^{1/m_2}\bigr) + m_1m_2(1-t^{\alpha_1})(1-t^{\alpha_2})g\bigl(a^{1/m_2}\bigr)f\bigl(a^{1/m_1}\bigr)\bigr]\operatorname{d\mspace{-2mu}}t \\ & =(\ln b-\ln a)\bigl\{m_1m_2[L(a,b)-G(\alpha_1 ,1)-G(\alpha_2,1)\\ &\quad+G(\alpha_1+\alpha_2 ,1)]f\bigl(a^{1/m_1}\bigr)g\bigl(a^{1/m_2}\bigr) +m_1[G(\alpha_2,1)-G(\alpha_1+\alpha_2,1)]f\bigl(a^{1/m_1}\bigr)g(b)\\ &\quad+m_2[G(\alpha_1,1)-G(\alpha_1+\alpha_2,1)]f(b)g\bigl(a^{1/m_2}\bigr)+G(\alpha_1+\alpha_2,1)f(b)g(b)\bigr\}.\end{aligned}$$ The proof of Theorem \[thm6-2012-GA-Ji\] is complete. \[cor-3.6-2012-GA-Ji\] Under the conditions of Theorem \[thm6-2012-GA-Ji\], if $\alpha_1=\alpha_2=m_1=m_2=1$, then $$\begin{gathered} \int_a^bf(x)g(x)\operatorname{d\mspace{-2mu}}x\ge\frac{1}{\ln b-\ln a}\{[2L(a,b)-a(\ln b -\ln a)-2a]f(a)g(a)+[a+b\\ -2L(a,b)][f(a)g(b)+f(b)g(a)]+[2L(a,b)+b(\ln b-\ln a)-2b]f(b)g(b)\}.\end{gathered}$$ [99]{} R.-F. Bai, F. Qi, and B.-Y. Xi, *Hermite-Hadamard type inequalities for the $m$- and $(\alpha,m)$-logarithmically convex functions*, Filomat **27** (2013), no. 1, 1–7. S.-P. Bai, S.-H. Wang, and F. Qi, *Some Hermite-Hadamard type inequalities for $n$-time differentiable $(\alpha,m)$-convex functions*, J. Inequal. Appl. 2012, **2012**:267, 11 pages; Available online at <http://dx.doi.org/10.1186/1029-242X-2012-267>. 
M. K. Bakula, M. E. Özdemir, and J. Pečarić, *Hadamard type inequalities for $m$-convex and $(\alpha,m)$-convex functions*, J. Inequal. Pure Appl. Math. **9** (2008), no. 4, Art. 96, 12 pages; Available online at <http://www.emis.de/journals/JIPAM/article1032.html>. S. S. Dragomir and R. P. Agarwal, *Two inequalities for differentiable mappings and applications to special means of real numbers and to trapezoidal formula*, Appl. Math. Lett. **11** (1998), no. 5, 91–95; Available online at <http://dx.doi.org/10.1016/S0893-9659(98)00086-X>. S. S. Dragomir and C. E. M. Pearce, *Selected Topics on Hermite-Hadamard Type Inequalities and Applications*, RGMIA Monographs, Victoria University, 2000; Available online at <http://rgmia.org/monographs/hermite_hadamard.html>. U. S. Kirmaci, *Inequalities for differentiable mappings and applications to special means of real numbers to midpoint formula*, Appl. Math. Comput. **147** (2004), no. 1, 137–146; Available online at <http://dx.doi.org/10.1016/S0096-3003(02)00657-4>. U. S. Kirmaci, M. K. Bakula, M. E. Özdemir, and J. Pečarić, *Hadamard-type inequalities for $s$-convex functions*, Appl. Math. Comput. **193** (2007), no. 1, 26–35; Available online at <http://dx.doi.org/10.1016/j.amc.2007.03.030>. V. G. Miheşan, *A generalization of the convexity*, Seminar on Functional Equations, Approx. Convex, Cluj-Napoca, 1993. (Romania) C. P. Niculescu, *Convexity according to the geometric mean*, Math. Inequal. Appl. **3** (2000), no. 2, 155–167; Available online at <http://dx.doi.org/10.7153/mia-03-19>. C. P. Niculescu, *Convexity according to means*, Math. Inequal. Appl. **6** (2003), no. 4, 571–579; Available online at <http://dx.doi.org/10.7153/mia-06-53>. G. Toader, *Some generalizations of the convexity*, Proc. Colloq. Approx. Optim., Univ. Cluj-Napoca, Cluj-Napoca, 1985, 329–338. S.-H. Wang, B.-Y. Xi, and F. Qi, *On Hermite-Hadamard type inequalities for $(\alpha,m)$-convex functions*, Int. J. Open Probl. Comput. Sci. Math. 
**5** (2012), no. 4, 47–56. B.-Y. Xi, R.-F. Bai, and F. Qi, *Hermite-Hadamard type inequalities for the $m$- and $(\alpha,m)$-geometrically convex functions*, Aequationes Math. **84** (2012), no. 3, 261–269; Available online at <http://dx.doi.org/10.1007/s00010-011-0114-x>. [^1]: This work was partially supported by the Foundation of the Research Program of Science and Technology at Universities of Inner Mongolia Autonomous Region under grant number NJZY13159, China [^2]: This paper was typeset using -LaTeX
--- abstract: 'Dirac’s large number hypothesis is motivated by certain scaling transformations that relate the parameters of macro and microphysics. We show that these relations can actually be explained in terms of the holographic $N$ bound conjectured by Bousso and a series of purely cosmological observations, namely, that our universe is spatially homogeneous, isotropic, and flat to a high degree of approximation and that the cosmological constant dominates the energy density at present.' address: | $^1$ I.M.A.F.F., C.S.I.C., Serrano 121, 28006 Madrid, Spain\ $^2$ Instituto de Física, Universidade Federal da Bahia, 40210-340, Salvador, Bahia, Brazil author: - 'Guillermo A. Mena Marugán$^1$ and Saulo Carneiro$^{1,2}$' title: Holography and the Large Number Hypothesis --- Explaining the value of the constants of nature is one of the most exciting challenges of theoretical physics. Some of these constants play a fundamental role in the foundations of the scientific paradigms. This is the case of the Planck constant $\hbar$ in quantum mechanics, and of the Newton constant $G$ and the speed of light $c$ in general relativity. These three constants provide a natural system of units for all physical quantities. For instance, the length and mass units are $l_P=\sqrt{\hbar G/c^{3}}=1.6\times10^{-35}$ m and $m_P=\sqrt{\hbar c/G}= 2.2\times 10^{-8}$ kg. In terms of these Planck units, the other constants of nature become dimensionless numbers. Already in the 1920s, Eddington tried unsuccessfully to deduce the value of all the constants of physics from theoretical considerations [@Ed]. Most importantly, he pointed out the existence of relations between the parameters of fields that at first sight seem unconnected, such as nuclear physics and cosmology. 
Among these, perhaps the most intriguing relation is the apparent coincidence between the present number of baryons in the universe, known as the Eddington number, and the squared ratio of the electric to the gravitational force between the proton and the electron. This coincidence between large numbers can also be expressed in the alternative form [@Weinberg] $$\label{rel} \hbar^2H_0\approx Gcm_N^3.$$ This approximate identity is sometimes called the Eddington-Weinberg relation. Here, $m_N$ is the proton mass and $H_0\approx 70$ km/(s$\,$Mpc) is the present value of the Hubble constant [@Hubble]. Actually, the Hubble parameter is not a true constant, but varies as the inverse of the cosmological time in standard Friedmann-Robertson-Walker (FRW) cosmology [@Weinberg]. This fact led Dirac [@Dirac] to put forward the hypothesis that the Newton constant must depend on time in the same way as $H_0$, namely $G\propto t^{-1}$, so that relation (\[rel\]) remains valid at all times. In spite of its attractive features, Dirac’s large number hypothesis turns out to be incompatible with the experimental bounds that exist on the time variation of $G$ [@sol]. Therefore, the explanation of the Eddington-Weinberg relation still remains a mystery. Recently, the determination of cosmological parameters has experienced a considerable revolution. The observation of type Ia supernovae (SNe Ia) at high redshift has provided evidence in favor of a positive cosmological constant [@sne]. In addition, accurate measurements of the angular power spectrum of anisotropies in the cosmic microwave background (CMB) have shown that the universe is close to spatially flat [@CMB]. These CMB and SNe Ia data, together with other cosmological information, have been combined in a consistent (nearly) flat FRW model whose values of the cosmological constant $\Lambda$ and matter density $\rho_0$ satisfy, approximately, $c^2 \Lambda= 16 \pi G \rho_0= 2 H_0^2$ [@conc]. This value of the cosmological constant poses two puzzles. 
On the one hand, one would expect that $\Lambda$ emerged from vacuum fluctuations. In a theory of quantum gravity, these fluctuations would have Planck energy density. The discrepancy from this theoretical expectation is of nearly 120 orders of magnitude, since, in Planck units, $H_0\approx 10^{-60}$. This is the so-called cosmological constant problem [@ccp]. On the other hand, the value of $\Lambda$ is constant, whereas the density of matter decreases with expansion. As a consequence, the relation $16\pi G\rho_0\approx c^2 \Lambda$ is not valid in most of the history of the universe. Why is it precisely now that the matter content and $\Lambda$ provide similar contributions to the energy? This additional puzzle is known as the cosmic coincidence problem [@coinc]. A new perspective on the cosmological constant problem, which puts the emphasis on fundamental aspects of gravity rather than on purely quantum field theory (QFT) considerations, has recently emerged with the advent of holography [@holo]. In an over-simplified version, the holographic principle states that the entropy $S$ [@entro] of a physical system subject to gravity is bounded from above by a quarter of its boundary area in Planck units, $S\leq A/(4l_P^{2})$. From this point of view, the physical degrees of freedom are not proportional to the volume in the presence of the gravitational field, but reside in the bounding surface. A more rigorous, covariant formulation of the holographic conjecture has been elaborated by Bousso, providing in principle an entropy bound on null hypersurfaces [@bo; @Bousso]. Other less general holographic proposals that find straightforward application to spatial volumes in cosmology have also been suggested [@FS; @BR; @H]. In this respect, an issue of debate has been the largest region of the universe in which an entropy bound may be feasible. 
Fischler and Susskind [@FS] originally proposed to consider the particle horizon, at least for adiabatic evolution, but other possibilities that appear more natural were soon suggested. One such possibility is the use of the cosmological apparent horizon, which bounds an anti-trapped region and has an associated notion of gravitational entropy [@bo; @BR]. Another proposal that has found considerable support is the restriction to the Hubble radius $cH_0^{-1}$ [@H], since this supplies the scale of causal connection beyond which gravitational perturbations on a flat background cannot grow with time. It is worth noting, anyway, that for a flat FRW model like the one that possibly describes our universe, the apparent and Hubble horizons do in fact coincide [@BR]. For any spacetime with a positive cosmological constant, Bousso [@Nbound] has argued that the holographic principle leads to the prediction that the number of degrees of freedom $N$ available in the universe is related to $\Lambda$ by $$\label{Nbound} N=\frac{3\pi}{\Lambda l_P^2\ln{2}}.$$ The observable entropy $S$ is then bounded by $N\ln{2}$. This conjecture is called the $N$ bound. Under quantization, the system would be describable by a Hilbert space of finite dimension (equal to $2^N$). Bousso’s conjecture is largely influenced by Banks’ ideas about the cosmological constant [@Banks]. According to Banks, $\Lambda$ should not be considered a parameter of the theory; rather, it is determined by the inverse of the number of degrees of freedom. From this viewpoint, the cosmological constant problem disappears, because $N$ can be regarded as part of the data that describe the system at a fundamental level. Based also on holography, other possible explanations have been proposed for the value of $\Lambda$ that are closer in spirit to the standard methods of QFT [@CT]. 
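To fix orders of magnitude, Bousso's $N$ bound can be evaluated numerically. The sketch below uses rounded SI reference values for the constants and infers $\Lambda$ from the concordance relation $c^2\Lambda\approx 2H_0^2$ quoted above; these numerical inputs are assumptions of the sketch, not values given in the text.

```python
import math

# Order-of-magnitude evaluation of Bousso's N bound,
#   N = 3*pi / (Lambda * l_P^2 * ln 2),
# with Lambda inferred from the concordance relation c^2 * Lambda ~ 2 * H_0^2.
# The SI constants below are rounded reference values (assumptions of this sketch).
c = 2.998e8            # speed of light, m/s
l_P = 1.616e-35        # Planck length, m
H0 = 70e3 / 3.086e22   # 70 km/s/Mpc converted to 1/s

Lam = 2 * H0**2 / c**2                          # ~1.1e-52 m^-2
N = 3 * math.pi / (Lam * l_P**2 * math.log(2))  # ~5e122
print(f"N ~ {N:.1e} degrees of freedom")
print(f"entropy bound N*ln2 ~ {N * math.log(2):.1e}")
```

The result, $N\sim 10^{122}$, gives the familiar holographic entropy bound $S\lesssim N\ln 2\sim 10^{122}$ for our universe.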
Since the cosmological constant affects the large scale structure of the universe but should originate from effective local vacuum fluctuations, it may provide a natural connection between macro and microphysics. In addition, $\Lambda$ is related to the number of degrees of freedom by the holographic principle. As a consequence, one could expect that holography would play a fundamental role in explaining the coincidence of the large numbers arising in cosmology and particle physics. A first indication that this intuition may work is provided by the work of Zizzi [@Zizzi], who recovered the Eddington number starting from a discrete quantum model for the early universe that saturates the holographic bound. The main aim of the present paper is to prove that the large number hypothesis and the holographic conjecture are in fact not fully independent. To be more precise, we will show that, in a homogeneous, isotropic, and (quasi)flat universe like ours, the relations between large numbers can be explained by the holographic principle assuming that the present energy density is nearly dominated by $\Lambda$. The scaling relations that lie behind the large number hypothesis can be expressed in the form $$\begin{aligned} \label{lE} l_N&\approx&\Omega l_P,\\ \label{mE} m_N&\approx& \Omega^{-1} m_P,\\ \label{lU} l_U\equiv cH_0^{-1}&\approx &\Omega^3 l_P,\\ \label{mU} m_U&\approx& \Omega^3 m_P.\end{aligned}$$ The scale $\Omega$ has the value $10^{19}$–$10^{20}$. Here, $m_N$ and $l_N$ are the mass and radius of a nucleon, e.g. the proton. The symbol $l_U$ denotes the observable radius of the universe, which we define as the distance that light can travel in a Hubble time $H_0^{-1}$. This time is roughly the age of our universe. Finally, the mass of the universe $m_U$ is the energy contained in a spatial region of radius $l_U$. In fact, relations (\[lE\]) and (\[mE\]) are not independent. 
For an elementary particle governed by quantum mechanics, the typical effective size should be of the order of its Compton wavelength, $l_N\approx \hbar/(cm_N)$. It therefore suffices to explain, for instance, why $m_Pm_N^{-1}$ is of order $\Omega$. Something similar happens with the scaling laws (\[lU\]) and (\[mU\]). Assuming homogeneity and isotropy, $m_U$ is defined as $4\pi l_U^3\rho^T_0/3$. Here, $\rho^T_0\equiv\rho_0+c^2 \Lambda/(8\pi G)$ is the total energy density. Hence, given the relation between $l_U$ and $l_P$, formula (\[mU\]) amounts to the approximate equality $\rho_0^T\approx\rho_0^C$, where $\rho_0^C\equiv 3 H_0^2/(8\pi G)$ is the critical density of a FRW model at present. In a universe like ours, the scaling equation for $m_U$ is thus a consequence of Eq. (\[lU\]) and spatial flatness. Examining relations (\[lE\])–(\[mU\]), a length scale $l_S$ of order $\Omega^2$ in Planck units appears to be missing. Roughly, this scale corresponds to the size of stellar gravitational collapse determined by the Chandrasekhar limit (or any other similar mass limit) [@FPL]. Actually, for such stellar-mass black holes, the formulas for the Schwarzschild radius and the Chandrasekhar mass [@Weinberg] lead to $$\label{bh} l_S\approx \Omega^2 l_P,\hspace*{.8cm} m_S\approx \Omega^2 m_P.$$ At this stage of our discussion, the only scaling laws that remain unexplained are relations (\[mE\]) and (\[lU\]). In fact, one of these approximate identities can be viewed as the definition of $\Omega$, e.g. the equation for $l_U$. The appearance of large numbers in our relations may then be understood, following Dirac [@Dirac], as a purely cosmological issue. Since $H_0^{-1}$ is essentially the age of the universe, the fact that $\Omega\gg 1$ is just a consequence of the universe being so old. In addition, it is easy to check that, given formula (\[lU\]), the scaling transformation for $m_N$ is equivalent to Eq. (\[rel\]). 
Therefore, the only coincidence of large numbers that needs explanation is the Eddington-Weinberg relation. Suppose now that nucleons (or hadronic particles in general) can be described as elementary excitations of typical size $l_N$ in an effective quantum theory. The number of physical degrees of freedom in a spatial region of volume $V$ will be of the order of $3V/(4\pi l_N^3)$. In a cosmological setting, it seems natural to consider the Hubble radius as the largest size of the region in which such an effective quantum description of particles may exist, because it provides the scale of causal connection where the microphysical interactions take place. For a homogeneous and isotropic universe with negligible curvature, like the one we inhabit, the FRW equations imply that $8\pi G\rho_0+c^2 \Lambda\approx 3 H_0^2$ [@Weinberg]. Given the positivity of $\rho_0$, guaranteed by the dominant energy condition, the maximum Hubble radius is thus close to $\sqrt{3/\Lambda}$. For an almost flat FRW universe, the volume of the corresponding spatial region is nearly $4\pi \sqrt{3/\Lambda^{3}}$. As a consequence, the maximum number of observable degrees of freedom $N$ in this kind of cosmological scenario should roughly be $\sqrt{27/(\Lambda^{3}l_N^{6})}$. Taking into account the holographic $N$ bound (\[Nbound\]), we then conclude $$\label{lrel} l_N\approx (l_P^4 \Lambda^{-1})^{1/6}.$$ Using that $l_N m_N\approx l_Pm_P$, a relation that we have already justified, we immediately obtain $$\label{mrel}m_N^3\approx m_P^3(l_P^2\Lambda)^{1/2}.$$ This approximate identity reproduces Eq. (\[rel\]) provided that the present Hubble radius $cH_0^{-1}$ is close to $\Lambda^{-1/2}$. Therefore, the so-far unexplained Eddington-Weinberg relation can be understood from a holographic perspective, assuming an almost flat FRW cosmology, if and only if the cosmological constant has a nearly dominant contribution to the present energy density. This is ensured, e.g., by cosmic coincidence. 
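Both Eq. (\[lrel\]) and the Eddington-Weinberg relation (\[rel\]) can be checked numerically to order of magnitude. The sketch below again uses rounded SI reference values and $c^2\Lambda\approx 2H_0^2$ as assumptions; "approximate identity" here means agreement to within a few orders of magnitude among quantities spanning roughly 86 orders.

```python
import math

# Order-of-magnitude check of Eq. (lrel), l_N ~ (l_P^4 / Lambda)^(1/6),
# against the proton Compton wavelength, and of the Eddington-Weinberg
# relation (rel), hbar^2 * H_0 ~ G * c * m_N^3.
# SI constants are rounded reference values (assumptions of this sketch);
# Lambda uses the concordance relation c^2 * Lambda ~ 2 * H_0^2.
hbar = 1.055e-34        # J s
G = 6.674e-11           # m^3 kg^-1 s^-2
c = 2.998e8             # m/s
m_N = 1.673e-27         # proton mass, kg
l_P = 1.616e-35         # Planck length, m
H0 = 70e3 / 3.086e22    # Hubble constant, 1/s

Lam = 2 * H0**2 / c**2
l_N_pred = (l_P**4 / Lam) ** (1 / 6)   # ~3e-15 m, from the N bound
l_N_comp = hbar / (m_N * c)            # ~2e-16 m, proton Compton wavelength
print(f"l_N from N bound: {l_N_pred:.1e} m; Compton wavelength: {l_N_comp:.1e} m")

lhs = hbar**2 * H0                     # ~3e-86 (SI units)
rhs = G * c * m_N**3                   # ~9e-83 (SI units)
print(f"hbar^2 H0 = {lhs:.1e};  G c m_N^3 = {rhs:.1e}")
```

Both sides agree to within a few orders of magnitude, which is the level of precision at which the scaling relations are stated.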
Note that the result $c^2\Lambda\approx H_0^2$ can be regarded as a partial solution to the cosmological constant problems (the value of $\Lambda$ and cosmic coincidence) in our (quasi)flat universe if, adopting a different viewpoint, we take for granted Bousso’s proposal and Eq. (\[rel\]). Alternatively, if we use the Eddington-Weinberg relation and $c^2\Lambda\approx H_0^2$, the arguments given above about the relation between $N$ and $l_N$ allow us to reach an approximate version of the $N$ bound for our spacetime. Thus, we see that in a nearly homogeneous, isotropic and flat universe like ours, the cosmological constant problems, the $N$ bound, and the coincidence of large numbers are interrelated. In our application of the $N$ bound, we have argued that the Hubble radius is the largest scale on which microphysics can act. Nonetheless, our conclusions would not have changed if, as proposed in Ref. [@BR] for cosmic holography, we had employed the cosmological apparent horizon instead of the Hubble radius, because they are approximately equal in quasiflat FRW models. We have also made use of the fact that, for this kind of model, the maximum Hubble radius is nearly $\sqrt{3/\Lambda}$ if $\Lambda$ is positive. This is also the size of the cosmological horizon of the de Sitter space with the same value of $\Lambda$. In (almost) flat FRW cosmologies with a dominant $\Lambda$-term at late times, a situation that apparently applies to our universe, any observer has a future event horizon that tends asymptotically to such a de Sitter horizon. Hence, our results would likewise not have been altered had we replaced the maximum Hubble radius with the asymptotic event horizon in all our considerations. The fact that the $N$ bound provides an effective length scale for microphysics, given by Eq. (\[lrel\]), has played a central role in our arguments. This fact has allowed us to understand the origin of the Eddington-Weinberg relation. 
According to the explanation that we have put forward, such a relation does not hold at all times, but only when the cosmological constant dominates the energy density. Although we expect this condition to be satisfied at present and in the future, it excludes the early stages of the evolution of the universe. In our theoretical framework, the constants of nature $G$, $\hbar$, and $c$ do not vary with time, and so we do not recover Dirac’s cosmology [@Dirac]. In obtaining relation (\[lrel\]), we have actually supposed that the total number of degrees of freedom $N$ available in the universe is roughly of the same order as the maximum number of degrees observable in its baryonic content. It should be clear that this assumption does not conflict with the fact that the present energy density is not dominated by baryonic matter. More importantly, since the number of baryonic degrees of freedom cannot exceed $N$, the quantity $(l_P^4\Lambda^{-1})^{1/6}$ provides, in any case, a lower bound to the typical size of nucleons $l_N$. Further discussion of this point will be presented elsewhere. The length scale (\[lrel\]) has also been deduced by Ng, although replacing $\Lambda^{-1}$ with the square of the observable radius of the universe [@Ng]. However, he has proposed to interpret $l_N$ as the minimum resolution length in the presence of quantum gravitational fluctuations, instead of as the typical size of particles in the effective QFT that describes the baryonic content. From our viewpoint, this scale does not provide a fundamental length limiting the resolution of spacetime measurements, but rather restricts the number of degrees of freedom available in the effective QFT. Concerning the value of $l_N$, Ng proposes two ways to deduce it. In one of them, a spatial region is considered as a Salecker-Wigner clock able to discern distances larger than its Schwarzschild radius [@Ng]. 
The question arises whether this interpretation is applicable to the observable universe, because its Schwarzschild and Hubble radii are of the same order of magnitude. The other line of reasoning employs holographic arguments related to those presented here. Nevertheless, since Ng uses the present size of the universe instead of $\Lambda^{-1/2}$, it is not clear whether the resolution scale that he obtains must be viewed as time independent. Let us return to expression (\[lU\]) for the present Hubble radius, which we have interpreted as the definition of $\Omega$. We have argued that the fact that $\Omega\gg 1$ can be regarded as a consequence of the old age of the universe, which is a cosmological problem and not a numerical coincidence between microscopic and macroscopic parameters. Nonetheless, using the $N$ bound and the present dominance of $\Lambda$, it is actually possible to explain the appearance of the large scale $\Omega$ along very similar lines to those proposed by Banks for the resolution of the cosmological constant problem [@Banks]. As we have seen, when the energy density is nearly dominated by $\Lambda$, the Hubble radius is close to $\sqrt{3/\Lambda}$. In addition, the $N$ bound implies that this latter length is equal to $l_P\sqrt{N\ln{2}/\pi}$. Recalling Eq. (\[lU\]), we then obtain $$\label{Om}\Omega\approx N^{1/6}.$$ So, $\Omega$ is a large number because our universe contains a huge number of degrees of freedom. From this perspective, the value of $\Omega$ is fixed by $N$, which can be considered an input of the theory that describes our world. Finally, we want to present some brief comments about the entropy of the universe. If the only entropic contribution were baryonic, we could estimate it as $S_{b}\approx n_N$. Here, we have supposed that each baryon has an associated entropy of order unity, and $n_N$ is the Eddington number, which can be calculated as the ratio of the baryonic mass of the universe to the typical mass of a nucleon. 
In a rough approximation (valid for our estimation of orders of magnitude), we can identify the matter and the baryonic energy densities. Taking into account cosmic coincidence, we can then approximate $n_N$ by $m_Um_N^{-1}$. In this way, we get $S_b\approx n_N\approx \Omega^4$. This is much less than the maximum allowed entropy, which, from relation (\[Om\]) and the definition of $N$, is of the order of $\Omega^6$. An intermediate entropic regime would be reached if the matter of the universe collapsed into stellar-mass black holes. As we have commented, this regime corresponds to the length scale $l_S\approx \Omega ^2 l_P$. One can check that, in this case, the entropy would be $S_S\approx \Omega^5$. It is rather intriguing that $S_S$ matches relatively well what seems to be the actual entropy of the universe, $S_0$. The main contribution to this entropy comes from super-massive black holes in galactic nuclei. Assuming that a typical galaxy contains $10^{11}$–$10^{12}$ stellar masses $m_S$ and that the mass of its central black hole is $10^6$–$10^{7}$ $m_S$, it is straightforward to find that $S_0\approx 1$–$10^{3}$ $S_S$. Summarizing, we have proved that, in the light of the holographic principle, the relations between large numbers constructed from microscopic and cosmological parameters are not independent of other fine-tuning and coincidence problems that have a purely cosmological nature. More explicitly, provided that the universe can be approximately described by a spatially homogeneous, isotropic, and flat cosmological model and that the main contribution to the present energy density comes from the cosmological constant, it is possible to explain all the scaling relations that motivated Dirac’s large number hypothesis by appealing exclusively to basic principles and to the $N$ bound conjecture. G.A.M.M. acknowledges DGESIC for financial support under Research Project No. PB97-1218. S.C. was partially supported by CNPq. The authors want to thank also L.J. 
Garay and P.F. González-Díaz for helpful comments. A.S. Eddington, [*Fundamental Theory*]{} (Cambridge University Press, Cambridge, England, 1946). See, e.g., S. Weinberg, [*Gravitation and Cosmology*]{} (Wiley, New York, 1972). W. L. Freedman [*et al.*]{}, Astrophys. J. [**553**]{}, 47 (2001). P.A.M. Dirac, Nature (London) [**139**]{}, 323 (1937). T. Damour, G.W. Gibbons, and J.H. Taylor, Phys. Rev. Lett. [**61**]{}, 1151 (1988); J.G. Williams, X.X. Newhall, and J.O. Dickey, Phys. Rev. D [**53**]{}, 6730 (1996). A.G. Riess [*et al.*]{}, Astron. J. [**116**]{}, 1009 (1998); S. Perlmutter [*et al.*]{}, Astrophys. J. [**517**]{}, 565 (1999). P. de Bernardis [*et al.*]{}, Nature (London) [**404**]{}, 955 (2000); S. Hanany [*et al.*]{}, Astrophys. J. Lett. [**545**]{}, L5 (2000); S. Padin [*et al.*]{}, [*ibid.*]{} [**549**]{}, L1 (2001). M. Tegmark, M. Zaldarriaga, and A.J.S. Hamilton, Phys. Rev. D [**63**]{}, 043007 (2001); X. Wang, M. Tegmark, and M. Zaldarriaga, astro-ph/0105091. S. Weinberg, Rev. Mod. Phys. [**61**]{}, 1 (1989); V. Sahni and A. Starobinsky, Int. J. Mod. Phys. D [**9**]{}, 373 (2000). P.J. Steinhardt, in [*Critical Problems in Physics*]{}, edited by V.L. Fitch and D.R. Marlow (Princeton University Press, Princeton, 1997). G. ’t Hooft, gr-qc/9310026; L. Susskind, J. Math. Phys. (N.Y.) [**36**]{}, 6377 (1995). In fact, we call $S$ the entropy divided by Boltzmann constant. R. Bousso, JHEP [**9907**]{}, 004 (1999). R. Bousso, JHEP [**9906**]{}, 028 (1999); Class. Quantum Grav. [**17**]{}, 997 (2000). W. Fischler and L. Susskind, hep-th/9806039. D. Bak and S.-J. Rey, Class. Quantum Grav. [**17**]{}, L83 (2000). G. Veneziano, Phys. Lett. B [**454**]{}, 22 (1999); R. Easther and D. Lowe, Phys. Rev. Lett. [**82**]{}, 4967 (1999); N. Kaloper and A. Linde, Phys. Rev. D [**60**]{}, 103509 (1999). R. Bousso, JHEP [**0011**]{}, 038 (2000). T. Banks, hep-th/0007146. A.G. Cohen, D.B. Kaplan, and A.E. Nelson, Phys. Rev. Lett. [**82**]{}, 4971 (1999); S. 
Thomas, hep-th/0010145. P.A. Zizzi, Int. J. Theor. Phys. [**38**]{}, 2333 (1999). S. Carneiro, Found. Phys. Lett. [**11**]{}, 95 (1998). Y.J. Ng, hep-th/0010234; gr-qc/0006105.
--- address: 'Mathematisches Institut, Universität Freiburg, D-79102 Freiburg' author: - Bernd Siebert date: 'February 5, 1997; revision of August 14, 2005' title: 'Virtual fundamental classes, global normal cones and Fulton’s canonical classes' --- [**Introduction**]{} This note, written in January 1997, grew out of an attempt to understand references [@behrend], [@behrendfantechi] and [@litian]. In these papers two related but different methods are presented for the construction of a certain Chow class on moduli spaces of stable (parametrized) curves in a projective manifold $V$, called the virtual fundamental class. This class replaces the usual fundamental class of these spaces in the definition of basic enumerative invariants of $V$ involving curves, called Gromov-Witten (GW-) invariants, which are invariant under smooth deformations of $V$. Both approaches are based on a globalization of the concept of normal cones of germs of the space under study inside some modelling space, that is, $C_{U|M}$ for $U\subset X$ open with $\iota: U \hookrightarrow M$ and $M$ smooth over $k$. The essential idea of using bundles of cones inside a vector bundle for globalizing virtual fundamental classes is due to Li and Tian. The data needed for the gluing, however, differs somewhat in the two constructions. A proper understanding of the relationship between the two approaches seemed necessary for finding the natural framework for a comparison of algebraic virtual fundamental classes with the author’s definition in [@si1] of virtual fundamental classes in the symplectic context [@si3]. In a first step Behrend and Fantechi use a generalization of the concept of scheme, called Artin stacks, to make sense of the quotient $C_{U|M}/T_M|_U$. Since these quotients are unique up to canonical isomorphism, they glue to an Artin (cone) stack ${{{\mathcal C}}}_X$ intrinsically associated to any $X$. 
In a second step they need a morphism $\ph^\bullet: [{{\mathcal F}}^{-1} \rightarrow {{\mathcal F}}^0] \rightarrow {{\mathcal L}}_X^\bullet$ (in the derived category) from a two-term complex of locally free sheaves to the cotangent complex, inducing an isomorphism in $H^0$ and an epimorphism in $H^{-1}$, to cook up an ordinary cone $C(\ph^\bullet)\subset F_1$, $F_1$ the vector bundle associated to ${{\mathcal F}}^{-1}$. Intersection with the zero section finally produces the virtual fundamental class. So a priori the latter depends on the choice of $\ph^\bullet$. This is a very natural and mature approach, which clearly separates the globalization process of the normal cone from the construction of the virtual fundamental class. A possible disadvantage is that in dealing with Artin stacks some of the necessary verifications become rather technical, non-geometric in nature. Li and Tian circumvent the morphism to the cotangent complex by introducing the notion of “perfect tangent obstruction complex”. This is a morphism ${{\mathcal F}}^{-1}\rightarrow{{\mathcal F}}^0$ of locally free sheaves on $X$ whose kernel and cokernel are tangent and obstruction spaces for morphisms to $X$, compatible with base change. Using relative, formal “Kuranishi families” as an intermediate object they construct a well-defined cone $C\subset F_1$. Another, less important difference to [@behrendfantechi] is the use of an absolute obstruction theory instead of one relative to the space of pre-stable curves. In a previous version of [@litian] the slightly stronger claim was made that already a presentation ${{\mathcal F}}^{-1}\rightarrow{{\mathcal F}}^0$ of $\Omega_X$ should suffice to construct the cone. In trying to understand this statement I was led to the problem of reformulating [@behrendfantechi] from the point of view of gluing local cones. 
Since in the latter reference Artin stacks are used only as a book-keeping device rather than as actual spaces, it should not come as a surprise that one can get along without them (this has already been indicated in op.cit.). Contrary to what I expected, things can be formulated in a rather elegant but direct way via some *yoga of cone bundles*. This part of the paper (Sections 2 and 3) is just a down-to-earth reformulation of (parts of) Sections 2–4 of [@behrendfantechi]. Section 1 presents the necessary notations concerning cones and linear spaces, the latter being a convenient way of looking at coherent sheaves for our purposes. In Section 4 we establish a closed formula for virtual fundamental classes involving only the scheme-theoretic structure of $X$ via Fulton’s canonical class (Definition \[fultons\_class\]) and the Chern class of the virtual bundle $F_0-F_1$, $F_i$ being the vector bundle associated to ${{\mathcal F}}^{-i}$ (Theorem \[closed\_formula\]). This formula was actually found by the author in summer 1995 while searching for a purely algebraic definition of GW-invariants. It should be useful for computations; see the author’s recent little survey [@si2]. A few remarks on GW-theory are in order. First, today I consider the yoga of cone bundles in Sections 2 and 3 as one ingredient of the most economic path to algebraic Gromov-Witten invariants. The other ingredients are going over to Deligne-Mumford stacks, and replacing the morphism to the cotangent complex by an obstruction theory. If the latter is defined similarly to [@artin], 2.6, rather than as in [@litian], one can show [@si4] that it is locally nothing but a morphism to the cotangent complex as in [@behrendfantechi]. Hence the yoga of cone bundles applies to produce the virtual fundamental class. Second, I would like to illustrate the perspective of the content of Section 4 by the following formula for virtual fundamental classes in Gromov-Witten theory. 
Let $V$ be a projective variety, smooth over a field $K$ of characteristic $0$, and $R\in A_1(V)$, the first Chow group. If $g=0$, or if ${{\mathcal C}}:= {{\mathcal C}}_{R,g,k}(V)$, the moduli space of stable curves $(C,{\bf x},\ph:C\rightarrow V)$ in $V$ of genus $g$ with $k$ marked points ${\bf x}= (x_1,\ldots,x_k)$ and $\ph_*[C]=R$, is embeddable into a space smooth over ${\mathfrak{M}}_{g,k}$, then the virtual fundamental class relevant for GW-invariants is $$[\![{{\mathcal C}}]\!]\ =\ \Big\{c(\ind_{R,g,k}^V)^{-1}\cap c_F({{\mathcal C}}/{\mathfrak{M}}_{g,k})\Big\}_{d(V,R,g,k)}\, .$$ Here $\{\,.\,\}_d$ denotes the $d$-dimensional part of a cycle, ${\mathfrak{M}}_{g,k}$ is the Artin stack of $k$-pointed pre-stable curves of genus $g$, $d(V,R,g,k)= c_1(V)\cdot R +(1-g) \dim V+ 3g-3+k$ is the expected dimension, and $c_F({{\mathcal C}}/ {\mathfrak{M}}_{g,k})$ is Fulton’s canonical class for ${{\mathcal C}}$ relative ${\mathfrak{M}}_{g,k}$. Here $\ind_{R,g,k}^V=F_0-F_1$ is the virtual vector bundle associated to the partial resolution $\varphi^\bullet: [{{\mathcal F}}^{-1}\to {{\mathcal F}}^0]\to {{\mathcal L}}^\bullet_{{{\mathcal C}}/ {\mathfrak{M}}_{g,k}}$ mentioned in the introduction. It represents the (domain of the) perfect relative obstruction theory $(R\pi_*(f^*T_V))^\vee$ of Behrend [@behrend] in $K^0({{\mathcal C}})$. A note on categories: To keep things simple we work here in the category of schemes of finite type over a field $k$, not necessarily algebraically closed or of characteristic $0$. The extension to other base schemes is straightforward. For the purpose of GW-theory one also has to replace schemes by (generalizations of) orbifolds, that is Deligne-Mumford stacks in the algebraic category or analytic orbispaces in an analytic context. Again, our results can be easily adapted to these categories. For GW-theory this is still not sufficient, because ${\mathfrak{M}}_{g,k}$ is an Artin stack rather than Deligne-Mumford.
One can nevertheless give a construction of the relevant cone without ever really using Artin stacks. For instance, Fulton’s canonical class relative ${\mathfrak{M}}_{g,k}$ has the following simple definition: Embed ${{\mathcal C}}$ into a smooth Deligne-Mumford $k$-stack $N$. In the important case $g=0$ one could take ${{\mathcal C}}_{\iota_*R,g,k} (\pr^N)$, if $\iota: V\hookrightarrow \pr^N$ is a closed embedding. Let $q:{{\mathcal U}}\rightarrow {{\mathcal C}}$ be the universal curve. Then the pull-back of the (virtual) tangent bundle of ${\mathfrak{M}}_{g,k}$ is $$T_{\mathfrak{M}} := {\mathcal{E} \!\textrm{\textit{xt}}}^1_q (\omega_{{{\mathcal U}}/{{\mathcal C}}},{{\mathcal O}}_{{\mathcal U}})- {\mathcal{E} \!\textrm{\textit{xt}}}^0_q (\omega_{{{\mathcal U}}/{{\mathcal C}}},{{\mathcal O}}_{{\mathcal U}})\,,$$ as an element of $K^0({{\mathcal C}})$ (the ${\mathcal{E} \!\textrm{\textit{xt}}}^i_q$ are the derived functors of $q_*\circ{\mathcal{H} \!\textrm{\textit{om}}}$) and $$\begin{aligned} c_F({{\mathcal C}}/ {\mathfrak{M}}_{g,k})&=& \Big(c(T_{\mathfrak{M}})^{-1} \cup c(T_N)\Big) \cap s(C_{{{\mathcal C}}/N})\\ &=& c(T_{\mathfrak{M}})^{-1}\cap c_F({{\mathcal C}})\,.\end{aligned}$$ Here all sheaves and classes have to be understood in the sense of Deligne-Mumford stacks. With these remarks understood, the theorem is a special case of Theorem \[closed\_formula\]. In preparing this paper discussions with H. Flenner and S. Schröer have been helpful. I am grateful to the referee for a very attentive reading of the manuscript and several competent suggestions.
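Before turning to cones, a minimal sanity check of the dimension appearing in the formula above (my computation, not part of the original text): for $V=\pr^2$, $R$ the class of a line, $g=0$ and $k=0$ we get $$d(\pr^2,R,0,0)\ =\ c_1(\pr^2)\cdot R+\dim\pr^2-3\ =\ 3+2-3\ =\ 2\, ,$$ in accordance with the fact that ${{\mathcal C}}_{R,0,0}(\pr^2)$ is the dual projective plane of lines in $\pr^2$, which is smooth of dimension $2$; in this unobstructed case the virtual fundamental class is the ordinary fundamental class.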
Cones ===== Linear spaces ------------- For any algebraic $k$-scheme $X$, $\Aff^1_X=X\times\Aff^1_k$ has the structure of a [*ring over*]{} $X$: There are morphisms $$\alpha:\Aff^1_X\times_X\Aff^1_X\longrightarrow\Aff^1_X\, ,\quad \iota:\Aff^1_X\longrightarrow\Aff^1_X\, ,\quad \mu:\Aff^1_X\times_X\Aff^1_X\longrightarrow\Aff^1_X\, ,$$ and sections $n,e:X\rightarrow\Aff^1_X$ fulfilling the usual commutative ring axioms with $\alpha$ as addition, $\iota$ as additive inverse, $\mu$ as multiplication and $n$, $e$ as neutral elements for $\alpha$, $\mu$. A [*linear space over $X$*]{} is an $\Aff^1_X$-module of finite type over $X$, that is, an affine morphism $\pi:L\rightarrow X$ of finite type together with morphisms $$a:L\times_X L\longrightarrow L,\quad m:\Aff^1_X\times_X L \longrightarrow L$$ and a zero section $z:X\rightarrow L$, fulfilling the usual module axioms relative $X$, that is $m\circ(\mu\times\Id_L) = m\circ(\Id_{\Aff^1_X}\times m)$ as maps from $\Aff^1_X \times_X \Aff^1_X\times_X L$ to $L$ etc.[^1] By abuse of notation we just write $L$ for the tuple $(\pi,a,m)$. In the sequel we will restrict to linear spaces that are [*representable*]{}, that is, which locally are closed subspaces of vector bundles with induced linear structure. With the obvious notion of homomorphism of linear spaces over $X$ we get the category ${{\mathop{\rm Lin}}}(X)$ of representable linear spaces over $X$. There is an anti-equivalence of categories $${{\mathop{\rm Lin}}}(X)\longrightarrow{{\mathop{\rm Coh}}}(X)$$ to the category of coherent ${{\mathcal O}}_X$-modules [@EGA-II],§1.7: On objects this associates to $L\in {{\mathop{\rm Lin}}}(X)$ the sheaf $\Hom_{{{\mathop{\rm Lin}}}(X)}(L,\Aff^1_X)$; in the other direction, ${{\mathcal F}}\in{{\mathop{\rm Coh}}}(X)$ corresponds to $$L({{\mathcal F}})\ :=\ \Spec_{{{\mathcal O}}_X}S^\bullet{{\mathcal F}}\, ,$$ where $S^\bullet{{\mathcal F}}$ is the symmetric algebra over the ${{\mathcal O}}_X$-module ${{\mathcal F}}$. 
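A small example with a non-locally-free sheaf may help fix ideas (a standard computation, not from the text): for a closed subscheme $Z\subset X$ with ideal sheaf ${{\mathcal I}}$ one has $S^d({{\mathcal O}}_X/{{\mathcal I}})={{\mathcal O}}_X/{{\mathcal I}}$ for $d\ge1$, hence $$L({{\mathcal O}}_X/{{\mathcal I}})\ =\ \Spec_{{{\mathcal O}}_X}\big({{\mathcal O}}_X[t]/({{\mathcal I}}\cdot t)\big)\ \subset\ \Aff^1_X\, ,$$ the union of the zero section over $X$ and of $\Aff^1_Z$. The fibre dimension jumps along $Z$, so $L({{\mathcal O}}_X/{{\mathcal I}})$ is a representable linear space that is not a vector bundle.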
For example, the addition operation $a: L({{\mathcal F}})\times_X L({{\mathcal F}})\to L({{\mathcal F}})$ comes from the diagonal morphism ${{\mathcal F}}\to {{\mathcal F}}\oplus {{\mathcal F}}$, $f\mapsto (f,f)$ by application of the functor $L=\Spec_{{{\mathcal O}}_X}\circ S^\bullet$. Note that a vector bundle $E$ corresponds to the locally free sheaf ${{\mathcal O}}(E^\vee)$, $E^\vee$ the [*dual*]{} bundle. Representable linear spaces are thus just another way to look at coherent sheaves. We will jump freely between both descriptions and use whichever seems more appropriate in a particular context. Note also that ${{\mathop{\rm Lin}}}(X)$ is an abelian category, so it makes sense to talk about monomorphisms, epimorphisms and exact sequences. A monomorphism $\Phi:E\rightarrow F$ of linear spaces corresponds to an epimorphism $\ph:{{\mathcal F}}\rightarrow{{\mathcal E}}$ of sheaves and is thus a closed embedding of schemes. An epimorphism $\psi:F\rightarrow G$ of linear spaces, however, need not be a surjection of schemes (consider the epimorphism $\Aff^1_X=L({{\mathcal O}}_X)\rightarrow L({{\mathcal I}})$ corresponding to the inclusion ${{\mathcal I}}\hookrightarrow{{\mathcal O}}_X$ of any nontrivial ideal sheaf ${{\mathcal I}}$). Cones ----- A [*cone*]{} $C$ over $X$ is a scheme of the form $\Spec_{{{\mathcal O}}_X} S^\bullet$ where $S^\bullet= \oplus_{d\ge0}S^d$ is a graded ${{\mathcal O}}_X$-algebra with $S^0={{\mathcal O}}_X$ and $S^\bullet$ generated by $S^1\in{{\mathop{\rm Coh}}}(X)$. $S^\bullet$ as a graded algebra is not in general determined up to isomorphism by the scheme $C$ over $X$. For the grading one needs to distinguish the generating submodule $S^1={{\mathcal F}}$, or, equivalently, a closed embedding $C\hookrightarrow L({{\mathcal F}})$ into a linear space. Such a datum could be called a [*polarization*]{} of $C$. We will only deal with polarized cones in the sequel.
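As a toy illustration of the notion of polarization (my example, not from the text): over $X=\Spec k$ a polarized cone is an affine cone in the classical sense, $$C\ =\ \Spec\big(k[x_1,\dots,x_n]/J\big)\ \subset\ \Aff^n_k\ =\ L(k^n)\, ,\qquad J\ \textrm{homogeneous}\, ,$$ with $S^1$ the image of $\langle x_1,\dots,x_n\rangle$; for instance the union of the two coordinate axes $\Spec k[x,y]/(xy)\subset\Aff^2_k$. The grading of $S^\bullet$ is determined only after the generating module $S^1$, that is the embedding into $\Aff^n_k$, has been chosen.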
\[normal\_cone\]*If $X$ is a closed subscheme of an algebraic $k$-scheme $M$ with ideal sheaf ${{\mathcal I}}$ then the cone $$C_{X|M}=\Spec_{{{\mathcal O}}_X}\big(\oplus_{d\ge0}{{\mathcal I}}^d/{{\mathcal I}}^{d+1}\big)$$ over $X$ is called [*normal cone*]{} to $X$ in $M$. $C_{X|M}$ is naturally embedded into the [*normal space*]{} $N_{X|M}=L({{\mathcal I}}/{{\mathcal I}}^2)$ of $X$ in $M$ (to avoid confusion with $({{\mathcal I}}/{{\mathcal I}}^2)^\vee$, I would rather not call $N_{X|M}$ the normal sheaf as in [@behrendfantechi]).* If $C$, $C'$ are cones over $X$, then so is $C\oplus C':=C\times_X C'$. To a polarized cone $C=\Spec S^\bullet$ is associated a Chow class on $X$, its Segre class $$s(C)\ :=\ \sum_{r\ge0}p_*\big(\xi^r\cap[\pr(C)]\big),$$ where $p:\pr(C) :=\Proj S^\bullet\rightarrow X$ is the projection and $\xi = c_1({{\mathcal O}}_{\pr(C)}(1))$. We propose the following formulation of the concept of an exact sequence of cones [@fulton Expl.4.1.6]. Let $$\mbox{\hspace*{4cm}} 0\longrightarrow E\stackrel{\Phi}{\longrightarrow}F \stackrel{\Psi}{\longrightarrow}Q\longrightarrow 0 \mbox{\hspace*{4cm}}(*)$$ be an exact sequence of linear spaces. Let $C\subset Q$ be a cone and set $\tilde C:=\Psi^{-1}(C)$. Then $(*)$ restricts to $$0\longrightarrow E\longrightarrow\tilde C\longrightarrow C \longrightarrow 0 \quad.$$ Sequences of cones of this form will be called [*exact*]{}. *Exact sequences of cones might not be very useful unless $(*)$ splits locally. In this case $\tilde C$ is locally of the form $C\oplus E$, and as in [@fulton Expl.4.1.6] one can show $s(\tilde C)=s(C\oplus E)$. In the non-split case a convenient way to relate the Segre classes of $\tilde C$ and $C$ seems to be unknown.* But note that if $E$ is a vector bundle $(*)$ always splits locally, and so we retrieve the definition of exact sequences of cones as in [@fulton]. For an exact sequence of cones as in the definition, $\tilde C$ is preserved by the additive action of $E$ on $F$.
In other words, $\tilde C$ carries the structure of an $E$-module. More generally, if $\Phi:E\rightarrow F$ is a homomorphism of linear spaces and $C\subset F$ is a cone then $C$ is called [*$E$-cone*]{} if $C$ is an $E$-module via $\Phi$, that is, if $C$ is preserved by the additive action of $E$ on $F$ induced by $\Phi$. \[C\_Z|X\] In the situation of Example \[normal\_cone\] $C_{X|M}$ is a $T_M|_X$-cone via the natural homomorphism $\Phi:T_M|_X \rightarrow N_{X|M}$. On the sheaf level this action of $T_M|_X$ is $$\begin{aligned} \bigoplus_d {{\mathcal I}}^d/{{\mathcal I}}^{d+1}&\longrightarrow& S^\bullet \Omega_M|_X \otimes \bigoplus_d {{\mathcal I}}^d/{{\mathcal I}}^{d+1},\end{aligned}$$ where for $f_i\in {{\mathcal I}}/{{\mathcal I}}^2$ the image of $f_1\cdot\ldots\cdot f_d$ in the direct summand $S^e \Omega_M|_X \otimes {{\mathcal I}}^{d-e}/{{\mathcal I}}^{d-e+1}$ of the target is the sum over all partitions $\{i_1,\dots,i_e\}$, $\{j_1,\dots,j_{d-e}\}$ of $\{1,\dots,d\}$ of terms $$\begin{aligned} \textrm{d}f_{i_1}\cdot \ldots\cdot \textrm{d}f_{i_e}\otimes f_{j_1}\cdot\ldots\cdot f_{j_{d-e}}.\end{aligned}$$ There are many examples of morphisms of linear spaces $E\rightarrow F$ and $E$-cones $C\subset F$ that do not descend to the quotient $Q=F/E$, for instance the examples in Remark \[coh\_cones\],3 and in Remark \[C\_X|M\_descend?\]. However, there is one important class of morphisms for which descent is always possible, namely locally split monomorphisms. We first treat the split case: \[C\_in\_EplusF\] Let ${{\mathcal E}}$, ${{\mathcal F}}\in{{\mathop{\rm Coh}}}(X)$ and $E=L({{\mathcal E}})$, $F=L({{\mathcal F}})$ the corresponding linear spaces over $X$ and $C\subset E\oplus F$ an $E$-invariant closed subscheme with respect to the action of $E$ on the first summand. Then $C$ is of the form $E\oplus\bar C$ for some uniquely determined closed subscheme $\bar C\subset F$.
[[*Proof.* ]{}]{}The statement is local in $X$, so we may assume $X=\Spec A$, $E=\Spec A[{\bf X}]/\langle{\bf e}\rangle$, $F=\Spec A[{\bf Y}]/ \langle{\bf f}\rangle$ with ${\bf X}=(X_1,\ldots,X_r)$, ${\bf Y}= (Y_1,\ldots,Y_s)$ and ${\bf e}=(e_1,\ldots,e_k)$, ${\bf f}=(f_1,\ldots,f_l)$ tuples of linear forms with coefficients in $A$, $\langle{\bf e}\rangle$, $\langle{\bf f}\rangle$ the ideals generated by their entries. Then $C=\Spec A[{\bf X},{\bf Y}]/I$ with $I$ an ideal containing $\langle{\bf e}\rangle+\langle{\bf f}\rangle$. The only possible candidate for $\bar C$ is the intersection of $C$ with $0\oplus F$, that is $\bar C=\Spec A[{\bf X},{\bf Y}]/(I+\langle{\bf X}\rangle)=\Spec A[{\bf Y}]/\bar I$, with $\bar I=\{f(0,{\bf Y})\mid f({\bf X},{\bf Y})\in I\}$. We have to show that $I=\langle\,\bar I\,\rangle+\langle{\bf e}\rangle$. That $C$ is $E$-invariant means that for any $\displaystyle f({\bf X}, {\bf Y})=\sum_{M,N}a_{MN}{\bf X}^M{\bf Y}^N\in I$ $$\mbox{\hspace*{2cm}} f({\bf X}+{\bf X'},{\bf Y})\ =\ \sum_{M,N}a_{MN}({\bf X}+{\bf X'})^M {\bf Y}^N \ \in\ \langle\,I\,\rangle+\langle\ph({\bf e}) \rangle\mbox{\hspace*{2cm}}(*)$$ holds in $A[{\bf X},{\bf X'},{\bf Y}]$, where $\ph:A[{\bf X}] \rightarrow A[{\bf X'}]$, $X_\mu\mapsto X_\mu'$. Modulo $\langle{\bf X}\rangle$ this says $$f({\bf X'},{\bf Y})\in\langle\,\bar I\,\rangle+\langle\ph({\bf e})\rangle$$ in $A[{\bf X'},{\bf Y}]$. Replacing $\bf X'$ by $\bf X$ we thus get $I\subset\langle\,\bar I\,\rangle+\langle{\bf e}\rangle$. For the other direction we look at $(*)$ modulo ${\bf X}+{\bf X'}$ to conclude $$f(0,{\bf Y})\ =\ \sum_N a_{0N}{\bf Y}^N \ \in\ I+\langle{\bf e}\rangle\ =\ I$$ for any $f\in I$, that is $\bar I\subset I$. \[quot\_cone\] Let $$\mbox{\hspace*{5cm}}0\longrightarrow F\longrightarrow E \stackrel{q}{\longrightarrow}Q \mbox{\hspace*{5cm}}(*)$$ be an exact sequence of linear spaces with $F$ a vector bundle, and let $C\subset E$ be an $F$-cone.
Then there exists a unique cone $\bar C\subset Q$ such that $(*)$ induces an exact sequence of cones $$0\longrightarrow F\longrightarrow C\longrightarrow\bar C \longrightarrow0\, .$$ In particular, $C$ descends to $Q$: $C=q^{-1}(\bar C)$. [[*Proof.* ]{}]{}By replacing $Q$ by the closed subspace $E/F\subset Q$ we may assume $q$ to be an epimorphism. Then, since $F$ is a vector bundle, locally $(*)$ splits and we may apply the previous lemma to construct $\bar C\subset Q$. In other words, the proposition says that $\bar C$ is the scheme-theoretic quotient of $C$ by the free action of $F$. This is a convenient way to think about $\bar C$. Going up and down for $E$-cones =============================== In this section we investigate the behavior of $E$-cones under morphisms of two-term complexes, that is commutative squares, in ${{\mathop{\rm Lin}}}(X)$. If $\Phi_\bullet=(\Phi_0,\Phi_1): F_\bullet=(F_0\rightarrow F_1)\rightarrow (E_0\rightarrow E_1)$ is such a morphism the corresponding morphism of coherent sheaves will be written $\ph^\bullet=(\ph^{-1},\ph^0):{{\mathcal E}}^\bullet =({{\mathcal E}}^{-1}\rightarrow{{\mathcal E}}^0)\rightarrow{{\mathcal F}}^\bullet= ({{\mathcal F}}^{-1}\rightarrow{{\mathcal F}}^0)$. Then $\Phi_i=L(\ph^{-i})$, $E_i=L({{\mathcal E}}^{-i})$, $F_i=L({{\mathcal F}}^{-i})$ for $i=0,1$. Going up -------- Let $\Phi_\bullet:F_\bullet\rightarrow E_\bullet$ be a commutative square in ${{\mathop{\rm Lin}}}(X)$, and $C\hookrightarrow E_1$ an $E_0$-cone. Then $\Phi_1^{-1}(C)\hookrightarrow F_1$ is an $F_0$-cone. 
[[*Proof.* ]{}]{}Consider the diagram $$\begin{array}{ccc} F_0\oplus F_1&\stackrel{\alpha}{\longrightarrow}&F_1\\[4pt] {\scriptstyle\Phi_0\oplus\Phi_1}\Big\downarrow\ \ &&\ \ \Big\downarrow{\scriptstyle\Phi_1}\\[4pt] E_0\oplus E_1&\stackrel{\alpha'}{\longrightarrow}&E_1 \end{array}$$ with horizontal arrows the morphisms defining the $F_0$- and $E_0$-module structures on $F_1$ and $E_1$ respectively. By hypothesis $E_0\oplus C$ is a closed subscheme of $(\alpha')^{-1}(C)$. Thus $F_0\oplus\Phi_1^{-1}(C) =(\Phi_0\oplus\Phi_1)^{-1} (E_0\oplus C)$ is a closed subscheme of $\alpha^{-1} (\Phi_1^{-1}(C))$. By this lemma we are able to make the following definition. (going up) Let $\Phi_\bullet:F_\bullet\rightarrow E_\bullet$ be a commutative square in ${{\mathop{\rm Lin}}}(X)$ and $C\subset E_1$ an $E_0$-cone. Then the $F_0$-cone $$\Phi_\bullet^!(C):=\Phi_1^{-1}(C)$$ in $F_1$ is called [*pull-back*]{} of $C$ under $\Phi_\bullet$. The pull-back depends only on the homotopy class of $\ph^\bullet$ (or $\Phi_\bullet$). Let $\ph^\bullet, \psi^\bullet:{{\mathcal E}}^\bullet=[{{\mathcal E}}^{-1} \stackrel{d}{\rightarrow} {{\mathcal E}}^0] \rightarrow {{\mathcal F}}^\bullet$ be homotopic commutative squares in ${{\mathop{\rm Coh}}}(X)$. Then for any $E_0$-cone $C\subset E_1$ $$\Phi_\bullet^!(C)\ =\ \Psi_\bullet^!(C)\, .$$ [[*Proof.* ]{}]{}Let $k:{{\mathcal E}}^0\to {{\mathcal F}}^{-1}$ be a homotopy: $\psi^{-1} = \ph^{-1}+k\circ d$, $\psi^0=\ph^0+ d\circ k$.
Writing $K=L(k)$ and $\alpha:E_0\oplus E_1\rightarrow E_1$ for the structure map, $\Psi_1$ may be decomposed into $$F_1\stackrel{(K,\Phi_1)}{\longrightarrow}E_0\oplus E_1 \stackrel{\alpha}{\longrightarrow}E_1\, .$$ Since $E_0\oplus C\subset\alpha^{-1}(C)$, $(K,\Phi_1)^{-1} (E_0\oplus C) =\Phi_1^{-1}(C)$ is a closed subscheme of $\Psi_1^{-1}(C)$. But the claim is symmetric in $\Phi_\bullet$, $\Psi_\bullet$, hence $\Phi_1^{-1}(C)=\Psi_1^{-1}(C)$. The next result about functoriality of going up follows directly from the definition. \[gu\_functorial\] Let $\Phi_\bullet:E_\bullet\rightarrow F_\bullet$, $\Psi_\bullet: F_\bullet\rightarrow G_\bullet$ be commutative squares of linear spaces and $C\subset G_1$ a $G_0$-cone. Then $$(\Psi_\bullet\circ\Phi_\bullet)^!(C)\ =\ \Phi_\bullet^!\circ \Psi_\bullet^!(C)\, .$$ Going down ---------- Going down, or push-forward, of $F_0$-cones in $F_1$ to $E_1$ is a little more subtle. The central tool will be Proposition \[quot\_cone\]. To make this proposition applicable we need a little lemma. \[ass\_exact\_sequence\] Let $\ph^\bullet:({{\mathcal E}}^{-1}\stackrel{d}{\rightarrow}{{\mathcal E}}^0) \rightarrow ({{\mathcal F}}^{-1}\stackrel{d'}{\rightarrow}{{\mathcal F}}^0)$ be a commutative square in ${{\mathop{\rm Coh}}}(X)$. Then the complex $$0\longrightarrow{{\mathcal E}}^{-1}\stackrel{(d,\ph^{-1})}{\longrightarrow} {{\mathcal E}}^0\oplus{{\mathcal F}}^{-1}\stackrel{\ph^0\circ\prj_1 -d'\circ\prj_2}{\longrightarrow}{{\mathcal F}}^0\longrightarrow0$$ is exact at

i) ${{\mathcal F}}^0$ iff $H^0(\ph^\bullet)$ is surjective;

ii) ${{\mathcal E}}^0\oplus{{\mathcal F}}^{-1}$ iff $H^0(\ph^\bullet)$ is injective and $H^{-1}(\ph^\bullet)$ is surjective;

iii) ${{\mathcal E}}^{-1}$ iff $H^{-1}(\ph^\bullet)$ is injective.
[[*Proof.* ]{}]{}Chase the diagram $$\begin{array}{ccccccccccc} 0&\longrightarrow&{{\mathcal K}}&\longrightarrow&{{\mathcal E}}^{-1} &\stackrel{d}{\longrightarrow}&{{\mathcal E}}^0 &\longrightarrow&{{\mathcal Q}}&\longrightarrow&0\\[4pt] &&{\scriptstyle H^{-1}(\ph^\bullet)}\Big\downarrow\ \ &&{\scriptstyle\ph^{-1}}\Big\downarrow\ \ &&\ \ \Big\downarrow{\scriptstyle\ph^0}&&\ \ \Big\downarrow{\scriptstyle H^0(\ph^\bullet)}&&\\[4pt] 0&\longrightarrow&{{\mathcal K}}'&\longrightarrow&{{\mathcal F}}^{-1} &\stackrel{d'}{\longrightarrow}&{{\mathcal F}}^0 &\longrightarrow&{{\mathcal Q}}'&\longrightarrow&0\,, \end{array}$$ with exact rows, ${{\mathcal K}}$, ${{\mathcal K}}'$ the kernels and ${{\mathcal Q}}$, ${{\mathcal Q}}'$ the cokernels of $d$, $d'$. If $\ph^\bullet$ is a quasi-isomorphism we thus get exactness of the stated complex. And $\ph^\bullet$, viewed as a commutative square, is cartesian (${{\mathcal E}}^{-1}={{\mathcal E}}^0\oplus_{{{\mathcal F}}^0}{{\mathcal F}}^{-1}$) iff $H^0(\ph^\bullet)$ is injective and $H^{-1}(\ph^\bullet)$ is an isomorphism, and it is cocartesian (${{\mathcal F}}^0=({{\mathcal E}}^0\oplus {{\mathcal F}}^{-1})/ {{\mathcal E}}^{-1}$) iff $H^0(\ph^\bullet)$ is an isomorphism and $H^{-1}(\ph^\bullet)$ is surjective. Assume now that $F_0$ is a vector bundle and that $\Phi_\bullet:[F_0 \stackrel{D'}{\rightarrow}F_1]\rightarrow[E_0 \stackrel{D}{\rightarrow}E_1]$ induces an isomorphism on $H^0$ and a closed embedding of linear spaces on $H^1$. If these conditions are satisfied we say that [*going down is applicable to $\Phi_\bullet$*]{}. Then $$0\longrightarrow F_0\stackrel{(\Phi_0,-D')}{\longrightarrow} E_0\oplus F_1 \stackrel{q}{\longrightarrow}E_1$$ is exact (Lemma \[ass\_exact\_sequence\], $q=D\circ\prj_1 +\Phi_1\circ\prj_2$) and we may apply Proposition \[quot\_cone\]. \[going\_down\] (going down) Let $\Phi_\bullet:F_\bullet\rightarrow E_\bullet$ be a commutative square in ${{\mathop{\rm Lin}}}(X)$, to which going down is applicable (see above), and let $C\subset F_1$ be an $F_0$-cone. The unique cone $\bar C\subset\img q\subset E_1$ with $q^{-1}(\bar C)=E_0\oplus C$, which exists by Proposition \[quot\_cone\], is called [*push-forward*]{} of $C$ by $\Phi_\bullet$, denoted $(\Phi_\bullet)_!(C)$. Note that $(\Phi_\bullet)_!(C)$ is actually an $E_0$-cone because $E_0\oplus C$ is one.
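As a consistency check (my observation, not from the text), going down contains Proposition \[quot\_cone\] as a special case: given an exact sequence $0\rightarrow F_0\stackrel{D'}{\rightarrow}F_1\stackrel{q}{\rightarrow}Q$ of linear spaces with $F_0$ a vector bundle, take $E_\bullet=[0\rightarrow Q]$ and $\Phi_\bullet=(0,q):[F_0\stackrel{D'}{\rightarrow}F_1]\rightarrow[0\rightarrow Q]$. Then $H^0$ vanishes on both sides, and $H^1(\Phi_\bullet)$ is the closed embedding $F_1/F_0\hookrightarrow Q$, so going down is applicable; for an $F_0$-cone $C\subset F_1$ $$(\Phi_\bullet)_!(C)\ =\ \bar C\, ,$$ the cone of Proposition \[quot\_cone\], since both are characterized by $q^{-1}(\bar C)=C$.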
And by Proposition \[quot\_cone\]: \[ex\_seq\_cones\] If going down is applicable to $\Phi_\bullet:F_\bullet\rightarrow E_\bullet$, and $C\subset F_1$ is an $F_0$-cone, there is an exact sequence of cones $$0\longrightarrow F_0\longrightarrow E_0\oplus C\longrightarrow (\Phi_\bullet)_!(C)\longrightarrow0\, .$$ Local freeness of $F_0$ (or local splittability of the relevant exact sequence of linear spaces) seems to be indispensable, since otherwise $E_0\oplus C$ need not descend to $E_1$. See Remark \[coh\_cones\],3 for a related example. As with going up, going down depends only on the homotopy class of $\Phi_\bullet$. Let $\Phi_\bullet$, $\Psi_\bullet:F_\bullet\rightarrow E_\bullet$ be homotopic morphisms of commutative squares in ${{\mathop{\rm Lin}}}(X)$ and $C\subset F_1$ an $F_0$-cone. If going down is applicable to $\Phi_\bullet$ (or, equivalently, to $\Psi_\bullet$) then $$(\Phi_\bullet)_!(C)\ =\ (\Psi_\bullet)_!(C)\, .$$ [[*Proof.* ]{}]{}Let $K:F_1\rightarrow E_0$ be a homotopy between $\Phi_\bullet$ and $\Psi_\bullet$, that is $\Psi_0=\Phi_0+K\circ D'$, $\Psi_1= \Phi_1+D\circ K$ ($D:E_0\rightarrow E_1$, $D':F_0 \rightarrow F_1$ the differentials). 
Then the following diagram $$\begin{array}{ccccccc} 0&\longrightarrow&F_0&\stackrel{(\Phi_0,-D')}{\longrightarrow}&E_0\oplus F_1 &\stackrel{q_\Phi}{\longrightarrow}&E_1\\[4pt] &&{\scriptstyle\Id}\Big\downarrow\ \ &&{\scriptstyle\chi}\Big\downarrow\ \ &&\ \ \Big\downarrow{\scriptstyle\Id}\\[4pt] 0&\longrightarrow&F_0&\stackrel{(\Psi_0,-D')}{\longrightarrow}&E_0\oplus F_1 &\stackrel{q_\Psi}{\longrightarrow}&E_1 \end{array}$$ with $\chi=(\prj_1-K\circ\prj_2,\prj_2)$, $q_\Phi=D\circ\prj_1+ \Phi_1\circ\prj_2$ and $q_\Psi=D\circ\prj_1+\Psi_1\circ\prj_2$, is commutative. Now $\chi^{-1}(E_0\oplus C) =E_0\oplus C$ and the conclusion follows from the definition of going down. We observe also that since $q^{-1}(\bar C)=E_0\oplus C\subset E_0\oplus F_1$ and $q|_{0\oplus F_1}=\Phi_1$, $\Phi_1^{-1}(\bar C)=C$. In other words: \[left\_inverse\] Whenever going down is applicable to $\Phi_\bullet:F_\bullet \rightarrow E_\bullet$ then $\Phi_\bullet^!$ is a left inverse to $(\Phi_\bullet)_!$, that is $$\Phi_\bullet^!(\Phi_\bullet)_!(C)\ =\ C$$ for any $F_0$-cone $C\subset F_1$. Note that $\Phi_\bullet^!$ is generally not right-inverse to $(\Phi_\bullet)_!$.
For example consider $\Phi_\bullet=(\Id,\iota): (F_0\to F_1)\to (F_0\to F_1\oplus N)$ for any linear space $N$ over $X$, $\iota: F_1\to F_1\oplus N$ the inclusion of the first factor and $F_0$ acting trivially on $N$. Then for an $F_0$-cone of the form $C\oplus N$ we have $(\Phi_\bullet)_!\Phi_\bullet^!(C\oplus N) = C\oplus 0$. Compare however Proposition \[1-1-corr\]. Going down is functorial: Let $\Psi_\bullet:G_\bullet\rightarrow F_\bullet$, $\Phi_\bullet: F_\bullet\rightarrow E_\bullet$ be commutative squares of linear spaces to which going down is applicable, and let $C\subset G_1$ be a $G_0$-cone. Then $$(\Phi_\bullet\circ\Psi_\bullet)_!(C) \ =\ (\Phi_\bullet)_! (\Psi_\bullet)_!(C)\, .$$ [[*Proof.* ]{}]{}Consider the following diagram of linear spaces and cones, with $D_E$, $D_F$, $D_G$ the differentials of $E_\bullet$, $F_\bullet$, $G_\bullet$: $$\begin{array}{ccccccccc} &&0&&0&&&&\\[2pt] &&\Big\downarrow&&\Big\downarrow&&&&\\[2pt] &&G_0&\stackrel{\Id_{G_0}}{\longrightarrow}&G_0&&&&\\[2pt] &&{\scriptstyle\left(\!\begin{array}{c}\scriptstyle\Psi_0\\ \scriptstyle\Id_{G_0}\end{array}\!\right)}\Big\downarrow\ \ &&\ \ \Big\downarrow{\scriptstyle\left(\!\begin{array}{c}\scriptstyle\Psi_0\\ \scriptstyle 0\\ \scriptstyle -D_G\end{array}\!\right)}&&&&\\[8pt] 0&\longrightarrow&F_0\oplus G_0 &\stackrel{\left(\!\begin{array}{cc} \scriptstyle\Id_{F_0}&\scriptstyle 0\\ \scriptstyle -\Phi_0&\scriptstyle \Phi_0\circ\Psi_0\\ \scriptstyle 0&\scriptstyle -D_G \end{array}\!\right)}{\longrightarrow}&F_0\oplus E_0\oplus C &\stackrel{(\Phi_1\circ D_F,\, D_E,\, \Phi_1\circ\Psi_1)}{\longrightarrow} &(\Phi_\bullet\circ \Psi_\bullet)_!\, C&\longrightarrow&0\\[8pt] &&{\scriptstyle(-\Id_{F_0},\,\Psi_0)}\Big\downarrow\ \ &&\ \ \Big\downarrow{\scriptstyle\left(\!\begin{array}{ccc} \scriptstyle 0&\scriptstyle \Id_{E_0}&\scriptstyle 0\\ \scriptstyle D_F&\scriptstyle 0&\scriptstyle \Psi_1 \end{array}\!\right)}&&\ \ \Big\downarrow{\scriptstyle\Id_{E_1}}&&\\[8pt] 0&\longrightarrow&F_0&\stackrel{\left(\!\begin{array}{c}\scriptstyle\Phi_0\\ \scriptstyle -D_F\end{array}\!\right)}{\longrightarrow}&E_0\oplus (\Psi_\bullet)_!\, C &\stackrel{(D_E,\,\Phi_1)}{\longrightarrow} &(\Phi_\bullet)_!(\Psi_\bullet)_!\, C&\longrightarrow&0\\[2pt] &&\Big\downarrow&&\Big\downarrow&&&&\\[2pt] &&0&&0&&&& \end{array}$$ We claim that rows and columns in this diagram are exact. The left vertical sequence is trivially exact. The lower horizontal sequence is exact by Proposition \[ex\_seq\_cones\] applied to $(\Phi_\bullet)_!$ and the cone $(\Psi_\bullet)_!(C)\subset F_1$. Exactness of the middle vertical sequence follows by adding a trivial $E_0$-term to the analogous sequence for $(\Psi_\bullet)_!$ and $C\subset G_1$. For the remaining middle horizontal sequence exactness of the enveloping sequence $$0\longrightarrow F_0\oplus G_0\longrightarrow F_0\oplus E_0\oplus G_1 \longrightarrow E_1\longrightarrow 0$$ of linear spaces is easy to verify. Again by Proposition \[ex\_seq\_cones\] the preimage $\tilde C\subset F_0\oplus E_0\oplus G_1$ of $(\Phi_\bullet\circ\Psi_\bullet)_!(C)\subset E_1$ intersects $0\oplus E_0\oplus G_1$ in $0\oplus E_0\oplus C$, and it is invariant under the action of $F_0$ on the first factor. Hence $\tilde C= F_0\oplus E_0\oplus C$, proving exactness of the middle horizontal sequence. Now exactness of the lower horizontal sequence and of the middle vertical sequence show that the preimage of $(\Phi_\bullet)_!(\Psi_\bullet)_! C$ under the composition of epimorphisms $F_0\oplus E_0\oplus G_1\to E_0\oplus F_1\to E_1$ from the lower right square equals $F_0\oplus E_0\oplus C$. This is the same as the preimage of $(\Phi_\bullet\circ \Psi_\bullet)_!C$. Therefore $(\Phi_\bullet)_!(\Psi_\bullet)_! C$ and $(\Phi_\bullet\circ\Psi_\bullet)_!
C$ are the same cones in $E_1$. The case of quasi-isomorphisms ------------------------------ By definition a morphism $\Phi_\bullet$ of two-term complexes is a quasi-isomorphism if $H^i(\Phi_\bullet)$ is an isomorphism for $i=0,1$. This is equivalent to requiring that $\Phi_\bullet$ viewed as a commutative square is cartesian and cocartesian, see Lemma \[ass\_exact\_sequence\]. Going up and down behaves well with respect to quasi-isomorphisms: \[1-1-corr\] Let $\Phi_\bullet:F_\bullet\rightarrow E_\bullet$ be a quasi-isomorphism of two-term complexes of linear spaces with $F_0$ locally free. Then going up and down induces a functorial one-to-one correspondence between $F_0$-cones $C\subset F_1$ and $E_0$-cones $\bar C\subset E_1$. [[*Proof.* ]{}]{}In view of Proposition \[left\_inverse\] it remains to show that if $\bar C\subset E_1$ is an $E_0$-cone then $\bar C=(\Phi_\bullet)_! \Phi_\bullet^!(\bar C)$. This is a local problem. We may thus assume that there exists a local splitting $\sigma: E_0\oplus F_1\to F_0$ of the exact sequence $$0\longrightarrow F_0\longrightarrow E_0\oplus F_1 \stackrel{q}{\longrightarrow}E_1\longrightarrow0,\quad q=D\circ \prj_1+\Phi_1\circ\prj_2$$ from Lemma \[ass\_exact\_sequence\]. Then $\chi=(\sigma,q): E_0\oplus F_1\to F_0\oplus E_1$ is an isomorphism mapping the diagonal $F_0$-action on $E_0\oplus F_1$ to the action on the first factor of $F_0\oplus E_1$. Since $\sigma$ is a splitting, $\chi$ induces an isomorphism $\ker(\sigma)\to E_1$. Therefore $$\chi\big(\ker(\sigma)\cap q^{-1}(\bar C)\big) =\chi(q^{-1}(\bar C))\cap (0\oplus E_1)=0\oplus \bar C.$$ But $\chi(q^{-1}(\bar C))$ is an $F_0$-cone, and hence Proposition \[quot\_cone\] implies $\chi(q^{-1}(\bar C)) = F_0\oplus \bar C$. By definition this says $\bar C=(\Phi_\bullet)_! \Phi_\bullet^!(\bar C)$. 
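A simple instance of this correspondence (my example, not from the text) is obtained by adding an acyclic complex: for $\Phi_\bullet:[F_0\stackrel{D'}{\rightarrow}F_1]\rightarrow [F_0\oplus A\stackrel{D'\oplus\Id_A}{\longrightarrow}F_1\oplus A]$, with $F_0$ a vector bundle, $A$ any linear space and $\Phi_i$ the inclusions of the first factors, $\Phi_\bullet$ is a quasi-isomorphism, and the correspondence reads $$(\Phi_\bullet)_!(C)\ =\ C\oplus A\, ,\qquad \Phi_\bullet^!(C\oplus A)\ =\ C\, .$$ Indeed, an $(F_0\oplus A)$-cone in $F_1\oplus A$ is invariant under the translation action of $A$ on the second factor, hence of the form $\bar C'\oplus A$ by Lemma \[C\_in\_EplusF\].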
Using the nice behavior under quasi-isomorphisms we may now define [*going down for morphisms in the derived category*]{} $D({{\mathop{\rm Coh}}}(X))$ of the category of coherent sheaves. In the language of linear spaces a morphism of two-term complexes in the derived category $\Phi_\bullet: F_\bullet\rightarrow E_\bullet$ consists of 1. another two-term complex $G_\bullet$ 2. a quasi-isomorphism $\Theta_\bullet:E_\bullet\rightarrow G_\bullet$ 3. and a morphism $\Psi_\bullet:F_\bullet\rightarrow G_\bullet$. Two morphisms defined by tuples $(G_\bullet,\Theta_\bullet, \Psi_\bullet)$ and $(G'_\bullet,\Theta'_\bullet,\Psi'_\bullet)$ are considered equivalent if there exists a two-term complex ${\tilde G}_\bullet$ and quasi-isomorphisms $\Lambda_\bullet: G_\bullet\rightarrow{\tilde G}_\bullet$, $\Lambda'_\bullet: G'_\bullet\rightarrow{\tilde G}_\bullet$ making the following diagram commutative up to homotopy: $$\begin{array}{ccccc} &&\ G_\bullet\ &&\\[7pt] &{\makebox[0pt]{${\scriptstyle\Psi_\bullet\atop\ \ }{{\begin{picture}(0,0) \unitlength 1pt\put(-15,-15){\vector(1,1){30}}\end{picture}}}\phantom{\scriptstyle\Psi_\bullet\atop\ \ }$}}&{\makebox[0cm]{$\phantom{\ \scriptstyle\raisebox{-5pt}{$\scriptstyle \Lambda_\bullet$}} {{\begin{picture}(0,0) \unitlength 1pt\put(0,16){\vector(0,-1){30}}\end{picture}}}{\ \scriptstyle\raisebox{-5pt}{$\scriptstyle \Lambda_\bullet$}}$}}&{\makebox[0pt]{$\phantom{\scriptstyle\Theta_\bullet\atop\ \ } {{\begin{picture}(0,0) \unitlength 1pt\put(15,-15){\vector(-1,1){30}}\end{picture}}}{\scriptstyle\Theta_\bullet\atop\ \ }$}}&\\[15pt] \hspace{5cm}F_\bullet\ &&{\tilde G}_\bullet&&\ E_\bullet \hspace{5cm}(*)\\[7pt] &{\makebox[0pt]{${\ \ \atop\scriptstyle\Psi'_\bullet\ }{{\begin{picture}(0,0) \unitlength 1pt\put(-15,15){\vector(1,-1){30}}\end{picture}}}\phantom{\ \ \atop\scriptstyle\Psi'_\bullet\ }$}}&{\makebox[0cm]{$\phantom{\ \scriptstyle\raisebox{2pt}{$\scriptstyle \Lambda'_\bullet$}}{{\begin{picture}(0,0) \unitlength 
1pt\put(0,-16){\vector(0,1){30}}\end{picture}}}{\ \scriptstyle\raisebox{2pt}{$\scriptstyle \Lambda'_\bullet$}}$}}&{\makebox[0pt]{$\phantom{\ \ \atop\ \scriptstyle\Theta'_\bullet} {{\begin{picture}(0,0) \unitlength 1pt\put(15,15){\vector(-1,-1){30}}\end{picture}}}{\ \ \atop\ \scriptstyle\Theta'_\bullet}$}}&\\[15pt] &&G'_\bullet&& \end{array}$$ \[go\_down\_deriv\] (going down in the derived category) Let $\Phi_\bullet:F_\bullet \rightarrow E_\bullet$ be a morphism of two-term complexes of linear spaces in the derived category, inducing an isomorphism on $H^0$ and a closed embedding on $H^1$. Moreover, we require $F_0$ to be locally free. When these assumptions are satisfied we say that [*going down is applicable to $\Phi_\bullet$*]{}. In this case the [*push-forward*]{} of an $F_0$-cone $C\subset F_1$ is defined to be the $E_0$-cone $$(\Phi_\bullet)_!(C):=(\Theta_\bullet)^!(\Psi_\bullet)_!(C)\subset E_1\, ,$$ whenever $(G_\bullet,\Psi_\bullet,\Theta_\bullet)$ is a representative of $\Phi_\bullet$. Using the previous results it is easy to check that this is well-defined. \[coh\_cones\] One might wonder if there exists “going down” when being only given maps on the level of cohomology. There are three remarks I want to make on this. 1. A map in cohomology is considerably weaker than a map of complexes, even for two-term complexes. For instance, let $E_\bullet=[E_0\stackrel{D}{\rightarrow}E_1]$ be a non-split epimorphism of linear spaces and $K=\ker D$. Then $H^\bullet(E_\bullet) =H^\bullet([K_\bullet\rightarrow0])$, but the identity map in cohomology is not induced by a morphism of complexes. 2. There is going down for “cones coming from cohomology”: By such cones we mean cones of the form $C=p^{-1}(\bar C)$ for some $\bar C\subset H^1(E_\bullet)$, $p:E_1 \rightarrow H^1 (E_\bullet)$ the cokernel of $E_\bullet$. 
Namely, if $\rho: H^1( E_\bullet)\rightarrow H^1(F_\bullet)$ is a closed embedding and $p': F_1 \rightarrow H^1(F_\bullet)$ is the cokernel of $F_\bullet$, one may set $\rho_!(C) :={p'}^{-1}(\rho(\bar C))$. If $\rho=H^1(\Phi_\bullet)$ with $\Phi_\bullet:E_\bullet\rightarrow F_\bullet$ a morphism to which going down is applicable, then $\rho_!(C)$ obviously coincides with $(\Phi_\bullet)_!(C)$. 3. Not every $E_0$-cone in $E_1$ comes from cohomology. As a simple example with $C\neq E_1$ but $H^1(E_\bullet)=0$ take $X=\Aff^1_k=\Spec k[T]$, $E_0=E_1=L({{\mathcal O}}_X) =\Spec k[T,U]$, $D:E_0\rightarrow E_1$ corresponding to the homomorphism of $k[T]$-algebras sending $U$ to $TU$, and $C=V(TU)\subset E_1$, the linear space corresponding to the structure sheaf of the origin. See also Remark \[C\_X|M\_descend?\] for another, less artificial example. Global normal cones =================== If $\iota:X\hookrightarrow M$ is a closed embedding of algebraic $k$-schemes the normal cone $C_{X|M}\subset N_{X|M}$ is a $T_M|_X$-cone (Example \[C\_Z|X\]). With nonsingular $M$ these normal cones are essentially unique, namely up to vector bundle factors. In fact, if $\iota':X\hookrightarrow M'$ is another such embedding we may consider the diagonal $(\iota,\iota'):X \hookrightarrow M\times M'$ to reduce to the case where $\iota= \pi\circ\iota'$, $\pi:M'\rightarrow M$ a smooth morphism. But then there is an exact sequence of cones $$\begin{aligned} 0\longrightarrow(\iota')^*T_{M'|M}\longrightarrow C_{X|M'} \longrightarrow C_{X|M}\longrightarrow0\, .\label{eqn1}\end{aligned}$$ Based on this observation Behrend and Fantechi show that to any $X$ there is associated a cone stack (a certain Artin stack) over $X$ of pure relative dimension zero, the [*intrinsic normal cone*]{} ${{\mathcal C}}_X$. Locally, ${{\mathcal C}}_X$ is nothing but the stack-theoretic quotient $C_{X|M}/T_M|_X$, and the above exact sequence of cones is responsible for the fact that these quotients glue.
One essential insight of Behrend and Fantechi is that one can retrieve an actual cone over $X$ by giving a morphism $\ph^\bullet: {{\mathcal F}}^\bullet\rightarrow{{\mathcal L}}_X^\bullet$ in the derived category inducing an isomorphism in $H^0$ and an epimorphism in $H^{-1}$, and such that ${{\mathcal F}}^\bullet=[{{\mathcal F}}^{-1}\rightarrow{{\mathcal F}}^0]$ is a two-term complex of locally free sheaves. Here ${{\mathcal L}}_X^\bullet$ is the cotangent complex of $X$. (In the language of [@behrendfantechi], $\ph^\bullet$ is a “global resolution” of a “perfect obstruction theory” for $X$.) The cotangent complex is a complicated and largely mysterious object canonically associated to any scheme, or even ringed topos [@illusie]. However, here we will work exclusively with the truncated complex $\tau_{\ge -1}{{\mathcal L}}_X^\bullet$. This is simply an object of the derived category that has the following explicit local description: If $U\subset X$ is an open subscheme and $U\hookrightarrow M$ is a closed embedding into a smooth scheme $M$ then $$\tau_{\ge -1} {{\mathcal L}}_X^\bullet= [{{\mathcal I}}/{{\mathcal I}}^2\to \Omega_M|_U],$$ where the complex on the right-hand side has entries at $-1$ and $0$ (this follows from the exact triangle for the cotangent complex, see below). In particular, if $X$ is globally embedded into a smooth scheme we can avoid the cotangent complex altogether. Using our study of going up and down for $E$-cones we will see that the object needed is the following. \[global\_normal\_space\] A [*global normal space*]{} for $X$ is a morphism $\ph^\bullet: {{\mathcal F}}^\bullet = [{{\mathcal F}}^{-1}\rightarrow{{\mathcal F}}^0] \rightarrow \tau_{\ge -1}{{\mathcal L}}_X^\bullet$ in the derived category with ${{\mathcal F}}^0$ locally free and inducing an isomorphism in $H^0$ and an epimorphism in $H^{-1}$.
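To make this definition concrete, here is a sketch of the basic example (our illustration, using only standard facts): suppose $X=V(f_1,\dots,f_r)\subset M$ is globally cut out in a smooth $M$ by the ideal ${{\mathcal I}}=(f_1,\dots,f_r)$. Sending the $i$-th generator $e_i$ of ${{\mathcal O}}_X^r$ to $f_i$ mod ${{\mathcal I}}^2$ defines a morphism of two-term complexes $$[{{\mathcal O}}_X^r\stackrel{e_i\mapsto df_i}{\longrightarrow}\Omega_M|_X]\ \longrightarrow\ [{{\mathcal I}}/{{\mathcal I}}^2\to\Omega_M|_X]\ =\ \tau_{\ge -1}{{\mathcal L}}_X^\bullet\, .$$ It is the identity on $H^0=\Omega_X$ and an epimorphism on $H^{-1}$ because ${{\mathcal O}}_X^r\to{{\mathcal I}}/{{\mathcal I}}^2$ is onto; since $\Omega_M|_X$ is locally free, this is a global normal space for $X$.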
Given a global normal space $\Phi_\bullet:\tau_{\le1} (L_X)_\bullet\rightarrow F_\bullet$, now written in terms of linear spaces $\tau_{\le 1}(L_X)_\bullet=L(\tau_{\ge -1}{{\mathcal L}}_X^\bullet)$ etc., we may construct a cone $C=C(\Phi_\bullet)\subset F_1$ as follows: Let $U\subset X$ be an open set embedded into a nonsingular $M$, $\iota: U\hookrightarrow M$. The exact triangle of relative cotangent complexes associated to $U\rightarrow M\rightarrow\Spec k$ yields a morphism in the derived category $$\Lambda_\bullet:[T_M|_U\rightarrow N_{U|M}]\longrightarrow \tau_{\le 1}(L_U)_\bullet$$ that induces isomorphisms in $H^i$, $i=0,1$. As $T_M|_U$ is a vector bundle the composition $\Phi_\bullet|_U\circ \Lambda_\bullet$ fulfills the assumption of Definition \[go\_down\_deriv\]. We may thus define $$\begin{aligned} C|_U\ :=\ \big(\Phi_\bullet|_U\circ\Lambda_\bullet\big)_! (C_{U|M})\, .\label{eqn2}\end{aligned}$$ It remains to show \[independence\] $(\Phi_\bullet|_U\circ\Lambda_\bullet)_!(C_{U|M})\subset F_1|_U$ is independent of choices. [[*Proof.* ]{}]{}It suffices to treat the case of another embedding $\iota':U \rightarrow M'$ such that $\iota=\pi\circ\iota'$ for some smooth morphism $\pi:M'\rightarrow M$, see above.
We have a commutative diagram with exact rows and columns $$\begin{array}{cccccccccccl} &&&&0&&0\\[8pt] &&&&{\makebox[0cm]{${\scriptstyle\ }{{\begin{picture}(0,0) \unitlength 1pt\put(0,16){\vector(0,-1){30}}\end{picture}}}\phantom{\scriptstyle\ }$}}&&{\makebox[0cm]{${\scriptstyle\ }{{\begin{picture}(0,0) \unitlength 1pt\put(0,16){\vector(0,-1){30}}\end{picture}}}\phantom{\scriptstyle\ }$}}\\[12pt] &&&&\makebox[0pt]{${\iota'}^*T_{M'|M}$}&=\!=\!=& \makebox[0pt]{${\iota'}^*T_{M'|M}$}\\[8pt] &&&&{\makebox[0cm]{${\scriptstyle\ }{{\begin{picture}(0,0) \unitlength 1pt\put(0,16){\vector(0,-1){30}}\end{picture}}}\phantom{\scriptstyle\ }$}}&&{\makebox[0cm]{${\scriptstyle\ }{{\begin{picture}(0,0) \unitlength 1pt\put(0,16){\vector(0,-1){30}}\end{picture}}}\phantom{\scriptstyle\ }$}}\\[12pt] 0&{{\begin{picture}(25,6)(2.5,-4) \unitlength 1pt\put(0,0){\vector(1,0){30}}\end{picture}}}&T_U&{{\begin{picture}(25,6)(2.5,-4) \unitlength 1pt\put(0,0){\vector(1,0){30}}\end{picture}}}&{\iota'}^*T_{M'}& {{\begin{picture}(25,6)(2.5,-4) \unitlength 1pt\put(0,0){\vector(1,0){30}}\end{picture}}}&N_{U|M'} &{{\begin{picture}(25,6)(2.5,-4) \unitlength 1pt\put(0,0){\vector(1,0){30}}\end{picture}}}&T_1(U)&{{\begin{picture}(25,6)(2.5,-4) \unitlength 1pt\put(0,0){\vector(1,0){30}}\end{picture}}}&0\\[8pt] &&{\makebox[0cm]{${\scriptstyle\ }{{\begin{picture}(0,0) \unitlength 1pt\put(0,16){\vector(0,-1){30}}\end{picture}}}\phantom{\scriptstyle\ }$}}&&{\makebox[0cm]{${\scriptstyle\ }{{\begin{picture}(0,0) \unitlength 1pt\put(0,16){\vector(0,-1){30}}\end{picture}}}\phantom{\scriptstyle\ }$}}&&{\makebox[0cm]{$\phantom{\ \scriptstyle} {{\begin{picture}(0,0) \unitlength 1pt\put(0,16){\vector(0,-1){30}}\end{picture}}}{\ \scriptstyle}$}}&&{\makebox[0cm]{$\phantom{\ \scriptstyle} {{\begin{picture}(0,0) \unitlength 1pt\put(0,16){\vector(0,-1){30}}\end{picture}}}{\ \scriptstyle}$}}\\[12pt] 0&{{\begin{picture}(25,6)(2.5,-4) \unitlength 
1pt\put(0,0){\vector(1,0){30}}\end{picture}}}&T_U&{{\begin{picture}(25,6)(2.5,-4) \unitlength 1pt\put(0,0){\vector(1,0){30}}\end{picture}}}&\iota^*T_M& {{\begin{picture}(25,6)(2.5,-4) \unitlength 1pt\put(0,0){\vector(1,0){30}}\end{picture}}}&N_{U|M} &{{\begin{picture}(25,6)(2.5,-4) \unitlength 1pt\put(0,0){\vector(1,0){30}}\end{picture}}}&T_1(U)&{{\begin{picture}(25,6)(2.5,-4) \unitlength 1pt\put(0,0){\vector(1,0){30}}\end{picture}}}&0&\!\!\!.\\[8pt] &&&&{\makebox[0cm]{${\scriptstyle\ }{{\begin{picture}(0,0) \unitlength 1pt\put(0,16){\vector(0,-1){30}}\end{picture}}}\phantom{\scriptstyle\ }$}}&&{\makebox[0cm]{${\scriptstyle\ }{{\begin{picture}(0,0) \unitlength 1pt\put(0,16){\vector(0,-1){30}}\end{picture}}}\phantom{\scriptstyle\ }$}}\\[12pt] &&&&0&&0 \end{array}$$ Here $T_1(U)$ is the linear space associated to the first higher cotangent sheaf of $U$. This shows that $D \pi$ induces a quasi-isomorphism $\Theta_\bullet: [T_{M'}|_U \rightarrow N_{U|M'}] \rightarrow [T_M|_U\rightarrow N_{U|M}]$ with $\Lambda'_\bullet= \Lambda_\bullet \circ\Theta_\bullet$. Moreover, from the exact sequence of cones (\[eqn1\]) $$C_{U|M'}\ =\ \Theta_1^{-1}(C_{U|M})\ =\ \Theta_\bullet^!(C_{U|M})\, .$$ Thus by Proposition \[1-1-corr\] we conclude $$(\Lambda'_\bullet)_!(C_{U|M'})\ =\ (\Lambda_\bullet)_!(\Theta_\bullet)_!\Theta_\bullet^!(C_{U|M}) \ =\ (\Lambda_\bullet)_!(C_{U|M})\, .$$ Here is the first of the two main results of this paper. \[global\_normal\_cone\] Let $X$ be an algebraic $k$-scheme. To any global normal space $\Phi_\bullet: \tau_{\le 1}(L_X)_\bullet\rightarrow F_\bullet$ for $X$ is associated an $F_0$-cone $C(\Phi_\bullet)\subset F_1$, locally of the form (\[eqn2\]), and of pure dimension equal to $\rk F_0$. [[*Proof.* ]{}]{}It remains to check the statement on the dimension. 
Locally, we may choose a representation of $\Phi_\bullet\circ \Lambda_\bullet: [T_M|_U\rightarrow N_{U|M}] \rightarrow F_\bullet$, where $\iota:U\hookrightarrow M$ is a closed embedding of an open $U\subset X$, by $(G_\bullet,\Theta_\bullet,\Psi_\bullet)$ with - $\Psi_\bullet:[T_M|_U\rightarrow N_{U|M}]\rightarrow G_\bullet$ - $\Theta_\bullet:F_\bullet\rightarrow G_\bullet$ a quasi-isomorphism - $G_\bullet=[G_0\rightarrow G_1]$ with $G_0$ a vector bundle (!). We get two exact sequences of cones (Proposition \[ex\_seq\_cones\]) $$\begin{array}{rcccccccl} 0&\longrightarrow&T_M|_U&\longrightarrow&G_0\oplus C_{U|M} &\longrightarrow&(\Psi_\bullet)_!C_{U|M}&\longrightarrow&0\\[1ex] 0&\longrightarrow&F_0&\longrightarrow&G_0\oplus C(\Phi_\bullet)|_U &\longrightarrow&(\Psi_\bullet)_!C_{U|M}&\longrightarrow&0\, . \end{array}$$ The first one shows that $(\Psi_\bullet)_!C_{U|M}$ is pure dimensional of dimension equal to $\rk G_0$, and then by the second one $C(\Phi_\bullet)$ is pure $(\rk F_0)$-dimensional. $C(\Phi_\bullet)$ is called the [*global normal cone*]{} associated to the global normal space $\Phi_\bullet$. \[C\_X|M\_descend?\] 1)   The picture would be especially simple if for any closed embedding $\iota:X \hookrightarrow M$ into a nonsingular $M$, $C_{X|M}$ came from a cone in the intrinsically defined linear space $T_1(X)$. This is, however, generally wrong: Consider the fat point $X=\Spec R$, $R=k[X,Y]/(X^2,XY,Y^2)$, with its embedding into $M=\Aff_k^2=\Spec k[X,Y]$. Letting $A$, $B$, $C$ correspond to the generators $X^2$, $XY$, $Y^2$ of the ideal, $C_{X|M}=\Spec R[A,B,C]/ (B^2-AC,XC-YB,XB-YA)$. $T_1(X)$ is the linear space corresponding to the kernel $I_T$ of $$\begin{aligned} &R[A,B,C]/(XC-YB,XB-YA)\ \longrightarrow\ R[dX,dY],\\ &A\mapsto 2XdX,\ B\longmapsto YdX+XdY,\ C\mapsto 2YdY\, ,\end{aligned}$$ which is $(XA,YA,XB,YB,XC,YC)=(X,Y)\cdot(A,B,C)$. 
A cone in $N_{X|M}=\Spec R[A,B,C]/(XC-YB,XB-YA)$ comes from $T_1(X)$ iff its ideal is generated by polynomials in $XA$, $YA$, $XB$, $YB$, $XC$, $YC$. This is not the case for $B^2-AC$, and so $C_{X|M}$ does not come from a cone in $T_1(X)$.\ 2)  By uniqueness of minimal free resolutions of modules over a local ring (see e.g. [@eisenbud Thm.20.2]) it is not hard to show that for any, not necessarily closed point $x\in X$ there is a [*minimal*]{} germ of global normal spaces at $x$. This is constructed by embedding an étale neighborhood of $x$ in $X$ into a smooth $k$-scheme $M$ of dimension ${\rm embdim}_x X= \dim_{k(x)} \Omega_x\otimes k(x)$. We assume $k$ perfect here to assure that regular $k$-schemes are smooth over $k$. A germ of global normal space at $x$ can then be defined by selecting a minimal set of generators for the ideal defining $X\hookrightarrow M$. This germ of global normal space is minimal in the sense that any other germ of global normal space at $x$ can be obtained by adding trivial factors. As a consequence, the germ at $x$ of any global normal cone is isomorphic to $C_{X|M}$ plus a vector bundle factor. Morally speaking, the “nonlinear parts” of global normal cones are locally unique. Virtual fundamental class and Fulton’s canonical class ====================================================== Virtual fundamental classes --------------------------- If $X$ is an algebraic $k$-scheme and $\Phi_\bullet:\tau_{\le 1} (L_X)_\bullet \rightarrow F_\bullet$ is a global normal space for $X$ with also [*$F_1$ a vector bundle*]{} we speak of a [*free*]{} global normal space of [*rank*]{} $\rk(\Phi_\bullet)=\rk F_0-\rk F_1$. We may then intersect the zero section $s:X\rightarrow F_1$ of $F_1$ with the global normal cone $C(\Phi_\bullet)\subset F_1$ to produce a class on $X$. Let $\Phi_\bullet: \tau_{\le 1} (L_X)_\bullet\rightarrow F_\bullet$ be a free global normal space. 
The Chow class $$[X,\Phi_\bullet]\ :=\ s^![C(\Phi_\bullet)]\in A_{\rk(\Phi_\bullet)}(X)$$ is called the [*virtual fundamental class*]{} of $X$ with respect to $\Phi_\bullet$. Note that $[X,\Phi_\bullet]$ contains as much information as $[C(\Phi_\bullet)]\in A_{\rk(F_0)}(F_1)$, for $$[C(\Phi_\bullet)]\ =\ p^!s^![C(\Phi_\bullet)]\ =\ p^![X,\Phi_\bullet]\, ,$$ where $p:F_1\rightarrow X$ is the projection. One of the most important properties of such classes is their compatibility with specializations. In the application to the construction of invariants from moduli spaces associated to a projective manifold $V$, say (as in Gromov-Witten or Donaldson theory), this property implies invariance under smooth deformations of $V$. There are two versions of the specialization theorem, one involving global normal spaces of the total space of a family, and the other working with [*relative*]{} global normal spaces (that is, a morphism $\Phi_\bullet: \tau_{\le 1}(L_{{{\mathcal X}}|S})_\bullet \rightarrow F_\bullet$, where ${{\mathcal X}}\rightarrow S$ is the family of algebraic $k$-schemes under consideration). We do not have anything to add to the presentation in [@behrendfantechi Proposition 5.10 and Proposition 7.2], which translates almost literally into our language. Before turning to an explicit formula for the computation of $[X,\Phi_\bullet]$ in hopefully more accessible terms, we want to add the following point of view: For any pure-dimensional cone $C$ in a vector bundle $F$ there is a formula for the intersection with the zero section in terms of the Segre class of $C$ and the total Chern class of $F$ [@fulton Expl. 4.1.8]. Applied to $C(\Phi_\bullet)\subset F_1$ it says $$[X,\Phi_\bullet]\ =\ \{c(F_1)\cap s(C(\Phi_\bullet)) \}_{\rk(\Phi_\bullet)}\, ,$$ where $\{\,.\,\}_d: A_*(X)\rightarrow A_d(X)$ denotes the projection to the $d$-dimensional part.
Now for any $r>0$, the image of $[C(\Phi_\bullet)]$ under the monomorphism $\iota_r: F_1 \hookrightarrow F_1\oplus\Aff_X^r$ becomes rationally trivial, while $c(F_1\oplus \Aff^r_X) = c(F_1)$. Thus letting ${\tilde s}^r:X\hookrightarrow F_1\oplus\Aff_X^r$ be the zero section we see $$\{c(F_1)\cap s(C(\Phi_\bullet))\}_{\rk(\Phi_\bullet)-r}\ =\ ({\tilde s}^r)^!(\iota_r)_*[C(\Phi_\bullet)]\ =\ 0\, .$$ This teaches us two things: First, if $F_1$ splits off a trivial factor $F_1={\bar F}_1\oplus\Aff^1_X$ with $\img\Phi_1 \subset{\bar F}_1$ then $[X,\Phi_\bullet]=0$. So the result is trivial if $F_1$ is not chosen small enough. And second, if $[X,\Phi_\bullet]\neq 0$ then $\rk(\Phi_\bullet)$ is the smallest number $d$ such that $$\{c(F_1)\cap s(C(\Phi_\bullet))\}_d\ \neq\ 0\, .$$ Let $X$ be smooth of dimension $n$. Then the cotangent complex of $X$ is exact at ${{\mathcal L}}_X^{-1}$. So the natural morphism $\Phi_\bullet: \tau_{\le 1} (L_X)_\bullet\rightarrow [T_X\rightarrow O]$, $O=X$ the trivial linear space over $X$, is an isomorphism in $H^0$ and $H^1$. Then $C(\Phi_\bullet)=X=O$ and $c(F_1)\cap s(C(\Phi_\bullet)) =s(C(\Phi_\bullet))=[X]$ has vanishing components in dimensions smaller than $n$. Fulton’s canonical class ------------------------ If an algebraic $k$-scheme $X$ is globally embeddable into a smooth $k$-scheme $M$ (e.g. $X$ quasi-projective) then $$c_F(X)\ :=\ c(T_M|_X)\cap s(C_{X|M})\, \in A_*(X)$$ is a Chow-class on $X$ that is independent of the choice of embedding [@fulton Expl. 4.2.6]. \[fultons\_class\] $c_F(X)$ is called [*Fulton’s canonical class*]{}. Note that if $X$ is smooth one may choose $X=M$ and so $c_F(X)=c(T_X)\cap[X]$. For comparison of $c_F(X)$ with Mather’s and MacPherson’s Chern classes see [@aluffi]. 
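As a simple illustration of Fulton's class (this worked example is our addition, valid under the stated assumption): let $X$ be an effective Cartier divisor in a smooth $M$. Then the normal cone is the line bundle $C_{X|M}=N_{X|M}={{\mathcal O}}_M(X)|_X$, so $s(C_{X|M})=c({{\mathcal O}}_M(X)|_X)^{-1}\cap[X]$ and $$c_F(X)\ =\ c(T_M|_X)\,\big(1+c_1({{\mathcal O}}_M(X)|_X)\big)^{-1}\cap[X]\, ,$$ also for singular $X$. If $X$ happens to be smooth, the adjunction sequence $0\to T_X\to T_M|_X\to N_{X|M}\to 0$ reduces this to $c_F(X)=c(T_X)\cap[X]$, as above.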
Given a (not necessarily free) global normal space $\Phi_\bullet: \tau_{\le 1} (L_X)_\bullet\rightarrow F_\bullet$ for $X$, $c_F(X)$ can also be expressed as follows: \[Fultons\_class\] Let $\Phi_\bullet:\tau_{\le 1} (L_X)_\bullet\rightarrow F_\bullet$ be a global normal space for a quasi-projective $X$. Then $$c_F(X)\ =\ c(F_0)\cap s(C(\Phi_\bullet))\, .$$ [[*Proof.* ]{}]{}By quasi-projectivity there exists a [global]{} closed embedding $\iota:X\hookrightarrow M$ of $X$ into a smooth $M$. This yields a globally defined morphism in the derived category $\Lambda_\bullet: [T_M|_X\rightarrow N_{X|M}]\rightarrow \tau_{\le 1} (L_X)_\bullet$. Also by quasi-projectivity any sheaf is the quotient of a locally free sheaf. Hence there is a global representative $(G_\bullet, \Theta_\bullet, \Psi_\bullet)$ of $\Phi_\bullet\circ \Lambda_\bullet$ in the construction of Theorem \[global\_normal\_cone\], that is, with $G_0$ a vector bundle, $\Theta_\bullet:F_\bullet \rightarrow G_\bullet$ a quasi-isomorphism, $\Psi_\bullet: [T_M|_X\rightarrow N_{X|M}]\rightarrow G_\bullet$. We get two exact sequences of cones with vector bundle kernels (see Proposition \[ex\_seq\_cones\]) $$\begin{array}{rcccccccl} 0&\longrightarrow&T_M|_X&\longrightarrow& G_0\oplus C_{X|M}&\longrightarrow& (\Psi_\bullet)_!(C_{X|M})&\longrightarrow&0\\[2ex] 0&\longrightarrow&F_0&\longrightarrow&G_0\oplus C(\Phi_\bullet)&\longrightarrow&(\Psi_\bullet)_!(C_{X|M}) &\longrightarrow&0\, , \end{array}$$ which by the multiplicativity of Segre classes in exact sequences of cones with vector bundle kernels imply $$c(T_M|_X)\cap s(C_{X|M})\ =\ c(G_0)\cap s\Big((\Psi_\bullet)_! (C_{X|M})\Big)\ =\ c(F_0)\cap s(C(\Phi_\bullet))\, .$$ If $X$ is any algebraic $k$-scheme with global normal spaces one could take the right-hand side of the formula in the proposition as definition for a generalization of Fulton’s canonical class on projective schemes. 
However, I was not able to prove independence of this class from the choice of $\Phi_\bullet$. And in case $X$ is not quasi-projective but embeddable into a smooth scheme, in the construction of Theorem \[global\_normal\_cone\] we might not be able to choose $G_0$ locally free. Then the coincidence of this class with $c_F(X)$ is not clear either. The problem is that on the one hand the globally defined complex linking two global normal spaces $\Phi_\bullet: \tau_{\le 1}(L_X)_\bullet\rightarrow F_\bullet$, $\Phi'_\bullet: \tau_{\le 1} (L_X)_\bullet \rightarrow F'_\bullet$ is the cotangent complex, which need not be globally representable by a complex $L_\bullet$ with $L_0$ a vector bundle, while on the other hand Segre classes do not behave well in exact sequences unless the kernels are vector bundles. We are now ready to deduce the announced formula for the virtual fundamental class. \[closed\_formula\] Let $X$ be a projective $k$-scheme and $\Phi_\bullet: \tau_{\le 1} (L_X)_\bullet\rightarrow F_\bullet$ a free global normal space for $X$ of constant rank $d$. Then $$[X,\Phi_\bullet]\ =\ \Big\{c(\ind F_\bullet)^{-1}\cap c_F(X) \Big\}_d\, ,$$ where $\ind F_\bullet$ is the virtual bundle $F_0-F_1\in K^0(X)$. [[*Proof.* ]{}]{}As remarked at the end of the last subsection, the virtual fundamental class can be computed by the formula $$s^![C(\Phi_\bullet)]\ =\ \Big\{c(F_1)\cap s(C(\Phi_\bullet))\Big\}_d\, .$$ Now just insert $c(F_0)^{-1}\cup c(F_0)$ and use Proposition \[Fultons\_class\]. 1)   This formula clarifies the dependence of virtual fundamental classes on the choice of global normal spaces: Interestingly, $[X,\Phi_\bullet]$ depends only on the index bundle of $F_\bullet$ rather than on any of the finer data used to construct $C(\Phi_\bullet)$.
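As a sanity check of this formula (a sketch we add here, assuming a global normal space of the indicated shape exists): take $X$ smooth of dimension $n$ and $F_\bullet=[T_X\stackrel{0}{\rightarrow}E]$ with $E$ a vector bundle of rank $r$, which induces an isomorphism on $H^0=T_X$ and the closed embedding $0\hookrightarrow E$ on $H^1$. Then $\ind F_\bullet=T_X-E$, $d=n-r$, and with $c_F(X)=c(T_X)\cap[X]$ the formula yields $$[X,\Phi_\bullet]\ =\ \Big\{c(E)\,c(T_X)^{-1}c(T_X)\cap[X]\Big\}_{n-r}\ =\ c_r(E)\cap[X]\, ,$$ the Euler class of the “obstruction bundle” $E$, as one expects from excess intersection.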
But note also that for another choice $\Phi'_\bullet: \tau_{\le 1} (L_X)_\bullet\rightarrow F'_\bullet$ of global normal space, $[X,\Phi'_\bullet]$ cannot in general be computed from $[X,\Phi_\bullet]$ and $\ind F_\bullet$, $\ind F'_\bullet$ alone.\ 2)   One can take this formula as [*definition*]{} of the virtual fundamental class of $X$ without knowing anything about the more sophisticated theory of global normal cones in the non-projective case. This was the point of view of the author in summer 1995 in an attempt to define GW-invariants in algebraic geometry, when I observed it from formal considerations. Unfortunately, I was not aware of Vistoli’s rational equivalence [@vistoli], from which the crucial independence of the invariants under smooth deformations can be derived. I also learned that the same formula has independently been discovered by Brussee for complex spaces constructed as the zero locus of holomorphic Fredholm sections of holomorphic Banach bundles over complex Banach manifolds, as occurring for example in Seiberg-Witten theory [@brussee] (the interpretation of Brussee’s $c_*(X)$ as Fulton’s canonical class is not quite clear, though).\ 3)   At the beginning of Section 3 we mentioned Behrend and Fantechi’s intrinsic normal cone ${{\mathcal C}}_X$, which locally was the stack-theoretic quotient of $C_{X|M}$ by the action of $T_M|_X$ for some embedding $X\hookrightarrow M$ into a smooth space. Now if $X$ is globally embedded into a smooth scheme $M$, ${{\mathcal C}}_X$ is globally the quotient of $C_{X|M}$ by $T_M|_X$. Hence in view of the multiplicative behavior of Segre classes in exact sequences of cones with vector bundle kernels, $c_F(X)$ could with some justification be considered as the [*Segre class of ${{\mathcal C}}_X$*]{}. Conversely, if there were a theory of Segre classes for cone stacks, the Segre class of ${{\mathcal C}}_X$ would generalize Fulton’s canonical class to arbitrary algebraic $k$-schemes. [XXXXXX]{} P.
Aluffi: [*Chern classes for singular hypersurfaces*]{}, Trans. Amer. Math. Soc. [**351**]{} (1999), 3989–4026 M. Artin: [*Versal deformations and algebraic stacks*]{}, Inv. Math. [**27**]{} (1974), 165–189 K. Behrend: [*GW-invariants in algebraic geometry*]{}, Inv. Math. [**127**]{} (1997), 601–617 K. Behrend, B. Fantechi: [*The intrinsic normal cone*]{}, Inv. Math. [**128**]{} (1997), 45–88 R. Brussee: [*The canonical class and the $C^\infty$-properties of Kähler surfaces*]{}, New York Journ. Math. [**2**]{} (1996), 103–146 A. Grothendieck: [*Éléments de géométrie algébrique II: Étude globale élémentaire de quelques classes de morphismes*]{}, Publ. Math. Inst. Hautes Étud. Sci. [**8**]{} (1961) D. Eisenbud: [*Commutative algebra. With a view toward algebraic geometry*]{}, Springer 1995 W. Fulton: [*Intersection theory*]{}, Springer 1984 L. Illusie: [*Complexe cotangent et déformations*]{}, 2 vols., Lecture Notes in Mathematics 239/283, Springer 1971/1972 R. Hartshorne: [*Residues and duality*]{}, Lecture Notes in Mathematics 20, Springer 1966 J. Li, G. Tian: [*Virtual moduli cycles and GW-invariants of algebraic varieties*]{}, in: Topics in symplectic $4$-manifolds, 1st International Press lectures presented in Irvine, CA, USA, March 28–30, 1996 (R. Stern ed.), 47–83 B. Siebert: [*Gromov-Witten invariants for general symplectic manifolds*]{}, preprint [dg-ga 9608005]{}, accepted as Habilitationsschrift, Bochum 1998 B. Siebert: [*An update on (small) quantum cohomology*]{}, in: Mirror symmetry III, Proceedings of the conference on Geometry and Physics, Montreal 1995 (D.H. Phong, L. Vinet, S.T. Yau eds.), 279–312, AMS/IP Stud. Adv. Math. [**10**]{}, Amer. Math. Soc. 1999 B. Siebert: [*Algebraic and symplectic Gromov-Witten invariants coincide*]{}, Ann. Inst. Fourier [**49**]{} (1999), 1743–1795 B. Siebert: [*Logarithmic Gromov-Witten invariants*]{}, unfinished manuscript 2001 A. Vistoli: [*Intersection theory on algebraic stacks and on their moduli spaces*]{}, Inv. Math. [**97**]{} (1989), 613–670 [^1]: “Linear space (over $X$)” or “linear fiber space” (“Linearer Faserraum”) seem to be the classical notation for the “abelian cones” of [@behrendfantechi]
--- author: - Philip Born - Karsten Holldack title: Analysis of Granular Packing Structure by Scattering of THz Radiation --- abstract {#abstract .unnumbered} ======== Scattering methods are widely used to characterize the structure and constituents of matter on small length scales. This motivates this introductory text, which identifies prospective approaches to scattering-based methods for granular media. A survey of light scattering by particles and particle ensembles is given. It is explained why the established scattering methods using X-rays and visible light cannot in general be transferred to granular media. Spectroscopic measurements using Terahertz radiation are highlighted as they probe the scattering properties of granular media, which are sensitive to the packing structure. Experimental details to optimize spectrometers for measurements on granular media are discussed. We perform transmission measurements on static and agitated granular media using Fourier-transform spectroscopy at the THz beamline of the BESSY II storage ring. The measurements demonstrate the potential to evaluate degrees of order in the media and to track transient structural states in agitated bulk granular media. Introduction ============ Scattering is a highly efficient method to characterize the structure of matter at small distances. Prominent examples like X-ray diffraction (XRD), small-angle X-ray scattering (SAXS), small-angle light scattering (SALS) (also called laser light scattering) and static light scattering (SLS) rely on the angular redistribution of electromagnetic waves during their propagation within the matter of interest [@Glatter1995; @Xu2002; @Brown1996]. They all give space- and time-averaged information on atomic, molecular or colloidal form and structure factors. This is accompanied by moderate instrumental effort and short measurement times, as in the case of the visible light scattering techniques.
Scattering methods are so fundamental to materials science that it is worth discussing their potential for studies on granular matter. In principle, scattering methods could provide experimental benefits compared to imaging methods. A single exposure of an area detector or a detector array, e.g. a CCD camera, could reveal ensemble-averaged structural information on a sample. Thus, even the investigation of non-stationary processes in bulk samples should become possible using scattering methods with good time resolution, which is a blind spot of investigations with imaging methods (compare the introductory article to the special topic section on imaging in this issue). The direct transfer of established scattering approaches to granular media is not possible, though. The mentioned scattering methods rely on resolving the momentum transfer to an incident wave by measuring the angular scattering pattern. We discuss in the following section fundamental situations of scattering of electromagnetic radiation from particles and particle ensembles. It becomes obvious that in most situations scattering from dense granular media is distinct from X-ray and light scattering in atomic or colloidal media, as the mean free path of the radiation becomes very short. Light transport through granular samples nevertheless happens by scattering, and the transport properties of a granular medium will reflect the scattering properties of the particle ensemble. We show that with Terahertz (THz) radiation these transport properties become sensitive to the packing structure. We intend this as a beginner's guide to the relevant approaches and literature. A coherent derivation of scattering properties, starting from individual particles and ranging up to dense media, is not possible. Fundamental regimes can be classified by the ratio of individual particle size to wavelength, which will persist in dense media.
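The claim that the mean free path becomes very short in dense granular media can be made quantitative with an order-of-magnitude estimate. A minimal sketch (our addition; the helper name `scattering_mfp` is hypothetical, and the independent-scattering approximation used here is itself questionable in dense packings):

```python
import math

def scattering_mfp(radius, volume_fraction, q_sca):
    """Independent-scattering estimate of the scattering mean free path
    l = 1/(n * sigma_s), with number density n = phi / (4/3 pi a^3)
    and scattering cross-section sigma_s = Q_s * pi * a^2,
    which simplifies to l = 4a / (3 phi Q_s)."""
    number_density = volume_fraction / (4.0 / 3.0 * math.pi * radius**3)
    sigma_s = q_sca * math.pi * radius**2
    return 1.0 / (number_density * sigma_s)

# Random close packing (phi ~ 0.64) of particles large compared to the
# wavelength (Q_s ~ 2): the mean free path is of the order of the
# particle radius itself.
mfp = scattering_mfp(radius=1.0, volume_fraction=0.64, q_sca=2.0)
```

For these values the estimate gives $l \approx a$, i.e. the radiation is redirected within roughly the first particle layer, which is why the well-separated single-scattering patterns assumed by SAXS- or SALS-type analyses are not available in bulk granular samples.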
The discussion will thereby be restricted to elastic scattering; inelastic processes like Raman scattering and absorption will be neglected. We then give an introduction to the design and setup of experiments to probe scattering properties of granular media in section \[sec:THzscatt\]. We finally show light transport measurements on granular media which undergo disorder-order transitions in section \[sec:experiments\]. These demonstrate the potential to monitor structural changes in-situ. Scattering-based methods for granular media thus do not provide the same high level of information that imaging methods do under optimal conditions. Size and shape distributions can be determined with imaging without a priori assumptions, and local structural properties can be probed without the need for spatial averaging. Still, scattering-based methods might find their place in granular matter science, as they allow for a time resolution that index matching or tomography could reach only with high instrumental effort. Scattering basics {#sec:basics} ================= ![image](1.pdf){width="80.00000%"} We start the survey on scattering basics with single-particle scattering properties, from which major regimes of scattering from particle ensembles can be deduced. Predictions on the light transport through granular media can be made from these scattering basics, which serve as a decision guide for selecting a scattering-based method for characterization. We finally derive in this section the relation between structure sensitivity and the wavelength of the light used. This short survey is designed only to highlight the approaches for scattering-based methods for granular media. Good references for scattering from particles [@Hulst1981; @Bohren1983; @Born1999] and particle ensembles [@Born1999; @Ishimaru1999; @Mishchenko2014] in general are available.
Scattering regimes for individual particles {#sec:singleparts} ------------------------------------------- The elastic scattering of electromagnetic radiation by a particle can be classified using the extinction efficiency $Q_{e} = \sigma_{e}/(\pi a^{2}) = Q_{a}+Q_{s}$, the ratio of the extinction cross-section $\sigma_{e}$ to the geometrical cross-section of a particle with radius $a$, which equals the sum of the absorption efficiency $Q_{a}$ and the scattering efficiency $Q_{s}$. The $Q_{e}$ of spherical dielectric particles of different sizes can be scaled on top of each other using the average phase shift $$\rho = 2 x |m-1| \label{eq:rho}$$ that the transmitted wave acquires by passing through the particles [@Hulst1981; @Bohren1983; @Mishchenko2014]. Here $x = 2\pi a /\lambda$ is the size parameter of a particle, $a$ is the particle radius, $\lambda$ the wavelength, and $m$ the refractive index of the particles. $m$ is determined relative to the surrounding medium and is in general complex, where the imaginary part describes the absorption properties of the particles. Analytical calculation of the efficiencies is generally only possible for spherical or cylindrical particles [@Bohren1983], but numerical methods and increasing computing power allow calculations for more and more other shapes [@Mishchenko2000]. Such a calculated master curve for $Q_{e}$ of spherical particles is given in Fig. \[fig:Q\]. Three regimes can be distinguished: *$\rho < 1,\ Q_{e} < 1$:* : The scattering by the particles is rather isotropic (see left inset in Fig. \[fig:Q\]), and the particles scatter and absorb less radiation than impinges on their geometrical cross-section in this regime. This regime is reached either by particles smaller than the wavelength ($x$ small), or by close matching of the refractive indices of the particle and the surrounding medium ($|m-1|$ small).
This is the regime for imaging through the particles, as is achieved by using long-wave radiation to image objects in granular media (see the contribution by F. Ott, S. Herminghaus and K. Huang in this issue), or by laser-sheet scanning through index-matched granular media (see the contribution by J. A. Dijksman and N. Brodu in this issue). Scattering in this regime can be described by the Rayleigh- or Rayleigh-Gans-Debye approximations (RGD, see Sec. \[sec:firstorder\]). *$\rho \gg 1,\ Q_{e} \approx 2$*: : Scattering of light is confined to a narrow cone close to the forward direction (right inset in Fig. \[fig:Q\]). In this limit the particles are either much larger than the wavelength or have a large refractive index mismatch. The particles theoretically extinguish twice as much radiation as impinges on their geometrical cross-section. However, this is only measurable with highly optimized setups like SALS instruments, in which very small scattering angles can be discriminated. Light scattered close to the forward direction and transmitted light cannot be discriminated with uncollimated detectors, and measured extinction efficiencies are reduced accordingly (dashed line in Fig. \[fig:Q\]). This is the regime for conventional imaging of granular particles using visible light or for tomography using X-rays (see the contribution by S. Weis and M. Schröter in this issue). Scattering in this regime can often be well approximated by the theory of Fraunhofer diffraction by opaque planar discs. *$\rho \approx 1,\ Q_{e}$ variable*: : The scattering patterns depend strongly on the particle size in this regime (central inset in Fig. \[fig:Q\]). Scattering resonances occur, such that efficiencies can even exceed 2. Scattering patterns and extinction efficiencies have to be calculated using the Lorenz-Mie theory of scattering [@Hulst1981; @Bohren1983], which is more involved than the approximations valid in the other regimes and requires knowledge of the complex refractive index.
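The regime classification above can be sketched in a few lines of code. The following is our own minimal illustration, not from the text: it evaluates Eq. \[eq:rho\] and sorts a particle into a regime; the thresholds 1 and 10 are rough illustrative choices, not sharp physical boundaries.

```python
import math

def phase_shift(radius, wavelength, m):
    """Average phase shift rho = 2*x*|m - 1| of Eq. (rho),
    with size parameter x = 2*pi*a/lambda."""
    x = 2.0 * math.pi * radius / wavelength
    return 2.0 * x * abs(m - 1.0)

def regime(rho):
    """Rough regime classification; the boundary values 1 and 10
    are illustrative assumptions, not sharp physical limits."""
    if rho < 1.0:
        return "Rayleigh / Rayleigh-Gans-Debye"
    elif rho > 10.0:
        return "large-particle limit, Q_e -> 2"
    return "resonance (Mie) regime"

# a 50 um glass-like sphere (m = 1.5) probed at 1 mm (THz) and 0.5 um (visible):
rho_thz = phase_shift(50e-6, 1e-3, 1.5)      # ~0.31 -> RGD regime
rho_vis = phase_shift(50e-6, 0.5e-6, 1.5)    # ~630  -> large-particle limit
```

The same particle thus sits in opposite regimes depending on the wavelength, which mirrors the imaging applications listed above.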
Particle size analysis by measuring angular scattering patterns is possible in all three scattering regimes. It is accomplished by fitting calculated scattering patterns to the measured patterns. This is particularly convenient in the approximation of Fraunhofer scattering, as no knowledge of the refractive index of the particles is required and the scattering efficiency is independent of particle size. In the regime of Rayleigh and Rayleigh-Gans-Debye scattering, the strong dependence of the scattering efficiency on the particle size ($\propto a^{6}$ in the limit of Rayleigh scattering [@Hulst1981]) complicates the deduction of size distributions from measured scattering patterns. In the intermediate regime, where no approximate formulas are available, knowledge of the complex refractive index of the particles is required for particle size analysis from measured scattering patterns. The nontrivial dependence of the scattering efficiency on the particle size in this intermediate regime additionally allows for particle size analysis by spectroscopic methods [@Born2015]. Analysis of packing structures, however, does not proceed the same way in all three regimes. A single approach to structure analysis, analogous to measuring the angular scattering pattern for particle size analysis, cannot be given. Two established approaches to analyzing packing structures will be discussed in the following sections. Scattering from particle ensembles {#sec:media} ---------------------------------- ![image](2.jpg){width="80.00000%"} Scattering from an ensemble of particles differs from scattering from individual particles due to the electromagnetic interaction among the particles and the interference of the scattered radiation [@Mishchenko2014]. The suitable approach for analyzing the structure of a particle ensemble can be identified using the parameters $L$, $l$ and $l_{c}$, which are indicated in Fig. \[fig:bulk\]. $L$ denotes the geometric size of the sample.
$l_{c}$ is a spatial correlation length in the sample. We define $l_{c}$ as any distance at which the pair correlation function $g(r)$ of the particle ensemble has a positive correlation peak. The largest $l_{c}$ can become comparable to $L$ for a single-crystalline arrangement of the particles, and reduces to a few particle diameters for disordered hard-sphere packings [@Waseda1980; @Hansen2005]. The mean free path $l$ of the electromagnetic radiation within the ensemble of particles gives the average length that the electromagnetic wave travels before being scattered again. $l$ is determined by the number density of the particles $\varphi$ and their scattering efficiency $Q_{s}$, $$l = 1/(\varphi\cdot Q_{s}\cdot \pi a^{2}). \label{eq:l}$$ The scattering efficiency $Q_{s}$ of the particles in the sample equals the one discussed in Sec. \[sec:singleparts\] only for dilute samples; in dense ensembles it is generally reduced compared to the dilute value by various effects [@Fraden1990; @Auger2011]. Three major regimes of light scattering in particle ensembles can be distinguished by their $l$ and $L$ [@Ishimaru1999]: *$l>L$*: : The mean free path is larger than the sample size $L$; the sample is optically dilute. This is the limit in which the first-order approximation, as discussed in Sec. \[sec:firstorder\], is valid for the whole particle ensemble. The structure factor, which describes the reciprocal structure of a particle ensemble, can then be obtained from the angular scattering pattern. Optical dilution $l>L$ can be achieved in different ways (compare Eq. \[eq:l\]): either the sample is very dilute and has a low number density $\varphi$, or the particles have a very small scattering efficiency $Q_{s}$, which can be achieved by using wavelengths much larger than the particles or by index matching of the particles (see Sec. \[sec:singleparts\]).
Finally, if increasing $l$ in these ways is not possible, because samples with large $\rho$ or $\varphi$ are to be investigated, the sample can be made very thin to minimize $L$ [@Born2014; @Medebach2007]. Many colloidal samples are in this regime of single scattering, as the particles are at least partially index-matched by the suspending liquid and the size parameter of the particles is small enough. *$l<L$*: : The mean free path is smaller than the sample size and the light is scattered multiple times before exiting the sample. The sample takes on a turbid or milky-white appearance, depending on the degree of multiple scattering and of absorption. The first-order approximation is no longer valid, as the particles are excited by the incoming wave as well as by scattered waves. A general analytical theory for light propagation in the case of multiple scattering becomes very involved and is the subject of ongoing research [@Kristensson2015; @Ishimaru1999], and a direct determination of the structure factor of a particle ensemble no longer seems possible. *$l\ll L$*: : In the extreme case $l\ll L$ the light propagation reaches the limit of light diffusion, in which the incoming wave is fully scrambled and light propagation follows random-walk statistics [@Ishimaru1999]. The statistics of this random walk are parametrized by a diffusion coefficient for light, $D = c l^{\ast}/3$, where $c$ is the speed of light and $l^{\ast}$ is the randomization length, the distance the light has to travel before its direction is randomized again. Light transport in the limit of light diffusion is important for the approach to scattering-based analysis of granular media, and is therefore described in detail: *$l>l_{c}$*: : A local first-order approximation can be made if $l$ is at least larger than any spatial correlation length $l_{c}$ of the sample (this is the situation sketched in Fig. \[fig:bulk\]).
The sample can then be assumed to consist of many independent single-scattering subvolumes [@Weitz1993; @Fraden1990; @Rojas2004]. Diffusion-like light propagation combined with the applicability of a local first-order approximation allows for diffuse-transmission spectroscopy (DTS) [@Kaplan1994], which is discussed in Sec. \[sec:diffusion\] in more detail. The method allows for verifying predicted structure factors [@Kaplan1994]. Samples that regularly fulfill the conditions for DTS are large colloidal samples, in which partial index-matching and the particle size allow for a large $l$ while strong multiple scattering still occurs within the sample. *$l<l_{c}$*: : The light is scattered multiple times even within distances over which particle positions are correlated. The first-order approximation cannot be applied even locally to a cluster of correlated particles, and the scattering patterns are considerably altered. An example of scattering in such a regime are Kikuchi patterns, which are created by multiple scattering within a single-crystalline region [@Nisbet2015]. *$l<\lambda$* : : In this regime an extreme situation of wave propagation is predicted. Waves may become fully localized when the direction of the wave is randomized on distances shorter than the wavelength. The scattering objects have to be smaller than the wavelength but still need a very high scattering efficiency for this behavior to be observable, a combination which is difficult to achieve. Presently this behavior has not been observed and seems unlikely to occur in particle packings [@Skipetrov2016; @Sperling2016]. Further constraints are set by the ratio of the wavelength to structural distances in the sample: *$\lambda\ll r$*: : The wavelength is much shorter than the distance $r$ between obstacles or surfaces in the sample.
In this regime, called far-field scattering, the scattered wave can be approximated as an outgoing spherical wave, which is a prerequisite for the analysis used in the first-order approximation [@Born1999]. *$\lambda\geq r$*: : Electromagnetic interaction between objects in close proximity happens via the evanescent, non-propagating near-field component of the scattered wave. Propagation properties have to be corrected for this near-field coupling, for which no established method presently exists [@Petrova2009; @Schaefer2012; @Rezvani2015]. *$\lambda\approx l_{c}$*: : The wavelength matches a correlation length in the sample and Bragg scattering occurs. Bragg-like scattering also emerges in disordered materials with strongly correlated particle positions, and strongly affects the possibilities for wave propagation [@Rojas2004; @Froufe2016]. The modulation of the density of possible wave states in correlated media is reminiscent of the formation of a band structure. *$\lambda\gg l_{c}$*: : The sample effectively behaves like a homogeneous medium when the wavelength in the experiment exceeds any spatial correlation length in the sample [@Hapke1993]. The medium then has an effective refractive index which can be predicted by effective-medium approximations. A widely used approximation to calculate an effective refractive index is the Maxwell-Garnett equation, $$m_{eff}^{2}= m_{m}^{2}\frac{2\phi(m_{p}^{2}-m_{m}^{2})+m_{p}^{2}+2m_{m}^{2}}{2m_{m}^{2}+m_{p}^{2}+\phi(m_{m}^{2}-m_{p}^{2})} \label{eq:MG}$$ where the indices $m$ and $p$ denote the host medium and the particles, $\phi$ is the volume fraction occupied by particles, and absorption is neglected [@Kolokova2001]. Measuring the refractive index of a sample thus in principle allows retrieving the volume fraction of particles in the sample. More complex effective-medium theories, which also take correlated particle positions into account, may have to be applied to calculate effective refractive indices [@Tsang2000].
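Equation \[eq:MG\] is easy to evaluate and to sanity-check against its limits. The following is our own minimal sketch: the value $m_{p} = 1.44$ is a PTFE-like index and $\phi = 0.64$ a typical granular volume fraction, both assumptions made here for illustration.

```python
def maxwell_garnett(m_m, m_p, phi):
    """Effective refractive index from the Maxwell-Garnett rule,
    Eq. (MG); absorption neglected."""
    em, ep = m_m ** 2, m_p ** 2   # dielectric constants, epsilon = m^2
    e_eff = em * (2 * phi * (ep - em) + ep + 2 * em) \
               / (2 * em + ep + phi * (em - ep))
    return e_eff ** 0.5

# PTFE-like spheres (m_p = 1.44) in air at a granular volume fraction of 0.64:
m_eff = maxwell_garnett(1.0, 1.44, 0.64)   # lies between 1.0 and 1.44
```

The mixing rule correctly reproduces the limits $m_{eff}\to m_{m}$ for $\phi\to 0$ and $m_{eff}\to m_{p}$ for $\phi\to 1$, a quick check that the implementation matches the equation.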
The first-order approximation {#sec:firstorder} ----------------------------- The *first-order approximation* plays a prominent role in the established scattering-based methods, as it fundamentally relates structure and scattering properties. It is based on two assumptions [@Born1999]. One is that the wave scattered by the sample is observed only far from the sample, so that the scattered wave can be approximated as an outgoing spherical wave. The other is that within the sample the total electromagnetic field can be approximated by the incoming wave. Within the validity of these two assumptions, the scattering amplitude function becomes the Fourier transform of the scattering potential of the sample. The scattering potential describes the spatial distribution of the refractive index, or alternatively of the dielectric coefficients, within the sample. These approximations are also termed *Rayleigh-* and *Rayleigh-Gans-Debye scattering* when the sample consists of individual particles [@Hulst1981; @Bohren1983]: each dipole of a particle is excited only by the incoming wave, and the scattered field is observed far away from the sample. For samples consisting of ensembles of many particles, the approximation is also called the *single-scattering* or *Born approximation*, as all particles scatter only the incoming light and not the light scattered from other particles in the medium [@Born1999]. Measuring the scattering amplitude of a sample in all spatial directions, parametrized by the scattering angle $\theta$ (the azimuthal angle is averaged over for isotropic media) or by the magnitude of the momentum transfer vector, $|\vec{q}| = 4\pi/\lambda\cdot\sin(\theta/2)$, allows calculating a low-pass-filtered approximation of the scattering potential of the sample [@Born1999]. This is especially interesting for samples which consist of identical objects, whose scattering potential can be described as the convolution of the scattering potential of a single object with a set of delta functions.
The Fourier transform inherent in the scattering process turns this convolution into a product of the form factor of the objects $F(|\vec{q}|)$ and the structure factor of the sample $S(|\vec{q}|)$ [@Feigin1987]: $$I_{s}(\vec{q}) \propto F(|\vec{q}|) \cdot S(|\vec{q}|) \label{eq:firststorder}$$ Measuring the scattering amplitude of a dilute sample allows determining $F(|\vec{q}|)$ of the particles. Measuring the scattering amplitude of the dense sample and dividing by $F(|\vec{q}|)$ consequently allows isolating the structure factor of the particle packing [@Feigin1987]. The structure factor is the momentum-transfer- or reciprocal-space Fourier transform of the radial distribution function [@Feigin1987]. The structure factor of disordered hard-sphere packings exhibits a pronounced peak at a momentum transfer of $|\vec{q}|= 2\pi/d$ [@Hansen2005], where $d=2a$ is the particle diameter. This indicates a pronounced correlation length $l_{c}$ of length $d$ within the packing. The hard-sphere structure factor exhibits some trailing oscillations, whose amplitude decays as $|\vec{q}|^{-2}$ and which are barely visible beyond $|\vec{q}|\approx 10\pi/d$ [@Hansen2005]. The diffusion approximation {#sec:diffusion} --------------------------- The diffusion approximation gives a solution to the radiative transfer equation in the limit of strong multiple scattering [@Ishimaru1999]. A relation between the packing structure, as described by $S(|\vec{q}|)$, and the transmitted intensity can also be established in this regime. The central quantity of diffusion-like radiation transport, the random-walk step length or *randomization length* $l^{\ast}$, is determined by the scattering anisotropy of the individual scattering events. If scattering is very anisotropic and confined to the forward direction (compare Sec. \[sec:singleparts\]), it takes many scattering events to randomize the light propagation direction, and $l^{\ast}\gg l$.
If the light is scattered isotropically at each scattering event, $l^{\ast}$ becomes equal to $l$. A relation between the randomization length and the packing structure in the sample can be obtained in situations in which a local first-order approximation is valid [@Weitz1993]. This implies that the light scattered by a cluster with correlated positions is only scattered again in the far field of this cluster ($l\gg l_{c}$), and that within a cluster with correlated positions the single-scattering approximation holds. The sample can then be considered an ensemble of independent clusters, whose scattering amplitude can be described by $F(|\vec{q}|) \cdot S(|\vec{q}|)$ [@Kaplan1994]. Within the validity of the local first-order approximation, the relation between $l^{\ast}$ and $l$ is given by the scattering anisotropy, i.e. the average cosine of the scattering angle of the scattering events [@Fraden1990]: $$\begin{aligned} l^{\ast} &=& l / \left<(1-\cos(\theta))\right> \\ &=& \frac{1}{\varphi}\cdot \frac{1}{\int_{0}^{4\pi/\lambda}{F(|\vec{q}|)\,S(|\vec{q}|)\,(1-\cos(\theta))\,|\vec{q}|\,dq}} \label{eq:lstar}\end{aligned}$$ The spectral transmission $T(\lambda) = \frac{I_{s}}{I_{0}}$ through a sample with diffusion-like light transport is then, in the limit of negligible absorption, proportional to the ratio of $l^{\ast}$ and the sample thickness $L$ [@Kaplan1994]: $$T(\lambda) = \frac{I_{s}}{I_{0}}\propto\left(\frac{l^{\ast}(\lambda)}{L}\right) \frac{c_{1}}{1+c_{2}\left(\frac{l^{\ast}(\lambda)}{L}\right)}, \label{eq:T}$$ where the coefficients $c_{1}$ and $c_{2}$ depend on the reflection coefficients at the sample boundaries and on a geometrical factor for the light source. From Equations \[eq:lstar\] and \[eq:T\] it follows that spectral transmission measurements probe integrated properties of the scattering amplitude of the sample, which forms the basis of diffuse-transmission spectroscopy (DTS).
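Equations \[eq:lstar\] and \[eq:T\] are often summarized via the anisotropy factor $g = \left<\cos(\theta)\right>$, since $\left<1-\cos(\theta)\right> = 1 - g$. The sketch below is our own illustration of this scaling; the boundary coefficients $c_{1}$ and $c_{2}$ are placeholders set to 1, as their true values depend on the setup.

```python
def randomization_length(l, g):
    """l* = l / (1 - g) with anisotropy factor g = <cos(theta)>;
    equivalent to l / <1 - cos(theta)> in Eq. (lstar)."""
    return l / (1.0 - g)

def diffuse_transmission(l_star, L, c1=1.0, c2=1.0):
    """Diffuse transmission of Eq. (T), up to a prefactor.
    c1 and c2 are placeholder boundary/source coefficients,
    not values given in the text."""
    r = l_star / L
    return c1 * r / (1.0 + c2 * r)

# forward-peaked scattering (g = 0.9) stretches l* to 10 mean free paths;
# a sample 100 mean free paths thick then transmits of order 10 percent:
l_star = randomization_length(1.0, 0.9)   # = 10.0
T = diffuse_transmission(l_star, 100.0)
```

The monotonic increase of $T$ with $l^{\ast}/L$ is what makes the spectral variation of $l^{\ast}(\lambda)$ visible in a transmission measurement.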
A direct determination of $S(|\vec{q}|)$, as in the regime of single scattering, is not possible, but predicted structure factors can be validated [@Kaplan1994]. The upper integration bound in Eq. \[eq:lstar\] is the origin of the spectral structure sensitivity of DTS. The structure factor of hard-sphere packings barely exhibits any features beyond $|\vec{q}|\approx 10\pi/d$, with the main peak at $|\vec{q}|= 2\pi/d$ [@Hansen2005]. There will be little to no spectral variation in transmission if $4\pi/\lambda$ is much larger than $10\pi/d$. The spectral transmission measurement consequently will be most sensitive to variations in the structure factor if wavelengths from a fraction of $d$ up to a few times $d$ are used. A combination of visible light and granular media, in contrast, leads to integration boundaries which are orders of magnitude larger than the momentum transfer at which the last features of $S(|\vec{q}|)$ are observable. The spectral transmission of visible light through granular media is thus basically independent of the packing structure within the sample, as can be observed from the common white appearance of quite different samples like snow, sugar, flour or salt. DTS becomes sensitive to the structure of granular media again when electromagnetic radiation with $\lambda\approx d$ is applied, as is discussed in the following. Classification of granular media {#sec:granmedia} -------------------------------- The scattering regimes and the approximations required for obtaining structural information were schematically defined in the preceding sections. It is instructive to now classify a typical granular medium according to the above scheme. The classification requires the mean free path $l$ within the sample and the wavelength $\lambda$. The mean free path $l$ can be estimated for a granular packing using Eq. \[eq:l\].
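Such an estimate with Eq. \[eq:l\] takes only a few lines. The snippet below is our own illustration; the numbers (volume fraction 0.64, $Q_{s} = 2$, $a = 250\ \mu$m) are typical granular values as discussed in this section.

```python
import math

def mean_free_path(number_density, Q_s, radius):
    """Scattering mean free path, Eq. (l): l = 1/(n * Q_s * pi * a^2)."""
    return 1.0 / (number_density * Q_s * math.pi * radius ** 2)

# dense packing of a = 250 um spheres, volume fraction 0.64,
# large-particle scattering efficiency Q_s = 2:
a, vol_frac, Qs = 250e-6, 0.64, 2.0
n = vol_frac / (4.0 / 3.0 * math.pi * a ** 3)   # number density
l = mean_free_path(n, Qs, a)
# closed form: l / d = 2 / (3 * vol_frac * Qs) ~ 0.52 for these numbers;
# Q_s = 1 gives l of about one particle diameter
```

With $Q_{s}$ between 1 and 2 the mean free path thus comes out between about half and one particle diameter, the estimate used in the following paragraph.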
Granular media are conventionally characterized by a close packing of the particles, leading to high packing fractions $\phi$ around the random close-packing limit of monosized spheres of $\approx$0.64. The number density $\varphi$ and the volume fraction $\phi$ are connected by $\varphi = \phi/(4/3\pi a^{3})$. The scattering efficiency is well approximated as lying between 1 and 2 over broad ranges of wavelengths and particle sizes (unless the wavelength is much larger than the particles or index matching is used, see Sec. \[sec:singleparts\]). Using these numbers in Eq. \[eq:l\] gives values for $l$ of around a particle diameter. This is an upper bound due to the high assumed numbers for $\phi$ and $Q_{s}$, but it still indicates that in granular media the mean free path is very short, to the point that the light changes direction at each particle surface (compare Fig. \[fig:stroke\]). This extremely short $l$ makes scattering from granular media different from the situation of colloidal, molecular or atomic media combined with X-ray or visible light scattering, where the small relative refractive indices ($|m-1|$ in Eq. \[eq:rho\]) and the possibility of dilution regularly lead to large $l$, both compared to $l_{c}$ and to $L$. ![The mean free path $l$, normalized by the diameter of the spheres $2a$, of a dense packing according to Eq. \[eq:l\]. Assumed are a volume fraction $\phi = \varphi\cdot4/3\pi a^{3}=0.64$ and the single-particle scattering efficiency $Q_{s}$ displayed in Fig. \[fig:Q\]. The mean free path in a dense packing rapidly decays to a single particle diameter for particles larger than the wavelength. These extremely short $l$ are illustrated in the inset by a pencil stroke imaged through one and two layers of transparent particles with 500 $\mu$m diameter.
The difference between the blurred stroke imaged through the monolayer and through the double layer shows the redirection of the light at each particle layer.[]{data-label="fig:stroke"}](3.pdf){width="50.00000%"} Evaluation by the first-order approximation requires single scattering ($l>L$) and far-field propagation ($\lambda\ll d$) (see Sec. \[sec:firstorder\]). For dense granular media these conditions can only be reached with a particle monolayer [@Born2014] or by dilution to sparse particles [@Xu2002]. In these situations the scattering regime of the medium will be single scattering nearly independent of the wavelength, while the wavelength determines whether RGD, Mie or Fraunhofer scattering theory has to be applied for the particles. Bulk granular media, on the other hand, will regularly fulfill the prerequisite $l\ll L$ for diffusion-like wave transport. The wavelength used in the scattering experiment then strongly influences whether $l<l_{c}$ or $l>l_{c}$ and whether $\lambda\ll r$, $\lambda\approx l_{c}$ or $\lambda> l_{c}$ is reached, and thus whether DTS can be applied to the medium. Diffuse-transmission THz spectroscopy {#sec:diffuseTHz} ------------------------------------- Typical grain sizes in experiments with granular media range from a few tens of micrometers up to a few millimeters. These particle diameters correspond to wavelengths in the THz region of the electromagnetic spectrum. Spectroscopic THz transmission measurements will therefore be sensitive to the local structure of granular samples, as $\lambda$ varies from fractions of $d$ to a few times $d$ (see Sec. \[sec:diffusion\]). The problem of scattering in bulk granular media attracted interest soon after the widespread introduction of THz spectrometers. Scattering effects in powders complicated the evaluation of THz spectra in pharmaceutical or security applications [@Taday2004; @Bandyopadhyay2007; @Zurk2008].
Correction for the scattering effects is required to quantitatively extract absorption features of molecular substances [@Shen2008; @Kaushik2012]. It was noted that THz extinction spectra of granular samples change when the samples start flowing, which indicated a structure sensitivity of the spectra [@May2009]. Transmission experiments with THz radiation were also used as scale-model experiments for scattering from dense particulate media in the visible spectral range [@Pearce2004; @Pearce2001]. However, with the mean free path $l$ being roughly a particle diameter (see Sec. \[sec:granmedia\] and Fig. \[fig:stroke\]) and $\lambda$ being roughly a particle diameter as well, $l>l_{c}$ and $\lambda\ll r$ will not be reachable for densely packed media. The approximations made for the established DTS methodology thus will not be fulfilled for granular media in general. We suggest that transmission spectroscopy can still be used to obtain structural information on bulk granular media. No propagating wave is possible when $\lambda$ matches the Bragg condition for backscattering ($\theta = \pi$), $|\vec{q}|\cdot d = 2\pi$ or $\lambda = 2d$. In this case only standing waves formed by the incident and reflected waves are possible [@Gerthsen2010]. Hard-sphere packings form isotropic Brillouin zones with radius $2\pi /d$ [@Froufe2016], as can be seen from the peak of the structure factor. Propagation of waves with $\lambda = 2d$ thus will be suppressed isotropically, and a photonic bandgap might be formed [@Froufe2016; @Takagi2004; @Takagi2010]. Measuring the wavelength of the suppressed transmission reveals the position of the structure factor peak.
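As an illustration of this argument (our own sketch, with a model chosen by us): the structure factor of a dense hard-sphere fluid can be evaluated in closed form in the Percus-Yevick approximation (Ashcroft-Lekner form), which is strictly an equilibrium-fluid result and only a stand-in for a granular packing near 0.64. Its main peak sits close to, in fact slightly above, $2\pi/d$, and the backscattering condition $|\vec{q}| = 4\pi/\lambda$ then converts the peak position into the frequency of the expected transmission dip.

```python
import numpy as np

def py_structure_factor(qd, eta):
    """Percus-Yevick structure factor of hard spheres (Ashcroft-Lekner
    closed form); qd = |q|*d with particle diameter d, eta = packing
    fraction. Valid for an equilibrium hard-sphere fluid."""
    s = np.asarray(qd, dtype=float)
    alpha = (1 + 2 * eta) ** 2 / (1 - eta) ** 4
    beta = -6 * eta * (1 + eta / 2) ** 2 / (1 - eta) ** 4
    gamma = eta * alpha / 2
    sin_s, cos_s = np.sin(s), np.cos(s)
    c = (alpha * (sin_s - s * cos_s) / s ** 3
         + beta * (2 * s * sin_s + (2 - s ** 2) * cos_s - 2) / s ** 4
         + gamma * (-s ** 4 * cos_s
                    + 4 * ((3 * s ** 2 - 6) * cos_s
                           + (s ** 3 - 6 * s) * sin_s + 6)) / s ** 6)
    return 1.0 / (1.0 + 24 * eta * c)

qd = np.linspace(0.5, 20.0, 4000)
S = py_structure_factor(qd, 0.49)   # dense hard-sphere fluid (near freezing)
d = 500e-6                          # assumed 500 um particle diameter
q_peak = qd[np.argmax(S)] / d       # main peak, slightly above 2*pi/d
lam_dip = 4.0 * np.pi / q_peak      # backscattering: |q| = 4*pi/lambda
f_dip = 2.998e8 / lam_dip           # expected dip, roughly 0.3 THz
```

Reading off the dip frequency thus gives direct access to the structure-factor peak, i.e. to the dominant correlation length in the packing.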
The degree of extinction of the propagating wave depends on the development of the structure factor peak and thus on the degree of correlation in the sample [@Rojas2004; @Froufe2016]. Scattering of THz radiation: Materials and Methods {#sec:THzscatt} ================================================== Two approaches to probing the scattering by granular particles and granular particle packings were highlighted in the previous section. First, angle-resolved measurements can only be performed and analyzed using the first-order approximation with particle monolayer samples or extremely dilute samples. Such measurements can be performed using specialized small-angle light scattering setups with visible light [@Xu2002], but also in the THz range [@Born2014]. Second, spectroscopic measurements of Bragg-scattering resonances can be performed on bulk samples using THz radiation. Common prerequisites for such measurements are low absorption and large illuminated areas. The Terahertz spectral region covers the range from 0.1 THz to 10 THz, or 3 mm to 30 $\mu$m, respectively [@Brundermann2011]. Some properties of THz radiation are similar to those of the infrared and visible regions, and similar optical components like mirrors and lenses can be used. On the other hand, properties of the microwave frequency region are pertinent as well, and components like antennas and waveguides are used. Books that provide broader and deeper information on THz methods and materials are available [@Brundermann2011; @Peiponen2013; @Naftaly2015]. Materials --------- One important aspect that the THz region shares with microwaves is dielectric heating. Absorption losses are therefore frequently high in the THz region [@Brundermann2011]. Water and water vapor in particular are strong absorbers, which limits free-space propagation of THz radiation in experimental setups. Useful materials for optical components and also for particle packing experiments are nonpolar polymers.
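The impact of such absorption on a transmission experiment can be estimated with the Beer-Lambert law. The sketch below is our own illustration; the $\alpha$ values are rough assumed numbers of the order quoted for THz plastics, and scattering losses come on top of this absorption-only estimate.

```python
import math

def absorption_transmission(alpha_per_cm, thickness_cm):
    """Beer-Lambert transmission T = exp(-alpha * L) through a
    homogeneous slab; absorption only, scattering neglected."""
    return math.exp(-alpha_per_cm * thickness_cm)

# 1 cm of a PTFE-like material with an assumed alpha = 2.5 / cm transmits
# only about 8 percent, while 1 cm of visible-range window glass
# (alpha of order 1e-6 / cm) is essentially lossless:
T_thz = absorption_transmission(2.5, 1.0)
T_vis = absorption_transmission(1e-6, 1.0)
```

Even the best THz window and cell materials therefore eat noticeably into the available dynamic range, which motivates the demand for high source intensity discussed below.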
In table \[tab:materials\] some common materials for experiments with THz radiation are listed. It is apparent that even the optimal materials for experiments with THz radiation, like PTFE and PE, have much higher absorption coefficients than materials for optical components in the visible range.

| Material                    | $m'$ | $\alpha$ \[cm$^{-1}$\] |
|-----------------------------|------|------------------------|
| teflon (PTFE)               | 1.44 | 2…3                    |
| paraffin                    | 1.49 | 0.1…6                  |
| polypropylene (PP)          | 1.5  | 0.1…2                  |
| polyethylene (PE)           | 1.53 | 0.06…2                 |
| polystyrene (PS)            | 1.59 | 0.28…2                 |
| plexiglass                  | 1.61 | 0.59…15                |
| quartz                      | 2.4  | 0.05…5                 |
| window glass                | 2.58 | 1.95…20                |
| sapphire                    | 4    | 0.1…20                 |
| visible range: window glass | 1.47 | 0.000001               |

: Overview of the optical properties of common materials for THz experiments. Approximate values for the real part of the refractive index $m'$ and the absorption coefficient $\alpha$ of the material are given. These values are estimates from various sources [@Brundermann2011; @Naftaly2015; @Piesiewicz2007; @Folks2007; @Cunningham2011] for a spectral range from 0.5 THz to 4 THz.[]{data-label="tab:materials"}

Methods ------- THz radiation allows for setup designs in close analogy to optical setups in the visible and infrared range. Optical components like lenses and mirrors can be obtained off-the-shelf [@Brundermann2011; @Naftaly2015; @Peiponen2013]. #### Angle-resolved measurements: As discussed in Secs. \[sec:granmedia\] and \[sec:firstorder\], conventional scattering experiments, in which scattered intensity is measured against the scattering angle, will play a secondary role in the characterization of granular packings. Such conventional scattering experiments require a collimated monochromatic incoming beam from a light source, a sample holder with the sample, and a detector on a goniometer to measure angle-resolved scattered intensities. Light sources conveniently used for scattering experiments are lasers, which produce monochromatic collimated beams.
Lasers for THz radiation are available with various pump principles and gain media. The most convenient THz lasers are probably quantum cascade lasers (QCL) [@Richter2010]. A beam expansion might be required when particle sizes approach the beam diameter, in order to average over meaningful particle numbers. Sample cells are preferably made out of nonpolar polymers like PE or PS, which have a nearly constant refractive index over most of the THz spectral range and moderate absorption (see Tab. \[tab:materials\]). Thin PE foil, as available in kitchen stores, is quite a good window material or sample holder. Crystalline quartz windows can also be used if higher thermal, chemical or mechanical robustness is required [@Brundermann2011]. These benefits come at the cost of a higher refractive index and thus higher reflection losses at the windows. Golay cells are a workhorse for the detection of THz radiation [@Brundermann2011]. They can be operated at room temperature, but are rather slow. Semiconducting bolometers or photodetectors are fast, but require cooling by liquid nitrogen or even liquid helium for highest sensitivity [@Brundermann2011]. The detector should be equipped with collection and collimation optics when the source is equipped with a beam expander. #### Spectroscopic measurements: Spectroscopic measurement of the light transport properties seems the more promising approach for the characterization of particles and particle packings whose response is dominated by multiple scattering. THz spectrometers are available off-the-shelf, so one does not need to worry about source, detector and setup. The most common setup types for spectroscopy with THz radiation are Fourier-transform (FT) spectrometers and THz time-domain spectrometers (THz-TDS) [@Brundermann2011]. FT-THz spectrometers are compatible with FT-IR spectrometers.
Only the semitransparent mirror and the detector are optimized for THz radiation, and the setup can usually be evacuated to minimize atmospheric absorption. FT-THz spectrometers provide excellent resolution down to 0.2 GHz, a wide spectral range of 11 THz and short measurement times below a minute. The setups also usually provide modularity, so that source and detector could in principle be used in other experiments. FT-THz spectrometers can be combined with synchrotron radiation as light source. The spectrometer then has a pulsed, coherent light source with superior intensity, especially in the range of long-wavelength THz radiation [@Holldack2016]. THz-TD spectrometers use a pulsed source to probe the THz spectral range. They usually have resolutions of 1.2 GHz, a dynamic range of 75 dB around a frequency of 1 THz (which rapidly decreases toward shorter and longer wavelengths), spectral ranges of 4 THz, and measurement times that can be shorter than a minute. However, they have the benefits of being less expensive than FT-THz spectrometers, of a flexible measurement geometry that can easily be changed from reflection to transmission, of compactness, which facilitates integration into other experiments, and of providing phase information, which means that the effective refractive index of the sample can be determined directly [@Brundermann2011]. Decisive parameters for transport measurements on scattering samples are mainly high intensity, a high dynamic range, and a broad spectral range. Strong intensity losses occur due to scattering in the bulk samples and the non-negligible absorption in the THz range, which requires high intensities and high dynamic ranges. The spectral range of the setup directly determines the particle sizes and correlation lengths that can be investigated; thus a broad spectral range is highly desirable.
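To put the dynamic-range requirement into perspective, a Beer–Lambert estimate of the pure absorption loss in a 1 cm thick sample can be sketched as follows ($\alpha$ values taken from Tab. \[tab:materials\]; multiple-scattering losses come on top of this):

```python
import math

def transmission(alpha_per_cm: float, thickness_cm: float) -> float:
    """Beer-Lambert transmission through an absorbing slab."""
    return math.exp(-alpha_per_cm * thickness_cm)

def loss_db(alpha_per_cm: float, thickness_cm: float) -> float:
    """Absorption loss of the same slab, expressed in dB."""
    return -10.0 * math.log10(transmission(alpha_per_cm, thickness_cm))

L = 1.0  # cm, sample thickness used in the experiments below
for name, alpha in [("PE (best case)", 0.06), ("PS (worst case)", 2.0)]:
    print(f"{name}: T = {transmission(alpha, L):.2f}, loss = {loss_db(alpha, L):.1f} dB")
```

Even the worst-case pure-material absorption (under 10 dB here) is small compared to a 75 dB dynamic range; it is the additional scattering loss in the bulk sample that makes high intensity and dynamic range necessary.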
High resolution, however, might be interesting for the characterization of particles, as a fine ripple structure in the scattering spectra gives information on size distributions [@Born2015]. Resolution is usually not an issue for the characterization of packings, as $F(|\vec{q}|)$ and $S(|\vec{q}|)$ do not show abrupt changes or steep gradients (compare Sec. \[sec:diffusion\]).

Optimization of THz spectroscopy {#sec:optimization}
--------------------------------

THz spectroscopy has the potential to become a method supplementing imaging methods for granular media. Spectroscopy allows obtaining the transmitted intensity $I_{s}(\lambda)$. The transmission is defined as $T(\lambda) = I_{s}(\lambda)/I_{0}(\lambda)$, the ratio of the measured intensity with sample to the measured intensity without sample. Only a few adjustments need to be made to conventional optical paths in order to optimize THz spectroscopy for granular media:

![Scheme of the light propagation in the FT-THz spectrometer used in the experiments. An iris is used to block scattered light down to a minimal angle, below which it will reach the detector. The beam diameter $l_{1}$ is 10 mm, the distance to the collimating iris $l_{2}$ is 70 mm, allowing light scattered up to $\theta_{max} = 4^{\circ}$ to reach the detector. The sample cell consisted of a feedthrough which was sealed by thin PE foil held in position by two o-rings. The diameter $l_{3}$ of the feedthrough is 20 mm, the thickness $L$ in beam direction is 10 mm. The sample was rotated by a stepper motor to improve the measurements, see Sec. \[sec:optimization\].[]{data-label="fig:setup"}](4.pdf){width="50.00000%"}

#### Collimated detection vs. integrating sphere:

A typical optical path is sketched in Fig. \[fig:setup\]. Divergent radiation from a source is collimated, passes through the sample and is focused onto a detector. Usually an iris or some other optical element limits the angle $\theta_{max}$ up to which scattered radiation is accepted by the detector.
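The acceptance angle quoted in the caption of Fig. \[fig:setup\] follows from simple geometry; a sketch with the stated beam diameter $l_1$ and iris distance $l_2$, treating the iris as the limiting aperture for a ray scattered at the beam edge:

```python
import math

def max_scattering_angle_deg(beam_diameter_mm: float, iris_distance_mm: float) -> float:
    """Largest scattering angle (in degrees) still reaching the detector:
    a ray scattered at the beam edge must pass the iris at distance l2."""
    return math.degrees(math.atan((beam_diameter_mm / 2.0) / iris_distance_mm))

theta_max = max_scattering_angle_deg(10.0, 70.0)
print(f"theta_max ~ {theta_max:.1f} deg")  # ~4 deg, as in Fig. [fig:setup]
```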
Limiting the acceptance angle is important for the characterization of individual particles by measuring their extinction spectra [@Born2015]. A large $\theta_{max}$ will lead to incorrect results in this situation (see Fig. \[fig:Q\]). The situation is different for measurements in the limit of light diffusion. Only scattered light will be transported through the sample. An integrating sphere that collects all the diffusively transmitted light is required to measure $T(\lambda)$ and $l^{*}(\lambda)$ quantitatively [@Kaplan1994]. However, assuming a slab-like geometry with much larger lateral dimensions than in beam direction, most of the light will leave the sample in a central region [@Leutz1996], and measuring the transmission with a collecting lens as in THz-TD spectrometers or a collecting mirror as in FT-THz spectrometers gives a reliable estimate of the spectral transmission properties.

![Comparison of transmission spectra measured with coherent THz radiation, with and without rotating the sample. Rotation smears out the irregular speckle pattern and allows for measuring smooth transmission spectra. The curves are offset for clarity.[]{data-label="fig:rotation"}](5.pdf){width="50.00000%"}

#### Rotation for configuration averaging and speckle washing:

Pulsed light sources like synchrotron sources or time-domain spectrometers generate coherent THz radiation [@Abo2003]. This will lead to an irregular intensity distribution, a speckle pattern, after transmission through the sample. This spatial irregularity may also become manifest in the measured spectra when no integrating sphere is used. Rotation of the sample smears out the speckles and flattens the spectra (see Fig. \[fig:rotation\]). Rotation of the sample during the measurement also increases the number of configurations from which spectra are measured and thus increases the reproducibility of the measured spectra.

![Comparison of transmission spectra obtained with a wedge-shaped sample cell and with flat parallel windows.
The cell with flat parallel windows forms an etalon for the THz radiation, leading to oscillatory transmission values. The oscillations are effectively suppressed with the wedge-shaped sample cell. The curves are offset for clarity.[]{data-label="fig:etalon"}](6.pdf){width="50.00000%"}

#### Fabry-Perot etalons:

A conventional sample cell with flat parallel entrance and exit windows forms an efficient etalon for the long wavelengths of the THz spectrum. Oscillatory fringes from the sample-cell etalon will be superimposed on the spectral extinction of the sample. These etalon effects can either be corrected numerically [@Withaya2006], or the sample-cell windows can be slightly misaligned to form a wedge-shaped sample cell (see Fig. \[fig:etalon\]). This gives a moderate increase in reflection losses, but efficiently suppresses etalon fringes.

Experiments {#sec:experiments}
===========

The discussion in Sec. \[sec:basics\] leads to the conclusion that gaining structural information on bulk granular media with scattering-based methods is not straightforward. The assumptions made in established methods to obtain structural information are not fulfilled in granular media. Scattering is still expected to leave an imprint on transmission spectra. The Bragg condition for backscattering reads $|\vec{q}|d=2\pi$, or $2d/\lambda = 1$, for a hard-sphere packing with a structure-factor peak at $|\vec{q}|=2\pi/d$. Around this condition a suppression of light transmission can be expected, whose degree depends on the development of the structure-factor peak (Sec. \[sec:diffuseTHz\]). We demonstrate this effect and its potential for the characterization of granular packings.

Experimental implementation {#sec:implementation}
---------------------------

Transmission spectra were measured using a Bruker IFS 125 HR spectrometer at the THz beamline of the BESSY II storage ring (Berlin, Germany) [@Holldack2016; @Holldack2007].
Synchrotron radiation in the *low-$\alpha$* mode of the storage ring was used as light source. The instrument was evacuated during the measurements to minimize absorption by air and humidity. Spectra were usually taken at a moderate spectral resolution of 30 GHz. The acquisition times were a few seconds per scan and roughly 3 min for spectra consisting of 100 averaged scans. A 6 $\mu$m multilayer beamsplitter was used in combination with a 4.2 K bolometer (IR-Labs) as detector. The granular media samples were 500 $\mu$m polystyrene (PS) spheres or mixtures of 500 $\mu$m and 80 $\mu$m PS spheres. The sample with free particles consisted of a thin polyethylene (PE) foil perpendicular to the THz beam, to which individual PS particles were electrostatically adhered. The bulk samples were produced by pouring particles into a cylindrical plastic container. The two flat surfaces of the cylinder were made of PE foil for transmission measurements; the cylinder height in beam direction $L$ was 1 cm. The windows were slightly misaligned for suppression of etalon effects. The particles formed a dense, disordered packing just after pouring into the cylinder. After tapping the sample container a sufficient number of times, the particles arranged into a few crystalline regions. The particles were also treated with an isopropanol-paraffin solution, which made the particles very sticky. These sticky particles formed a very loosely packed disordered packing upon pouring into the container. For displaying the results, the wavelength is rescaled by the particle diameter, which emphasizes the structural length scales that affect the spectra. The transmission value of the free particles is scaled by the ratio of the area covered by particles to the total beam cross-section to correct for the low density of particles. We expect scattering features at wavelengths comparable to $2d$. This condition has to be met inside the sample, while the wavelength is measured outside the sample.
The measured wavelength of any scattering feature therefore has to be scaled by the refractive index of the sample to obtain the probed structural length scales in the sample. Obtaining an effective refractive index of a granular sample is not trivial. The spectral range of interest here ($\lambda\approx 2d$) is beyond the limit of validity of effective medium theories (compare Sec. \[sec:media\] and Eq. \[eq:MG\]). Still, a general behavior as suggested by Eq. \[eq:MG\] is to be expected: a refractive index of the sample changing from the vacuum value at low densities up to a refractive index close to that of bulk polystyrene at high packing densities.

Experimental results
--------------------

![image](7.pdf){width="90.00000%"}

Figure \[fig:structure\] shows transmission measurements of granular samples with various degrees of order. The free particles show a behavior as expected from their cross-sections (Fig. \[fig:Q\]). The cross-sections become very small for wavelengths larger than the particles, and the measured transmissions are high. Transmission values, on the other hand, become very small when the cross-sections increase for wavelengths equal to or smaller than the particles. The transmission spectra of bulk packings exhibit the same general appearance: transmission close to 1 for wavelengths much larger than the particles, and vanishing transmission for wavelengths equal to and shorter than the particles. A general increase in transmission with increasing packing density can be observed when comparing the bulk media. This might be an effect of the increasing refractive index of the sample and the resulting reduction of the scattering efficiency. The increasing correlation of the particle positions with packing density induces Bragg-like scattering, which increasingly lowers transmission at the scattering resonance condition of $\lambda = 2d$.
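The resonance condition and the expected shift of its measured position can be sketched numerically. We assume here that Eq. \[eq:MG\] is a Maxwell–Garnett-type mixing rule (its exact form is not reproduced in this excerpt) and use the PS refractive index from Tab. \[tab:materials\]:

```python
import math

def bragg_vacuum_wavelength(d: float, m_eff: float) -> float:
    """Backscattering Bragg condition |q| d = 2 pi, i.e. lambda = 2 d
    *inside* the medium; outside, the measured wavelength is longer
    by the effective refractive index."""
    return 2.0 * d * m_eff

def maxwell_garnett(m_particle: float, phi: float) -> float:
    """Maxwell-Garnett effective refractive index of spheres in vacuum
    (assumed form of Eq. [eq:MG]; only indicative at lambda ~ d)."""
    y = phi * (m_particle**2 - 1.0) / (m_particle**2 + 2.0)
    return math.sqrt((1.0 + 2.0 * y) / (1.0 - y))

m_eff = maxwell_garnett(1.59, 0.64)         # PS spheres, random close packing
print(round(m_eff, 2))                      # ~1.3-1.4
print(bragg_vacuum_wavelength(1.0, m_eff))  # measured peak in units of d: ~2.7
```

The scattering feature measured outside the sample is thus expected at roughly $2\,m_{eff}\,d$ rather than $2d$.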
The measured position of the scattering feature, $\lambda\approx 2.5d$, and the predicted position, $\lambda\approx 2d$, differ, most likely due to the different wavelength within the sample. Using Eq. \[eq:MG\] to get a tentative refractive index of a PS sample with $\phi\approx 0.64$ gives a refractive index of 1.3. This cannot be exact, as the approximations made for Eq. \[eq:MG\] are not fulfilled, but it illustrates that the shift in refractive index can be large enough to explain the difference. $\phi$ is even higher in the crystalline arrangement, which consequently increases $m_{eff}$ and the measured peak position further.

![Evolution of the transmission spectrum of a vibrated packing of monosized spheres with time. The initial position of the scattering feature ($\lambda\approx 2.25d$) indicates a dense disordered packing, which evolves into a polycrystalline packing with a strong scattering feature at $\lambda\approx 2.6d$ within a few minutes.[]{data-label="fig:time"}](8.pdf){width="45.00000%"}

Summarizing, the transmission spectrum is characteristic of the packing density and the correlation length in the sample. This could be used to track structural changes in samples. Two examples are given in Figures \[fig:time\] and \[fig:demix\]. For the measurements in Fig. \[fig:time\], a small vibration motor is attached to the sample cell. The sample cell is again filled with monodisperse 500 $\mu$m PS spheres. The vibration motor is started together with the measurements. The vibration induces some mobility of the particles, and the whole packing can evolve from a dense, disordered packing into a crystalline packing. This evolution can be observed in Fig. \[fig:time\]. At the beginning of the measurements, the scattering feature is at the position for a dense, disordered packing as in Fig. \[fig:structure\]. This feature transforms into the feature of the crystalline packing within a few minutes.
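Tracking such structural changes can be automated by following the position of the transmission minimum in a fixed spectral window; a minimal sketch with synthetic spectra (the function and array names are illustrative, not from the experiment):

```python
import numpy as np

def track_feature(wavelengths: np.ndarray, spectra: np.ndarray,
                  lo: float, hi: float) -> np.ndarray:
    """Position of the transmission minimum within [lo, hi] for each
    spectrum of a time series (rows of `spectra` = time steps)."""
    window = (wavelengths >= lo) & (wavelengths <= hi)
    idx = np.argmin(spectra[:, window], axis=1)
    return wavelengths[window][idx]

# Synthetic series: a dip drifting from 2.25 d to 2.6 d, as in Fig. [fig:time]
wl = np.linspace(1.5, 3.5, 201)                  # wavelength in units of d
centres = np.linspace(2.25, 2.6, 5)
spectra = np.stack([1.0 - 0.5 * np.exp(-((wl - c) / 0.1) ** 2) for c in centres])
print(track_feature(wl, spectra, 2.0, 3.0))      # dip position drifting upward
```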
It may be the subject of further work to extract crystallization kinetics from such measurements.

![Evolution of the transmission spectrum of a vibrated binary mixture of spheres with size ratio 5:0.8. The spectra are focused on the spectral region for Bragg scattering within a packing of the small particles. The emergence of a scattering feature at $\lambda\approx 2.15d$ after a few minutes of vibration indicates segregation of the two particle sizes and the formation of regions with densely packed small particles.[]{data-label="fig:demix"}](9.pdf){width="45.00000%"}

In another experiment (Fig. \[fig:demix\]), the sample cell was filled with a binary mixture of 500 $\mu$m PS spheres and 80 $\mu$m PS spheres with a 1:1 volume ratio. The wavelength is rescaled by the diameter of the smaller particles. Again, the measurements are started together with the vibration motor. At the beginning, the spectrum is rather featureless, indicating that no dominant correlation length is present in the sample. A scattering feature in transmission evolves after a few minutes of vibration. This indicates that the sample has started to segregate and clusters with predominantly small-small particle contacts emerge. Interestingly, segregation in this binary mixture happens on longer time scales than crystallization of the monosized spheres (Fig. \[fig:time\]). This might indicate higher kinetic barriers for segregation than for crystallization. Future work may use transmission spectra to track segregation processes and other dynamics in fluidized granular media in more detail.

Conclusion and outlook
======================

Dense granular media exhibit very short mean free paths in most spectral regions. Particles are densely packed up to mechanical contact. These two issues prevent the application of established scattering methodology to granular media. Spectroscopic THz transmission measurements are a promising approach to probe structural properties of granular packings with a scattering-based method.
The already low instrumental effort for such experiments will decrease further with the spread of benchtop solutions like THz-TDS. We highlight this potential with THz transmission measurements on static packings with different degrees of spatial correlations. Different packing structures can easily be discriminated through their transmission spectra. We also demonstrate the possibility of monitoring transient structural states in agitated granular media with good time resolution. An important advancement required for further application of the presented approach is the development of a reliable model for the effective refractive index over large ranges of $\lambda/d$ values. Finally, it may be noted that THz transmission spectroscopy offers the possibility to investigate photonic properties of disordered media on length scales where real-space structures can be monitored and manipulated more easily than on the length scales of visible light. P. B. thanks Matthias Sperl and Andreas Meyer for their continued support of the project, Jan Haeberle and Sebastian Pitikaris for help during measurement campaigns and proofreading the manuscript, and A. Schnegg and D. Ponwitz for support at the THz beamline at BESSY II. [9]{}\[sec:literature\] O. Glatter and O. Kratky, *Small Angle X-ray Scattering* (Academic Press, Boston, 1982). R. Xu, *Particle Characterization: Light Scattering Methods* (Kluwer Springer Netherlands, Heidelberg, 2002). W. Brown, *Light Scattering: Principles and Development* (Clarendon Press, Oxford, 1996). H. C. van de Hulst, *Light Scattering by Small Particles* (Dover Publications, New York, 1981). C. F. Bohren and D. R. Huffman, *Absorption and Scattering of Light by Small Particles* (John Wiley & Sons, Weinheim, 1983). M. Born and E. Wolf, *Principles of Optics, seventh (expanded) edition* (Cambridge University Press, Cambridge, 1999). A. Ishimaru, *Wave Propagation and Scattering in Random Media* (John Wiley & Sons, Weinheim, 1999). M. I.
Mishchenko, *Electromagnetic Scattering by Particles and Particle Groups* (Cambridge University Press, Cambridge, 2014). M. I. Mishchenko, J. W. Hovenier and L. D. Travis, *Light Scattering by Nonspherical Particles* (Academic Press, San Diego, 2000). P. Born, K. Holldack, and M. Sperl, *Granul. Matter* **17**, 531 (2015). Y. Waseda, *The Structure of Non-crystalline Materials: Liquids and Amorphous Solids* (McGraw-Hill International Book Co, New York, 1980). J.-P. Hansen and I. R. McDonald, *Theory of Simple Liquids* (Academic Press, London, 2005). S. Fraden and G. Maret, *Phys. Rev. Lett.* **65**, 512 (1990). J.-C. Auger and B. Stout, *J. Coatings Technol. Res.* **9**, 287 (2011). P. Born, N. Rothbart, M. Sperl, and H.-W. Hübers, *Europhysics Lett.* **106**, 48006 (2014). M. Medebach, C. Moitzi, N. Freiberger, and O. Glatter, *J. Colloid Interface Sci.* **305**, 88 (2007). G. Kristensson, *J. Quant. Spectrosc. Radiat. Transf.* **164**, 97 (2015). D. A. Weitz and D. J. Pine, *Diffusing wave spectroscopy*, in: *Dynamic Light Scattering: The Method and Some Applications*, W. Brown, Ed. (Oxford University Press, Oxford, 1993). L. Rojas-Ochoa, J. Mendez-Alcaraz, J. Sáenz, P. Schurtenberger, and F. Scheffold, *Phys. Rev. Lett.* **93**, 73903 (2004). P. D. Kaplan, A. D. Dinsmore, A. G. Yodh, and D. J. Pine, *Phys. Rev. E* **50**, 4827 (1994). A. G. A. Nisbet, G. Beutier, F. Fabrizi, B. Moser and S. P. Collins, *Acta Cryst.* **A71**, 20 (2015). S. E. Skipetrov and J. H. Page, *New J. Phys.* **18**, 21001 (2016). T. Sperling, L. Schertel, M. Ackermann, G. J. Aubry, C. M. Aegerter and G. Maret, *New J. Phys.* **18**, 013039 (2016). E. V. Petrova, V. P. Tishkovets, and K. Jockers, *Sol. Syst. Res.* **43**, 100 (2009). J. Schäfer, S. Lee, and A. Kienle, *J. Quant. Spectrosc. Radiat. Transf.* **113**, 2113 (2012). R. Rezvani Naraghi, S. Sukhov, J. J. Sáenz, and A. Dogariu, *Phys. Rev. Lett.* **115**, 203903 (2015). L. S. Froufe-Pérez, M. Engel, P. F. Damasceno, N. Muller, J.
Haberko, S. C. Glotzer, and F. Scheffold, *Phys. Rev. Lett.* **117**, 53902 (2016). B. Hapke, *Theory of Reflectance and Emittance Spectroscopy* (Cambridge University Press, Cambridge, 1993). L. Kolokolova and B. A. S. Gustafson, *J. Quant. Spectrosc. Radiat. Transf.* **70**, 611 (2001). L. Tsang, C.-T. Chen, A. T. C. Chang, J. Guo, and K.-H. Ding, *Radio Sci.* **35**, 731 (2000). L. A. Feigin and D. I. Svergun, *Structure Analysis by Small-Angle X-ray and Neutron Scattering* (Plenum Press, New York, 1987). P. F. Taday, *Philos. Trans. A. Math. Phys. Eng. Sci.* **362**, 351 (2004). A. Bandyopadhyay, A. Sengupta, R. B. Barat, D. E. Gary, J. F. Federici, M. Chen, and D. B. Tanner, *Int. J. Infrared Millimeter Waves* **28**, 969 (2007). L. M. Zurk, G. Sundberg, S. Schecklman, Z. Zhou, A. Chen, and E. I. Thorsos, *Terahertz for Military and Security Applications VI* **6949**, 694907 (2008). Y. C. Shen, P. F. Taday, and M. Pepper, *Appl. Phys. Lett.* **92**, 51103 (2008). M. Kaushik, B. W.-H. Ng, B. M. Fischer, and D. Abbott, *IEEE Photonics Technol. Lett.* **24**, 155 (2012). R. K. May, M. Evans, S. Zhong, R. Clarkson, Y. Shen, L. F. Gladden, and J. A. Zeitler, *Real-time in situ measurement of particle size in flowing powders by terahertz time-domain spectroscopy*, in *2009 34th Int. Conf. Infr., Millim., THz Waves*, 1 (2009). J. Pearce, Z. Jian, and D. M. Mittleman, *Philos. Trans. A. Math. Phys. Eng. Sci.* **362**, 301 (2004). J. Pearce and D. M. Mittleman, *Opt. Lett.* **26**, 2002 (2001). D. Meschede, *Gerthsen Physik* (Springer, Heidelberg, 2010). K. Takagi, K. Seno, and A. Kawasaki, *Appl. Phys. Lett.* **85**, 3681 (2004). K. Takagi, M. Omote, and A. Kawasaki, *J. Micromechanics Microengineering* **20**, 35032 (2010). E. Bründermann, H.-W. Hübers and M. Kimmitt, *Terahertz Techniques* (Springer, Heidelberg, 2011). K.-E. Peiponen, J. A. Zeitler and M. Kuwata-Gonokami, *Terahertz Spectroscopy and Imaging* (Springer, Berlin, 2013). M. Naftaly, *Terahertz Metrology* (Artech House, Boston, 2015). R. Piesiewicz, C. Jansen, S. Wietzke, D. Mittleman, M. Koch, and T. Kürner, *Int. J. Infrared Millimeter Waves* **28**, 363 (2007). W. R. Folks, S. K. Pandey, and G. Boreman, *Opt. Terahertz Sci. Tech.* **2**, 10 (2007). P. D. Cunningham, N. N. Valdes, F. A. Vallejo, L. M. Hayden, B. Polishak, X.-H. Zhou, J. Luo, A. K.-Y. Jen, J. C. Williams, and R. J. Twieg, *J. Appl. Phys.* **109**, 43505 (2011). H. Richter, M. Greiner-Bär, S. Pavlov, A. D. Semenov, M. Wienold, L. Schrottke, M. Giehler, R. Hey, H. T. Grahn, and H.-W. Hübers, *Opt. Express* **18**, 5890 (2010). K. Holldack and A. Schnegg, *Journal of large-scale research facilities* **2**, A51 (2016). http://dx.doi.org/10.17815/jlsrf-2-74 W. Leutz and J. Ricka, *Opt. Commun.* **126**, 260 (1996). M. Abo-Bakr, J. Feikes, K. Holldack, P. Kuske, W. Peatman, U. Schade, G. Wüstefeld, and H.-W. Hübers, *Phys. Rev. Lett.* **90**, 94801 (2003). W. Withayachumnankul, B. Fergusson, T. Rainsford, S. Mickan, and D. Abbott, *Fluct. Noise Lett.* **6**, L227 (2006). K. Holldack and D. Ponwitz, *AIP Conference Proceedings* **879**, 599 (2007).
--- abstract: 'We adapt the spinorial geometry method to investigate supergravity backgrounds with near maximal number of supersymmetries. We then apply the formalism to show that the IIB supergravity backgrounds with 31 supersymmetries preserve an additional supersymmetry and so they are maximally supersymmetric. This rules out the existence of IIB supergravity preons.' date: November 2002 --- hep-th/0606049\ KUL-TF-06/20\ \ ${}^1$ Institute for Theoretical Physics, K.U. Leuven\ Celestijnenlaan 200D\ B-3001 Leuven, Belgium\ ${}^2$ DAMTP, Centre for Mathematical Sciences\ University of Cambridge\ Wilberforce Road, Cambridge, CB3 0WA, UK ${}^3$ Department of Mathematics\ King's College London\ Strand\ London WC2R 2LS, UK\ 1.5 cm It has been known for some time that a priori in type II and eleven-dimensional supergravities there may exist backgrounds with any number of supersymmetries. This is because the holonomy of the supercovariant connection of these theories is a subgroup of $SL(32,\bR)$ and so any $N<32$ spinors have a non-trivial stability subgroup in the holonomy group. For a more detailed explanation see [@hull; @duffl; @gpta] for M-theory and [@gptb] for IIB. Furthermore, it was argued in [@ugjggpdr] that the Killing spinor bundle ${\cal K}$ can be any subbundle of the Spin bundle and the spacetime geometry depends on the trivialization of ${\cal K}$. This is unlike what happens in the case of Riemannian and Lorentzian geometries [@berger; @figueroab] and heterotic and type I supergravities[^1] [@gpgran], where there are restrictions both on the number of Killing spinors and on the Killing spinor bundle. In this paper, we shall show that IIB backgrounds with 31 supersymmetries are maximally supersymmetric. Backgrounds with 31 supersymmetries have been considered before in the context of M-theory [@bandos] and have been termed preons.
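The statement that any $N<32$ spinors have a non-trivial stability subgroup can be made plausible by a naive dimension count (a heuristic sketch, not a substitute for the orbit analysis in the references): the stabilizer of $N$ generic vectors in the fundamental representation of $SL(32,\bR)$ has dimension at least $\dim SL(32,\bR)-32N$.

```python
def stabilizer_dim_lower_bound(n_spinors: int, dim_rep: int = 32) -> int:
    """Naive lower bound on the stabilizer dimension of n generic
    vectors in the fundamental representation of SL(dim_rep, R)."""
    dim_sl = dim_rep**2 - 1            # dim SL(32, R) = 1023
    return dim_sl - dim_rep * n_spinors

print(stabilizer_dim_lower_bound(31))  # 31 > 0: a non-trivial stabilizer survives
print(stabilizer_dim_lower_bound(32))  # -1: nothing is guaranteed for N = 32
```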
To our knowledge this is the first example which demonstrates that there are restrictions on the number of supersymmetries of type II backgrounds. To do this, we shall adapt the spinorial method [@jguggp] of solving Killing spinor equations to backgrounds that admit a near maximal number of supersymmetries. We shall mostly focus on IIB and eleven-dimensional supergravity, but most of the analysis extends to all supergravity theories. To adapt the spinorial method to backgrounds with a near maximal number of supersymmetries, we introduce a "normal" ${\cal K}^\perp$ to the Killing spinor bundle ${\cal K}$ of a supersymmetric background. The spinors of IIB supergravity are complex positive chirality Weyl spinors, so the Spin bundle is ${\cal S}^c_+={\cal S}_+\otimes\bC$, where ${\cal S}_+$ is the rank sixteen bundle of positive chirality Majorana-Weyl spinors. ${\cal S}^c_+$ may also be thought of as an associated bundle of a principal bundle with fibre $SL(32, \bR)$, the holonomy group of the supercovariant connection, acting with the fundamental representation on $\bR^{32}$. If a background admits $N$ Killing spinors which span the fibre of the Killing spinor bundle ${\cal K}$, then one has the sequence $$0\rightarrow {\cal K}\rightarrow {\cal S}_+^c\rightarrow {\cal S}_+^c/{\cal K}\rightarrow 0~.$$ The inclusion $i:\,{\cal K}\rightarrow {\cal S}_+^c$ can be locally described as $$\epsilon_r=\sum_{i=1}^{32} f^i{}_r\, \eta_i~,\qquad r=1,\dots, N~,$$ where $\eta_p$, $p=1,\dots,16$, is a basis in the space of positive chirality Majorana-Weyl spinors, $\eta_{16+p}= i\eta_p$, and the coefficients $f$ are real spacetime functions. For our notation and spinor conventions see [@ugjggpdr]. Any $N$ Killing spinors related by a local $Spin(9,1)$ transformation give rise to the same spacetime geometry. This is because the Killing spinor equations and the field equations of IIB supergravity are Lorentz invariant. Therefore any bundles of Killing spinors and any choice of sections related by a $Spin(9,1)$ gauge transformation[^2] should be identified.
To construct ${\cal K}^\perp$, first consider the dual ${}^\star{\cal S}_+^c$ of ${\cal S}_+^c$ and introduce a basis $\eta^i$, $\eta^i(\eta_j)=\delta^i{}_j$, i.e. $\eta^{16+p}=-i\eta^p$. Next consider the sections $\a$ of ${}^\star{\cal S}_+^c$ that annihilate the Killing spinors $\epsilon_r$, i.e. $\a(\epsilon)=0$, or equivalently $$f^i{}_r\, u_i=0~,\qquad \a=u_i\, \eta^i~,$$ where $u_i$ are real spacetime functions. Since the matrix $f=(f^i{}_r)$ has rank $N$, there are $32-N$ solutions to this equation. These solutions span the sections of the co-kernel, ${\rm coker}\, i\subset {}^\star{\cal S}^c_+$, of the inclusion map $i: {\cal K}\rightarrow {\cal S}_+^c$. It is well-known that $Spin(9,1)$ has an invariant inner product $B: {\cal S}_+\otimes {\cal S}_-\rightarrow \bR$, $$B(\epsilon, \eta)=-B(\eta, \epsilon)=\langle B(\epsilon^*), \eta\rangle~,$$ which extends to $B: {\cal S}^c_+ \otimes {\cal S}^c_-\rightarrow \bC$ as a bi-linear in both entries. Next consider $${\cal B}(\epsilon, \eta)={\rm Re}\, B(\epsilon, \eta)~,$$ which defines a non-degenerate pairing ${\cal B}: {\cal S}^c_+ \otimes {\cal S}^c_-\rightarrow \bR$. This in turn induces an isomorphism $j: {}^\star{\cal S}^c_+\rightarrow {\cal S}^c_-$ as ${\cal B}(j(\a), \epsilon)=\a(\epsilon)$. We identify the image of $j$, $j({\rm coker}\, i)\subset {\cal S}^c_-$, as the "normal" bundle ${\cal K}^\perp$ of ${\cal K}$, i.e. $j({\rm coker}\, i)={\cal K}^\perp$. Clearly if $\a\in {\rm coker}\, i$ and $\epsilon\in {\cal K}$, then $\a(\epsilon)=0$, and so one gets the "orthogonality" condition $${\cal B}(j(\a), \epsilon)=0~.$$ Observe that ${\cal S}_+^c/{\cal K}={}^\star{\cal K}^\perp$. To write this orthogonality condition in components, introduce a basis in ${\cal S}^c_-$, say $\theta_{i'}=-\Gamma_0\eta_i$. Then write $j(\a)=\nu=n^{i'} \theta_{i'}$, and the condition (\[normcol\]) can be written as $$n^{i'}\, {\cal B}_{i' j}\, f^j{}_r=0~,$$ where ${\cal B}_{i'j}={\cal B}(\theta_{i'}, \eta_j)$.
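The counting of normal directions can be illustrated with plain linear algebra: a rank-$N$ coefficient matrix $(f^i{}_r)$ leaves exactly $32-N$ independent solutions $u_i$ of $f^i{}_r u_i=0$. A numerical sketch for $N=31$ with a random, hence generically full-rank, $f$:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((32, 31))      # coefficients f^i_r of 31 Killing spinors
assert np.linalg.matrix_rank(f) == 31

# Solutions of f^i_r u_i = 0 form the left null space of f.
_, s, vt = np.linalg.svd(f.T)          # f.T has shape (31, 32)
null_dim = 32 - int(np.sum(s > 1e-10))
print(null_dim)                        # 1 normal direction, i.e. 32 - N

u = vt[31]                             # basis vector of the left null space
assert np.allclose(u @ f, 0.0, atol=1e-10)
```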
The condition (\[normcol\]), or equivalently (\[normcolcom\]), leads to a correspondence between the $N$ Killing spinors and the $32-N$ normal directions, i.e. $$N\leftrightarrow 32-N~.$$ This is because instead of specifying the Killing spinors, one can determine the normal spinors. Substituting the normal spinors into these equations, one can then solve for the Killing spinors. In addition, the construction of ${\cal K}^\perp$ and (\[normcol\]) or (\[normcolcom\]) are $Spin(9,1)$ covariant. Because of this, the $Spin(9,1)$ gauge symmetry can be used to bring the normal spinors instead of the Killing spinors into a canonical form. In turn, this leads to a simplification in the expression for the Killing spinors which can be used to solve the Killing spinor equations for backgrounds with a near maximal number of supersymmetries. We shall demonstrate this for IIB backgrounds with 31 supersymmetries. Furthermore, one may consider cases such that the sections of ${\cal K}^\perp$ are invariant under some non-trivial stability subgroup of $Spin(9,1)$. It is clear that these cases are related to (e.g. maximal and half-maximal) $G$-backgrounds [@ugjggpdr; @gmaxsusy], where the invariance condition was imposed on the Killing spinors. The spinorial geometry techniques that we use to investigate backgrounds with $N$ supersymmetries can be adapted to examine backgrounds with $32-N$ supersymmetries and vice-versa. One can easily extend the construction described above to M-theory. In particular, one again has $$0\rightarrow {\cal K}\rightarrow {\cal S}\rightarrow {\cal S}/{\cal K}\rightarrow 0~,$$ where ${\cal S}$ is the spin bundle associated with the Majorana representation of $Spin(10,1)$. The inclusion map $i:\, {\cal K}\rightarrow {\cal S}$ can be written locally as $\epsilon_r= \sum_{i=1}^{32} f^i{}_r \eta_i$, where $f^i{}_r$ are real spacetime functions and $(\eta_i, i=1, \dots, 32)$ is a basis of Majorana spinors. As in the IIB case, we consider the co-kernel of the inclusion map $i:\, {\cal K}\rightarrow {\cal S}$, ${\rm coker}\, i\subset {}^\star {\cal S}$.
It is well known that ${\cal S}$ admits a $Spin(10,1)$ invariant inner product $B$ which gives rise to an isomorphism $j:\, {}^\star {\cal S}\rightarrow {\cal S}$. As in the IIB case, we define the normal bundle of the Killing spinor bundle as ${\cal K}^\perp= j({\rm coker}\, i)$. In this case, ${\cal K}^\perp$ is a subbundle of ${\cal S}$ and ${\cal S}/{\cal K}={\cal K}^\perp$. Taking a section $\nu= n^i\eta_i$ of ${\cal K}^\perp$, the orthogonality condition analogous to (\[normcol\]) and (\[normcolcom\]) is $$n^i\, B_{ij}\, f^j{}_r=0~,$$ where $B_{ij}=B(\eta_i, \eta_j)$. The condition (\[elcon\]) is $Spin(10,1)$ covariant. As an example consider IIB backgrounds that admit 31 supersymmetries. According to the correspondence $N\leftrightarrow 32-N$, these are related to backgrounds with one supersymmetry investigated in [@ugjggpa; @ugjggpdr]. To carry out the computation, we need to find the canonical form of spinors in ${\cal S}^c_-$ up to $Spin(9,1)$ transformations. It is easy to deduce, using an argument similar to [@ugjggpa], that there are three kinds of orbits of $Spin(9,1)$ in the negative chirality Weyl spinors, with stability subgroups $Spin(7)\ltimes\bR^8$, $SU(4)\ltimes \bR^8$ and $G_2$. A canonical form of these spinors is $$\begin{aligned} &&\nu_1=(n+im)\, (e_5+e_{12345})~,\qquad \nu_2= (n-\ell+im)\, e_5+ (n+\ell+im)\, e_{12345}~,\cr &&\nu_3=n\,(e_5+e_{12345})+i m\, (e_1+e_{234})~,\end{aligned}$$ respectively, where $n$, $m$ and $\ell$ are real functions. Using the $Spin(9,1)$ gauge symmetry, we choose ${\cal K}^\perp$ to lie along the directions of one of the above spinors. Consider first the $\nu_1$ case. Write the Killing spinors as $$\epsilon_r= f^1{}_r\, (1+e_{1234})+ f^{17}{}_r\, i\, (1+e_{1234})+ f^k{}_r\, \eta_k~,$$ where $\eta_k$ are the remaining basis elements complementary to $1+e_{1234}$ and $i(1+e_{1234})$. In what follows, we use the basis constructed from the five types of spinors in [@ugjggpdr]. Substituting $\epsilon_r$ into (\[normcol\]), we get $$f^1{}_r\, n- f^{17}{}_r\, m=0~.$$ Without loss of generality, we take $n\not=0$.
Using this, we solve for $f^1{}_r$ and substitute back into the Killing spinors to find $$\epsilon_r={f^{17}{}_r\over n}\, (m+in)\, (1+e_{1234})+ f^k{}_r\, \eta_k~.$$ Similarly for the normal spinors $\nu_2$ and $\nu_3$, we find that $$\begin{aligned} &&\epsilon_r={f^{17}{}_r\over n}\,\big[ (m+in)(1+e_{1234})\big]+{f^{18}{}_r\over n}\, \big[(1+e_{1234})-n\, (1-e_{1234})\big]+ f^{k}{}_r\, \eta_{k}~,\\ &&\epsilon_r={f^{19}{}_r\over n}\, \big[m\, (1+e_{1234})+i n\,(e_{15}+e_{2345})\big]+ f^k{}_r\, \eta_k~,\end{aligned}$$ correspondingly, where $\eta_k$ are the remaining basis elements in each case. Substituting these spinors into the algebraic Killing spinor equation and using that the rank of the matrix $(f^i{}_r)$ is 31, for the $Spin(7)\ltimes \bR^8$ case one finds that $$\begin{aligned} &&P_M\Gamma^M C*\big[(m+in)\, (1+e_{1234})\big]+{1\over24}\, G_{M_1M_2M_3}\Gamma^{M_1M_2M_3}\, (m+in)\, (1+e_{1234})=0~,\\ &&P_M\Gamma^M\eta_p=0~,~~~G_{M_1M_2M_3}\Gamma^{M_1M_2M_3}\eta_p=0~,~~~p=2,\dots, 16~,\end{aligned}$$ and similarly $$\begin{aligned} &&P_M\Gamma^M C*\big[(m+in)\, (1+e_{1234})\big]+{1\over24}\, G_{M_1M_2M_3}\Gamma^{M_1M_2M_3}\, (m+in)\, (1+e_{1234})=0~,\\ &&P_M\Gamma^M C*\big[(1+e_{1234})-n\, (1-e_{1234})\big]\\ &&\qquad\qquad +{1\over24}\, G_{M_1M_2M_3}\Gamma^{M_1M_2M_3}\, \big[(1+e_{1234})-n\, (1-e_{1234})\big]=0~,\\ &&P_M\Gamma^M C*\big[i\, (1-e_{1234})\big]+{1\over24}\, G_{M_1M_2M_3}\Gamma^{M_1M_2M_3}\, \big[i\, (1-e_{1234})\big]=0~,\\ &&P_M\Gamma^M\eta_{p}=0~,~~~G_{M_1M_2M_3}\Gamma^{M_1M_2M_3}\eta_{p}=0~,~~~p=3,\dots,16~,\end{aligned}$$ and $$\begin{aligned} &&P_M\Gamma^M C*\big[m\, (1+e_{1234})+in\,(e_{15}+e_{2345})\big]\\ &&\qquad\qquad\qquad +{1\over24}\, G_{M_1M_2M_3}\Gamma^{M_1M_2M_3}\, \big[m\, (1+e_{1234})+in\,(e_{15}+e_{2345})\big]=0~,\\ &&P_M\Gamma^M C*\big(i\, (1+e_{1234})\big)+{1\over24}\, G_{M_1M_2M_3}\Gamma^{M_1M_2M_3}\, \big(i\, (1+e_{1234})\big)=0~,\\ &&P_M\Gamma^M C*\, (e_{15}+e_{2345})+{1\over24}\, G_{M_1M_2M_3}\Gamma^{M_1M_2M_3}\, (e_{15}+e_{2345})=0~,\\ &&P_M\Gamma^M\eta_p=0~,~~~G_{M_1M_2M_3}\Gamma^{M_1M_2M_3}\eta_p=0~,~~~p=2,4,\dots,16~,\end{aligned}$$ for the other two cases.
The factorization of $P$ and $G$ flux terms on $\eta_p$ occurs because some of the remaining basis elements $\eta_k$ come in complex conjugate pairs $(\eta_p, i\eta_p)$, where $\eta_p$ are Majorana-Weyl spinors. Since the $P$ flux term in the Killing spinor equations contains the charge conjugation matrix, $C*\eta_p=\eta_p$ and $C*(i \eta_p)=-i\eta_p$, there is a relative sign between the $P$ and $G$ flux terms when the algebraic Killing spinor equation is evaluated on $\eta_p$ and $i\eta_p$. It now remains to solve these equations. First, focus on the equation $P_M \Gamma^M\eta_p=0$. Observe that in all cases, the remaining spinors $\eta_p$ contain spinors which are annihilated by either $\Gamma^-$ or $\Gamma^+$. In the former case, the condition $P_M\Gamma^M\eta_p=0$ implies that only the $P_-$ component is non-vanishing, while in the latter case it implies that only the component $P_+$ is non-vanishing. Since spinors of both types occur, $P=0$. Next consider the conditions on the $G$ flux. It turns out that (\[cona\]), (\[conb\]) or (\[conc\]) imply that $G_{M_1M_2M_3}\Gamma^{M_1M_2M_3}\epsilon=0$ for all spinors $\epsilon$ and so $G=0$. To see this consider the $Spin(7)\ltimes\bR^8$ case. Setting $P=0$ in the first condition in (\[cona\]), we deduce that $G_{M_1M_2M_3}\Gamma^{M_1M_2M_3}(1+e_{1234})=0$. Since the algebraic Killing spinor equations with $P=0$ are linear over the complex numbers, we also have that $G_{M_1M_2M_3}\Gamma^{M_1M_2M_3}i(1+e_{1234})=0$. This, together with the remaining conditions in (\[cona\]), implies that $G_{M_1M_2M_3}\Gamma^{M_1M_2M_3}\eta_i=0$ for all the basis elements $\eta_i$. A similar argument applies to the rest of the cases. Thus we have found that the algebraic Killing spinor equations imply that $P=G=0$. We have also verified this by an explicit computation. Finally, if the $P$ and $G$ fluxes vanish, then the gravitino Killing spinor equation of IIB supergravity becomes linear over the complex numbers.
This means that backgrounds with vanishing $P$ and $G$ fluxes always preserve an even number of supersymmetries. Thus backgrounds with 31 supersymmetries preserve an additional supersymmetry and so they are maximally supersymmetric. In particular, they are locally isometric [@jfgpa] to Minkowski spacetime, $AdS_5\times S^5$ [@schwarz] or the maximally supersymmetric plane wave [@bfhp]. As a corollary, we have shown that IIB supergravity preons do not exist. Our proof has relied on the algebraic Killing spinor equation of IIB supergravity and so does not straightforwardly generalize to eleven-dimensional supergravity. Nevertheless, as we have seen, the normal Killing spinor bundle construction generalizes to M-theory. In addition, one can show that the 31 Killing spinors of M-theory preon backgrounds take a simple form and it may be possible to solve the Killing spinor equations. We hope to report on the existence of M-theory preons in the future. [**Acknowledgements**]{} The research of D.R. is funded by the PPARC grant PPA/G/O/2002/00475 and U.G. has a postdoctoral fellowship funded by the Research Foundation K.U. Leuven. [00]{} C. Hull, “Holonomy and symmetry in M-theory,” arXiv:hep-th/0305039. M. J. Duff and J. T. Liu, “Hidden spacetime symmetries and generalized holonomy in M-theory,” Nucl. Phys. B [**674**]{} (2003) 217 \[arXiv:hep-th/0303140\]. G. Papadopoulos and D. Tsimpis, “The holonomy of the supercovariant connection and Killing spinors,” JHEP [**0307**]{} (2003) 018 \[arXiv:hep-th/0306117\]. G. Papadopoulos and D. Tsimpis, “The holonomy of IIB supercovariant connection,” Class. Quant. Grav.  [**20**]{} (2003) L253 \[arXiv:hep-th/0307127\]. U. Gran, J. Gutowski and G. Papadopoulos, “The G(2) spinorial geometry of supersymmetric IIB backgrounds,” Class. Quant. Grav.  [**23**]{} (2006) 143 \[arXiv:hep-th/0505074\]. U. Gran, J. Gutowski, G. Papadopoulos and D. Roest, “Systematics of IIB spinorial geometry,” Class. Quant. Grav.  
[**23**]{} (2006) 1617 \[arXiv:hep-th/0507087\]. M. Berger, “Sur les groupes d’holonomie homogène des variétés à connexion affine et des variétés riemanniennes”, Bulletin de la Société Mathématique de France, 83:279, (1955). McKenzie Y. Wang, “Parallel spinors and parallel forms”, Ann. Global Anal. Geom. Vol 7, No 1 (1989), 59. J. M. Figueroa-O’Farrill, “Breaking the M-waves,” Class. Quant. Grav.  [**17**]{} (2000) 2925 \[arXiv:hep-th/9904124\]. U. Gran, P. Lohrmann and G. Papadopoulos, “The spinorial geometry of supersymmetric heterotic string backgrounds,” JHEP [**0602**]{} (2006) 063 \[arXiv:hep-th/0510176\]. I. A. Bandos, J. A. de Azcarraga, J. M. Izquierdo and J. Lukierski, “BPS states in M-theory and twistorial constituents,” Phys. Rev. Lett.  [**86**]{} (2001) 4451 \[arXiv:hep-th/0101113\]. I. A. Bandos, J. A. de Azcarraga, J. M. Izquierdo, M. Picon and O. Varela, “On BPS preons, generalized holonomies and D = 11 supergravities,” Phys. Rev. D [**69**]{} (2004) 105010 \[arXiv:hep-th/0312266\]. J. Gillard, U. Gran and G. Papadopoulos, “The spinorial geometry of supersymmetric backgrounds,” Class. Quant. Grav.  [**22**]{} (2005) 1033 \[arXiv:hep-th/0410155\]. U. Gran, J. Gutowski, G. Papadopoulos and D. Roest, “Maximally supersymmetric G-backgrounds of IIB supergravity,” arXiv:hep-th/0604079. U. Gran, J. Gutowski and G. Papadopoulos, “The spinorial geometry of supersymmetric IIB backgrounds,” Class. Quant. Grav.  [**22**]{} (2005) 2453 \[arXiv:hep-th/0501177\]. J. Figueroa-O’Farrill and G. Papadopoulos, “Maximally supersymmetric solutions of ten-dimensional and eleven-dimensional supergravities,” JHEP [**0303**]{} (2003) 048 \[arXiv:hep-th/0211089\]. “Pluecker-type relations for orthogonal planes,” \[arXiv:math.ag/0211170\]. J. H. Schwarz, “Covariant Field Equations Of Chiral N=2 D = 10 Supergravity,” Nucl. Phys. B [**226**]{} (1983) 269. M. Blau, J. Figueroa-O’Farrill, C. Hull and G. 
Papadopoulos, “A new maximally supersymmetric background of IIB superstring theory,” JHEP [**0201**]{} (2002) 047 \[arXiv:hep-th/0110242\]. [^1]: This is provided the parallel spinors are Killing. [^2]: IIB supergravity has a $Spin(9,1)\times U(1)$ gauge symmetry but the restriction to $Spin(9,1)$ will suffice.
--- abstract: | We outline an algorithm for the construction of functional bases of absolute invariants under the rotation group for sets of rank 2 tensors and vectors in the Euclidean space of arbitrary dimension. We use our earlier results for symmetric tensors and add results for sets including antisymmetric tensors of rank 2. This allowed, in particular, the construction of functional bases of differential invariants for vector functions, among them first-order invariants of the Poincaré algebra (the invariance algebra of the Maxwell equations for the vector potential). --- [**Invariants for sets of vectors and rank 2 tensors, and differential invariants for vector functions**]{} \ E-mail: iyegorch@imath.kiev.ua The Problem =========== It might seem that all problems related to the description of differential invariants were solved in the classical papers by S. Lie or in multiple later papers. However, the author failed to find explicit results on functional bases of invariants of the rotation group for sets of arbitrary multidimensional rank 2 tensors and vectors in the existing literature. Although some of the presented results may seem obvious, their proofs are not always straightforward. The only results widely available are fundamental bases of invariants for three dimensions or for one tensor and/or vector. This may be natural, as specific applications, e.g. in mechanics, most often require only rotation invariants in three dimensions. However, differential invariants have recently become a popular subject, and many new aspects of this subject have been studied. We will not give any comprehensive bibliography here, as even an approximately exhaustive reference list would exceed the length of the paper itself. We list only some papers as representatives of certain aspects relevant to our specific research. Popularity of the subject and extensive applications of differential invariants e.g. 
in computer vision, image recognition and the characterization of differential equations led the author to believe that a systematic specific description of invariants for vectors and tensors in multidimensional space would be relevant. Here we will be using the standard language of the symmetry analysis of differential equations and of the description of invariants; see e.g. the classical papers by Lie [@RDI1:LieDI] and Tresse [@RDI1:Tresse], and the books by Glenn [@Glenn], Veblen [@Veblen], Spencer [@RDI1:Spencer], Ovsyannikov [@RDI1:Ovs-eng] and Olver [@RDI1:Olver1]-[@RDI1:Olver3]. This paper reviews and extends our previous research in [@RDI1:Yepreprintcomplexfields] and [@RDI1:FYeDifInvs]. In [@RDI1:Yepreprintcomplexfields] we described bases of first-order differential invariants of the Poincaré algebra for the vector potential (in four dimensions) $$\label{PA} \partial_{x_\mu}, \quad J_{\mu \nu}= x_\mu\partial_{x_\nu} - x_\nu\partial_{x_\mu} + A_\mu\partial_{A_\nu} - A_\nu \partial_{A_\mu},$$ where $\partial_{x_\mu}$ designates the operator $\frac{\partial}{\partial {x_\mu}}$, and in [@RDI1:FYeDifInvs] second-order differential invariants of the Euclid, Poincaré and Galilei algebras for sets of scalar functions. These results described differential invariants, but were actually based upon a description of invariants for sets of vectors and symmetric tensors of rank 2. To date, the author has not found any earlier results on fundamental bases of invariants for sets of vectors and symmetric tensors of rank 2 for the rotation group. Well-known and obvious results are related to one tensor or a set of one vector and one rank 2 symmetric tensor. 
As an arbitrary rank 2 tensor can be represented as a sum of a symmetric and an antisymmetric tensor (also of rank 2), the description of invariants for such arbitrary tensors requires, in addition to the already available functional bases for sets of symmetric tensors, the construction of invariants for antisymmetric tensors and for sets of symmetric and antisymmetric tensors. In our general presentation we will consider vectors and tensors in the $n$-dimensional Euclidean space with $n\geq 4$, as the results for $n=3$ are more obvious, and the rank of the rotation algebra for $n=3$ may require specific consideration. The examples may include the Minkowski space with three spatial and one time dimension. The new results in this paper are the construction of functional bases of invariants for antisymmetric tensors and for sets of vectors, symmetric and antisymmetric tensors in the Euclidean space of arbitrary dimension. In this paper, by invariants we mean only absolute invariants. Two ways to describe invariants ------------------------------- Invariants of transformations may be described by functional bases of differential invariants of a particular order, or by fundamental invariants. 1. A functional basis of invariants is a set of functionally independent invariants such that every invariant can be represented as a function of invariants from this set. 2. Fundamental invariants are a set of invariants such that every invariant can be represented through differentiation of some functions of invariants in this set. Here we will construct functional bases of invariants. 
Variables and Transformations ----------------------------- We will consider sets of vectors and tensors of the type $$U = (u_i), V = (v_{ik}), \quad i,k = 1, ..., n;$$ and rotation operators in the $n$-dimensional Euclidean space, $n \geq 4$: $$J_{ik}= u^r_i\partial_{u^r_k} - u^r_k\partial_{u^r_i}+ v^s_{il}\partial_{v^s_{lk}} - v^s_{kl}\partial_{v^s_{li}}$$ Motivation ========== Differential Invariants of Vector and Tensor Functions ------------------------------------------------------ The invariants of tensors we consider here can be used for the description of differential invariants of scalar, vector and tensor functions: the relevant sets of derivatives of vector and tensor functions may be treated as vectors and tensors due to the forms of the prolongations of the relevant rotation operators. 1. If we have scalar functions, first-order differential invariants may be treated as invariants for sets of vectors. 2. If we have scalar functions, second-order differential invariants may be treated as invariants for sets of symmetric tensors of rank two and vectors. 3. If we have vector functions, first-order invariants may be treated as invariants for sets of arbitrary tensors of rank two and vectors. 4. If we have vector functions, second-order invariants may be treated as invariants for sets of tensors of rank three that are symmetric in two indices, tensors of rank two, and vectors. General background ================== The general background for the invariant theory may be found e.g. in [@RDI1:Olver3]. 
Definitions of Invariants ------------------------- [**Definition**]{} A function $$F = F(U^r,V^s),$$ where $$U^r = (u^r_i), V^s = (v^s_{ik}), \quad i,k = 1, ..., n;$$ are respectively sets of vectors and tensors of rank two, is called an absolute invariant (or simply invariant) for the Lie algebra $L$ with basis elements $J_{ik}$ if $$J_{ik}F=0.$$ As in [@RDI1:FYeDifInvs], it is easy to show that in principle it is sufficient to give all results for sets of two vectors and two tensors, as those may be easily extended to arbitrary numbers of vectors and tensors. Determining Equations --------------------- The equations $$J_{ik}F=0,$$ where $J_{ik}$ are the infinitesimal operators of the Euclidean rotations, are called the determining equations for the invariants $F = F(U^r,V^s)$. We see that the problem of finding invariants is actually a problem of finding solutions of a system of linear partial differential equations of first order. Unfortunately, there is no general theory helpful in the case of an arbitrary number of equations, and we use some ad-hoc methods. Number of Invariants in the Functional Basis -------------------------------------------- In our consideration, $V^s = (v^s_{ik})$, $i,k = 1, ..., n$, $s=1,2$ is a set of two arbitrary tensors of rank 2. We replace each tensor $V^s$ in our consideration by a pair of a symmetric and an antisymmetric tensor: $$w^s_{ik}=v^s_{ik}+v^s_{ki},$$ $$y^s_{ik}=v^s_{ik}-v^s_{ki}.$$ The number of invariants in a functional basis is the difference between the number of independent variables and the general rank of the system of determining equations. As an example, let us consider two vectors and two symmetric tensors of rank two. In this case we have (see [@RDI1:FYeDifInvs]) $$2n+2\frac{n(n+1)}{2}=n^2+3n$$ independent variables. The general rank of the system of determining equations in the case of rotational transformations is equal to the number of operators, $$\frac{n(n-1)}{2}.$$ 
So we shall have $$2n+2\frac{n(n+1)}{2} - \frac{n(n-1)}{2}= \frac{7n +n^2}{2}$$ functionally independent invariants. The case of one antisymmetric tensor in the multidimensional space appears to be quite cumbersome, as determination of the rank of the rotation algebra for such a tensor is not straightforward. For such a tensor, we will have $\frac{n(n-1)}{2}$ independent components and $\frac{n(n-1)}{2}$ operators in the rotation algebra; however, the rank will be smaller, as at least one invariant of the form $y_{ik}y_{ik}$ is obvious. We can prove that the rank for such a tensor will be equal to $$\frac{n(n-1)}{2}-n,$$ and we will have $n$ functionally independent invariants. In the case of two vectors and two antisymmetric tensors of rank two, we will have $$2n+2\frac{n(n-1)}{2}$$ variables, and hence $$2n+2\frac{n(n-1)}{2} - \frac{n(n-1)}{2}= \frac{3n +n^2}{2}$$ invariants. In the case of two vectors, two symmetric tensors and two antisymmetric tensors of rank two (or just two vectors and two arbitrary tensors of rank two), we will have $$2n+2\frac{n(n+1)}{2}+2\frac{n(n-1)}{2}=2n+2n^2$$ variables, and hence the number of functionally independent invariants is $$2n+2n^2 - \frac{n(n-1)}{2}= \frac{5n +3n^2}{2}.$$ Functional Bases of Invariants ------------------------------ [**Theorem 1**]{} [@RDI1:FYeDifInvs]. A functional basis of invariants of the rotational transformations for two vectors and two symmetric tensors in the $n$-dimensional space can be taken as follows: $$\begin{gathered} w^1_{ii}, \ w^2_{ii}, \ u^r_i u^r_i, \ S_{a}(w^1_{ik}),\ S_{a}(w^2_{ik}), \ S_{ab}(w^1_{ik}, w^2_{ik}),\quad R_a(u^r_i, w^1_{ik}), \ r=1,2; \nonumber\end{gathered}$$ where $$\begin{gathered} \nonumber S_{a}(w^r_{ik})=w^r_{i_1 i_2}w^r_{i_2 i_3}...w^r_{i_{a} i_{1}}\end{gathered}$$ We mean summation over the repeated indices from 1 to $n$. In the lists of invariants $S_{a}$, $a$ takes the values from 1 to $n$. 
$$\begin{gathered} \nonumber S_{ab}(w^1_{ik}, w^2_{ik})=w^1_{i_1 i_2}w^1_{i_2 i_3}...w^1_{i_{b} i_{b+1}}w^2_{i_{b+1} i_{b+2}}...w^2_{i_{a} i_{1}} \nonumber\end{gathered}$$ In the lists of invariants $S_{ab}$, $a$ takes the values from 2 to $n$ and $b$ takes the values from 1 to $a$. $$\begin{gathered} \nonumber R_{a}(u^r_{i}, w^1_{ik})=u^r_{i_1} u^r_{i_a} w^1_{i_1 i_2}...w^1_{i_{a-1} i_{a}}\end{gathered}$$ In the lists of invariants $R_{a}$, $a$ takes the values from 1 to $n$. [**Theorem 2**]{}. A functional basis of invariants of the rotational transformations for two vectors and two antisymmetric tensors in the $n$-dimensional space can be taken as follows: $$\begin{gathered} \nonumber S_{ab}(y^1_{ij}y^1_{jk}, y^2_{ij}y^2_{jk}),\quad R_a(u^r_i, y^1_{ij}y^1_{jk}); \\ i,k=1,...,n; \ a,b=1,...,n; \ r=1,2,\nonumber\end{gathered}$$ [**Theorem 3**]{}. A functional basis of invariants of the rotational transformations for two vectors, two symmetric and two antisymmetric tensors in the $n$-dimensional space can be taken as follows: $$\begin{gathered} \nonumber S_{ab}(y^1_{ij}y^1_{jk}, y^2_{ij}y^2_{jk}),\quad R_a(u^r_i, y^1_{ij}y^1_{jk}); \\ \nonumber i,k=1,...,n; \ a,b=1,...,n; \ r=1,2,\nonumber\end{gathered}$$ Idea of the Proof that Invariants in the Functional Basis are Functionally Independent -------------------------------------------------------------------------------------- This part of the problem is the most difficult. Such a proof can be done by mathematical induction and differs for different types of tensors. Actually it is necessary to prove that a Jacobian of the basis of invariants is not zero, that is, a determinant of an $(N \times N)$-matrix, $N$ being the number of invariants in the functional basis. If we assume that the set of invariants for the dimension $n$ is functionally independent, we have to prove the same for the set of invariants for the dimension $n+1$. 
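A quick numerical sanity check of the invariance (though not of the functional independence) claimed in the theorems above can be run in a few lines; the dimension $n=4$ and the random data below are arbitrary choices made here for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4                                            # arbitrary test dimension
u = rng.standard_normal(n)                       # a vector u_i
w = rng.standard_normal((n, n)); w = w + w.T     # a symmetric tensor w_ik
y = rng.standard_normal((n, n)); y = y - y.T     # an antisymmetric tensor y_ik

q, _ = np.linalg.qr(rng.standard_normal((n, n))) # a random orthogonal matrix

def S(a, t):        # S_a(t) = t_{i1 i2} t_{i2 i3} ... t_{ia i1}
    return np.trace(np.linalg.matrix_power(t, a))

def R(a, v, t):     # R_a(v, t) = v_{i1} t_{i1 i2} ... t_{i_{a-1} i_a} v_{i_a}
    return v @ np.linalg.matrix_power(t, a - 1) @ v

for a in range(1, n + 1):
    assert np.isclose(S(a, w), S(a, q @ w @ q.T))            # S_a(w)
    assert np.isclose(R(a, u, w), R(a, q @ u, q @ w @ q.T))  # R_a(u, w)
    # for the antisymmetric tensor, the bases above use the symmetric tensor y y
    yy = y @ y
    assert np.isclose(S(a, yy), S(a, (q @ y @ q.T) @ (q @ y @ q.T)))
print("traces S_a and contractions R_a are rotation invariant")
```

Under a rotation $u\mapsto qu$, $w\mapsto qwq^{\rm T}$, the traces and contractions are invariant because the orthogonal factors cancel, $S_a(qwq^{\rm T})=\operatorname{tr}(qw^aq^{\rm T})=\operatorname{tr}(w^a)$.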
Alternatively, such a proof can be easily obtained from the proof in [@RDI1:FYeDifInvs], if we consider $u_{k}y_{ki}$ as new vectors, and the contractions $y_{ik}y_{kl}$ as new symmetric tensors. To have the general results for tensors of rank two, we need to consider antisymmetric tensors and combinations of symmetric and antisymmetric tensors. Example: Differential Invariants for the Vector Potential ========================================================= Vector Potential Functions -------------------------- Here we present, following [@RDI1:Yepreprintcomplexfields], an example of a functional basis of the first-order differential invariants for the vector potential functions $$A_\mu, \mu = 0, 1, ..., n,$$ with respect to the Poincaré algebra with the basis operators (\[PA\]). To look for the first-order differential invariants, we may consider invariants for the vector $(A_\mu)$ and two tensors: a symmetric tensor $B_{\mu \nu}=\partial_{x_\nu} A_\mu + \partial_{x_\mu} A_\nu$, and an antisymmetric tensor $L_{\mu \nu}=\partial_{x_\nu} A_\mu - \partial_{x_\mu} A_\nu$. A functional basis of the first-order differential invariants may be taken as follows: $$\begin{gathered} \nonumber A_\mu A_\mu, A_\mu B_{\mu \nu} A_\nu, A_\mu B_{\mu \nu}B_{\nu \alpha}A_\alpha,\\ \nonumber A_\mu B_{\mu \nu} L_{\nu \alpha}A_\alpha, A_\mu L_{\mu \nu} L_{\nu \alpha}A_\alpha, \\ \nonumber B_{\mu \mu}, B_{\mu \nu}B_{\mu\nu}, B_{\mu \nu}B_{\nu \alpha}B_{\alpha \mu}, B_{\mu \nu}B_{\nu \alpha} B_{\alpha \beta} B_{\beta \mu},\\ \nonumber L_{\mu \nu}L_{\mu\nu}, L_{\mu \nu}L_{\nu \alpha} L_{\alpha \beta} L_{\beta \mu}, L_{\mu \nu}L_{\nu \alpha}B_{\alpha \mu}, L_{\mu \nu}B_{\nu \alpha} B_{\alpha \beta} L_{\beta \mu}, L_{\mu \nu}B_{\nu \alpha} L_{\alpha \beta} B_{\beta \mu}. 
\nonumber\end{gathered}$$ In the case when $\mu, \nu = 0,1,2,3$, the invariants shall depend on 20 components; the general rank of the operators ${J_{\mu \nu}}$ is equal to 6, so we shall have 14 functionally independent invariants. Conclusion ========== Main ideas ---------- 1. Description of invariants is needed for the solution of other problems. 2. The task is split into the solution of basic tasks (e.g. it is sufficient to consider two vectors/tensors of each type). 3. Description of invariants for special types of tensors is equivalent to finding conditional invariants [@cond-diff-inv] (we add other conditions to the determining equations). Further research ---------------- Immediate follow-ups: 1. Description of relative invariants and covariant vectors/tensors. 2. Description of differential invariants for various extensions of the Euclid algebra. 3. Description of invariants for various special tensors, e.g. with a smaller number of independent components. 4. Finding of relations among functionally dependent invariants. 5. Description of invariants for nonlinear representations of the rotation group, see e.g. [@RDI1:IAY; @nonlin]. These sets of invariants can be useful tools in other problems in the study of partial differential equations. [99]{} S. Lie, Über Differentialinvarianten, [*Math. Ann.*]{} [**24**]{}, 52 (1884) A. Tresse, Sur les invariants différentiels des groupes continus de transformations, [*Acta Math.*]{}, [**18**]{}, 1 (1894) O. E. Glenn, A treatise on the theory of invariants, Boston: Ginn, 1915 (Cornell University Library Historical Math Monographs) O. Veblen, Invariants of Quadratic Differential Forms (Cambridge Tracts in Mathematics and Mathematical Physics, No. 24), Cambridge University Press, 1927. A.J.M. Spencer, Theory of invariants, New York, London, Academic Press, 1971 L. V. Ovsyannikov, Group analysis of differential equations, New York, Academic Press, 1982 P.J. 
Olver, Application of Lie groups to differential equations, New York, Springer Verlag, 1987 P.J. Olver, Equivalence, invariants, and symmetry, Cambridge University Press, 1995 P.J. Olver, Classical Invariant Theory, London Math. Soc. Student Texts, vol. 44, Cambridge University Press, Cambridge, 1999. I.A. Yegorchenko, Symmetry properties of nonlinear equations for complex vector fields, Preprint 89.48, Institute of Mathematics of the Ukr. Acad. Sci, 1989. W.I. Fushchych and I.A. Yegorchenko, Second-order differential invariants of the rotation group $O(n)$ and of its extension $E(n)$, $P(l,n)$, [*Acta Appl. Math.*]{}, [**28**]{} 69 (1992) I.A. Yehorchenko, Differential invariants and construction of conditionally invariant equations, in Proceedings of the Fourth International Conference “Symmetry in Nonlinear Mathematical Physics” (9–15 July, 2001, Kyiv), Editors A.G. Nikitin, V.M. Boyko and R.O. Popovych, Kyiv, Institute of Mathematics, 2002, V.43, Part 1, 256–262; math-ph/0304029. I.A. Yehorchenko, Differential invariants of nonlinear representation of the Poincaré algebra. Invariant equations, in Proceedings of the Second International Conference “Symmetry in Nonlinear Mathematical Physics”, Kyiv, 1997, [**1**]{}, 200–205.
--- author: - | [Xun Jian[$~^{\#}$]{}, Xiang Lian[$~^{*}$]{}, Lei Chen[$~^{\#}$]{} ]{}\ *$^{\#}$Hong Kong University of Science and Technology, Hong Kong, China\ {xjian, leichen}@cse.ust.hk\ *$^{*}$Kent State University, Ohio, USA\ xlian@kent.edu** bibliography: - 'community.bib' title: On Efficiently Detecting Overlapping Communities over Distributed Dynamic Graphs ---
--- abstract: 'We provide an introduction to the theory of calibrated submanifolds through the key examples related with special holonomy. We focus on calibrated geometry in Calabi–Yau, $\operatorname{G}_2$ and $\operatorname{Spin}(7)$ manifolds, and describe fundamental results and techniques in the field.' author: - | Jason D. Lotay\ [University College London]{} title: Calibrated Submanifolds --- Introduction {#introduction .unnumbered} ============ A key aspect of mathematics is the study of variational problems. These can vary from the purely analytic to the very geometric. A classic geometric example is the study of geodesics, which are critical points for the length functional on curves. As we know, understanding the geodesics of a given Riemannian manifold allows us to understand some of the ambient geometry, for example the curvature. The higher dimensional analogue would be to study critical points for the volume functional, and we would hope (and it indeed turns out to be the case) that these critical points, called *minimal submanifolds*, encode crucial aspects of the geometry of the manifold. Just like the geodesic equation, we would expect (and it is true) that minimal submanifolds are defined by a (nonlinear) second order partial differential equation. Such equations are very difficult to solve in general, so a key idea is to find a special class of minimal submanifolds, called *calibrated submanifolds*, which are instead defined by a first order partial differential equation. The definition of calibrated submanifolds is motivated by the properties of complex submanifolds in Kähler manifolds, and turns out to be useful in finding minimizers for the volume functional rather than just critical points. However, finding examples outside the classical complex setting turns out to be difficult, leading to important methods coming from a variety of sources, as well as motivating the study of the deformation theory of these objects. 
Calibrated submanifolds naturally arise when the ambient manifold has *special holonomy*, including holonomy $\operatorname{G}_2$. In this situation, we would hope that the calibrated submanifolds encode even more, finer, information about the ambient manifold, potentially leading to the construction of new invariants. In this setting, there is also a relationship between calibrated submanifolds and gauge theory: specifically, connections whose curvature satisfies a natural constraint determined by the special holonomy group (so-called *instantons*). For these reasons, calibrated submanifolds form a hot topic in current research, especially in the $\operatorname{G}_2$ setting. These notes are primarily based on a lecture course the author gave at the LMS–CMI Research School “An Invitation to Geometry and Topology via $\operatorname{G}_2$” at Imperial College London in July 2014. Minimal submanifolds ==================== We start by analysing the submanifolds which are critical points for the volume functional. Let $N$ be a submanifold (without boundary) of a Riemannian manifold $(M,g)$ and let $F:N\times(-\epsilon,\epsilon)\rightarrow M$ be a variation of $N$ with compact support; i.e. $F=\text{Id}$ outside a compact subset $\overline{S}$ of $N$ with $S$ open and $F(p,0)=p$ for all $p\in N$. The vector field $X=\frac{\partial F}{\partial t}|_N$ is called the variation vector field (which will be zero outside of $\overline{S}$). We then have the following definition. $N$ is *minimal* if $\frac{\d}{\d t}\operatorname{Vol}(F(S,t))|_{t=0}=0$ for all variations $F$ with compact support $\overline{S}$ (depending on $F$). Notice that we do not ask for $N$ to minimize volume: it is only stationary for the volume. It could even be a maximum! A plane in $\R^n$ is minimal since any small variation will have larger volume. Geodesics are locally length minimizing, so geodesics are minimal. 
However, as an example, the equator in $\mathcal{S}^2$ is minimal but not length minimizing since we can deform it to a shorter line of latitude. For simplicity let us suppose that $N$ is compact. We wish to calculate $\frac{\d}{\d t}\operatorname{Vol}(F(N,t))|_{t=0}$. Given local coordinates $x_i$ on $N$ we know that $$\operatorname{Vol}(F(N,t))=\int_N\sqrt{\det\left(g\left(\frac{\partial F}{\partial x_i},\frac{\partial F}{\partial x_j}\right)\right)}\operatorname{vol}_N.$$ Let $p\in N$ and choose our coordinates $x_i$ to be normal coordinates at $p$: i.e. so that $\frac{\partial F}{\partial x_i}(p,t)=e_i(t)$ satisfy $g(e_i(0),e_j(0))=\delta_{ij}$. If $g_{ij}(t)=g(e_i(t),e_j(t))$ and $(g^{ij}(t))$ denotes the inverse of the matrix $(g_{ij}(t))$ then we know that $$\frac{\d}{\d t}\sqrt{\det(g_{ij}(t))}|_{t=0}=\frac{1}{2}\frac{\sum_{i,j}g^{ij}(t)g_{ij}'(t)}{\sqrt{\det(g_{ij}(t))}}|_{t=0}=\frac{1}{2}\sum_ig_{ii}'(0).$$ Now, if we let $\nabla$ denote the Levi-Civita connection of $g$, then $$\begin{aligned} \frac{1}{2}\sum_ig_{ii}'(0)&=\frac{1}{2}\sum_i \frac{\d}{\d t}g\left(\frac{\partial F}{\partial x_i},\frac{\partial F}{\partial x_i}\right)|_{t=0}\\ &=\sum_ig(\nabla_Xe_i,e_i)\\ &=\sum_ig(\nabla_{e_i}X,e_i)=\operatorname{div}_N(X)\end{aligned}$$ since $[X,e_i]=0$ (i.e. the $t$ and $x_i$ derivatives commute). Moreover, we see that $$\operatorname{div}_N(X)=\sum_ig(\nabla_{e_i}X,e_i)=\operatorname{div}_N(X^{\rm T})-\sum_ig(X^{\perp},\nabla_{e_i}e_i)=\operatorname{div}_N(X^{\rm T})-g(X,H)$$ (since $\nabla_{e_i}\big(g(X^{\perp},e_i)\big)=0$) where ${}^{\rm T}$ and ${}^\perp$ denote the tangential and normal parts and $$H=\sum_{i}\nabla^{\perp}_{e_i}e_i$$ is the *mean curvature vector*. Overall we have the following. 
The first variation formula is $$\frac{\d}{\d t}\operatorname{Vol}(F(N,t))|_{t=0}=\int_N \operatorname{div}_N(X)\operatorname{vol}_N=-\int_N g(X,H)\operatorname{vol}_N.$$ The $\operatorname{div}_N(X^{\rm T})$ term does not appear in the first variation formula because its integral vanishes by the divergence theorem as $N$ is compact without boundary. In general, it will still vanish since we assume for our variations that there exists a compact submanifold of $N$ with boundary which contains the support of $X^{\rm T}$ and so that $X^{\rm T}$ vanishes on the boundary. We deduce the following. $N$ is a *minimal submanifold* if and only if $H=0$. The equation $H=0$ is a *second order nonlinear PDE*. We can see this explicitly in the following simple case. For a function $f:U\subseteq\R^{n-1}\rightarrow\R$ where $\overline{U}$ is compact, we see that if $N=\text{Graph}(f)\subseteq\R^n$ then the volume of $N$ is given by $$\operatorname{Vol}(N)=\int_U\sqrt{1+|\nabla f|^2}\operatorname{vol}_U.$$ Any sufficiently small variation can be written $F(N,t)=\text{Graph}(f+th)$ for some $h:U\rightarrow\R$, so we can compute $$\begin{aligned} \frac{\d}{\d t}\operatorname{Vol}(F(N,t))|_{t=0}&=\frac{\d}{\d t}|_{t=0}\int_U\sqrt{1+|\nabla f+t\nabla h|^2}\operatorname{vol}_U\\ &=\int_U\frac{\d}{\d t}|_{t=0}\sqrt{1+|\nabla f|^2+2t\langle\nabla f,\nabla h\rangle+t^2|\nabla h|^2}\operatorname{vol}_U\\ &=\int_U\frac{\langle \nabla f,\nabla h\rangle}{\sqrt{1+|\nabla f|^2}}\operatorname{vol}_U\\ &=-\int_U h\operatorname{div}\left(\frac{\nabla f}{\sqrt{1+|\nabla f|^2}}\right)\operatorname{vol}_U.\end{aligned}$$ We therefore see that $N$ is minimal if and only if this vanishes for all $h$. Hence, $\text{Graph}(f)$ is minimal in $\R^n$ if and only if $$\operatorname{div}\left(\frac{\nabla f}{\sqrt{1+|\nabla f|^2}}\right)=0.$$ We see that we can write this equation as $\Delta f+Q(\nabla f,\nabla^2 f)=0$ where $Q$ consists of nonlinear terms (but linear in $\nabla^2f$). 
Hence, if we linearise this equation we just get $\Delta f=0$, so $f$ is harmonic. In other words, the minimal submanifold equation is a nonlinear equation whose linearisation is just Laplace’s equation: this is an example of a nonlinear *elliptic* PDE, which we shall discuss further later. A plane in $\R^n$ is trivially minimal because if $X,Y$ are any vector fields on the plane then $\nabla_X^{\perp}Y=0$ as the second fundamental form of a plane is zero. For curves $\gamma$, $H=0$ is equivalent to the geodesic equation $\nabla_{\dot{\gamma}}\dot{\gamma}=0$. The most studied minimal submanifolds (other than geodesics) are minimal surfaces in $\R^3$, since here the equation $H=0$ becomes a scalar equation on a surface, which is the simplest to analyse. In general we would have a system of equations, which is more difficult to study. The helicoid $M=\{(t\cos s,t\sin s,s)\in\R^3\,:\,s,t\in\R\}$ is a complete embedded minimal surface, discovered by Meusnier in 1776. The catenoid $M=\{(\cosh t\cos s,\cosh t\sin s ,t)\in\R^3\,:\,s,t\in\R\}$ is a complete embedded minimal surface, discovered by Euler in 1744 and shown to be minimal by Meusnier in 1776. The catenoid is another explicit example which is a critical point for volume but not minimizing. In fact the helicoid and the catenoid are locally isometric, and there is a 1-parameter family of locally isometric minimal surfaces deforming between the catenoid and helicoid: see, for example, [@Gray Theorem 16.5] for details. It took about 70 years to find the next minimal surface, but we now know many examples of minimal surfaces in $\R^3$, as well as in other spaces, found by studying the nonlinear elliptic PDE given by the minimal surface equation.
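The minimal graph equation can be checked numerically on a classical explicit solution: Scherk's surface $f(x,y)=\log(\cos x/\cos y)$, a minimal graph over the square $|x|,|y|<\pi/2$. The sketch below (all function names and the finite-difference step are our own illustrative choices) evaluates $\operatorname{div}\big(\nabla f/\sqrt{1+|\nabla f|^2}\big)$ by central differences and observes that it vanishes up to discretisation error.

```python
import math

# Scherk's surface z = log(cos x / cos y): a classical minimal graph
# over the square |x|, |y| < pi/2.
def f(x, y):
    return math.log(math.cos(x) / math.cos(y))

h = 1e-5  # finite-difference step (illustrative choice)

def grad_f(x, y):
    # central differences for the gradient of f
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return fx, fy

def flux(x, y):
    # the vector field grad f / sqrt(1 + |grad f|^2)
    fx, fy = grad_f(x, y)
    s = math.sqrt(1 + fx * fx + fy * fy)
    return fx / s, fy / s

def minimal_operator(x, y):
    # div( grad f / sqrt(1 + |grad f|^2) ), again by central differences
    d1 = (flux(x + h, y)[0] - flux(x - h, y)[0]) / (2 * h)
    d2 = (flux(x, y + h)[1] - flux(x, y - h)[1]) / (2 * h)
    return d1 + d2

# vanishes at interior points, up to finite-difference error
print(minimal_operator(0.4, -0.3))
```

The same routine applied to a non-minimal graph, say $f(x,y)=x^2+y^2$, returns a value of order one rather than of order the discretisation error.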
The amount of literature in the area is vast, with key results including the proofs of the Lawson [@BLawson], Willmore [@MNWillmore] and Yau [@IMNYau; @MNYau; @SongYau] Conjectures, and minimal surfaces have applications to major problems in geometry including the Positive Mass Theorem [@SYMass; @SYMass2], Penrose Inequality [@HIPenrose] and Poincaré Conjecture [@Perelman]. Introduction to calibrations ============================ As we have seen, minimal submanifolds are extremely important. However there are two key issues. - Minimal submanifolds are defined by a second order nonlinear PDE system – therefore they are hard to analyse. - Minimal submanifolds are only critical points for the volume functional, but we are often interested in minima for the volume functional – we need a way to determine when this occurs. We can help resolve these issues using the notion of calibration and calibrated submanifolds, introduced by Harvey–Lawson [@HL] in 1982. \[calibration.dfn\] A differential $k$-form $\eta$ on a Riemannian manifold $(M,g)$ is a *calibration* if - $\d\eta=0$ and - $\eta(e_1,\ldots,e_k)\leq 1$ for all unit tangent vectors $e_1,\ldots,e_k$ on $M$. Any non-zero form with constant coefficients on $\R^n$ can be rescaled so that it is a calibration with at least one plane where equality holds. This example shows that there are many calibrations $\eta$, but the interesting question is: for which oriented planes $P=\operatorname{Span}\{e_1,\ldots,e_k\}$ does $\eta(e_1,\ldots,e_k)=1$? More importantly, can we find submanifolds $N$ so that this equality holds on each tangent space? This motivates the next definition. \[calibsub.dfn\] Let $\eta$ be a calibration $k$-form on $(M,g)$. An oriented $k$-dimensional submanifold $N$ of $(M,g)$ is *calibrated* by $\eta$ if $\eta|_N=\operatorname{vol}_N$, i.e. if for all $p\in N$ we have $\eta(e_1,\ldots,e_k)=1$ for an oriented orthonormal basis $e_1,\ldots,e_k$ for $T_pN$. Any oriented plane in $\R^n$ is calibrated. 
If we change coordinates so that the plane $P$ is $\{x\in\R^n\,:\,x_{k+1}=\ldots=x_n=0\}$ (with the obvious orientation) then $\eta=\d x_1\wedge\ldots\wedge \d x_k$ is a calibration and $P$ is calibrated by $\eta$. Notice that the calibrated condition is now an algebraic condition on the tangent vectors to $N$, so being calibrated is a *first order nonlinear PDE*. We shall motivate these definitions further later, but for now we make the following observation. \[calibmin.thm\] Let $N$ be a calibrated submanifold. Then $N$ is minimal and, moreover, if $F$ is any variation with compact support $\overline{S}$ then $\operatorname{Vol}(F(S,t))\geq\operatorname{Vol}(S)$; i.e. $N$ is volume-minimizing. In particular, if $N$ is compact then $N$ is volume-minimizing in its homology class. Suppose that $N$ is calibrated by $\eta$ and suppose for simplicity that $N$ is compact. We will show that $N$ is homologically volume-minimizing. Suppose that $N'$ is homologous to $N$. Then there exists a compact manifold $K$ with boundary $-N\cup N'$ and, since $\d\eta=0$, we have by Stokes’ Theorem that $$0=\int_K\d\eta=\int_{N'}\eta-\int_N\eta.$$ We deduce that $$\begin{aligned} \operatorname{Vol}(N)&=\int_N\eta=\int_{N'}\eta\leq\operatorname{Vol}(N').\end{aligned}$$ Since $N$ is volume-minimizing, it is in particular a critical point for the volume functional, i.e. minimal. We conclude this introduction with the following elementary result. There are no compact calibrated submanifolds in $\R^n$. Suppose that $\eta$ is a calibration and $N$ is compact and calibrated by $\eta$. Then $\d\eta=0$ so by the Poincaré Lemma $\eta=\d\zeta$, and hence $$\operatorname{Vol}(N)=\int_N\eta=\int_N\d\zeta=0$$ by Stokes’ Theorem. Although there are many calibrations, the requirement of admitting calibrated submanifolds greatly restricts which calibrations one wants to consider. The calibrations which have calibrated submanifolds have special significance and there is a particular connection with special holonomy, due to the following observations.
Let $G$ be the holonomy group of a Riemannian metric $g$ on an $n$-manifold $M$. Then $G$ acts on the $k$-forms on $\R^n$, so suppose that $\eta_0$ is a $G$-invariant $k$-form. We can always rescale $\eta_0$ so that $\eta_0|_P\leq \operatorname{vol}_P$ for all oriented $k$-planes $P$ and equality holds for at least one $P$. Since $\eta_0$ is $G$-invariant, if $P$ is calibrated then so is $\gamma\cdot P$ for any $\gamma\in G$, which usually means we have quite a few calibrated planes. We know by the *holonomy principle* (see, for example, [@Joycebook2 Proposition 2.5.2]) that we then get a parallel $k$-form $\eta$ on $M$ which is identified with $\eta_0$ at every point. Since $\nabla\eta=0$, we have $\d\eta=0$ and hence $\eta$ is a calibration. Moreover, we have a lot of calibrated tangent planes on $M$, so we can hope to find calibrated submanifolds. Complex submanifolds ==================== We would now like to address the question: where does the calibration condition come from? The answer is from *complex geometry*. On $\R^{2n}=\C^n$ with coordinates $z_j=x_j+iy_j$, we have the complex structure $J$ and the distinguished Kähler 2-form $$\omega=\sum_{j=1}^n\d x_j\wedge \d y_j=\frac{i}{2}\sum_{j=1}^n\d z_j\wedge\d \overline{z}_j.$$ More generally we can work with a *Kähler manifold* $(M,J,\omega)$. Our first key result is the following. \[cxcalib.thm\] On a Kähler manifold $(M,J,\omega)$, $\frac{\omega^k}{k!}$ is a calibration whose calibrated submanifolds are the complex $k$-dimensional submanifolds: i.e. submanifolds $N$ such that $J(T_pN)=T_pN$ for all $p\in N$. Since $\d\omega^k=k\d\omega\wedge\omega^{k-1}=0$, Theorem \[cxcalib.thm\] follows immediately from the following result. \[Wirtinger.thm\] For any unit vectors $e_1,\ldots,e_{2k}\in\C^n$, $$\frac{\omega^k}{k!}(e_1,\ldots,e_{2k})\leq 1$$ with equality if and only if $\operatorname{Span}\{e_1,\ldots,e_{2k}\}$ is a complex $k$-plane in $\C^n$. Before proving this we make the following observation. 
\[starcab.lem\] If $\eta$ is a calibration and $*\eta$ is closed then $*\eta$ is a calibration. Moreover an oriented tangent plane $P$ is calibrated by $\eta$ if and only if there is an orientation on the orthogonal complement $P^{\perp}$ so that it is calibrated by $*\eta$. Suppose that $\eta$ is a calibration $k$-form on $(M,g)$ with $\d\!*\!\eta=0$. Let $p\in M$. Take any $n-k$ orthonormal tangent vectors $e_{k+1},\ldots,e_n$ at $p$. Then there exist $e_1,\ldots,e_k\in T_pM$ so that $\{e_1,\ldots,e_n\}$ is an oriented orthonormal basis for $T_pM$. Since $\{e_1,\ldots,e_n\}$ is an oriented orthonormal basis, we can use the definition of the Hodge star to calculate $$*\eta(e_{k+1},\ldots,e_n)=\eta(e_1,\ldots,e_k)\leq 1.$$ Hence $*\eta$ is a calibration by Definition \[calibration.dfn\]. Moreover, the oriented plane $P=\operatorname{Span}\{e_{k+1},\ldots,e_n\}$ is calibrated by $*\eta$ if and only if there is an orientation on $\operatorname{Span}\{e_1,\ldots,e_k\}=P^{\perp}$ so that it is calibrated by $\eta$, since $\eta(e_1,\ldots,e_k)=\pm *\eta(e_{k+1},\ldots,e_n)=\pm 1$. We can now prove Wirtinger’s inequality. We see that $|\frac{\omega^k}{k!}|^2=\frac{n!}{k!(n-k)!}$ and $\operatorname{vol}_{\C^n}=\frac{\omega^n}{n!}$ so $*\frac{\omega^k}{k!}=\frac{\omega^{n-k}}{(n-k)!}$. Hence, by Lemma \[starcab.lem\], it is enough to study the case where $k\leq \frac{n}{2}$. Let $P$ be any $2k$-plane in $\C^n$ with $2k\leq n$. We shall find a canonical form for $P$. First consider $\langle Ju,v\rangle$ for orthonormal vectors $u,v\in P$. This must have a maximum, so let $\cos\theta_1=\langle Ju,v\rangle$ be this maximum, realised by some orthonormal vectors $u,v\in P$, where $0\leq\theta_1\leq\frac{\pi}{2}$. Suppose that $w\in P$ is a unit vector orthogonal to $\operatorname{Span}\{u,v\}$. The function $$f_w(\theta)=\langle Ju,\cos\theta v+\sin\theta w\rangle$$ has a maximum at $\theta=0$ so $f_w'(0)=\langle Ju,w\rangle =0$.
Similarly we have that $\langle Jv,w\rangle =0$, and thus $w\in\operatorname{Span}\{u,v,Ju,Jv\}^{\perp}$. We then have two cases. If $\theta_1=0$ then $v=Ju$ so we can set $u=e_1,v=Je_1$ and see that $P=\operatorname{Span}\{e_1,Je_1\}\times Q$ where $Q$ is a $2(k-1)$-plane in $\C^{n-1}=\operatorname{Span}\{e_1,Je_1\}^{\perp}$. If $\theta_1\neq 0$ we have that $v=\cos\theta_1 Ju+\sin\theta_1 w$ where $w$ is a unit vector orthogonal to $u$ and $Ju$, so we can let $u=e_1$, $w=e_2$ and see that $P=\operatorname{Span}\{e_1,\cos\theta_1 Je_1+\sin\theta_1 e_2\}\times Q$ where $Q$ is a $2(k-1)$-plane in $\C^{n-2}=\operatorname{Span}\{e_1,Je_1,e_2,Je_2\}^{\perp}$. Proceeding by induction we see that we have an oriented basis $\{e_1,Je_1,\ldots,e_n,Je_n\}$ for $\C^n$ so that $$P=\operatorname{Span}\{e_1,\cos\theta_1 Je_1+\sin\theta_1 e_2,\ldots,e_{2k-1},\cos\theta_k Je_{2k-1}+\sin\theta_k e_{2k}\},$$ where $0\leq\theta_1\leq\ldots\leq\theta_{k-1}\leq\frac{\pi}{2}$ and $\theta_{k-1}\leq\theta_k\leq\pi-\theta_{k-1}$. Since we can write $\omega=\sum_{j=1}^n e^j\wedge Je^j$ we see that $\frac{\omega^k}{k!}$ restricts to $P$ to give a product of $\cos\theta_j$ which is certainly less than or equal to $1$. Moreover, equality holds if and only if all of the $\theta_j=0$ which means that $P$ is complex. Putting together Theorem \[cxcalib.thm\] and Theorem \[calibmin.thm\] yields the following. $\!\!$Compact complex submanifolds of Kähler manifolds are homologically volume-minimizing. We know that complex submanifolds are defined by holomorphic functions; i.e. solutions to the Cauchy–Riemann equations, which are a first-order PDE system, as one would expect for calibrated submanifolds. $N=\{(z,\frac{1}{z})\in\C^2\,:\,z\in\C\setminus\{0\}\}$ is a complex curve in $\C^2$, and thus is calibrated. An important non-trivial example of a Kähler manifold is $\C\P^n$, where the zero set of a system of polynomial equations defines a (possibly singular) complex submanifold. 
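Wirtinger's inequality can also be illustrated numerically. The snippet below is our own sketch: the coordinate ordering $(x_1,y_1,x_2,y_2,x_3,y_3)$ on $\C^3\cong\R^6$ and all function names are our choices. It evaluates $\frac{\omega^2}{2!}$ on random orthonormal 4-frames in $\R^6$ using the standard expansion of $\omega\wedge\omega$ over pairings, and checks that equality holds on the complex plane $\operatorname{Span}\{e_1,Je_1,e_2,Je_2\}$.

```python
import random

# C^3 = R^6 with coordinates (x1, y1, x2, y2, x3, y3);
# omega = dx1^dy1 + dx2^dy2 + dx3^dy3.
def omega(u, v):
    return sum(u[2*j] * v[2*j+1] - u[2*j+1] * v[2*j] for j in range(3))

def half_omega_sq(v1, v2, v3, v4):
    # (omega^2 / 2!)(v1, v2, v3, v4), expanded over the three pairings
    return (omega(v1, v2) * omega(v3, v4)
            - omega(v1, v3) * omega(v2, v4)
            + omega(v1, v4) * omega(v2, v3))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def orthonormal_4frame():
    # Gram--Schmidt applied to random Gaussian vectors in R^6
    frame = []
    while len(frame) < 4:
        v = [random.gauss(0, 1) for _ in range(6)]
        for e in frame:
            c = dot(v, e)
            v = [a - c * b for a, b in zip(v, e)]
        n = dot(v, v) ** 0.5
        if n > 1e-6:
            frame.append([a / n for a in v])
    return frame

random.seed(0)
vals = [half_omega_sq(*orthonormal_4frame()) for _ in range(200)]
print(max(vals) <= 1 + 1e-9)  # Wirtinger: omega^2/2! <= 1 on orthonormal frames

# equality on the complex plane span{e1, Je1, e2, Je2}
e1, Je1 = [1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0]
e2, Je2 = [0, 0, 1, 0, 0, 0], [0, 0, 0, 1, 0, 0]
print(half_omega_sq(e1, Je1, e2, Je2))  # 1
```

Random frames give values strictly below $1$ with probability one, matching the equality case of the theorem.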
Special Lagrangians =================== Complex submanifolds are very familiar, but can we find any other interesting classes of calibrated submanifolds? The answer is that indeed we can, particularly when the manifold has special holonomy. We begin with the case of holonomy $\operatorname{SU}(n)$ – so-called *Calabi–Yau manifolds*. The model example for Calabi–Yau manifolds is $\C^n$ with complex structure $J$, Kähler form $\omega$ and holomorphic volume form $$\Upsilon=\d z_1\wedge\ldots\wedge\d z_n,$$ if $z_1,\ldots,z_n$ are complex coordinates on $\C^n$. Let $M$ be a Calabi–Yau manifold with holomorphic volume form $\Upsilon$. Then $\operatorname{Re}(e^{-i\theta}\Upsilon)$ is a calibration for any $\theta\in\R$. Since $\d\Upsilon=0$, the result follows immediately from the following. \[Lagcalib.thm\] On $\C^n$, $|\Upsilon(e_1,\ldots,e_n)|\leq 1$ for all unit vectors $e_1,\ldots,e_n$ with equality if and only if $P=\operatorname{Span}\{e_1,\ldots,e_n\}$ is a Lagrangian plane, i.e. $P$ is an $n$-plane such that $\omega|_P\equiv 0$. Let $e_1,\ldots,e_n$ be the standard basis for $\R^n$ and let $P$ be an $n$-plane in $\C^n$. There exists $A\in\operatorname{GL}(n,\C)$ so that $f_1=Ae_1,\ldots,f_n=Ae_n$ is an orthonormal basis for $P$. Then $\Upsilon(Ae_1,\ldots,Ae_n)=\det_{\C}(A)$ so $$|\Upsilon(f_1,\ldots,f_n)|^2=|\operatorname{det}_{\C}(A)|^2=|\operatorname{det}_{\R}(A)|=|f_1\wedge Jf_1\wedge\ldots\wedge f_n\wedge J f_n|\leq |f_1||Jf_1|\ldots|f_n||Jf_n|=1$$ with equality if and only if $f_1,Jf_1,\ldots,f_n, Jf_n$ are orthonormal. However, this is exactly equivalent to the Lagrangian condition, since $\omega(u,v)=g(Ju,v)$ so $\omega|_P\equiv 0$ if and only if $JP=P^{\perp}$. A submanifold $N$ of $M$ calibrated by $\operatorname{Re}(e^{-i\theta}\Upsilon)$ is called *special Lagrangian* with phase $e^{i\theta}$. If $\theta=0$ we say that $N$ is simply special Lagrangian. 
By Theorem \[Lagcalib.thm\], we see that $N$ is special Lagrangian if and only if $\omega|_N\equiv 0$ (i.e. $N$ is Lagrangian) and $\operatorname{Im}\Upsilon|_N\equiv 0$ (up to a choice of orientation so that $\operatorname{Re}\Upsilon|_N>0$). Consider $\C=\R^2$ with coordinates $z=x+iy$, complex structure $J$ given by $Jw=iw$, Kähler form $\omega=\d x\wedge\d y=\frac{i}{2}\d z\wedge\d\overline{z}$ and holomorphic volume form $\Upsilon=\d z=\d x+i\d y$. We want to consider the special Lagrangians in $\C$, which are 1-dimensional submanifolds or curves $N$ in $\C=\R^2$. Since $\omega$ is a 2-form, it vanishes on any curve in $\C$. Hence every curve in $\C$ is Lagrangian. For $N$ to be special Lagrangian with phase $e^{i\theta}$ we need that $$\operatorname{Re}(e^{-i\theta}\Upsilon)=\cos\theta\d x+\sin\theta\d y$$ is the volume form on $N$, or equivalently that $$\operatorname{Im}(e^{-i\theta}\Upsilon)=\cos\theta\d y-\sin\theta\d x$$ vanishes on $N$. This means that $\cos\theta\partial_x+\sin\theta\partial_y$ is everywhere a unit tangent vector to $N$, so $N$ is a straight line given by $N=\{(t\cos\theta ,t\sin\theta )\in\R^2\,:\,t\in\R\}$ (up to translation), so it makes an angle $\theta$ with the $x$-axis, hence motivating the term “phase $e^{i\theta}$”. Notice that this result is compatible with the fact that special Lagrangians are minimal, and hence must be geodesics in $\R^2$; i.e. straight lines. Consider $\C^2=\R^4$. We know that $\omega=\d x_1\wedge\d y_1+\d x_2\wedge\d y_2$. Since $\Upsilon=\d z_1\wedge \d z_2=(\d x_1+i\d y_1)\wedge (\d x_2+i\d y_2)$, we also know that $\operatorname{Re}\Upsilon=\d x_1\wedge\d x_2+\d y_2\wedge\d y_1$, which looks somewhat similar. In fact, if we let $J'$ denote the complex structure given by $J'(\partial_{x_1})=\partial_{x_2}$ and $J'(\partial_{y_2})=\partial_{y_1}$, then $\operatorname{Re}\Upsilon=\omega'$, the Kähler form corresponding to the complex structure $J'$.
Hence special Lagrangians in $\C^2$ are complex curves for a different complex structure. In fact, we have a hyperkähler triple of complex structures $J_1,J_2,J_3$, where $J_1=J$ is the standard one and $J_3=J_1J_2=-J_2J_1$ so that $J_1=J_2J_3=-J_3J_2$ and $J_2=J_3J_1=-J_1J_3$, and the corresponding Kähler forms are $\omega=\omega_1$, $\omega_2$, $\omega_3$ which are orthogonal and the same length with $\Upsilon=\omega_2+i\omega_3$. This shows we should only consider complex dimension 3 and higher to find new calibrated submanifolds. Let $f:\R^n\rightarrow \R^n$ be a smooth function and let $N=\text{Graph}(f)\subseteq\R^{2n}=\C^n$. We want to see when $N$ is special Lagrangian. We see that tangent vectors to $N$ are given by $$e_1+i\nabla_{e_1}f,\ldots,e_n+i\nabla_{e_n}f.$$ Hence $N$ is Lagrangian if and only if $$\omega(e_j+i\nabla_{e_j}f,e_k+i\nabla_{e_k}f)=\nabla_{e_k}f_j-\nabla_{e_j}f_k=0$$ for all $j,k$. Since $\R^n$ is simply connected, this occurs if and only if there exists $F$ such that $f_j=\nabla_{e_j}F$; i.e. $f=\nabla F$. Recall that $\Upsilon=\d z_1\wedge\ldots\wedge \d z_n$. We know that $N$ is special Lagrangian if and only if $N$ is Lagrangian and $\operatorname{Im}\Upsilon$ vanishes on $N$. Now $$\Upsilon(a_1+ib_1,\ldots,a_n+ib_n)=\operatorname{det}_{\C}(A+iB)$$ where $A,B$ are the matrices with columns $a_i,b_j$ respectively. Hence $$\Upsilon(e_1+i\nabla_{e_1}\nabla F,\ldots,e_n+i\nabla_{e_n}\nabla F)=\operatorname{det}_{\C}(I+i\text{Hess}\,F),$$ where $\operatorname{Hess}F=(\frac{\partial^2 F}{\partial x_i\partial x_j})$. Therefore $N=\text{Graph}(f)$ is special Lagrangian (up to a choice of orientation) if and only if $f=\nabla F$ and $$\operatorname{Im}\operatorname{det}_{\C}(I+i\operatorname{Hess}F)=0.$$ If $n=2$, $$I+i\text{Hess}\,F=\left(\begin{array}{cc} 1+iF_{xx} & iF_{xy} \\ iF_{yx} & 1+iF_{yy}\end{array}\right).$$ Therefore, the determinant gives $$1-F_{xx}F_{yy}+F_{xy}^2+i(F_{xx}+F_{yy}),$$ then the imaginary part is $F_{xx}+F_{yy}$. 
Therefore, $N$ is special Lagrangian if and only if $\Delta F=0$. As we know, a graph in $\C^2$ of $f=u+iv:\C\to\C$ is a complex surface if and only if $u+iv$ is holomorphic, which implies that $u,v$ are harmonic. We know that special Lagrangians in $\C^2$ are complex surfaces for a different complex structure, so this is expected. If $n=3$, $$I+i\text{Hess}\, F=\left(\begin{array}{ccc} 1+iF_{xx} & i F_{xy} & iF_{xz} \\ iF_{yx} & 1+iF_{yy} & iF_{yz} \\ iF_{zx} & iF_{zy} & 1+iF_{zz} \end{array}\right).$$ Hence, $$\begin{aligned} \operatorname{Im}\operatorname{det}_{\C}(I+i\operatorname{Hess}F)=&\;F_{xx}+F_{yy}+F_{zz}\\ &-F_{xx}(F_{yy}F_{zz}-F_{yz}^2)-F_{xy}(F_{yz}F_{zx}-F_{xy}F_{zz})-F_{zx}(F_{xy}F_{yz}-F_{yy}F_{zx}).\end{aligned}$$ Therefore, $N$ is special Lagrangian if and only if $$\begin{aligned} -\Delta F&=F_{xx}+F_{yy}+F_{zz}\\ &=F_{xx}(F_{yy}F_{zz}-F_{yz}^2)-F_{xy}(F_{xy}F_{zz}-F_{yz}F_{zx})+F_{zx}(F_{xy}F_{yz}-F_{yy}F_{zx})\\ &=\operatorname{det}\operatorname{Hess}F.\end{aligned}$$ We now wish to describe some very important examples of special Lagrangians, which are asymptotic to pairs of planes. $\operatorname{SU}(n)$ acts transitively on the space of special Lagrangian planes with isotropy $\operatorname{SO}(n)$. So any special Lagrangian plane is given by $A\cdot\R^n$ for $A\in\operatorname{SU}(n)$ where $\R^n$ is the standard real $\R^n$ in $\C^n$. Given $\theta=(\theta_1,\ldots,\theta_n)$ we can define a plane $P(\theta)=\{(e^{i\theta_1}x_1,\ldots,e^{i\theta_n}x_n)\in\C^n\,:\,(x_1,\ldots,x_n)\in\R^n\}$ (where we can swap orientation). We see that $P(\theta)$ is special Lagrangian if and only if $\operatorname{Re}\Upsilon|_P=\pm\cos(\theta_1+\ldots+\theta_n)=1$ so that $\theta_1+\ldots+\theta_n\in\pi\Z$. 
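Returning briefly to the graph construction above: the determinant identity underlying the dimension-three equation, namely $\operatorname{Im}\operatorname{det}_{\C}(I+iH)=\operatorname{tr}H-\det H$ for a real symmetric $3\times 3$ matrix $H$ (playing the role of $\operatorname{Hess}F$ at a point), can be sanity-checked numerically. This is our own sketch, and the helper name `det3` is our choice.

```python
import random

def det3(m):
    # cofactor expansion of a 3x3 (possibly complex) matrix
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

random.seed(1)
# a random symmetric H, standing in for Hess F at a point
H = [[0.0] * 3 for _ in range(3)]
for i in range(3):
    for j in range(i, 3):
        H[i][j] = H[j][i] = random.uniform(-1, 1)

# the complex matrix I + iH
M = [[(1 if i == j else 0) + 1j * H[i][j] for j in range(3)] for i in range(3)]

trace_H = H[0][0] + H[1][1] + H[2][2]
lhs = det3(M).imag        # Im det(I + i Hess F)
rhs = trace_H - det3(H)   # tr Hess F - det Hess F
print(abs(lhs - rhs) < 1e-12)  # True
```

The identity is immediate from the eigenvalue expansion $\det(I+iH)=\prod_j(1+i\lambda_j)$, whose imaginary part is $\sum_j\lambda_j-\lambda_1\lambda_2\lambda_3$ for real eigenvalues $\lambda_j$.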
Given any $\theta_1,\ldots,\theta_n\in(0,\pi)$ with $\theta_1+\ldots+\theta_n=\pi$, there exists a special Lagrangian $N$ (called a *Lawlor neck*) asymptotic to $P(0)\cup P(\theta)$: see, for example, [@Joycebook2 Example 8.3.15] or $\S$\[s:angle\] for details. It is diffeomorphic to $\mathcal{S}^{n-1}\times\R$. By rotating coordinates we have a special Lagrangian with phase $i$ asymptotic to $P(-\frac{\theta}{2})\cup P(\frac{\theta}{2})$. The simplest case is when $\theta_1=\ldots=\theta_n=\frac{\pi}{n}$: here $N$ is called the *Lagrangian catenoid*. When $n=2$, under a coordinate change the Lagrangian catenoid becomes the complex curve $\{(z,\frac{1}{z})\in\C^2\,:\,z\in\C\setminus\{0\}\}$ that we saw before. When $n=3$, the only possibilities for the angles are $\sum_i\theta_i=\pi,2\pi$, but if $\sum_i\theta_i=2\pi$ we can rotate coordinates and change the order of the planes so that $P(0)\cup P(\theta)$ becomes $P(0)\cup P(\theta')$ where $\sum_i\theta_i'=\pi$. Hence, given any pair of transverse special Lagrangian planes in $\C^3$, there exists a Lawlor neck asymptotic to their union. Using complex geometry it is easy to classify all of the smooth special Lagrangians in $\C^2$ asymptotic to a pair of transverse planes, and one sees that the Lawlor necks in $\C^2$ are the unique exact special Lagrangians with this property. It is now known that the Lawlor necks are the unique smooth exact special Lagrangians asymptotic to a pair of planes in all dimensions [@IJO]. We can find special Lagrangians in Calabi–Yau manifolds using the following easy result. \[SLfixedpt.prop\] Let $(M,\omega,\Upsilon)$ be a Calabi–Yau manifold and let $\sigma:M\rightarrow M$ be such that $\sigma^2=\text{\emph{Id}}$, $\sigma^*(\omega)=-\omega$, $\sigma^*(\Upsilon)=\overline{\Upsilon}$. Then $\text{Fix}(\sigma)$ is special Lagrangian, if it is non-empty.
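Here is a sketch of why Proposition \[SLfixedpt.prop\] holds, assuming the standard fact that the fixed point set of an isometric involution is a smooth totally geodesic submanifold. If $p\in N=\text{Fix}(\sigma)$ and $u,v\in T_pN$ then $\d\sigma_p u=u$ and $\d\sigma_p v=v$, so $$\omega(u,v)=\omega(\d\sigma_p u,\d\sigma_p v)=(\sigma^*\omega)(u,v)=-\omega(u,v),$$ and hence $\omega|_N\equiv 0$. Moreover, since $\sigma^*\omega=-\omega$ and $\sigma$ is an isometry, $\d\sigma_p$ anticommutes with $J$, so $J$ maps $T_pN$ into its orthogonal complement (the $-1$ eigenspace of $\d\sigma_p$) and thus $\dim N=n$, so $N$ is Lagrangian. Finally, for $u_1,\ldots,u_n\in T_pN$, $$\Upsilon(u_1,\ldots,u_n)=(\sigma^*\Upsilon)(u_1,\ldots,u_n)=\overline{\Upsilon(u_1,\ldots,u_n)},$$ so $\operatorname{Im}\Upsilon|_N\equiv 0$ and $N$ is special Lagrangian up to a choice of orientation.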
Let $X=\{[z_0,\ldots,z_4]\in\C\P^4\,:\,z_0^5+\ldots+z_4^5=0\}$ (the *Fermat quintic*) with its Calabi–Yau structure (which exists by Yau’s solution of the Calabi conjecture since the first Chern class of $X$ vanishes). Let $\sigma$ be the restriction of complex conjugation on $\C\P^4$ to $X$. Then the fixed point set of $\sigma$, which is the real locus in $X$, is a special Lagrangian 3-fold (if it is non-empty). (There is a subtlety here: $\sigma$ is certainly an anti-holomorphic isometric involution for the induced metric on $X$, but this is *not* the same as the Calabi–Yau metric on $X$. Nevertheless, it is the case that $\sigma$ satisfies the conditions of Proposition \[SLfixedpt.prop\].) There exists a Calabi–Yau metric on $T^*\mathcal{S}^n$ (the Stenzel metric [@Stenzel]) so that the base $\mathcal{S}^n$ is special Lagrangian. When $n=2$ this is a hyperkähler metric called the Eguchi–Hanson metric [@EguchiHanson]. Constructing calibrated submanifolds ==================================== It is easy to construct complex submanifolds in Kähler manifolds algebraically. Constructing other calibrated submanifolds is much more challenging because one needs to solve a nonlinear PDE, even in Euclidean space. There are approaches in Euclidean space and other simple spaces which have involved reducing the problem to ODEs or other problems which do not require PDE theory (for example, algebraic methods). For example, we have the following methods, which you can find out more about in [@Joycebook2] or the references provided. - Symmetries/evolution equations [@Goldstein; @HL; @Haskins.SLcones1; @Ionel.Minoo; @Joyce.evol; @Joyce.quadrics; @Joyce.sym; @Joyce.U1fib; @Joyce.U1a; @Joyce.U1b; @Joyce.U1c; @Lotayevol; @Lotaysym]. - Use of integrable systems to study calibrated cones [@Carberry; @Carberry.McIntosh; @Haskins.SLcones2; @Joyce.int; @McIntosh].
- Calibrated cones and ruled smoothings of these cones [@BryantOct; @Bryant.SL; @Fox.coass; @Fox.Cayley; @Joyce.ruled; @Lotayevol; @Lotay2R; @LotayLag]. - Vector sub-bundle constructions [@Ionel.Kar.Minoo; @Kar.Leung; @Kar.Minoo]. - Classification of calibrated submanifolds satisfying pointwise constraints on their second fundamental form [@Bryant.SL; @Fox.thesis; @Ionel.SL; @LotayLag; @Lotayassoc]. However, an important direction which has borne fruit in calibrated geometry and special holonomy recently has been to study the nonlinear PDE head on, especially by perturbative and gluing methods. We want to solve nonlinear PDE, so how do we tackle this? The idea is to use the linear case to help. Suppose we are on a compact manifold $N$ and recall the theory of linear *elliptic* operators $L$ of order $l$ on $N$, including: - the definition of ellipticity of $L$ via the *principal symbol* $\sigma_L$ (which encodes the highest order derivatives in the operator) being an isomorphism; - the use of *Hölder spaces* $C^{k,a}$ to give elliptic regularity theory (so-called *Schauder theory*), namely that if $w\in C^{k,a}$ and $Lv=w$ then $v\in C^{k+l,a}$ and there is a universal constant $C$ so that $$\|v\|_{C^{k+l,a}}\leq C(\|Lv\|_{C^{k,a}}+\|v\|_{C^0})$$ (and we can drop the $\|v\|_{C^0}$ term if $v$ is orthogonal to $\operatorname{Ker}L$); - the adjoint operator $L^*$ and that $\sigma_{L^*}=(-1)^l\sigma_L^*$ so that $L^*$ is elliptic if and only if $L$ is elliptic; and - the Fredholm theory of $L$, namely that $\operatorname{Ker}L$ (and hence $\operatorname{Ker}L^*$) is finite-dimensional, and we can solve $Lv=w$ if and only if $w\in (\operatorname{Ker}L^*)^{\perp}$. We shall discuss this in a model example which we shall use throughout this section. The Laplacian on functions is given by $\Delta f=\d^*\d f$ which in normal coordinates at a point is given by $f\mapsto -\sum_i \frac{\partial^2f}{\partial x_i^2}$, so it is a linear second order differential operator. 
We see that its principal symbol is $\sigma_{\Delta}(x,\xi)f= -|\xi|^2f$ which is an isomorphism for $\xi\in T^*_xN\setminus\{0\}$, so $\Delta$ is elliptic. We therefore have that if $h\in C^{k,a}(N)$ and $\Delta f=h$ then $f\in C^{k+2,a}(N)$, and we have an estimate $$\|f\|_{C^{k+2,a}}\leq C(\|\Delta f\|_{C^{k,a}}+\|f\|_{C^0}).$$ We also know that $\Delta^*=\Delta$ and $\operatorname{Ker}\Delta$ is given by the constant functions (since if $f\in\operatorname{Ker}\Delta$ then $$0=\langle f,\Delta f\rangle_{L^2}=\langle f,\d^*\d f\rangle_{L^2}=\|\d f\|_{L^2}^2$$ so $\d f=0$). Hence, we can solve $\Delta f=h$ if and only if $h$ is orthogonal to the constants, i.e. $\int_N h\operatorname{vol}_N=0$. The operator defining the minimal graph equation for a hypersurface is $$P(f)=-\operatorname{div}\left(\frac{\nabla f}{\sqrt{1+|\nabla f|^2}}\right),$$ which is a nonlinear second order operator whose linearisation $L_0P$ at $0$ is $\Delta$. Thus $P$ is a nonlinear elliptic operator at $0$. If we linearise $P$ at $f_0$ we find a more complicated expression depending on $f_0$, but it is still a perturbation of the Laplacian. Suppose we are on a compact manifold $N$ and we want to solve $P(f)=0$ where $P$ is the minimal graph operator on functions $f$. Let us consider regularity for $f$. We can re-arrange $P(f)=0$ by taking all of the second derivatives to one side as: $$R(x,\nabla f(x))\nabla^2 f(x)=E(x,\nabla f(x))$$ where $x\in N$. Since $L_0P=\Delta$ is elliptic and ellipticity is an open condition we know that the operator $L_f$ (depending on $f$) given by $$L_f(h)(x)=R(x,\nabla f(x))\nabla^2h(x)$$ is a *linear* elliptic operator whenever $\|\nabla f\|_{C^0}$ is small, in particular if $\|f\|_{C^{1,a}}$ is sufficiently small. The operator $L_f$ does not have smooth coefficients, but if $f\in C^{k,a}$ then the coefficients $R\in C^{k-1,a}$. Suppose that $f\in C^{1,a}$ and $\|f\|_{C^{1,a}}$ is small with $P(f)=0$. 
Then $L_f(f)=E(f)$ and $L_f$ is a linear *second order* elliptic operator with coefficients in $C^{0,a}$ and $E(f)$ is in $C^{0,a}$. So by elliptic regularity we can deduce that $f\in C^{2,a}$. We have gained one degree of regularity, so we can “bootstrap”, i.e. proceed by induction and deduce that any $C^{1,a}$ solution to $P(f)=0$ is smooth. $C^{1,a}$-minimal submanifolds (and thus calibrated submanifolds) are *smooth*. More sophisticated techniques can be used to deduce that $C^1$-minimal submanifolds are real analytic [@Morrey]. Notice that elliptic regularity results are *not* valid for $C^k$ spaces, so this result is not obvious. We can also arrange our simple equation $P(f)=0$ as $\Delta f+Q(\nabla f,\nabla^2f)=0$, where $Q$ is nonlinear but linear in $\nabla^2f$. If we know that $\int_N P(f)\operatorname{vol}_N=0$, i.e. that $P(f)$ is orthogonal to the constants, then we can always solve $\Delta f_0=-Q(\nabla f,\nabla^2f)$. We do know that $\int_NP(f)\operatorname{vol}_N=0$ since $P$ is in divergence form. This means we are in the setting for implementing the Implicit Function Theorem for Banach spaces to conclude that we can always solve $P(f)=0$ for some $f$ near $0$, and $f$ will be smooth by our regularity argument above. In general, we will use the following. Let $X,Y$ be Banach spaces, let $U\ni 0$ be open in $X$, let $P:U\rightarrow Y$ with $P(0)=0$ and $L_0P:X\rightarrow Y$ surjective with finite-dimensional kernel $K$. Then, after possibly shrinking $U$, $P^{-1}(0)=\{u\in U\,:\, P(u)=0\}$ is a manifold of dimension $\dim K$. Moreover, if we write $X=K\oplus Z$, then $P^{-1}(0)=\text{\emph{Graph}}\, G$ for some map $G$ from an open set in $K$ to $Z$ with $G(0)=0$. This gives us a way to describe all perturbations of a given calibrated submanifold, as we now see in the special Lagrangian case, due to McLean [@McLean]. \[SLdef.thm\] Let $N$ be a compact special Lagrangian in a Calabi–Yau manifold $M$.
Then the moduli space of deformations of $N$ is a smooth manifold of dimension $b^1(N)$. One should compare this result to the deformation theory for complex submanifolds in Kähler manifolds. There, one does not get that the moduli space is a smooth manifold: in fact, it can be singular, and one has *obstructions* to deformations. It is somewhat remarkable that special Lagrangian calibrated geometry enjoys a much better deformation theory than this classical calibrated geometry. The deformation theory of embedded compact complex submanifolds in Calabi–Yau manifolds has recently been revisited using analytic techniques [@Moore.cpt]. The tubular neighbourhood theorem gives us a diffeomorphism $\exp:S\subseteq\nu(N)\rightarrow T\subseteq M$ which maps the zero section to $N$; in other words, we can write any nearby submanifold to $N$ as the graph of a normal vector field on $N$. We know that $N$ is Lagrangian, so the complex structure $J$ gives an isomorphism between $\nu(N)$ and $TN$ and the metric gives an isomorphism between $TN$ and $T^*N$: $v\mapsto g(Jv,.)=\omega(v,.)=\alpha_v$. Therefore any deformation of $N$ in $T$ is given as the graph of a 1-form. In fact, using the Lagrangian neighbourhood theorem, we can arrange that any $N'\in T$ is the graph of a 1-form $\alpha$, so that if $f_\alpha:N\rightarrow N_{\alpha}$ is the natural diffeomorphism then $$f_{\alpha}^*(\omega)=\d\alpha\quad\text{and}\quad -*f_{\alpha}^*(\operatorname{Im}\Upsilon)=F(\alpha,\nabla\alpha)=\d^*\alpha+Q(\alpha,\nabla\alpha),$$ where the second formula follows from a calculation using the special Lagrangian condition on $N$ and the fact that the ambient structure is Calabi–Yau. Hence, $N_{\alpha}$ is special Lagrangian if and only if $P(\alpha)=(F(\alpha,\nabla\alpha),\d\alpha)=0$. This means that infinitesimal special Lagrangian deformations are given by closed and coclosed 1-forms, which is the kernel of $L_0P$. 
Since $\operatorname{Im}\Upsilon=0$ on $N$ we have that $[\operatorname{Im}\Upsilon]=0$ on $N_\alpha$, which means that $f_{\alpha}^*(\operatorname{Im}\Upsilon)$ is exact. Thus $F(\alpha,\nabla\alpha)=-*f_{\alpha}^*(\operatorname{Im}\Upsilon)$ is coexact and so $$P:C^{\infty}(S)\rightarrow \d^*(C^{\infty}(T^*N))\oplus\d(C^{\infty}(T^*N))\subseteq C^{\infty}(\Lambda^0T^*N\oplus\Lambda^2T^*N).$$ If we let $X=C^{1,a}(T^*N)$, $Y=\d^*(C^{1,a}(T^*N))\oplus\d(C^{1,a}(T^*N))$ and $U=C^{1,a}(S)$ we can apply the Implicit Function Theorem if we know that $$L_0P:\alpha\in X\mapsto (\d^*\alpha,\d\alpha)\in Y$$ is surjective, i.e. given $\d\beta+\d^*\gamma\in Y$ does there exist $\alpha$ such that $\d\alpha=\d\beta$ and $\d^*\alpha=\d^*\gamma$? If we let $\alpha=\beta+\d f$ then we need $\Delta f=\d^*\d f=\d^*(\gamma-\beta)$. Since $$\int_N\d^*(\gamma-\beta)\operatorname{vol}_N=\pm\int_N\d*(\gamma-\beta)=0$$ we can solve the equation for $f$, and hence $L_0P$ is surjective. Therefore $P^{-1}(0)$ is a manifold of dimension $\dim\operatorname{Ker}L_0P=b^1(N)$ by Hodge theory. Moreover, if $P(\alpha)=0$ then $N_{\alpha}$ is special Lagrangian, hence minimal and since $\alpha\in C^{1,a}$ we deduce that $\alpha$ is in fact smooth. The special Lagrangian $\mathcal{S}^n$ in $T^*\mathcal{S}^n$ has $b^1=0$ and so is rigid. Observe that if we have a special Lagrangian $T^n$ in $M$ then $b^1(T^n)=n$ and, if the torus is close to flat then its deformations locally foliate $M$ (as there will be $n$ nowhere vanishing harmonic 1-forms), so we can hope to find special Lagrangian torus fibrations. This cannot happen in compact manifolds without singular fibres, but still motivates the SYZ conjecture in Mirror Symmetry. The deformation result also motivates the following theorem [@Bryant.embed]. Every compact oriented real analytic Riemannian 3-manifold can be isometrically embedded in a Calabi–Yau 3-fold as the fixed point set of an involution. 
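The solvability step in this proof, solving $\Delta f=\d^*(\gamma-\beta)$ precisely because the right-hand side integrates to zero, can be illustrated in the simplest possible setting: the circle, where $\Delta=-\d^2/\d x^2$ and the equation is solved mode-by-mode by dividing the $k$-th Fourier coefficient by $k^2$. The sketch below is our own illustration (all names are ours); it verifies this for a mean-zero trigonometric polynomial. A right-hand side with nonzero mean is obstructed, since $-f''$ always integrates to zero over the circle.

```python
import math

# On S^1, Delta f = -f''. For mean-zero trigonometric h, dividing each
# Fourier mode by k^2 solves Delta f = h; a nonzero mean is obstructed
# because -f'' integrates to zero over the circle.

def h(x):
    return 2 * math.cos(x) + 3 * math.sin(2 * x)      # mean zero

def f(x):
    return 2 * math.cos(x) + 3 * math.sin(2 * x) / 4  # modes divided by k^2

eps = 1e-5
def laplacian(g, x):
    # Delta = -d^2/dx^2 via a second-order central difference
    return -(g(x + eps) - 2 * g(x) + g(x - eps)) / eps**2

# check Delta f = h at sample points around the circle
err = max(abs(laplacian(f, 0.1 * j) - h(0.1 * j)) for j in range(60))
print(err)  # small: finite-difference error only
```

This is exactly the Hodge-theoretic statement used in the deformation argument: on a compact manifold $\Delta f=h$ is solvable if and only if $h$ is $L^2$-orthogonal to the constants.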
Theorem \[SLdef.thm\] has also been extended to certain non-compact, singular and boundary settings, for example in [@Butscherdef; @JoyceCS2; @Pacini1]. Another well-known way to get a solution of a linear PDE from two solutions is simply to add them. However, for a nonlinear PDE $P(v)=0$ this will not work. Intuitively, we can try to add two solutions to give us a solution $v_0$ for which $P(v_0)$ is small. Then we may try to perturb $v_0$ by $v$ to solve $P(v+v_0)=0$. Geometrically, this occurs when we have two calibrated submanifolds $N_1,N_2$ which we glue together to give a submanifold $N$ that is “almost” calibrated, and then deform $N$ to become calibrated. If the two submanifolds $N_1,N_2$ are glued using a very long neck then one can imagine that $N$ is almost the disjoint union of $N_1,N_2$ and so close to being calibrated. If instead one scales $N_2$ by a factor $t$ and then glues it into a singular point of $N_1$, we can again imagine that as $t$ becomes very small $N$ resembles $N_1$ and so again is close to being calibrated. These two examples are in fact related, because if we rescale the shrinking $N_2$ to fixed size, then we get a long neck between $N_1$ and $N_2$ of length of order $-\log t$. However, although these pictures are appealing, they also reveal the difficulty in this approach: as $t$ becomes small, $N$ becomes more “degenerate”, giving rise to analytic difficulties which are encoded in the geometry of $N_1,N_2$ and $N$. These ideas are used extensively in geometry, and particularly successfully in calibrated geometry, e.g. [@Butscher; @HaskinsKapouleas; @JoyceCS5; @JoyceCS3; @JoyceCS4; @YILee; @Lotaydesing; @Lotaydesing2; @Pacini2]. A particularly simple case is the following, which we describe to show the basic idea of the gluing method. Let $N$ be a compact connected 3-manifold and let $i:N\rightarrow M$ be a special Lagrangian immersion with transverse self-intersection points in a Calabi–Yau manifold $M$.
Then there exist embedded special Lagrangians $N_t$ such that $N_t\rightarrow N$ as $t\rightarrow 0$. One might ask about the sense of convergence here: for definiteness, we can say that $N_t$ converges to $N$ in the sense of currents; that is, if we have any compactly supported 3-form $\chi$ on $M$ then $\int_{N_t}\chi\rightarrow\int_N\chi$ as $t\rightarrow 0$. However, any sensible notion of convergence of submanifolds will hold in this setting. Here we only provide a sketch: see, for example, [@JoyceCS5 $\S$9] for a detailed proof. At each self-intersection point of $N$ the tangent spaces are a pair of transverse 3-planes, which we can view as a pair of transverse special Lagrangian 3-planes $P_1,P_2$ in $\C^3$. Since we are in dimension 3, we know that there exists a (unique up to scale) special Lagrangian Lawlor neck $L$ asymptotic to $P_1\cup P_2$. We can then glue $tL$ into $N$ near each intersection point to get a compact embedded submanifold $S_t=N\# tL$ (if we glue in a Lawlor neck for every self-intersection point). We can also arrange that $S_t$ is Lagrangian, i.e. that it is a Lagrangian connect sum. Now we want to perturb $S_t$ to be special Lagrangian. Since $S_t$ is Lagrangian, by the deformation theory we can write any nearby submanifold as the graph of a 1-form $\alpha$, and this graph will be special Lagrangian if and only if (using the same notation as in our deformation theory discussion) $$P_t(\alpha)=(-*f_{\alpha}^*(\operatorname{Im}\Upsilon),f_\alpha^*(\omega))=0.$$ Since $S_t$ is Lagrangian but not special Lagrangian we have that $$f_\alpha^*(\omega)=\d\alpha\quad\text{and}\quad -*f_{\alpha}^*(\operatorname{Im}\Upsilon)=P_t(0)+\d_t^*\alpha+Q_t(\alpha,\nabla\alpha)$$ where $P_t(0)=-*\operatorname{Im}\Upsilon|_{S_t}$ and $\d_t^*=L_0P_t$, which is a perturbation of the usual $\d^*$ since we are no longer linearising at a point where $P_t(0)=0$.
By choosing $\alpha=\d f$, we then have to solve $$\Delta_tf=-P_t(0)-Q_t(\nabla f,\nabla^2 f)$$ where $\Delta_t$ is a perturbation of the Laplacian. For simplicity, let us suppose that $\Delta_t$ is the Laplacian on $S_t$. The idea is to view our equation as a fixed point problem. We know that if we let $X^k=\{f\in C^{k,a}(S_t)\,:\,\int_{S_t}f\operatorname{vol}_{S_t}=0\}$ then $\Delta_t: X^{k+2}\rightarrow X^k$ is an isomorphism so it has an inverse $G_t$. We know by our elliptic regularity result that there exists a constant $C(\Delta_t)$ such that $$\|f\|_{C^{k+2,a}} \leq C(\Delta_t)\|\Delta_tf\|_{C^{k,a}} \Leftrightarrow \|G_th\|_{C^{k+2,a}}\leq C(\Delta_t)\|h\|_{C^{k,a}}$$ for any $f\in X^{k+2}$, $h\in X^k$. We thus see that $P_t(f)=0$ for $f\in X^{k+2}$ if and only if $$f = G_t(- P_t(0) - Q_t(f))= F_t(f).$$ The idea is now to show that $F_t$ is a contraction sufficiently near $0$ for all $t$ small enough. Then it will have a (unique) fixed point near $0$, which will also be smooth because it satisfies $P_t(f)=0$ and hence defines a special Lagrangian as the graph of $\d f$ over $S_t$.
We know that $F_t:X^{k+2}\rightarrow X^{k+2}$ with $$\begin{aligned} \|F_t(f_1) - F_t(f_2)\|_{C^{k+2,a}} &= \|G_t(Q_t(f_1) - Q_t(f_2))\|_{C^{k+2,a}}\leq C(\Delta_t)\|Q_t(f_1) - Q_t(f_2)\|_{C^{k,a}}.\end{aligned}$$ Since $Q_t$ and its first derivatives vanish at $0$ we know that $$\|Q_t(f_1) - Q_t(f_2)\|_{C^{k,a}} \leq C(Q_t)\|f_1 - f_2\|_{C^{k+2,a}} (\|f_1\|_{C^{k+2,a}} + \|f_2\|_{C^{k+2,a}} ).$$ We deduce that $$\|F_t(f_1)-F_t(f_2)\|_{C^{k+2,a}}\leq C(\Delta_t)C(Q_t)\|f_1-f_2\|_{C^{k+2,a}}(\|f_1\|_{C^{k+2,a}} + \|f_2\|_{C^{k+2,a}} )$$ and $$\|F_t(0)\|_{C^{k+2,a}}=\|G_t(P_t(0))\|_{C^{k+2,a}}\leq C(\Delta_t)\|P_t(0)\|_{C^{k,a}}.$$ Hence, $F_t$ is a contraction on $\overline{B_{\epsilon_t}(0)} \subseteq X^{k+2}$ if we can choose $\epsilon_t$ so that $$2C(\Delta_t)\|P_t(0)\|_{C^{k,a}} \leq \epsilon_t \leq \frac{1}{4C(\Delta_t)C(Q_t)}.$$ (This argument also proves Theorem \[SLdef.thm\], where we instead used the Implicit Function Theorem, by hand: there $P_t(0) = P(0) = 0$, so we just need to take $\epsilon_t$ small enough.) In other words, we need that - $P_t(0)$ is small, so $S_t$ is “close” to being calibrated and is a good approximation to $P_t(f) = 0$; - $C(\Delta_t),C(Q_t)$, which are determined by the linear PDE and geometry of $N,L$ and $S_t$, are well-controlled as $t\rightarrow 0$. The statement of the theorem is then that there exist $t$ sufficiently small and $\epsilon_t$ so that the contraction mapping argument works. This is a delicate balancing act since as $t\rightarrow 0$ parts of the manifold are collapsing, so the constants $C(\Delta_t),C(Q_t)$ above (which depend on $t$) can and typically do blow up as $t\rightarrow 0$. To control this, we need to understand the Laplacian on $N,L$ and $S_t$ and introduce “weighted” Banach spaces so that $tL$ gets rescaled to constant size (independent of $t$), and $S_t$ resembles the union of two manifolds with a cylindrical neck (as we described earlier).
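To spell out why the two displayed conditions on $\epsilon_t$ suffice: for $f_1,f_2\in\overline{B_{\epsilon_t}(0)}$ the estimates above give $$\|F_t(f_1)-F_t(f_2)\|_{C^{k+2,a}}\leq 2C(\Delta_t)C(Q_t)\epsilon_t\|f_1-f_2\|_{C^{k+2,a}}\leq\frac{1}{2}\|f_1-f_2\|_{C^{k+2,a}},$$ while $$\|F_t(f)\|_{C^{k+2,a}}\leq\|F_t(0)\|_{C^{k+2,a}}+\frac{1}{2}\|f\|_{C^{k+2,a}}\leq\frac{\epsilon_t}{2}+\frac{\epsilon_t}{2}=\epsilon_t,$$ using $2C(\Delta_t)\|P_t(0)\|_{C^{k,a}}\leq\epsilon_t$, so $F_t$ maps $\overline{B_{\epsilon_t}(0)}$ to itself and is a contraction there.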
It is also crucial to understand the relationship between the kernels and cokernels of the Laplacian on the *non-compact* $N$ (with the intersection points removed), $L$ and compact $S_t$: here is where connectedness is important so that the kernel and cokernel of the Laplacian are 1-dimensional. In more challenging gluing problems it is not possible to show that the relevant map is a contraction, but rather one can instead appeal to an alternative theorem (e.g. the Schauder fixed point theorem) to show that it still has a fixed point. Associative and coassociative submanifolds ========================================== We now want to introduce our calibrated geometry associated with $\operatorname{G}_2$ holonomy. The first key result is the following. \[G2cab.thm\] Let $(M^7,\varphi)$ be a $\operatorname{G}_2$ manifold (so $\varphi$ is a closed and coclosed positive 3-form). Then $\varphi$ and $*\varphi$ are calibrations. Let $u,v,w$ be oriented orthonormal vectors in $\R^7$. There exists an element $A$ of $\operatorname{G}_2$ so that $Au=e_1$. The subgroup of $\operatorname{G}_2$ fixing $e_1$ is isomorphic to $\operatorname{SU}(3)$, and we know from the proof of Wirtinger’s inequality (Theorem \[Wirtinger.thm\]) that there exists a (special) unitary transformation so that $v=e_2$ and $w=\cos\theta e_3+\sin\theta w'$ for some $\theta$ and unit vector $w'$ orthogonal to $e_1,e_2,e_3$. Since $\varphi(e_1,e_2,.)=\d x_3$, we see that $\varphi(u,v,w)=\cos\theta$. Hence, since $\varphi$ is closed, $\varphi$ is a calibration and the calibrated planes are given by $A.\operatorname{Span}\{e_1,e_2,e_3\}$ for $A\in\operatorname{G}_2$. By Lemma \[starcab.lem\], $*\varphi$ is also a calibration. Let us look at the calibrated planes and start with $\varphi$, which we take to be the following on $\R^7$: $$\varphi=\d x_{123}+\d x_{145}+\d x_{167} +\d x_{246} -\d x_{257} -\d x_{347} -\d x_{356},$$ where we use the short-hand notation $\d x_{ij\ldots k}=\d x_i\wedge\d x_j\wedge\ldots\wedge \d x_k$.
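As a quick check, each of the seven terms of $\varphi$ directly exhibits a calibrated plane: the terms with positive sign give $$\varphi|_{P}=\operatorname{vol}_{P}\quad\text{for}\quad P=\operatorname{Span}\{e_1,e_2,e_3\},\;\operatorname{Span}\{e_1,e_4,e_5\},\;\operatorname{Span}\{e_1,e_6,e_7\},\;\operatorname{Span}\{e_2,e_4,e_6\},$$ since all other terms of $\varphi$ restrict to zero on these planes, while the terms with negative sign, such as $-\d x_{257}$, give calibrated planes once the orientation is reversed, e.g. $\operatorname{Span}\{e_2,e_7,e_5\}$.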
Hence, $*\varphi$ on $\R^7$ is given by: $$*\varphi=\d x_{4567}+\d x_{2367} +\d x_{2345} +\d x_{1357} -\d x_{1346} -\d x_{1256} -\d x_{1247}.$$ If $u,v,w$ are orthonormal vectors in $\R^7\cong \operatorname{Im}\O$ (the imaginary octonions), then $\varphi(u,v,w)=\langle u\times v,w\rangle=1$ if and only if $w=u\times v$, so $P=\operatorname{Span}\{u,v,w\}$ is a copy of $\operatorname{Im}\H$ in $\operatorname{Im}\O$; in other words, $\operatorname{Span}\{1,u,v,w\}$ is an associative subalgebra of $\O$. Moreover, suppose we define a vector-valued 3-form $\chi$ on $\R^7$ by $$\chi(u,v,w)=[u,v,w]=u(vw)-(uv)w,$$ known as the *associator*. Then we observe the following. A 3-plane $P$ in $\R^7$ satisfies $\chi|_P\equiv 0$ if and only if $P$ admits an orientation so that it is calibrated by $\varphi$. Since the associator is clearly invariant under $\operatorname{G}_2$ we can put any plane $P$ in standard position using $\operatorname{G}_2$, i.e. as in the proof of Theorem \[G2cab.thm\], we can write $P=\operatorname{Span}\{e_1,e_2,\cos\theta e_3+\sin\theta v\}$ for some $v$ orthogonal to $e_1,e_2,e_3$. We can calculate that $[e_1,e_2,e_3]=0$ whereas $[e_1,e_2,v]\neq 0$ for any $v$ orthogonal to $e_1,e_2,e_3$. Moreover, $P$ is calibrated by $\varphi$ if and only if $\theta=0$. We thus see that $P$ is calibrated by $\varphi$ (up to a choice of orientation) if and only if $\chi|_P\equiv 0$. Hence we call the $\varphi$-calibrated planes *associative*. In general on a $\operatorname{G}_2$ manifold we can define a 3-form $\chi$ with values in $TM$ using the pointwise formula above. For $*\varphi$ we see that $*\varphi|_P=\operatorname{vol}_P$ for a plane $P$ if and only if $\varphi|_{P^{\perp}}=\operatorname{vol}_{P^{\perp}}$. Hence the planes calibrated by $*\varphi$ are the orthogonal complements of the associative planes, so we call them *coassociative*. We have a similar alternative characterisation for 4-planes calibrated by $*\varphi$.
A 4-plane $P$ in $\R^7$ satisfies $\varphi|_P\equiv 0$ if and only if $P$ admits an orientation so that it is calibrated by $*\varphi$. We know that given a 4-plane $P$ we can choose coordinates such that $P^{\perp}=\operatorname{Span}\{e_1,e_2,\cos\theta e_3+\sin\theta (a_4 e_4+a_5e_5+a_6e_6+a_7e_7)\}$ where $\sum_ja_j^2=1$. Then $$\begin{aligned} P=\operatorname{Span}\{&-\sin\theta e_3+\cos\theta \sum\nolimits_ja_je_j,a_5e_4-a_4e_5+a_7e_6-a_6e_7,\\ &a_6e_4-a_7e_5-a_4e_6+a_5e_7,a_7e_4+a_6e_5-a_5e_6 -a_4e_7\}.\end{aligned}$$ We can then see directly that $*\varphi|_P=\cos\theta$. We also have $\varphi(e_i,e_j,e_k)=0$ for $i,j,k\in\{4,5,6,7\}$ and $e_3\lrcorner\varphi$ restricts on $P$ to $-\d x_{47}-\d x_{56}$, so that $\varphi(-\sin\theta e_3+\cos\theta\sum\nolimits_ja_je_j,v,w)$ is a non-zero multiple of $\sin\theta$ for some $v,w\in P$. Hence $\varphi|_P=0$ if and only if $\theta=0$, which is if and only if $P$ is calibrated by $*\varphi$ (again up to a choice of orientation). We can thus define our calibrated submanifolds. Submanifolds in $(M^7,\varphi)$ calibrated by $\varphi$ are called *associative* 3-folds. Moreover, $N$ is associative if and only if $\chi|_N\equiv 0$ (up to a choice of orientation). Submanifolds in $(M^7,\varphi)$ calibrated by $*\varphi$ are called *coassociative* 4-folds. Moreover, $N$ is coassociative if and only if $\varphi|_N\equiv 0$ (up to a choice of orientation). It is instructive to see the form that the associative or coassociative condition takes by studying associative or coassociative graphs in $\R^7$: see [@HL] for details. A simple way to get associative and coassociative submanifolds is by using known geometries. Let $x_1,\ldots,x_7$ be coordinates on $\R^7$ and let $z_j=x_{2j}+ix_{2j+1}$ be coordinates on $\C^3$ so that $\R^7=\R\times\C^3$. - $N=\R\times S\subseteq \R\times\C^3$ is associative or coassociative if and only if $S$ is a complex curve or a special Lagrangian 3-fold with phase $-i$, respectively.
- $N\subseteq\{0\}\times\C^3$ is associative or coassociative if and only if $N$ is a special Lagrangian 3-fold or a complex surface, respectively. Recall the Kähler form $\omega$ and holomorphic volume form $\Upsilon$ on $\C^3$. We can write $$\varphi=\d x_1\wedge\omega+\operatorname{Re}\Upsilon\quad\text{and}\quad *\varphi=\frac{1}{2}\omega^2-\d x_1\wedge\operatorname{Im}\Upsilon.$$ For associatives, we see that $\varphi|_{\R\times S}=\d x_1\wedge\operatorname{vol}_S$ if and only if $\omega|_S=\operatorname{vol}_S$ and $\varphi|_N=\operatorname{Re}\Upsilon|_N$ for $N\subseteq\C^3$. For coassociatives, we see that $*\varphi|_{\R\times S}=\d x_1\wedge\operatorname{vol}_S$ if and only if $-\operatorname{Im}\Upsilon|_S=\operatorname{vol}_S$ and $*\varphi|_N=\frac{1}{2}\omega^2|_N$ for $N\subseteq\C^3$. The results quickly follow. We can also produce examples in $\operatorname{G}_2$ manifolds with an isometric involution. Let $(M,\varphi)$ be a $\operatorname{G}_2$ manifold with an isometric involution $\sigma\neq \operatorname{id}$ such that $\sigma^*\varphi=\varphi$ or $\sigma^*\varphi=-\varphi$. Then $\text{Fix}(\sigma)$ is an associative or coassociative submanifold in $M$ respectively, if it is non-empty. We also have explicit examples of associatives and coassociatives. The first explicit examples of associatives in $\R^7$ not arising from other geometries are given in [@Lotayevol] from symmetry and evolution equation considerations. The first explicit non-trivial examples of coassociatives in $\R^7$ are given in [@HL]. There are two dilation families: one which has one end asymptotic to a cone $C$ on a non-round $\mathcal{S}^3$, and one which has two ends asymptotic to $C\cup\R^4$. The cone $C$ was discovered earlier by Lawson–Osserman [@LO] and was the first example of a volume-minimizing submanifold which is not smooth (it is Lipschitz but not $C^1$). 
In the Bryant–Salamon holonomy $\operatorname{G}_2$ metric on the spinor bundle of $\mathcal{S}^3$ [@BS], the base $\mathcal{S}^3$ is associative. In the Bryant–Salamon holonomy $\operatorname{G}_2$ metrics on $\Lambda^2_+T^*\mathcal{S}^4$ and $\Lambda^2_+T^*\C\P^2$ [@BS], the bases $\mathcal{S}^4$ and $\C\P^2$ are coassociative. We now want to understand deformations of associatives and coassociatives, from which perturbation or gluing results will follow. We begin with associatives. Notice that if $P$ is an associative plane, $u\in P$ and $v\in P^{\perp}$ then $u\times v\in P^{\perp}$ since $\varphi(w,u,v)=g(w,u\times v)=g(v,w\times u)=0$ for all $w\in P$ since $w\times u\in P$. Thus, if $N$ is associative, cross product gives a (Clifford) multiplication $m:C^{\infty}(T^*N\otimes\nu(N))\rightarrow C^{\infty}(\nu(N))$ (viewing tangent vectors as cotangent vectors via the metric). Hence, using the normal connection $\nabla^{\perp}:C^{\infty}(\nu(N)) \rightarrow C^{\infty}(T^*N\otimes\nu(N))$ on $\nu(N)$ we get a linear operator $${\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}=m\circ\nabla^{\perp}:C^{\infty}(\nu(N))\rightarrow C^{\infty}(\nu(N)).$$ We call ${\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}$ the *Dirac operator*. We see that its principal symbol is given by $\sigma_{{\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}}(x,\xi)v=i\xi\times v$, so ${\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}$ is elliptic, and we also have that ${\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}^*={\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}$. 
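In a local orthonormal frame $e_1,e_2,e_3$ for $TN$, this operator takes the familiar Dirac form $${\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}v=\sum_{i=1}^3e_i\times\nabla^{\perp}_{e_i}v,$$ with the cross product playing the role of Clifford multiplication, from which the formula for the principal symbol above is immediate.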
Since a 3-manifold is always spin, we have a spinor bundle $\SS$ on $N$, a connection $\nabla:C^{\infty}(\SS)\rightarrow C^{\infty}(T^*N\otimes \SS)$ (a lift of the Levi-Civita connection) and we have Clifford multiplication $m:C^{\infty}(T^*N\otimes \SS)\rightarrow C^{\infty}(\SS)$ given by $m(\xi,v)=\xi\cdot v$. Hence we have a composition ${\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}=m\circ\nabla:C^{\infty}(\SS)\rightarrow C^{\infty}(\SS)$, which is a first order linear differential operator called the Dirac operator. Locally it is given by ${\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}v= \sum_i e_i\cdot\nabla_{e_i} v$, so we have that $\sigma_{{\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}}(\xi,v)=i\xi\cdot v$. Hence ${\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}$ is elliptic. Moreover ${\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}$ is self-adjoint. In fact, it is possible (see e.g. [@McLean]) to see that the complexified normal bundle $\nu(N)\otimes\C=\SS\otimes V$ for a $\C^2$-bundle $V$ over $N$, so that the Dirac operator on $\nu(N)$ is just a “twist” of the usual Dirac operator on $\SS$. Consider a compact associative $N$. We want to describe the associative deformations of $N$, just as in the case of special Lagrangians above. To be consistent with that previous setting, we will now use $P$ to denote a nonlinear deformation map: we trust that this will not cause confusion given the context. We know that $\exp_v(N)=N_v$, which is the graph of $v$, is associative for a normal vector field $v$ if and only if $*\exp_v^*(\chi)\in C^{\infty}(TM|_N)$ is $0$.
In fact, it turns out that $P(v)=*\exp_v^*(\chi)\in C^{\infty}(\nu(N))$ since $N$ is associative and $$L_0P(v)=*\d(v\lrcorner\chi)={\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}v.$$ Here $L_0P$ is not typically surjective so we cannot apply our Implicit Function Theorem, except when $\operatorname{Ker}{\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}=\operatorname{Ker}{\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}^*=\{0\}$. However, we can still say something in these circumstances, for which we make a small digression to a more general situation. Suppose $X,Y$ are Banach spaces. Let $U\subseteq X$ be an open set with $0\in U$ and let $P:U\rightarrow Y$ be a smooth map with $P(0)=0$ such that $L_0P:X\rightarrow Y$ is Fredholm. Let $\mathcal{I}=\operatorname{Ker}L_0P$ and let $\mathcal{O}$ be such that $Y=L_0P(X)\oplus\mathcal{O}$, which exists and is finite-dimensional by the assumption that $L_0P$ is Fredholm. We define $F:U\oplus\mathcal{O}\rightarrow Y$ by $F(u,y)=P(u)+y$. We see that $L_0F:X\oplus\mathcal{O}\rightarrow Y=L_0P(X)\oplus\mathcal{O}$ is given by $L_0F(x,y)=L_0P(x)+y$ which is surjective and $L_0F(x,y)=0$ if and only if $L_0P(x)=0$ and $y=0$, thus $\operatorname{Ker}L_0F=\operatorname{Ker}L_0P\times\{0\}$. There exists $W\subseteq X$ such that $\operatorname{Ker}L_0P\oplus W=X$. Applying the Implicit Function Theorem, there exist open sets $U_1\subseteq \operatorname{Ker}L_0P$ containing $0$, $U_2\subseteq W$ containing $0$ and $U_3\subseteq\mathcal{O}$ containing $0$ and smooth maps $G_2:U_1\rightarrow U_2$, $G_3:U_1\rightarrow U_3$ such that $$F^{-1}(0)\cap U_1\times U_2\times U_3=\{(u,G_2(u),G_3(u))\,:\,u\in U_1\}.$$ We also know that $P(x)=0$ if and only if $F(x,y)=0$ and $y=0$.
Hence $$P^{-1}(0)\cap U_1\times U_2=\{(u,G_2(u))\,:\,u\in G_3^{-1}(0)\}.$$ Let $\mathcal{U}=U_1$ and define $\pi:\mathcal{U}\rightarrow\mathcal{O}$ by $\pi(u)=G_3(u)$. Then $P^{-1}(0)\cap U_1\times U_2$ is a graph over $\pi^{-1}(0)$, and hence $P^{-1}(0)$ is locally homeomorphic to $\pi^{-1}(0)$. Sard’s Theorem says that generically $\pi^{-1}(y)$ is a smooth manifold of dimension $\dim\mathcal{I}-\dim\mathcal{O}=\dim\operatorname{Ker}L_0P-\dim\text{Coker}\,L_0P$, which is the index of $L_0P$. Hence, the expected dimension of $P^{-1}(0)$ is the index of $L_0P$. In the associative setting we have that the linearisation is ${\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}$, which is elliptic and thus Fredholm, and we know that $\text{index}\,{\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}=\dim\operatorname{Ker}{\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}-\dim\operatorname{Ker}{\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}^*=0$. We deduce the following [@McLean]. \[assocdef.thm\] The expected dimension of the moduli space of deformations of a compact associative 3-fold $N$ in a $\operatorname{G}_2$ manifold is $0$ and infinitesimal deformations of $N$ are given by the kernel of ${\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}$ on $\nu(N)$. Moreover, if $\operatorname{Ker}{\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}=\{0\}$ then $N$ is rigid. The dimension of the kernel of ${\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}$ typically depends on the metric on $N$ rather than just the topology, so it is usually difficult to determine. However, there are some cases where one can ensure the moduli space is smooth cf. [@Gayet]. 
For the associative $N=\mathcal{S}^3$ in $\SS(\mathcal{S}^3)$, $\nu(N)=\SS(\mathcal{S}^3)$ so ${\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}$ is just the usual Dirac operator. A theorem of Lichnerowicz states that $\operatorname{Ker}{\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}=\{0\}$ as $\mathcal{S}^3$ has positive scalar curvature so $N$ is rigid. Corti–Haskins–Nordström–Pacini construct rigid associative $\mathcal{S}^1\times\mathcal{S}^2$s in compact holonomy $\operatorname{G}_2$ twisted connected sums [@CHNP]. For coassociatives, the deformation theory is much better behaved, like for special Lagrangians [@McLean]. Let $N$ be a compact coassociative in a $\operatorname{G}_2$ manifold (or just a 7-manifold with closed $\operatorname{G}_2$ structure). The moduli space of deformations of $N$ is a smooth manifold of dimension $b^2_+(N)$. Since $N$ is coassociative the map $v\mapsto v\lrcorner\varphi=\alpha_v$ defines an isomorphism from $\nu(N)$ to a rank 3 vector bundle on $N$, which is $\Lambda^2_+T^*N$, the 2-forms on $N$ which are self-dual (so $*\alpha=\alpha$). We can therefore view nearby submanifolds to $N$ as graphs of self-dual 2-forms. We know that $N_v=\exp_v(N)$ is coassociative if and only if $\exp_v^*(\varphi)=0$. We see that $$\frac{\d}{\d t}\exp_{tv}^*(\varphi)|_{t=0}=\mathcal{L}_v\varphi=\d(v\lrcorner\varphi)=\d\alpha_v.$$ Hence nearby coassociatives $N'$ to $N$ are given by the zeros of $P(\alpha)=\d\alpha+Q(\alpha,\nabla\alpha)$. Moreover, since $\varphi=0$ on $N$, $[\varphi]=0$ on $N'$ and hence $P:C^{\infty}(\Lambda^2_+T^*N)\rightarrow \d (C^{\infty}(\Lambda^2T^*N))$. Here $P$ is not elliptic, but $L_0P=\d$ has finite-dimensional kernel, the closed self-dual 2-forms, since $\d\alpha=0$ implies that $\d^*\alpha =-*\d*\alpha=0$ so $\alpha$ is harmonic. Moreover, $L_0P$ has injective symbol so it is overdetermined elliptic, which means that elliptic regularity still holds. 
Another way to deal with this is to consider $F(\alpha,\beta)=P(\alpha)+\d^*\beta$ for $\beta$ a 4-form. Now $F^{-1}(0)$ is the product of $P^{-1}(0)$ with the multiples of the volume form, as exact and coexact forms are orthogonal. Moreover, $L_0F(\alpha,\beta)=\d\alpha+\d^*\beta$ is now elliptic. Overall, we can apply our standard Implicit Function Theorem if we know that $$\d(C^{k+1,a}(\Lambda^2_+T^*N))=\d(C^{k+1,a}(\Lambda^2T^*N)).$$ This is true because by Hodge theory if $\alpha$ is a 2-form, we can write $\alpha=\d^*\beta+\gamma$ for a 3-form $\beta$ and a closed form $\gamma$, so $\d\alpha=\d\d^*\beta=\d(\d^*\beta+*\d^*\beta)$ and $\d^*\beta+*\d^*\beta$ is self-dual. The $\mathcal{S}^4$ and $\C\P^2$ in the Bryant–Salamon metrics on $\Lambda^2_+T^*\mathcal{S}^4$ and $\Lambda^2_+T^*\C\P^2$ have $b^2_+=0$ and so are rigid. For a K3 surface and $T^4$ we have $b^2_+=3$ and $\Lambda^2_+$ is trivial, so we can hope to find coassociative K3 and $T^4$ fibrations of compact $\operatorname{G}_2$ manifolds. There is a programme [@Kovalev] for constructing a coassociative K3 fibration (with singular fibres). Towards completing this programme, the first examples of compact coassociative 4-folds with conical singularities in compact holonomy $\operatorname{G}_2$ twisted connected sums were constructed in [@Lotaystab]. Again, we have a similar isometric embedding result for coassociative 4-folds, motivated by the deformation theory result [@Bryant.embed]. Any compact oriented real analytic Riemannian 4-manifold whose bundle of self-dual 2-forms is trivial can be isometrically embedded in a $\operatorname{G}_2$ manifold as the fixed points of an isometric involution. The deformation theory results for compact associative and coassociative submanifolds have been extended to certain non-compact, singular and boundary settings, for example in [@GayetWitt; @JoySalur; @KovalevLotay; @LotayCScoass; @LotayACcoass; @LotayACass].
Cayley submanifolds =================== We now discuss our final class of calibrated submanifolds. On a $\operatorname{Spin}(7)$ manifold $(M^8,\Phi)$ (so $\Phi$ is a closed admissible form), $\Phi$ is a calibration. Let $P$ be a 4-plane in $\R^8\cong\C^4$. Since $\operatorname{SU}(4)\subseteq\operatorname{Spin}(7)$, by the proof of Wirtinger’s inequality (Theorem \[Wirtinger.thm\]), we can choose $A\in\operatorname{Spin}(7)$ so that $A(P)$ is spanned by $$\{e_1,\cos\theta_1 e_2+\sin\theta_1e_3,e_5,\cos\theta_2 e_6+\sin\theta_2e_7\}.$$ We take the standard $\operatorname{Spin}(7)$ form $\Phi$ on $\R^8$ to be: $$\begin{aligned} \Phi=&\;\d x_{1234}+\d x_{1256}+\d x_{1278} +\d x_{1357} -\d x_{1368} -\d x_{1458} -\d x_{1467}\\ &+\d x_{5678}+\d x_{3478} +\d x_{3456} +\d x_{2468} -\d x_{2457} -\d x_{2367} -\d x_{2358},\end{aligned}$$ again using the notation $\d x_{ij\ldots k}=\d x_i\wedge\d x_j\wedge\ldots\wedge \d x_k$. Then $\Phi|_P=(\cos\theta_1\cos\theta_2+\sin\theta_1\sin\theta_2)\operatorname{vol}_P=\cos(\theta_1-\theta_2)\operatorname{vol}_P$. Hence $\Phi$ is a calibration as it is closed. We can thus define our calibrated submanifolds in $\operatorname{Spin}(7)$ manifolds. Submanifolds in $(M^8,\Phi)$ calibrated by $\Phi$ are called Cayley 4-folds. The name Cayley submanifold comes from the relation between these submanifolds and the octonions or Cayley numbers $\O$. We can relate Cayley submanifolds to all of the other calibrated geometries we have seen. - Complex surfaces and special Lagrangian 4-folds in $\C^4$ are Cayley in $\R^8=\C^4$. - Write $\R^8=\R\times\R^7$. Then $\R\times S$ is Cayley if and only if $S$ is associative in $\R^7$ and $N\subseteq \R^7$ is Cayley in $\R^8$ if and only if $N$ is coassociative in $\R^7$. Recall the Kähler form $\omega$ and holomorphic volume form $\Upsilon$ on $\C^4$ and the $\operatorname{G}_2$ 3-form $\varphi$ on $\R^7$.
Part (a) is immediate from the formula $\Phi=\frac{1}{2}\omega^2+\operatorname{Re}\Upsilon$, since complex surfaces are calibrated by $\frac{1}{2}\omega^2$, special Lagrangians are calibrated by $\operatorname{Re}\Upsilon$, $\Upsilon$ vanishes on complex surfaces and $\omega$ vanishes on special Lagrangians. Part (b) follows immediately from the formula $\Phi=\d x_1\wedge\varphi+*\varphi$. We can also use an isometric involution to construct Cayley submanifolds as in our previous calibrated geometries. Let $(M,\Phi)$ be a $\operatorname{Spin}(7)$ manifold and let $\sigma\neq\operatorname{id}$ be an isometric involution with $\sigma^*\Phi=\Phi$. Then $\text{Fix}(\sigma)$ is a Cayley submanifold, if it is non-empty. The first interesting explicit examples of Cayleys in $\R^8$ not arising from other geometries were given in [@Lotay2R] and are asymptotic to cones. The base $\mathcal{S}^4$ in the Bryant–Salamon holonomy $\operatorname{Spin}(7)$ metric on $\SS_+(\mathcal{S}^4)$ [@BS] is Cayley. We now discuss deformations of a compact Cayley $N$, for which we need some discussion of algebra related to $\operatorname{Spin}(7)$. Since $\Lambda^2(\R^8)^*$ is 28-dimensional and the 21-dimensional Lie algebra of $\operatorname{Spin}(7)$ sits inside the space of $2$-forms, we must have a distinguished $7$-dimensional subspace $\Lambda^2_7$ of 2-forms on $\R^8$. So what is this subspace? Let $u,v\in\R^8$. Then we can construct a 2-form $u\wedge v$, viewing $u,v$ as cotangent vectors. We can also construct a 2-form from $u,v$ by considering $\Phi(u,v,.,.)$. It is then true that $$\Lambda^2_7=\{u\wedge v+\Phi(u,v,.,.)\,:\,u,v\in\R^8\}.$$ When $P$ is a Cayley plane and $u,v\in P$ are orthogonal we see that $\Phi(u,v,.,.)=*_P(u\wedge v)$ so that $u\wedge v+\Phi(u,v,.,.)$ is self-dual on $P$. Since $\Lambda^2_+P^*$ is 3-dimensional, we see that there must be a 4-dimensional space $E$ of 2-forms on $P$ such that $\Lambda^2_7|_P=\Lambda^2_+P^*\oplus E$.
Moreover, if $u\in P$ and $v\in P^{\perp}$ then $m(u,v)=u\wedge v+\Phi(u,v,.,.)\in E$ and the map $m:P\times P^{\perp}\rightarrow E$ is surjective. Now let us move to a Cayley submanifold $N$ in a $\operatorname{Spin}(7)$ manifold $(M,\Phi)$. On $M$ we have a rank 7 bundle $\Lambda^2_7$ of 2-forms and we have that $\Lambda^2_7|_N=\Lambda^2_+T^*N\oplus E$ for some rank 4 bundle $E$ over $N$. The map $m$ above defines a (Clifford) multiplication $m:C^{\infty}(T^*N\otimes \nu(N))\rightarrow C^{\infty}(E)$ (viewing tangent vectors as cotangent vectors via the metric), and thus using the normal connection $\nabla^{\perp}:C^{\infty}(\nu(N))\rightarrow C^{\infty}(T^*N\otimes \nu(N))$ we get a linear first order differential operator $${\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}_+=m\circ\nabla^{\perp}:C^{\infty}(\nu(N))\rightarrow C^{\infty}(E).$$ Again this is an elliptic operator called the *positive Dirac operator*, but it is not self-adjoint: its adjoint is the negative Dirac operator from $E$ to $\nu(N)$. If $N$ is spin, the spinor bundle $\SS$ splits as $\SS_+\oplus\SS_-$, and the Dirac operator ${\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}$ splits into ${\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}_{\pm}$ from $\SS_{\pm}$ to $\SS_{\mp}$ so that ${\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}(v_+,v_-)=({\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}_-v_-,{\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}_+v_+)$. Hence ${\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}^*={\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}$ says that ${\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}_{\pm}^*={\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}_{\mp}$.
It turns out (see, for example, [@McLean]) that there exists a $\C^2$-bundle $V$ on $N$ so that $\nu(N)\otimes\C=\SS_+\otimes V$, $E\otimes\C=\SS_-\otimes V$ and ${\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}_+$ on $\nu(N)$ is a “twist” of the usual positive Dirac operator. However, not every 4-manifold is spin, so we cannot always make this identification. On $\O$ there exists a 4-fold cross product, whose real part gives $\Phi$ and whose imaginary part we call $\tau$. Perhaps unsurprisingly, we have the following result, which we will leave as an exercise for the reader. A 4-plane $P$ in $\R^8$ satisfies $\tau|_P\equiv 0$ if and only if it admits an orientation so that it is calibrated by $\Phi$. We can extend $\tau$ to a $\operatorname{Spin}(7)$ manifold, except that we need a rank 7 vector bundle on $M$ in which $\tau$ takes values: we have one, namely $\Lambda^2_7$. So we have the following alternative characterisation of Cayley 4-folds. A submanifold $N$ in a $\operatorname{Spin}(7)$ manifold is Cayley (up to a choice of orientation) if and only if $\tau\in C^{\infty}(\Lambda^4T^*M;\Lambda^2_7)$ vanishes on $N$. Now suppose that $N$ is a compact Cayley 4-fold. Then the zeros of $F(v)=*\exp_v^*(\tau)$ for $v\in C^{\infty}(\nu(N))$ define Cayley deformations (as the graph of $v$). We know that $F$ takes values in $\Lambda^2_7|_N=\Lambda^2_+T^*N\oplus E$ and it turns out that $$L_0F(v)=*\d(v\lrcorner\tau)={\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}_+v$$ since $N$ is Cayley. So, we potentially have a problem because $F$ does not necessarily take values only in $E$ (and in general it will not just take values in $E$). However, the Cayley condition on $N$ means that $F(v)=0$ if and only if $P(v)=\pi_EF(v)=0$, where $\pi_E$ is the projection onto $E$ (again, we are using $P$ to denote the nonlinear deformation map as in our previous discussion, and we expect it will not cause confusion given the context).
Then the operator $P:C^{\infty}(\nu(N))\rightarrow C^{\infty}(E)$ satisfies $L_0P={\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}_+$, which is elliptic. Again, we cannot say that $L_0P$ is surjective, so we have the following using the same argument as in the lead-up to Theorem \[assocdef.thm\], cf. [@McLean]. \[Caydef.thm\] The expected dimension of the moduli space of deformations of a compact Cayley 4-fold $N$ in a $\operatorname{Spin}(7)$ manifold is $\text{\emph{ind}}\,{\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}_+=\dim\operatorname{Ker}{\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}_+-\dim\operatorname{Ker}{\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}_+^*$ with infinitesimal deformations given by $\operatorname{Ker}{\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}_+$ on $\nu(N)$. Moreover, $$\text{\emph{ind}}\,{\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}_+=\frac{1}{2}\sigma(N)+\frac{1}{2}\chi(N)-[N].[N],$$ where $\sigma(N)=b^2_+(N)-b^2_-(N)$ (the signature of $N$), $\chi(N)=2b^0(N)-2b^1(N)+b^2(N)$ (the Euler characteristic of $N$) and $[N].[N]$ is the self-intersection of $N$, which is the Euler number of $\nu(N)$. For the Cayley $N=\mathcal{S}^4$ in $\SS_+(\mathcal{S}^4)$, $\nu(N)=\SS_+(\mathcal{S}^4)$ and ${\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}_+$ is the usual positive Dirac operator. Again, since $N$ has positive scalar curvature, we see that $\operatorname{Ker}{\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}_{\pm}=\{0\}$ so $N$ is rigid. Theorem \[Caydef.thm\] has been extended to various other non-compact, singular and boundary settings, for example in [@Moore.CS; @Ohst1; @Ohst2].
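The index formula is easy to evaluate from Betti-number data. The sketch below (our own helper, not from the text) encodes it and checks the example $N=\mathcal{S}^4$: since $\operatorname{Ker}{\ensuremath \hspace{1pt}\raisebox{0.025cm}{\slash}\hspace{-0.24cm} D}_{\pm}=\{0\}$ the index vanishes, which via the formula pins down the self-intersection as $[N].[N]=\frac{1}{2}\sigma+\frac{1}{2}\chi=1$.

```python
from fractions import Fraction

def cayley_index(b0, b1, b2_plus, b2_minus, self_intersection):
    """Expected dimension ind = sigma/2 + chi/2 - [N].[N] of the Cayley
    moduli space, computed from the Betti numbers of the compact 4-fold N."""
    sigma = b2_plus - b2_minus                   # signature sigma(N)
    chi = 2 * b0 - 2 * b1 + b2_plus + b2_minus   # Euler characteristic chi(N)
    return Fraction(sigma, 2) + Fraction(chi, 2) - self_intersection

# N = S^4: b0 = 1, b1 = 0, b2 = 0.  Rigidity (Ker D_+- = 0) forces ind = 0,
# which is consistent exactly when [N].[N] = 1.
print(cayley_index(1, 0, 0, 0, 1))  # 0
```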
The angle theorem {#s:angle} ================= To conclude these notes, we now discuss a very natural and elementary problem in Euclidean geometry where calibrations play a major, and perhaps unexpected, role. If one takes two lines in $\R^2$ intersecting transversely, then their union is never length-minimizing. A natural question to ask is: does this persist in higher dimensions? In other words, when is the union of two transversely intersecting $n$-planes in $\R^{2n}$ volume-minimizing? Two such planes are determined by the $n$ angles between them as follows. \[angles.lem\] Let $P,Q$ be oriented $n$-planes in $\R^{2n}$. There exists an orthonormal basis $e_1,\ldots,e_{2n}$ for $\R^{2n}$ such that $P=\operatorname{Span}\{e_1,\ldots,e_n\}$ and $$Q=\operatorname{Span}\{\cos\theta_1 e_1+\sin\theta_1e_{n+1},\ldots,\cos\theta_n e_n+\sin\theta_n e_{2n}\}$$ where $0\leq\theta_1\leq\ldots\leq\theta_{n-1}\leq\frac{\pi}{2}$ and $\theta_{n-1}\leq\theta_n\leq\pi-\theta_{n-1}$. These angles are called the *characterising angles* of $P,Q$. The proof is very similar to the argument in the proof of Wirtinger’s inequality (Theorem \[Wirtinger.thm\]). Choose a unit vector $e_1\in P$ and maximise $\langle e_1,u_1\rangle$ for $u_1\in Q$, and let $e_{n+1}\in P^{\perp}$ be defined by $u_1=\cos\theta_1e_1+\sin\theta_1e_{n+1}$. Now choose $e_2\in P\cap e_1^{\perp}$ and maximise $\langle e_2,u_2\rangle$ for $u_2\in Q\cap u_1^{\perp}$, then proceed by induction. If the characterising angles of $P,Q$ are $\theta_1,\ldots,\theta_n$, then the characterising angles of $P,-Q$ are $\psi_1,\ldots,\psi_n$ where $\psi_j=\theta_j$ for $j=1,\ldots,n-1$ and $\psi_n=\pi-\theta_n$. The idea of the following theorem is that $P\cup Q$ is volume-minimizing if $P,-Q$ are not too close together [@Lawlor]. Let $P,Q$ be oriented transverse $n$-planes in $\R^{2n}$ and let $\psi_1,\ldots,\psi_n$ be the characterising angles between $P,-Q$.
Then $P\cup Q$ is volume-minimizing if and only if $\psi_1+\ldots+\psi_n\geq \pi$. Notice that this criterion cannot be fulfilled in dimension 1. We will sketch the proof, which involves calibrations in a fundamental way in both directions. For details, see [@Harvey]. First, if $P\cup Q$ does not satisfy the angle condition, we can choose coordinates by Lemma \[angles.lem\] so that $P=P(-\frac{\psi}{2})$ and $-Q=P(\frac{\psi}{2})$ where $\psi=(\psi_1,\ldots,\psi_n)$ and $P(\psi)=\{(e^{i\psi_1}x_1,\ldots,e^{i\psi_n}x_n)\,:\,(x_1,\ldots,x_n)\in\R^n\}$ as given earlier. We know that we have a special Lagrangian Lawlor neck $N$ asymptotic to $P(-\frac{\psi'}{2})\cup P(\frac{\psi'}{2})$ for any $\psi'$ where $\sum_{i=1}^n\psi'_i=\pi$. The claim is then that since $\sum \psi_i<\pi$ we can find $\psi'$ so that $\sum\psi'_i=\pi$ and $N\cap P(\pm\frac{\psi'}{2})$ is non-empty (in fact, an ellipsoid). This is actually a way to characterise $N$. Hence, since $N$ is calibrated by $\operatorname{Im}\Upsilon$ and $\operatorname{Im}\Upsilon|_{P\cup Q}<\operatorname{vol}_{P\cup Q}$, $P\cup Q$ cannot be volume-minimizing by the usual Stokes’ Theorem argument for calibrated submanifolds. We now provide a few extra details, for which we need to describe $N$. For maps $z_1,\ldots,z_n:\R\rightarrow\C$ define $$N=\{(t_1z_1(s),\ldots,t_n z_n(s))\in\C^n\,:\,s\in\R,\,t_1,\ldots,t_n\in\R,\, \sum_{j=1}^nt_j^2=1\}.$$ It is not difficult to calculate that $N$ is special Lagrangian with phase $i$ (so calibrated by $\operatorname{Im}\Upsilon$) if and only if $$\overline{z_j}\frac{\d z_j}{\d s}=i f_j \overline{z_1\ldots z_n}$$ for positive real functions $f_j$. Suppose that $f_j=1$ for all $j$. Write $z_j=r_je^{i\theta_j}$, let $\theta=\sum_{j=1}^n\theta_j$ and suppose that $z_j(0)=c_j>0$. From the differential equation, one quickly sees that $r_j^2=c_j^2+u$ for some function $u$ with $u(0)=0$ and $r_1\ldots r_n\cos\theta=c_1\ldots c_n$.
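Both conservation laws can be verified by direct numerical integration of the system with $f_j\equiv 1$, i.e. $\frac{\d z_j}{\d s}=i\,\overline{\prod_{k\neq j}z_k}$. The following sketch (our own, with an ad hoc RK4 stepper) checks that $r_j^2-c_j^2$ is independent of $j$ and that $r_1\cdots r_n\cos\theta$ stays equal to $c_1\cdots c_n$:

```python
import numpy as np

def rhs(z):
    # bar(z_j) z_j' = i bar(z_1 ... z_n) with f_j = 1, i.e.
    # z_j' = i * conj(prod_{k != j} z_k)
    return 1j * np.conj(np.prod(z) / z)

def rk4(z, ds, steps):
    for _ in range(steps):
        k1 = rhs(z)
        k2 = rhs(z + 0.5 * ds * k1)
        k3 = rhs(z + 0.5 * ds * k2)
        k4 = rhs(z + ds * k3)
        z = z + (ds / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return z

c = np.array([1.0, 1.2])
z = rk4(c.astype(complex), 1.0e-3, 1000)   # integrate to s = 1

print(abs(z[0]) ** 2 - abs(z[1]) ** 2)     # stays c_1^2 - c_2^2 = -0.44
print(np.prod(z).real)                     # stays c_1 c_2 = 1.2
```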
If we now suppose that $u=t^2$, we see that $$\theta_j(t)=\int_0^{t}\frac{a_j\,\d s}{(1+a_js^2)\sqrt{\frac{1}{s^2}\big((1+a_1s^2)\ldots(1+a_ns^2)-1\big)}}$$ where $a_j=c_j^{-2}$. We observe that $\theta\rightarrow\pm\frac{\pi}{2}$ as $t\rightarrow\pm\infty$ and hence $N$, which is a Lawlor neck, is asymptotic to a pair of planes where the sum of the angles is $\pm\frac{\pi}{2}$. Now fix $t>0$ and define $$f:X=\{(a_1,\ldots,a_n)\in\R^n\,:\,a_j\geq 0\}\rightarrow Y=\{(\theta_1,\ldots,\theta_n)\in\R^n\,:\,\theta_j\geq 0,\sum_{j=1}^n\theta_j<\frac{\pi}{2}\}$$ by $f(a_1,\ldots,a_n)=(\theta_1,\ldots,\theta_n)$ where $$\theta_j=\int_0^{t}\frac{a_j\,\d s}{(1+a_js^2)\sqrt{\frac{1}{s^2}\big((1+a_1s^2)\ldots(1+a_ns^2)-1\big)}}.$$ It is clear that if $n=1$, $f:X\rightarrow Y$ is surjective. We want to show it is surjective for all $n$. For $\theta\in (0,\frac{\pi}{2})$ define $H_{\theta}=\{(\theta_1,\ldots,\theta_n)\in Y\,:\,\sum_{j=1}^n\theta_j=\theta\}$. By our discussion above we see that $$f^{-1}(H_{\theta})\subseteq S_{\theta}=\{(a_1,\ldots,a_n)\in X\,:\,(1+a_1t^2)\ldots(1+a_nt^2)=\cos^{-2}\theta\}.$$ Notice that if the degree of $f:\partial S_{\theta}\rightarrow\partial H_{\theta}$ is 1 then the degree of $f:S_{\theta}\rightarrow H_{\theta}$ is 1. Thus, by induction on $n$, we see that $f:S_{\theta}\rightarrow H_{\theta}$ is surjective. Now, given any plane $\{(e^{i\theta_1}x_1,\ldots,e^{i\theta_n}x_n)\,:\,(x_1,\ldots,x_n)\in\R^n\}$ where $(\theta_1,\ldots,\theta_n)\in Y$, $\theta_j\neq 0$ for all $j$, we see that we can choose a Lawlor neck $N$ which intersects the plane in a hypersurface as claimed. We now move to the other direction in the statement of the Angle Theorem.
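Before doing so, we note that the integral formula for $\theta_j$ is easy to check numerically. The sketch below (our own, using SciPy quadrature) integrates it for $n=2$ and confirms that the angle sum approaches $\frac{\pi}{2}$ at one end of the neck; for $n=1$ the integral is $\arctan(\sqrt{a}\,t)$ exactly, a useful sanity check.

```python
import numpy as np
from scipy.integrate import quad

def theta_j(j, a, T):
    """theta_j(T) = int_0^T a_j ds / ((1 + a_j s^2) sqrt(g(s))), where
    g(s) = ((1 + a_1 s^2) ... (1 + a_n s^2) - 1) / s^2."""
    def integrand(s):
        g = (np.prod(1.0 + a * s * s) - 1.0) / (s * s)
        return a[j] / ((1.0 + a[j] * s * s) * np.sqrt(g))
    return quad(integrand, 0.0, T, limit=200)[0]

a = np.array([1.0, 2.0])                       # a_j = c_j^{-2}
total = sum(theta_j(j, a, 500.0) for j in range(len(a)))
print(total, np.pi / 2)                        # total ~ pi/2
```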
If $P\cup Q$ does satisfy the angle condition, then (by choosing coordinates so that $P=\R^n$ and $Q$ is in standard position) we claim that it is calibrated by a so-called *Nance calibration*: $$\eta(u_1,\ldots,u_n)=\operatorname{Re}\big((\d x_1+u_1\d y_1)\wedge\ldots\wedge(\d x_n+u_n\d y_n)\big)$$ where $u_1,\ldots,u_n\in\mathcal{S}^2\subseteq\Im\H$. If $u_m=i$ for all $m$ then $\eta=\operatorname{Re}\Upsilon$, so it is plausible that $\eta$ is a calibration in general; we now show that this is indeed the case. Let $x_1,y_1,\ldots,x_n,y_n$ be coordinates on $\R^{2n}$. We call an $n$-form $\eta$ on $\R^{2n}$ a torus form if $\eta$ lies in the span of forms of type $$\d x_{i_1}\wedge\ldots\wedge\d x_{i_k}\wedge \d y_{j_1}\wedge\ldots\wedge\d y_{j_l}$$ where $\{i_1,\ldots,i_k\}\cap\{j_1,\ldots,j_l\}=\emptyset$ and $\{i_1,\ldots,i_k\}\cup\{j_1,\ldots,j_l\}=\{1,\ldots,n\}$. We now claim that a torus form $\eta$ is a calibration if and only if $$\eta(\cos\theta_1e_1+\sin\theta_1e_{n+1},\ldots,\cos\theta_ne_n+\sin\theta_{n}e_{2n})\leq 1$$ for all $\theta_1,\ldots,\theta_n\in\R$. For $n=1$, $\eta=\d x_1\wedge\d y_1$ which is a calibration. Suppose that the result holds for $n=k$. Let $\eta$ be a torus form on $\R^{2(k+1)}$ and rescale $\eta$ so that its maximum is $1$ and is attained at some plane. The idea is then to use the argument from the proof of Wirtinger’s inequality, which puts planes in standard position, to show that we can write $\eta=e_1\wedge \eta_1+e_2\wedge \eta_2$, where $e_1,e_2$ are orthonormal and span an $\R^2$ and $\eta_1,\eta_2$ are torus forms on $\R^{2k}$. The claim then follows by induction on $n$. Hence, the Nance calibration $\eta$ above is a calibration, and moreover we know $P(\theta)$ is calibrated by $\eta(u)$ if and only if $$\prod_{j=1}^n(\cos\theta_j+\sin\theta_ju_j)=1.$$ We then just need to find the $u_j$ determined by $\theta_j$.
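As the text goes on to explain, the product condition has a telescoping solution: writing $\cos\theta_j+\sin\theta_j u_j=w_j\overline{w}_{j+1}$ for unit imaginary quaternions $w_j$ with $w_{n+1}=w_1$, each factor has scalar part $\langle w_j,w_{j+1}\rangle$ and the product collapses to $w_1\overline{w}_1=1$. A quick numerical check (our own quaternion helpers, not from the text):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions represented as (w, x, y, z)."""
    a, u = p[0], p[1:]
    b, v = q[0], q[1:]
    return np.concatenate(([a * b - u @ v], a * v + b * u + np.cross(u, v)))

def qconj(q):
    return np.concatenate(([q[0]], -q[1:]))

rng = np.random.default_rng(0)
n = 5
# n random unit imaginary quaternions w_1, ..., w_n, with w_{n+1} = w_1.
w = [np.concatenate(([0.0], v / np.linalg.norm(v)))
     for v in rng.normal(size=(n, 3))]
w.append(w[0])

prod = np.array([1.0, 0.0, 0.0, 0.0])
for j in range(n):
    factor = qmul(w[j], qconj(w[j + 1]))
    # scalar part of w_j * conj(w_{j+1}) is <w_j, w_{j+1}> = cos(theta_j)
    assert np.isclose(factor[0], w[j][1:] @ w[j + 1][1:])
    prod = qmul(prod, factor)

print(np.round(prod, 12))  # telescopes to the identity quaternion (1, 0, 0, 0)
```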
Notice that the condition that $\psi_1+\ldots+\psi_n\geq\pi$ holds if and only if $\theta_n\leq\theta_1+\ldots+\theta_{n-1}$. If we write $\cos\theta_j+\sin\theta_ju_j=w_j\overline{w}_{j+1}$ where $w_{n+1}=w_1$ and $w_j$ are unit imaginary quaternions then the product condition is satisfied and we just need $\langle w_j,w_{j+1}\rangle =\cos\theta_j$, which is equivalent to finding $n$ points on the unit 2-sphere so that $d(w_j,w_{j+1})=\theta_j$, where $\theta_n\leq \theta_1+\ldots+\theta_{n-1}$. This is indeed possible, by considering an $n$-sided spherical polygon. [99]{} S. Brendle, *Embedded minimal tori in $S^3$ and the Lawson conjecture*, Acta Math. [**211**]{} (2013), no. 2, 177–190. R. L. Bryant, [*Submanifolds and special structures on the octonians*]{}, J. Differential Geom. [**17**]{} (1982), 185–232. R. L. Bryant, *Calibrated embeddings in the special Lagrangian and coassociative cases*, Ann. Global Anal. Geom. [**18**]{} (2000), no. 3-4, 405–435. R. L. Bryant, *Second order families of special Lagrangian 3-folds*, Perspectives in Riemannian geometry, pp. 63–98, CRM Proc. Lecture Notes [**40**]{}, Amer. Math. Soc., Providence, RI, 2006. R. L. Bryant and S. Salamon, [*On the construction of some complete metrics with exceptional holonomy*]{}, Duke Math. J. [**58**]{} (1989), 829–850. A. Butscher, [*Deformations of minimal Lagrangian submanifolds with boundary*]{}, Proc. Amer. Math. Soc. [**131**]{} (2002) 1953–1964. A. Butscher, [*Regularizing a singular special Lagrangian variety*]{}, Comm. Anal. Geom. [**12**]{} (2004), 733–791. E. Carberry, [*Associative cones in the imaginary octonions*]{}, Riemann surfaces, harmonic maps and visualization, pp. 249–263, OCAMI Stud. [**3**]{}, Osaka Munic. Univ. Press, Osaka, 2010. E. Carberry and I. McIntosh, [*Minimal Lagrangian 2-tori in $\mathbb{CP}^2$ come in real families of every dimension*]{}, J. London Math. Soc. (2) [**69**]{} (2004), no. 2, 531–544. A. Corti, M. Haskins, J. Nordström and T. 
Pacini, [*$\operatorname{G}_2$-manifolds and associative submanifolds via semi-Fano 3-folds*]{}, Duke Math. J. [**164**]{} (2015), no. 10, 1971–2092. T. Eguchi and A. J. Hanson, [*Asymptotically flat self-dual solutions to Euclidean gravity*]{}, Phys. Lett. [**74B**]{} (3) (1978), 249–251. D. Fox, *Second order families of coassociative 4-folds*, Thesis (Ph.D.), Duke University, ProQuest LLC, Ann Arbor, MI, 2005. D. Fox, *Coassociative cones ruled by 2-planes*, Asian J. Math. [**11**]{} (2007), no. 4, 535–553. D. Fox, *Cayley cones ruled by 2-planes: desingularization and implications of the twistor fibration*, Comm. Anal. Geom. [**16**]{} (2008), no. 5, 937–968. D. Gayet, [*Smooth moduli spaces of associative submanifolds*]{}, Q. J. Math. [**65**]{} (2014), no. 4, 1213–1240. D. Gayet and F. Witt, [*Deformations of associative submanifolds with boundary*]{}, Adv. Math. [**226**]{} (2011), no. 3, 2351–2370. E. Goldstein, *Calibrated fibrations on noncompact manifolds via group actions*, Duke Math. J. [**110**]{} (2001), no. 2, 309–343. A. Gray, E. Abbena and S. Salamon, [*Modern differential geometry of curves and surfaces with Mathematica$^{\circledR}$*]{}, Third edition, Studies in Advanced Mathematics, Chapman & Hall/CRC, Boca Raton, FL, 2006. R. Harvey, [*Spinors and calibrations*]{}, Perspectives in Mathematics [**9**]{}, Academic Press, Inc., Boston, MA, 1990. R. Harvey and H. B. Lawson, [*Calibrated geometries*]{}, Acta Math. [**148**]{} (1982), 47–157. M. Haskins, *Special Lagrangian cones*, Amer. J. Math. [**126**]{} (2004), no. 4, 845–871. M. Haskins, [*The geometric complexity of special Lagrangian $T^2$-cones*]{}, Invent. Math. [**157**]{} (2004), no. 1, 11–70. M. Haskins and N. Kapouleas, [*Special Lagrangian cones with higher genus links*]{}, Invent. Math. [**167**]{} (2007), 223–294. G. Huisken and T. Ilmanen, *The inverse mean curvature flow and the Riemannian Penrose inequality*, J. Differential Geom. [**59**]{} (2001), no. 3, 353–437. Y. Imagi, D. D.
Joyce and J. Oliveira dos Santos, *Uniqueness results for special Lagrangians and Lagrangian mean curvature flow expanders in $\C^m$*, Duke Math. J. [**165**]{} (2016), no. 5, 847–933. M. Ionel, [*Second order families of special Lagrangian submanifolds in $\C^4$*]{}, J. Differential Geom. [**65**]{} (2003), no. 2, 211–272. M. Ionel, S. Karigiannis and M. Min-Oo, [*Bundle constructions of calibrated submanifolds in $\R^7$ and $\R^8$*]{}, Math. Res. Lett. [**12**]{} (2005), no. 4, 493–512. M. Ionel and M. Min-Oo, [*Cohomogeneity one special Lagrangian 3-folds in the deformed and the resolved conifolds*]{}, Illinois J. Math. [**52**]{} (2008), no. 3, 839–865. K. Irie, F. Marques and A. Neves, [*Density of minimal hypersurfaces for generic metrics*]{}, Ann. of Math. (2) [**187**]{} (2018), no. 3, 963–972. D. D. Joyce, *Evolution equations for special Lagrangian 3-folds in $\C^3$*, Ann. Global Anal. Geom. [**20**]{} (2001), no. 4, 345–403. D. D. Joyce, [*Constructing special Lagrangian $m$-folds in $\C^m$ by evolving quadrics*]{}, Math. Ann. [**320**]{} (2001), no. 4, 757–797. D. D. Joyce, *Ruled special Lagrangian 3-folds in $\C^3$*, Proc. London Math. Soc. (3) [**85**]{} (2002), no. 1, 233–256. D. D. Joyce, *Special Lagrangian $m$-folds in $\C^m$ with symmetries*, Duke Math. J. [**115**]{} (2002), no. 1, 1–51. D. D. Joyce, *$U(1)$-invariant special Lagrangian 3-folds in $\C^3$ and special Lagrangian fibrations*, Turkish J. Math. [**27**]{} (2003), no. 1, 99–114. D. D. Joyce, [*Special Lagrangian submanifolds with isolated conical singularities. V. Survey and applications*]{}, J. Differential Geom. [**63**]{} (2003), 279–347. D. D. Joyce, [*Special Lagrangian submanifolds with isolated conical singularities. II. Moduli spaces*]{}, Ann. Global Anal. Geom. [**25**]{} (2004), 301–352. D. D. Joyce, [*Special Lagrangian submanifolds with isolated conical singularities. III. Desingularization, the unobstructed case*]{}, Ann. Global Anal. Geom. [**26**]{} (2004), 1–58. D. D.
Joyce, [*Special Lagrangian submanifolds with isolated conical singularities. IV. Desingularization, obstructions and families*]{}, Ann. Global Anal. Geom. [**26**]{} (2004), 117–174. D. D. Joyce, *$U(1)$-invariant special Lagrangian 3-folds. I. Nonsingular solutions*, Adv. Math. [**192**]{} (2005), no. 1, 35–71. D. D. Joyce, *$U(1)$-invariant special Lagrangian 3-folds. II. Existence of singular solutions*, Adv. Math. [**192**]{} (2005), no. 1, 72–134. D. D. Joyce, *$U(1)$-invariant special Lagrangian 3-folds. III. Properties of singular solutions*, Adv. Math. [**192**]{} (2005), no. 1, 135–182. D. D. Joyce, [*Riemannian holonomy groups and calibrated geometry*]{}, Oxford Graduate Texts in Mathematics [**12**]{}, Oxford University Press, Oxford, 2007. D. D. Joyce, [*Special Lagrangian 3-folds and integrable systems*]{}, Surveys on geometry and integrable systems, pp. 189–233, Adv. Stud. Pure Math. [**51**]{}, Math. Soc. Japan, Tokyo, 2008. D. D. Joyce and S. Salur, [*Deformations of asymptotically cylindrical coassociative submanifolds with fixed boundary*]{}, Geom. Topol. [**9**]{} (2005), 1115–1146. S. Karigiannis and N. C.-H. Leung, *Deformations of calibrated subbundles of Euclidean spaces via twisting by special sections*, Ann. Global Anal. Geom. [**42**]{} (2012), no. 3, 371–389. S. Karigiannis and M. Min-Oo, [*Calibrated subbundles in noncompact manifolds of special holonomy*]{}, Ann. Global Anal. Geom. [**28**]{} (2005), no. 4, 371–394. A. G. Kovalev, [*Coassociative $K3$ fibrations of compact $\operatorname{G}_2$-manifolds*]{}, arXiv:math/0511150. A. Kovalev and J. D. Lotay, [*Deformations of compact coassociative 4-folds with boundary*]{}, J. Geom. Phys. [**59**]{} (2009), 63–73. G. Lawlor, [*The angle criterion*]{}, Invent. Math. [**95**]{} (1989), 437–446. H. B. Lawson and R. Osserman, [*Non-existence, non-uniqueness and irregularity of solutions to the minimal surface system*]{}, Acta Math. [**139**]{} (1977), 1–17. Y.-I.
Lee, [*Embedded special Lagrangian submanifolds in Calabi–Yau manifolds*]{}, Comm. Anal. Geom. [**11**]{} (2003), 391–423. J. D. Lotay, [*Constructing associative 3-folds by evolution equations*]{}, Comm. Anal. Geom. [**13**]{} (2005), 999–1037. J. D. Lotay, [*$2$-ruled calibrated $4$-folds in $\R^7$ and $\R^8$*]{}, J. London Math. Soc. [**74**]{} (2006), 219–243. J. D. Lotay, *Calibrated submanifolds of $\R^7$ and $\R^8$ with symmetries*, Q. J. Math. [**58**]{} (2007), 53–70. J. D. Lotay, [*Coassociative 4-folds with conical singularities*]{}, Comm. Anal. Geom. [**15**]{} (2007), 891–946. J. D. Lotay, [*Deformation theory of asymptotically conical coassociative 4-folds*]{}, Proc. London Math. Soc. [**99**]{} (2009), 386–424. J. D. Lotay, [*Desingularization of coassociative 4-folds with conical singularities*]{}, Geom. Funct. Anal. [**18**]{} (2009), 2055–2100. J. D. Lotay, [*Asymptotically conical associative 3-folds*]{}, Q. J. Math. [**62**]{} (2011), 131–156. J. D. Lotay, *Ruled Lagrangian submanifolds of the 6-sphere*, Trans. Amer. Math. Soc. [**363**]{} (2011), 2305–2339. J. D. Lotay, *Associative submanifolds of the 7-sphere*, Proc. London Math. Soc. [**105**]{} (2012), 1183–1214. J. D. Lotay, [*Stability of coassociative conical singularities*]{}, Comm. Anal. Geom. [**20**]{} (2012), 803–867. J. D. Lotay, *Desingularization of coassociative 4-folds with conical singularities: obstructions and applications*, Trans. Amer. Math. Soc. [**366**]{} (2014), 6051–6092. F. C. Marques and A. Neves, *Min-max theory and the Willmore conjecture*, Ann. of Math. (2) [**179**]{} (2014), no. 2, 683–782. F. C. Marques and A. Neves, *Existence of infinitely many minimal hypersurfaces in positive Ricci curvature*, Invent. Math. [**209**]{} (2017), no. 2, 577–616. I. McIntosh, [*Special Lagrangian cones in $\C^3$ and primitive harmonic maps*]{}, J. London Math. Soc. (2) [**67**]{} (2003), no. 3, 769–789. R. C. McLean, [*Deformations of calibrated submanifolds*]{}, Comm. Anal. 
Geom. [**6**]{} (1998), 705–747. K. Moore, [*Cayley deformations of compact complex surfaces*]{}, arXiv:1710.08799. K. Moore, [*Deformations of conically singular Cayley submanifolds*]{}, J. Geom. Anal. (2018). C. B. Morrey, [*Multiple Integrals in the Calculus of Variations*]{}, Grundlehren Series Volume 130, Springer–Verlag, Berlin, 1966. M. Ohst, [*Deformations of compact Cayley submanifolds with boundary*]{}, arXiv:1405.7886. M. Ohst, [*Deformations of asymptotically cylindrical Cayley submanifolds*]{}, arXiv:1506.00110. T. Pacini, [*Special Lagrangian conifolds, I: moduli spaces*]{}, Proc. London Math. Soc. (3) [**107**]{} (2013), 198–224. T. Pacini, [*Special Lagrangian conifolds, II: gluing constructions in $\C^m$*]{}, Proc. London Math. Soc. (3) [**107**]{} (2013), no. 2, 225–266. G. Perelman, *Finite extinction time for the solutions to the Ricci flow on certain three-manifolds*, arXiv:math/0307245. R. Schoen and S. T. Yau, [*On the proof of the positive mass conjecture in general relativity*]{}, Comm. Math. Phys. [**65**]{} (1979), no. 1, 45–76. R. Schoen and S. T. Yau, [*Positive scalar curvature and minimal hypersurface singularities*]{}, arXiv:1704.05490. A. Song, [*Existence of infinitely many minimal hypersurfaces in closed manifolds*]{}, arXiv:1806.08816. M. B. Stenzel, [*Ricci-flat metrics on the complexification of a compact rank one symmetric space*]{}, Manuscripta Math. [**80**]{} (1993), no. 2, 151–163.
--- abstract: 'A magnetic field rotating on the free surface of a ferrofluid layer is shown to induce considerable fluid motion towards the direction the field is rolling. The measured flow velocity i) increases with the square of the magnetic field amplitude, ii) is proportional to the thickness of the fluid layer, and iii) has a maximum at a driving frequency of about 3 kHz. The pumping speed can be estimated with a two-dimensional flow model.' author: - 'Robert Krauß, Mario Liu$^1$, Bert Reimann, Reinhard Richter, and Ingo Rehberg' title: Fluid pumped by magnetic stress --- Polarizable fluids can show a macroscopic reaction to external electric or magnetic fields. While for most fluids the influence of a magnetic field is fairly weak, colloidal suspensions of magnetic particles – so-called ferrofluids – do show a strong response, particularly to static magnetic fields [@Rosensweig]. If these fields are time dependent, a rich variety of phenomena occurs. The internal rotation of the magnetization in an externally rotating magnetic field gives rise to nontrivial effects, as summarized in a recent special issue [@Shliomiseditor]. A particularly interesting example is the driving of a macroscopic flow by means of an external rotating magnetic field [@Pshenichnikov00], because it should allow for a very fine tuning both of the speed and of the direction of the flow even in microscopic channels [@Liu98]. In this paper we present a novel magnetic fluid [@newfluid] in an open flow geometry especially designed for a quantitative comparison between the measured flow velocity and its theoretical estimation. It is sketched in Fig.1. A circular Macrolon$^{\circledR}$ duct with a mean diameter $d$ of 100 mm and a square cross-section of 5 mm $\times$ 5 mm (and 2.5 mm $\times$ 2.5 mm in a second set of measurements) is filled brimful with a magnetic fluid [@newfluid] with a viscosity of $\eta=5.4 \cdot 10^{-3}$ Pas.
The orientation of the two coils producing the rotating magnetic field is also indicated in Fig.1: One coil is wrapped around this circular channel and provides a magnetic field in the azimuthal direction, and the outer coil produces the vertical component of the field. Both coils are driven with an ac-current with a phase difference of 90$^o$, thus producing a rotating field on the free surface of the fluid within the duct. The characterization of the magnetic susceptibility of the fluid in the duct is of primary importance for the pumping of the fluid, as discussed in this paper. It has been measured as a function of the frequency of the external oscillating azimuthal magnetic field by means of a pick-up coil placed into the liquid. More precisely, the magnetization was determined from the difference of the signal detected by the pick-up coil in the empty and the filled channel, under the influence of an oscillatory azimuthal field. The results are presented in Fig.2. The measured susceptibilities of this novel cobalt-based fluid are fairly large compared to those of the more common magnetite-based fluids. In particular, the large imaginary part of the susceptibility is essential for the pumping action described in this paper. The pump does work: A rotating field produced by the coils leads to a motion of the fluid in the azimuthal direction of the channel [@movie]. Its velocity is on the order of mm/s and can thus be determined by visual inspection of tracer particles swimming on its surface. The next observation is also a qualitative one: When changing the phase difference between the ac-fields from +90$^o$ to $-$90$^o$, the flow changes its sign. The flow direction is such that the vorticity of the flow field is locally parallel to the rotation vector of the magnetic field, i.e. the fluid flows towards the direction the field is rolling.
For quantitative measurements it is necessary to use particles which are small compared to the channel width (dandruff, diameter about 1 mm). The velocity is determined by taking the time a particle needs to travel a few centimeters. The size dependence of these measurements is taken into account by using the numerically calculated – roughly parabolic – velocity profiles, and by assuming that a floating particle represents the mean speed with respect to its diameter. A result obtained for a fixed frequency of 1 kHz is presented in Fig.3, where the maximal flow velocity within the channel is shown as a function of the amplitude $G_0$ of the driving external magnetic field. The velocity increases in proportion to $G_0^2$, as demonstrated by the solid line, a parabola. Here we follow the nomenclature of Ref.[@Lebedev03], where the external magnetic far field is denoted by [$\boldsymbol G$]{} and the local one by [$\boldsymbol H$]{}. Having demonstrated this quadratic dependence of the velocity on the magnetic field, a first approach to collapse data obtained at different fields is to introduce a reduced velocity by dividing by the square of the external field. Another important influence determining the fluid velocity is the height of the channel: It turns out that the velocity is larger in bigger channels. This leads us to reduce the velocity also by dividing by the height $L$ of the duct. In order to get a dimensionless number one also has to scale with the viscosity of the fluid. Thus we define $$u = v_{\rm max}\frac{\eta}{L \mu_0 G_0^2} \label{u}$$ as a reduced flow velocity. Its measured values are presented as a function of the driving frequency of the rotating magnetic field in Fig.4, for velocities obtained in a 5 mm $\times$ 5 mm and a 2.5 mm $\times$ 2.5 mm duct. Both measurements show a maximum of this velocity in the range of 2 – 4 kHz.
The pumping action can be understood as a manifestation of the magnetic stress acting on the magnetized fluid, as summarized in Ref.[@Shliomis03]. This assumption explains all qualitative features of the observation: as long as the magnetization is proportional to the magnetic field (which has been measured to be the case for our fluid, within a precision of 5 % for fields up to about 1500 ${\rm Am^{-1}}$), the stress must be proportional to $G^2$ as demonstrated in Fig.3. If the frequency approaches zero, the magnetization and the field are parallel to each other, the tangential stress is zero and thus the motion of the fluid stops. For finite frequencies the velocity is proportional to the $\chi''$ component of the susceptibility, which must increase linearly (to lowest order) with the frequency. For higher frequencies, the imaginary part of the susceptibility $\chi''$ has a maximum at about 1 kHz, which explains why the maximal pumping velocity is observed around that frequency. For the quantitative calculation we solve the Laplace equation for the velocity field including the no-slip condition for the fluid at the bottom and the side walls of the duct. At the upper surface of the fluid the magnetic stress provides the boundary condition $$\eta \frac{\partial v}{\partial z}=\frac{\mu _0}{2} ( M_{\rm z} H_{\rm x} - M_{\rm x} H_{\rm z} ), \label{magnetic_stress}$$ because Eq.(65) of Ref. [@Shliomis03] is applicable to our geometry. Both field components $H_{\rm x}$ and $H_{\rm z}$ and the corresponding magnetization components $M_{\rm x}$ and $M_{\rm z}$ are not constant in our case, but depend both on time and space.
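The appearance of $\chi''$ can be made explicit: for a rotating field $H_{\rm x}=H_0\cos\omega t$, $H_{\rm z}=H_0\sin\omega t$ and a linear response with complex susceptibility $\chi=\chi'-i\chi''$, the combination $M_{\rm z}H_{\rm x}-M_{\rm x}H_{\rm z}$ is time-independent and equal to $-\chi'' H_0^2$. A quick numerical check of this identity (the field values and $\chi$ below are illustrative, not the measured ones):

```python
import numpy as np

# Illustrative (not measured) values: chi = chi' - i chi'', amplitude H0, freq omega.
chi1, chi2, H0, omega = 0.8, 0.5, 1.0, 2 * np.pi * 1.0e3
t = np.linspace(0.0, 2.0e-3, 2001)

Hx = H0 * np.cos(omega * t)
Hz = H0 * np.sin(omega * t)
# Linear response M = Re[(chi' - i chi'') H~ e^{i omega t}], per component:
Mx = H0 * (chi1 * np.cos(omega * t) + chi2 * np.sin(omega * t))
Mz = H0 * (chi1 * np.sin(omega * t) - chi2 * np.cos(omega * t))

stress = Mz * Hx - Mx * Hz
print(np.ptp(stress))   # ~ 0: the combination is time-independent
print(stress.mean())    # ~ -chi'' H0^2 = -0.5
```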
By numerical computations of the internal field $\boldsymbol H$ and the resulting magnetization $\boldsymbol M=\chi \boldsymbol H$, which will be published elsewhere, we get for the maximum fluid velocity in the middle of the upper surface $$v_{\rm max}= \chi'' \frac{\mu_0 G_0^2}{2} \frac{L} {\eta} \alpha \left(\frac{1+N_{\rm eff} \chi'}{(1+N_{\rm eff}\chi' )^2+ (N_{\rm eff}\chi'' )^2}\right). \label{vmax}$$ Here $N_{\rm eff}$ denotes an effective demagnetization factor and $\alpha$ a reduction factor due to the given geometry, depending on the aspect ratio $a=L_{\rm y}/L_{\rm z}$ of the rectangular channel. In our case $a=1$ and we obtain $\alpha \approx 0.369$. In our square cross section, the magnetization – and the ensuing stress at the surface of the fluid – is not homogeneous. We thus extract an effective demagnetization factor $N_{\rm eff}$ by fitting Eq.(\[vmax\]) to the numerically obtained velocity for different values of $\chi$. We obtain $N_{\rm eff} \approx 0.656$, which seems realistic when compared to $N=0.5$ for the case of a circular cylinder. From Eqs.(\[u\],\[vmax\]) we finally get the theoretical estimate for the reduced velocity $$u=\frac{\alpha}{2} \frac{\chi'' (1+N_{\rm eff}\chi') }{(1+N_{\rm eff}\chi')^2+(N_{\rm eff}\chi'')^2}. \label{u_2}$$ The limiting case of an infinitely wide channel ($N_{\rm eff}=1$, $\alpha=1$) is included in this formula. The reduced velocity obtained from Eq.(\[u\_2\]) is presented in Fig.4 as a solid line, with the values of $\chi'$ and $\chi''$ taken from the measurements presented in Fig.2. The agreement between the measured velocities and the solid line is on a 20% level. This can partly be attributed to the limited accuracy of the measurement procedure, which is indicated by the error bars in Fig.4.
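To see how Eq.(\[u\_2\]) produces a maximum near the peak of $\chi''$, one can feed it a simple Debye-type susceptibility $\chi(\omega)=\chi_0/(1+i\omega\tau)$. This is an assumption for illustration only (the solid line in Fig.4 uses the measured $\chi'$, $\chi''$ of Fig.2), and the values of $\chi_0$ and $\tau$ below are made up:

```python
import numpy as np

alpha, N_eff = 0.369, 0.656                 # geometry factors from the text
chi0, tau = 1.0, 1.0 / (2 * np.pi * 1.0e3)  # assumed Debye parameters

f = np.logspace(1, 6, 500)                  # driving frequency in Hz
x = 2 * np.pi * f * tau
chi_p = chi0 / (1 + x ** 2)                 # chi'
chi_pp = chi0 * x / (1 + x ** 2)            # chi'' (peaks at x = 1)

u = 0.5 * alpha * chi_pp * (1 + N_eff * chi_p) / (
    (1 + N_eff * chi_p) ** 2 + (N_eff * chi_pp) ** 2)

print(f[np.argmax(u)])                      # peak lies in the kHz range
```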
The systematic deviations between the data and the theoretical curve are believed to reflect the precision of the simplifying assumptions going into the consideration presented above. For example, the influence of the shape of the meniscus of the fluid adds a three-dimensional complication to the problem, whose influence on the maximal pumping speed is hard to estimate. Moreover, it should be noted that magnetic fluids are not perfectly stable both in their magnetic susceptibility and their viscosity, which might add to the small mismatch between the expectation based on the measurement of the susceptibility and the measured velocity. Finally, taking into account the small amplitude of the magnetic field, any magnetoviscous effects have been neglected for the calculations. In summary, the pump presented here works well and has an interesting potential especially in small geometries where a mechanical driving of the flow is not possible. More importantly, it does seem safe to conclude that the ansatz of a magnetic stress driven motion captures the essence of this pump on a quantitative level. It is a pleasure to thank A. Engel, M. Krekhova and M.I. Shliomis for clarifying discussions. We thank N. Matoussevitch for providing the magnetic liquid. The experiments were supported by Deutsche Forschungsgemeinschaft, Re 588/12. [\*]{} R.E. Rosensweig [*Ferrohydrodynamics*]{}, (Cambridge University Press, Cambridge, 1985). M.I. Shliomis and A. Cebers (eds.) [*Internal rotations in magnetic fluids*]{}, Magnetohydrodynamics [**36**]{} No. 4, (2000). A.F. Pshenichnikov, A.V. Lebedev, in [@Shliomiseditor], 317, and references cited therein. M. Liu, German patent DE 0019842848A1. H. Bönnemann, W. Brijoux, R. Brinkmann, N. Matoussevitch, N. Waldöfner, N. Palina, H. Modrow, Inorganica Chimica Acta [**350**]{}, 617 (2003); H. Bönnemann, W. Brijoux, R. Brinkmann, N. Matoussevitch, N. Waldöfner, German patent DE 10227779.6. 
Movies showing the pump at work and the effect of flow reversal are located at <http://www.staff.uni-bayreuth.de/~btp909>. A.V. Lebedev, A. Engel, K.I. Morozov and H. Bauke, New Journal of Physics [**5**]{}, 57 (2003). M.I. Shliomis, in [*Ferrofluids: Magnetically Controllable Fluids and Their Applications*]{} edited by S. Odenbach (Lecture Notes in Physics, Vol. 594, Springer, Berlin, 2003), p.85. J.D. Jackson, [*Classical Electrodynamics*]{} (Wiley, New York, 1998).
--- bibliography: - 'bibliography.bib' --- Hermitian Eigenvalue Problem, Rational Filters, Contour-based Eigensolver, FEAST, Worst-case Convergence Rate, Load Balancing, Non-linear Least Squares, BFGS, Nelder-Mead. 65F15, 41A20, 65Y05 Acknowledgements {#acknowledgements .unnumbered} ================ We thank Jan Winkelmann for supporting the first author in developing the bulk of the work that contributed to this paper, and Sebastian Achilles for useful discussions.
--- abstract: 'We prove that if the squares of two unconditional bases are equivalent up to a permutation, then the bases themselves are permutatively equivalent. This settles a twenty-year-old question raised by Casazza and Kalton in [@CasKal1998]. Solving this problem provides a new paradigm for studying the uniqueness of unconditional basis in the general framework of quasi-Banach spaces. Multiple examples are given to illustrate how to put this theoretical scheme into practice. Among the main applications of this principle we obtain the uniqueness of unconditional basis, up to permutation, of finite sums of quasi-Banach spaces with this property.' address: - | Department of Mathematics, Statistics and Computer Sciences, and InaMat\ Universidad Pública de Navarra\ Pamplona 31006\ Spain - | Department of Mathematics and Computer Sciences\ Universidad de La Rioja\ Logroño 26004\ Spain author: - 'F. Albiac' - 'J. L. Ansorena' title: On the permutative equivalence of squares of unconditional bases --- [^1] Introduction and background =========================== An important long-standing problem in Banach space theory, eventually solved in the negative by Gowers and Maurey in 1997 [@GowMau1997], asked whether any two Banach spaces $X$ and $Y$ such that $X$ is isomorphic to a complemented subspace of $Y$ and such that $Y$ is isomorphic to a complemented subspace of $X$ are isomorphic. This is known, by analogy with a similar result for cardinals in the category of sets, as the *Schröder-Bernstein* problem for Banach spaces. Pe[ł]{}czyński had noticed much earlier, back in 1969, that a little extra information about each space, namely that each is isomorphic to its square, is all that is needed for the Schröder-Bernstein problem for Banach spaces to have a positive outcome [@Pel1969]. 
This observation, nowadays known as *Pe[ł]{}czyński’s decomposition method*, highlighted the role played by the squares of the spaces, and the question arose whether any two Banach spaces $X$ and $Y$ such that $X^{2}\approx Y^{2}$ are isomorphic. This problem was also settled in the aforementioned article by Gowers and Maurey. Indeed, the authors constructed in [@GowMau1997] a Banach space ${\ensuremath{{X}}}$ with ${\ensuremath{{X}}}\approx{\ensuremath{{X}}}^3$ but ${\ensuremath{{X}}}\not\approx{\ensuremath{{X}}}^2$. Then, if we put ${\ensuremath{{Y}}}={\ensuremath{{X}}}^2$, we have that ${\ensuremath{{X}}}$ is isomorphic to a complemented subspace of ${\ensuremath{{Y}}}$, that ${\ensuremath{{Y}}}$ is isomorphic to a complemented subspace of ${\ensuremath{{X}}}$, that ${\ensuremath{{X}}}^2\approx {\ensuremath{{Y}}}^2$, and that ${\ensuremath{{X}}}\not\approx{\ensuremath{{Y}}}$. So, the pair of spaces $X$ and $Y$ serves as a counterexample to both questions. The Schröder-Bernstein problem for Banach spaces is a very basic and natural property that arises most of the time when one is trying to show that two Banach (or quasi-Banach) spaces are isomorphic. However, its practical implementation depends on knowing a priori large classes of spaces for which the property holds, and this might be an intractable problem in almost any general setting. Wójtowicz [@Wojto1988] and Wojtaszczyk [@Woj1997] discovered independently, with a lapse of 11 years, the following beautiful criterion in the spirit of the Schröder-Bernstein problem for checking whether two unconditional bases (in possibly different quasi-Banach spaces) are permutatively equivalent. \[thm:SBUB\] Let $({\ensuremath{\bm x}}_{n})_{n=1}^{\infty}$ and $({\ensuremath{\bm y}}_{n})_{n=1}^{\infty}$ be two unconditional bases of quasi-Banach spaces $X$ and ${\ensuremath{{Y}}}$. 
Suppose that $({\ensuremath{\bm x}}_{n})_{n=1}^{\infty}$ is permutatively equivalent to a subbasis of $({\ensuremath{\bm y}}_{n})_{n=1}^{\infty}$ and that $({\ensuremath{\bm y}}_{n})_{n=1}^{\infty}$ is permutatively equivalent to a subbasis of $({\ensuremath{\bm x}}_{n})_{n=1}^{\infty}$. Then $({\ensuremath{\bm x}}_{n})_{n=1}^{\infty}$ and $({\ensuremath{\bm y}}_{n})_{n=1}^{\infty}$ are permutatively equivalent. In particular, $X\approx Y$. The validity of the Schröder-Bernstein principle for unconditional bases has played a crucial role in the development of the subject of uniqueness of unconditional basis in quasi-Banach spaces (see, e.g., [@AKL2004; @AlbiacLeranoz2008; @AlbiacLeranoz2010; @AlbiacLeranoz2011; @AlbiacLeranoz2011b]). Casazza and Kalton brought this principle to the reader’s awareness in [@CasKal1998] and used it to give new examples of Banach spaces with a unique unconditional basis up to permutation. The simplifying power of the Schröder-Bernstein principle for unconditional bases would have made life much easier also for all the authors who had previously worked on the problem of uniqueness of unconditional bases up to permutation and who, in order to obtain the same conclusions, had to impose additional properties on the bases in combination with other general techniques such as the decomposition method (see e.g. [@BCLT1985]\*[Proposition 7.7]{}). It is indeed remarkable that, although the combinatorial arguments used by Wojtaszczyk to prove Theorem \[thm:SBUB\] are somewhat standard, they went unnoticed until close to the 21st century! The state of the art of the Schröder-Bernstein problem for Banach spaces in the pre-Gowers era was described by Casazza in [@Casazza1989]. His paper with Kalton [@CasKal1998] appeared just one year after Gowers and Maurey disproved the Schröder-Bernstein problem for Banach spaces and Wojtaszczyk reinterpreted the Schröder-Bernstein principle for unconditional bases. 
Thus, it is not surprising that the following question was timely raised in [@CasKal1998]: \[CKQuestion\] (See [@CasKal1998]\*[Remarks following the proof of Theorem 5.7]{}) Suppose that $({\ensuremath{\bm x}}_{n})_{n=1}^{\infty}$ and $({\ensuremath{\bm y}}_{n})_{n=1}^{\infty}$ are two unconditional bases whose squares are permutatively equivalent. Does it follow that $({\ensuremath{\bm x}}_{n})_{n=1}^{\infty}$ and $({\ensuremath{\bm y}}_{n})_{n=1}^{\infty}$ are permutatively equivalent? This problem was a driving force for the present investigation, and we solve it in the affirmative. In fact, we show that the result still holds if we replace the assumption on the squares of the bases with the weaker assumption that some powers of the bases are permutatively equivalent. We will do that in Section \[sect:SQRTUB\]. Answering Question \[CKQuestion\] in the positive offers a new paradigm for tackling the problem of uniqueness of unconditional basis up to permutation in the general setting of quasi-Banach spaces. The necessary ingredients and preparatory results leading to the main theoretical tool, namely Theorem \[thm:keytechniquebis\], are presented in a self-contained fashion in Section \[sect:UUB\]. In Sections \[Sec:ApplAnti-Euclidean\] and \[SectStrongAbs\] we embark on a comprehensive survey of quasi-Banach spaces with a unique unconditional basis up to permutation to which the scheme of Section \[sect:UUB\] can be applied. In Section \[DirectSumsUnc\] we further exploit the usefulness of Theorem \[thm:keytechniquebis\] to show that the property of uniqueness of unconditional basis is preserved when we take finite direct sums of a wide class of quasi-Banach spaces with this property. When combined with the spaces from Sections \[Sec:ApplAnti-Euclidean\] and \[SectStrongAbs\] we obtain a myriad of new examples of spaces with uniqueness of unconditional basis up to permutation. 
We use standard terminology and notation in Banach space theory as can be found, e.g., in [@AlbiacKalton2016]. Most of our results, however, will be established in the general setting of quasi-Banach spaces; the unfamiliar reader will find general information about quasi-Banach spaces in [@KPR1984]. We next gather the notation that is most heavily used. In keeping with current usage we will write $c_{00}(J)$ for the set of all $(a_j)_{j\in J}\in {\ensuremath{\mathbb{F}}}^J$ such that $|\{j\in J \colon a_j\not=0\}|<\infty$, where ${\ensuremath{\mathbb{F}}}$ denotes the real or complex scalar field. Given $s\in{\ensuremath{\mathbb{N}}}$ we put ${\ensuremath{\mathbb{N}}}[s]=\{1, \dots, s\}.$ Given a quasi-Banach space ${\ensuremath{{X}}}$ and $s\in{\ensuremath{\mathbb{N}}}$ we denote by $\kappa[s,{\ensuremath{{X}}}]$ the smallest constant $C$ such that $$\left\Vert \sum_{j=1}^s f_j\right\Vert\le C \left(\sum_{j=1}^s \Vert f_j\Vert\right),\quad f_j\in{\ensuremath{{X}}}.$$ The closed linear span of a subset $V$ of ${\ensuremath{{X}}}$ will be denoted by $[V]$. A countable family ${\ensuremath{\mathcal{B}}}=({\ensuremath{\bm x}}_n)_{n \in {\ensuremath{\mathcal{N}}}}$ in ${\ensuremath{{X}}}$ is an *unconditional basic sequence* if for every $f\in[{\ensuremath{\bm x}}_n \colon n \in {\ensuremath{\mathcal{N}}}]$ there is a unique family $(a_n)_{n \in {\ensuremath{\mathcal{N}}}}$ in ${\ensuremath{\mathbb{F}}}$ such that the series $\sum_{n \in {\ensuremath{\mathcal{N}}}} a_n \, {\ensuremath{\bm x}}_n$ converges unconditionally to $f$. 
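As a point of reference (an observation of ours, not taken from the text): if the quasi-norm of ${\ensuremath{{X}}}$ is a $p$-norm for some $0<p\le 1$, i.e., $\Vert f+g\Vert^p\le \Vert f\Vert^p+\Vert g\Vert^p$ for all $f,g\in{\ensuremath{{X}}}$, then the power-mean inequality gives the explicit bound $$\left\Vert \sum_{j=1}^s f_j\right\Vert \le \left(\sum_{j=1}^s \Vert f_j\Vert^p\right)^{1/p} \le s^{1/p-1} \sum_{j=1}^s \Vert f_j\Vert,$$ so that $\kappa[s,{\ensuremath{{X}}}]\le s^{1/p-1}$; in particular, $\kappa[s,{\ensuremath{{X}}}]=1$ whenever ${\ensuremath{{X}}}$ is a Banach space.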
If ${\ensuremath{\mathcal{B}}}$ is an unconditional basic sequence, there is a constant $K\ge 1$ such that $$\left\Vert \sum_{n \in {\ensuremath{\mathcal{N}}}} a_n \, {\ensuremath{\bm x}}_n\right\Vert \le K \left\Vert \sum_{n \in {\ensuremath{\mathcal{N}}}} b_n \, {\ensuremath{\bm x}}_n\right\Vert$$ for any finitely non-zero sequences of scalars $(a_n)_{n\in \mathcal N}$ and $(b_n)_{n\in \mathcal N}$ with $|a_n|\le|b_n|$ for all $n\in{\ensuremath{\mathcal{N}}}$ (see [@AABW2019]\*[Theorem 1.10]{}). If this inequality is satisfied for a given $K$ we say that ${\ensuremath{\mathcal{B}}}$ is $K$-unconditional. If we additionally have $[{\ensuremath{\bm x}}_n \colon n \in {\ensuremath{\mathcal{N}}}]={\ensuremath{{X}}}$ then ${\ensuremath{\mathcal{B}}}$ is an *unconditional basis* of ${\ensuremath{{X}}}$. If ${\ensuremath{\mathcal{B}}}$ is an unconditional basis of ${\ensuremath{{X}}}$, then the map $${\ensuremath{\mathcal{F}}}\colon{\ensuremath{{X}}}\to {\ensuremath{\mathbb{F}}}^{{\ensuremath{\mathcal{N}}}},\quad f=\sum_{n \in {\ensuremath{\mathcal{N}}}} a_n\, {\ensuremath{\bm x}}_n \mapsto ({\ensuremath{\bm x}}_n^*(f))_{n \in {\ensuremath{\mathcal{N}}}} = (a_n)_{n \in {\ensuremath{\mathcal{N}}}}$$ will be called the *coefficient transform* with respect to ${\ensuremath{\mathcal{B}}}$, and the functionals $({\ensuremath{\bm x}}_n^*)_{n \in {\ensuremath{\mathcal{N}}}}$ the *coordinate functionals* of ${\ensuremath{\mathcal{B}}}$. Given a countable set ${\ensuremath{\mathcal{N}}}$, we write ${\ensuremath{\mathcal{E}}}_{{\ensuremath{\mathcal{N}}}}:=({\ensuremath{\bm e}}_n)_{n\in {\ensuremath{\mathcal{N}}}}$ for the canonical unit vector system of ${\ensuremath{\mathbb{F}}}^{{\ensuremath{\mathcal{N}}}}$, i.e., ${\ensuremath{\bm e}}_n=(\delta_{n,m})_{m\in {\ensuremath{\mathcal{N}}}}$ for each $n\in {\ensuremath{\mathcal{N}}}$, where $\delta_{n,m}=1$ if $n=m$ and $\delta_{n,m}=0$ otherwise. 
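For a concrete illustration (an example of ours, not from the text): in $\ell_p$, $0<p<\infty$, the canonical unit vector system is $1$-unconditional, since $|a_n|\le|b_n|$ for all $n$ forces $$\left\Vert \sum_{n} a_n\, {\ensuremath{\bm e}}_n\right\Vert_{\ell_p}=\Big(\sum_{n}|a_n|^p\Big)^{1/p}\le \Big(\sum_{n}|b_n|^p\Big)^{1/p}=\left\Vert \sum_{n} b_n\, {\ensuremath{\bm e}}_n\right\Vert_{\ell_p},$$ i.e., the defining inequality holds with $K=1$.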
A *sequence space* will be a quasi-Banach space ${\ensuremath{{X}}}\subseteq{\ensuremath{\mathbb{F}}}^{{\ensuremath{\mathcal{N}}}}$ for which ${\ensuremath{\mathcal{E}}}_{{\ensuremath{\mathcal{N}}}}$ is a normalized $1$-unconditional basis. The *Banach envelope* of a quasi-Banach space ${\ensuremath{{X}}}$ consists of a Banach space $\widehat{{\ensuremath{{X}}}}$ together with a linear contraction $J_{\ensuremath{{X}}}\colon{\ensuremath{{X}}}\to \widehat{{\ensuremath{{X}}}}$ satisfying the following universal property: for every Banach space ${\ensuremath{{Y}}}$ and every linear contraction $T\colon{\ensuremath{{X}}}\to{\ensuremath{{Y}}}$ there is a unique linear contraction $\widehat{T}\colon \widehat{{\ensuremath{{X}}}}\to {\ensuremath{{Y}}}$ such that $\widehat{T}\circ J_{\ensuremath{{X}}}=T$. We say that a Banach space ${\ensuremath{{Y}}}$ is the Banach envelope of ${\ensuremath{{X}}}$ via the map $J\colon{\ensuremath{{X}}}\to{\ensuremath{{Y}}}$ if the associated map $\widehat{J}\colon\widehat{{\ensuremath{{X}}}}\to{\ensuremath{{Y}}}$ is an isomorphism. Other more specific terminology will be introduced in context when needed. Permutative equivalence of powers of unconditional bases {#sect:SQRTUB} ======================================================== Suppose that ${\ensuremath{\mathcal{B}}}_x=({\ensuremath{\bm x}}_n)_{n \in {\ensuremath{\mathcal{N}}}}$ and ${\ensuremath{\mathcal{B}}}_u=({\ensuremath{\bm u}}_n)_{n \in {\ensuremath{\mathcal{N}}}}$ are (countable) families of vectors in quasi-Banach spaces $X$, $Y$, respectively. 
We say that ${\ensuremath{\mathcal{B}}}_x=({\ensuremath{\bm x}}_n)_{n \in {\ensuremath{\mathcal{N}}}}$ $C$-*dominates* ${\ensuremath{\mathcal{B}}}_u=({\ensuremath{\bm u}}_n)_{n \in {\ensuremath{\mathcal{N}}}}$ if there is a linear map $T$ from the closed subspace of ${\ensuremath{{X}}}$ spanned by ${\ensuremath{\mathcal{B}}}_x$ into ${\ensuremath{{Y}}}$ with $T({\ensuremath{\bm x}}_n)={\ensuremath{\bm u}}_n$ for all $n \in {\ensuremath{\mathcal{N}}}$ such that $\Vert T\Vert\le C$. If $T$ is an isomorphic embedding, ${\ensuremath{\mathcal{B}}}_x$ and ${\ensuremath{\mathcal{B}}}_u$ are said to be *equivalent*. We say that ${\ensuremath{\mathcal{B}}}_x$ is *permutatively equivalent* to a family ${\ensuremath{\mathcal{B}}}_y=({\ensuremath{\bm y}}_m)_{m\in {\ensuremath{\mathcal{M}}}}$ in $Y$, and we write ${\ensuremath{\mathcal{B}}}_x\sim{\ensuremath{\mathcal{B}}}_y$, if there is a bijection $\pi\colon {\ensuremath{\mathcal{N}}}\to {\ensuremath{\mathcal{M}}}$ such that ${\ensuremath{\mathcal{B}}}_x$ and $({\ensuremath{\bm y}}_{\pi(n)})_{n \in {\ensuremath{\mathcal{N}}}}$ are equivalent. A *subbasis* of an unconditional basis ${\ensuremath{\mathcal{B}}}_x=({\ensuremath{\bm x}}_n)_{n \in {\ensuremath{\mathcal{N}}}}$ is a family $({\ensuremath{\bm x}}_n)_{n\in {\ensuremath{\mathcal{M}}}}$ for some subset ${\ensuremath{\mathcal{M}}}$ of ${\ensuremath{\mathcal{N}}}$. Let $({\ensuremath{{X}}}_i)_{i\in F}$ be a finite collection of (possibly repeated) quasi-Banach spaces. The Cartesian product $\bigoplus_{i\in F}{\ensuremath{{X}}}_i$ equipped with the quasi-norm $$\left\Vert ({\ensuremath{\bm x}}_i)_{i\in F}\right\Vert=\sup_{i\in F} \Vert {\ensuremath{\bm x}}_i\Vert,\quad {\ensuremath{\bm x}}_i\in{\ensuremath{{X}}}_i$$ is a quasi-Banach space. Suppose that ${\ensuremath{\mathcal{B}}}_i=({\ensuremath{\bm x}}_{i,n})_{n\in {\ensuremath{\mathcal{N}}}_i}$ is an unconditional basis of ${\ensuremath{{X}}}_i$ for each $i\in F$. 
Set $$\label{defN} {\ensuremath{\mathcal{N}}}= \bigcup_{i\in F} \{i\} \times {\ensuremath{\mathcal{N}}}_i.$$ Then the countable sequence $ \bigoplus_{i\in F} {\ensuremath{\mathcal{B}}}_i :=({\ensuremath{\bm x}}_{i,n})_{(i,n)\in {\ensuremath{\mathcal{N}}}} $ given by ${\ensuremath{\bm x}}_{i,n} =({\ensuremath{\bm x}}_{i,n,j})_{j\in F}$, where $${\ensuremath{\bm x}}_{i,n,j}=\begin{cases} {\ensuremath{\bm x}}_{i,n}& \text{ if }i=j, \\ 0 & \text{ otherwise,} \end{cases}$$ is an unconditional basis of $\bigoplus_{i\in F} {\ensuremath{{X}}}_i$. If $F={\ensuremath{\mathbb{N}}}[s]$ and ${\ensuremath{{X}}}_i={\ensuremath{{X}}}$ for all $i\in F$, the resulting direct sum is called the *$s$-fold product* of ${\ensuremath{{X}}}$ and we simply write ${\ensuremath{{X}}}^s=\bigoplus_{i\in F} {\ensuremath{{X}}}_i$. Similarly, if ${\ensuremath{\mathcal{B}}}_i={\ensuremath{\mathcal{B}}}$ for all $i\in F={\ensuremath{\mathbb{N}}}[s]$, we put ${\ensuremath{\mathcal{B}}}^s=\bigoplus_{i\in F} {\ensuremath{\mathcal{B}}}_i$ and say that ${\ensuremath{\mathcal{B}}}^s$ is the $s$-fold product of ${\ensuremath{\mathcal{B}}}$. We will refer to the $2$-fold product of a basis as the *square* of that basis. We start with an elementary lemma. \[lem:1\] Let ${\ensuremath{\mathcal{B}}}=({\ensuremath{\bm x}}_n)_{n\in {\ensuremath{\mathcal{N}}}}$ be an unconditional basis of a quasi-Banach space ${\ensuremath{{X}}}$. For a given $s\in{\ensuremath{\mathbb{N}}}$, consider the $s$-fold product ${\ensuremath{\mathcal{B}}}^s=({\ensuremath{\bm x}}_{i,n})_{(i,n)\in {\ensuremath{\mathbb{N}}}[s]\times {\ensuremath{\mathcal{N}}}}$. Then, for any function $\alpha\colon {\ensuremath{\mathcal{N}}}\to{\ensuremath{\mathbb{N}}}[s]$, the basic sequence $({\ensuremath{\bm x}}_{\alpha(n),n})_{n\in {\ensuremath{\mathcal{N}}}}$ (which is permutatively equivalent to a subbasis of ${\ensuremath{\mathcal{B}}}^s$) is equivalent to ${\ensuremath{\mathcal{B}}}$. 
Suppose that ${\ensuremath{\mathcal{B}}}$ is $K$-unconditional. If we put ${\ensuremath{\mathcal{N}}}_i=\alpha^{-1}(i)$ for $i\in{\ensuremath{\mathbb{N}}}[s]$ then $$\left\Vert \sum_{n\in {\ensuremath{\mathcal{N}}}} a_n \, {\ensuremath{\bm x}}_{\alpha(n),n}\right\Vert=\sup_{i\in{\ensuremath{\mathbb{N}}}[s]} \left\Vert \sum_{n\in {\ensuremath{\mathcal{N}}}_i} a_n \, {\ensuremath{\bm x}}_n\right\Vert,$$ for all $(a_n)_{n\in {\ensuremath{\mathcal{N}}}}\in c_{00}({\ensuremath{\mathcal{N}}})$. Hence, $$\frac{1}{\kappa[s,{\ensuremath{{X}}}]}\left\Vert \sum_{n\in {\ensuremath{\mathcal{N}}}} a_n \, {\ensuremath{\bm x}}_{n}\right\Vert\le \left\Vert \sum_{n\in {\ensuremath{\mathcal{N}}}} a_n \, {\ensuremath{\bm x}}_{\alpha(n),n}\right\Vert \le K\left\Vert \sum_{n\in {\ensuremath{\mathcal{N}}}} a_n \, {\ensuremath{\bm x}}_{n}\right\Vert.\qedhere$$ The following version of the Hall-König Lemma (also known as *Marriage Lemma*) for infinite families of finite sets is essential in the proof of Theorem \[thm:SQRTUB\]. \[thm:HKL\]Let ${\ensuremath{\mathcal{N}}}$ be a set and $({\ensuremath{\mathcal{N}}}_i)_{i\in I}$ be a family of finite subsets of ${\ensuremath{\mathcal{N}}}$. Suppose that $$|F|\le \left| \bigcup_{i\in F} {\ensuremath{\mathcal{N}}}_i\right|$$ for every $F\subseteq I$ finite. Then there is a one-to-one map $\phi\colon I\to {\ensuremath{\mathcal{N}}}$ with $\phi(i)\in {\ensuremath{\mathcal{N}}}_i$ for every $i\in I$. \[lem:SQRTSubases\]Let ${\ensuremath{\mathcal{B}}}_x$ and ${\ensuremath{\mathcal{B}}}_y$ be two unconditional bases of quasi-Banach spaces ${\ensuremath{{X}}}$ and ${\ensuremath{{Y}}}$ respectively. Suppose that ${\ensuremath{\mathcal{B}}}_x^s$ is permutatively equivalent to a subbasis of ${\ensuremath{\mathcal{B}}}_y^s$ for some $s\ge 2$. Then ${\ensuremath{\mathcal{B}}}_x$ is permutatively equivalent to a subbasis of ${\ensuremath{\mathcal{B}}}_y$. 
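As an aside, the combinatorial content of Theorem \[thm:HKL\] is effectively computable for finite instances. The following minimal Python sketch (the function name `hall_matching` and the data layout are our own illustrative choices, not from the text) finds the one-to-one choice map $\phi$ via the standard augmenting-path algorithm for bipartite matching; it returns `None` exactly when Hall's condition $|F|\le|\bigcup_{i\in F}{\ensuremath{\mathcal{N}}}_i|$ fails for some finite $F$.

```python
# Illustrative sketch: a constructive finite-case version of the
# Hall-Koenig (Marriage) Lemma via Kuhn's augmenting-path algorithm.

def hall_matching(sets):
    """sets: dict mapping each index i to a finite set N_i.
    Returns an injective choice map phi (dict i -> element of N_i),
    or None if Hall's condition fails for some finite subfamily."""
    match = {}  # element -> index currently matched to it

    def augment(i, seen):
        # Try to assign index i, re-matching earlier indices if needed.
        for x in sets[i]:
            if x in seen:
                continue
            seen.add(x)
            if x not in match or augment(match[x], seen):
                match[x] = i
                return True
        return False

    for i in sets:
        if not augment(i, set()):
            return None
    # Invert: phi(i) is the element matched to index i.
    return {i: x for x, i in match.items()}

# Example in the spirit of the proof of Lemma [lem:SQRTSubases]:
# each N_i has at most s = 2 elements and unions are large enough.
phi = hall_matching({1: {'a', 'b'}, 2: {'b', 'c'}, 3: {'a', 'c'}})
```

In the proof that follows, the counting estimate $|F|\le|\bigcup_{n\in F}{\ensuremath{\mathcal{M}}}_n|$ is exactly the hypothesis this routine checks constructively.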
Put ${\ensuremath{\mathcal{B}}}_x=({\ensuremath{\bm x}}_{n})_{n\in {\ensuremath{\mathcal{N}}}}$, ${\ensuremath{\mathcal{B}}}_y=({\ensuremath{\bm y}}_{n})_{n\in {\ensuremath{\mathcal{M}}}}$, ${\ensuremath{\mathcal{B}}}_x^s=({\ensuremath{\bm x}}_{i,n})_{(i,n)\in {\ensuremath{\mathbb{N}}}[s]\times {\ensuremath{\mathcal{N}}}}$ and ${\ensuremath{\mathcal{B}}}_y^s=({\ensuremath{\bm y}}_{i,n})_{(i,n)\in {\ensuremath{\mathbb{N}}}[s]\times {\ensuremath{\mathcal{M}}}}$. By hypothesis there is a one-to-one map $$\pi=(\pi_1,\pi_2)\colon {\ensuremath{\mathbb{N}}}[s]\times {\ensuremath{\mathcal{N}}}\to {\ensuremath{\mathbb{N}}}[s]\times {\ensuremath{\mathcal{M}}}$$ such that the unconditional bases ${\ensuremath{\mathcal{B}}}_x^s$ and $ ({\ensuremath{\bm y}}_{\pi(i,n)})_{(i,n)\in {\ensuremath{\mathbb{N}}}[s]\times {\ensuremath{\mathcal{N}}}} $ are equivalent. For $n\in {\ensuremath{\mathcal{N}}}$ set ${\ensuremath{\mathcal{M}}}_n=\{\pi_2(i,n) \colon i\in{\ensuremath{\mathbb{N}}}[s]\}$. If $F$ is a finite subset of ${\ensuremath{\mathcal{N}}}$ we have $$\pi( {\ensuremath{\mathbb{N}}}[s] \times F)\subseteq {\ensuremath{\mathbb{N}}}[s]\times \bigcup_{n\in F} {\ensuremath{\mathcal{M}}}_n,$$ and since $\pi$ is one-to-one, $$s \, |F|\le s \left|\bigcup_{n\in F} {\ensuremath{\mathcal{M}}}_n\right|.$$ Hence, $|F|\le |\cup_{n\in F} {\ensuremath{\mathcal{M}}}_n|$. We also have $|{\ensuremath{\mathcal{M}}}_n|\le s$ for all $n\in {\ensuremath{\mathcal{N}}}$. 
Therefore, by Theorem \[thm:HKL\], there exist a one-to-one map $\phi\colon {\ensuremath{\mathcal{N}}}\to {\ensuremath{\mathcal{M}}}$, a map $\alpha\colon {\ensuremath{\mathcal{N}}}\to{\ensuremath{\mathbb{N}}}[s]$, and a map $\beta\colon {\ensuremath{\mathcal{N}}}\to{\ensuremath{\mathbb{N}}}[s]$ such that $$\pi(\alpha(n),n)=(\beta(n),\phi(n)), \quad n\in {\ensuremath{\mathcal{N}}},$$ whence it follows that the unconditional basic sequences ${\ensuremath{\mathcal{B}}}_x'=({\ensuremath{\bm x}}_{\alpha(n),n})_{n\in {\ensuremath{\mathcal{N}}}}$ and ${\ensuremath{\mathcal{B}}}_y'=({\ensuremath{\bm y}}_{\beta(n),\phi(n)})_{n\in {\ensuremath{\mathcal{N}}}}$ are equivalent. Since, on the other hand, by Lemma \[lem:1\], ${\ensuremath{\mathcal{B}}}_x'$ is equivalent to ${\ensuremath{\mathcal{B}}}_x$ and ${\ensuremath{\mathcal{B}}}_y'$ is permutatively equivalent to $({\ensuremath{\bm y}}_m)_{m\in {\ensuremath{\mathcal{M}}}'}$, where ${\ensuremath{\mathcal{M}}}'=\phi({\ensuremath{\mathcal{N}}})$, we are done. \[thm:SQRTUB\] Let ${\ensuremath{\mathcal{B}}}_x$ and ${\ensuremath{\mathcal{B}}}_y$ be two unconditional bases of quasi-Banach spaces ${\ensuremath{{X}}}$ and ${\ensuremath{{Y}}}$. Suppose that ${\ensuremath{\mathcal{B}}}_x^s\sim{\ensuremath{\mathcal{B}}}_y^s$ for some $s\ge 2$. Then ${\ensuremath{\mathcal{B}}}_x\sim {\ensuremath{\mathcal{B}}}_y$. Applying Theorem \[lem:SQRTSubases\] yields that ${\ensuremath{\mathcal{B}}}_x$ is permutatively equivalent to a subbasis of ${\ensuremath{\mathcal{B}}}_y$ and, switching the roles of the bases, also the other way around. Using Theorem \[thm:SBUB\] closes the proof. Let ${\ensuremath{\mathcal{B}}}$ be an unconditional basis of a quasi-Banach space. Suppose that ${\ensuremath{\mathcal{B}}}^t$ is permutatively equivalent to a subbasis of ${\ensuremath{\mathcal{B}}}^s$ for some $t>s\ge 1$. Then ${\ensuremath{\mathcal{B}}}^2\sim {\ensuremath{\mathcal{B}}}$. 
Since $t\ge s+1$, ${\ensuremath{\mathcal{B}}}^{s+1}$ is permutatively equivalent to a subbasis of ${\ensuremath{\mathcal{B}}}^s$. By induction we deduce that ${\ensuremath{\mathcal{B}}}^{u+1}$ is permutatively equivalent to a subbasis of ${\ensuremath{\mathcal{B}}}^u$ for every $u\ge s$, and so, by transitivity, ${\ensuremath{\mathcal{B}}}^u$ is permutatively equivalent to a subbasis of ${\ensuremath{\mathcal{B}}}^s$ for every $u\ge s$. In particular, ${\ensuremath{\mathcal{B}}}^{2s}$ is permutatively equivalent to a subbasis of ${\ensuremath{\mathcal{B}}}^s$. Therefore, by Theorem \[lem:SQRTSubases\], ${\ensuremath{\mathcal{B}}}^{2}$ is permutatively equivalent to a subbasis of ${\ensuremath{\mathcal{B}}}$. Since ${\ensuremath{\mathcal{B}}}$ is permutatively equivalent to a subbasis of ${\ensuremath{\mathcal{B}}}^2$, applying Theorem \[thm:SBUB\] we are done. A new theoretical approach to the uniqueness of unconditional basis in quasi-Banach spaces {#sect:UUB} ========================================================================================== From a structural point of view, it is useful to know if a given space has an unconditional basis and, if the answer is yes, whether this is the unique unconditional basis of the space. Recall that a quasi-Banach space ${\ensuremath{{X}}}$ with an unconditional basis ${\ensuremath{\mathcal{B}}}$ is said to have a *unique unconditional basis* if every semi-normalized unconditional basis of ${\ensuremath{{X}}}$ is equivalent to ${\ensuremath{\mathcal{B}}}$. For convenience, from now on all bases will be assumed to be semi-normalized. Note that if ${\ensuremath{\mathcal{B}}}=({\ensuremath{\bm x}}_n)_{n \in {\ensuremath{\mathcal{N}}}}$ is a semi-normalized unconditional basis then it is equivalent to the normalized basis $({\ensuremath{\bm x}}_n/\Vert {\ensuremath{\bm x}}_n\Vert)_{n \in {\ensuremath{\mathcal{N}}}}$. For a Banach space with a symmetric basis it is rather unusual to have a unique unconditional basis. 
It is well-known that $\ell_{2}$ has a unique unconditional basis [@KotheToeplitz1934], and a classical result of Lindenstrauss and Pe[ł]{}czy[ń]{}ski [@LinPel1968] asserts that $\ell_{1}$ and $c_{0}$ also have a unique unconditional basis. Lindenstrauss and Zippin [@LinZip1969] completed the picture by showing that those three are the only Banach spaces in which all unconditional bases are equivalent. Once we have determined that a Banach space does not have a symmetric basis (a task that can be far from trivial) we must rethink the problem of uniqueness of unconditional basis. In fact, an unconditional non-symmetric basis admits a continuum of nonequivalent permutations (cf. [@Hennefeld1973]\*[Theorem 2.1]{}). Hence for Banach spaces without symmetric bases it is more natural to consider instead the question of uniqueness of unconditional bases up to (equivalence and) a permutation, (UTAP) for short. We say that ${\ensuremath{{X}}}$ has a (UTAP) unconditional basis ${\ensuremath{\mathcal{B}}}$ if every unconditional basis in ${\ensuremath{{X}}}$ is permutatively equivalent to ${\ensuremath{\mathcal{B}}}$. The first movers in this direction were Edelstein and Wojtaszczyk, who proved that finite direct sums of $c_0$, $\ell_1$ and $\ell_2$ have a (UTAP) unconditional basis [@EdelWoj1976]. Bourgain et al. embarked on a comprehensive study aimed at classifying those Banach spaces with unique unconditional basis up to permutation, which culminated in 1985 with their *Memoir* [@BCLT1985]. They showed that the spaces $c_{0}(\ell_{1})$, $c_{0}(\ell_{2})$, $\ell_{1}(c_{0})$, $\ell_{1}(\ell_{2})$ and their complemented subspaces with unconditional basis all have a (UTAP) unconditional basis, while $\ell_{2}(\ell_{1})$ and $\ell_{2}(c_{0})$ do not. 
However, the hopes of attaining a satisfactory classification were shattered when they found a nonclassical Banach space, namely the $2$-convexification ${\ensuremath{\mathcal{T}}}^{(2)}$ of Tsirelson’s space, which has a (UTAP) unconditional basis. Their work also left many open questions, most of which remain unsolved as of today. On the other hand, in the context of quasi-Banach spaces that are not Banach spaces, the uniqueness of unconditional basis seems to be the norm rather than the exception. For instance, it was shown in [@Kalton1977] that a wide class of nonlocally convex Orlicz sequence spaces, including the $\ell_{p}$ spaces for $0<p<1$, have a unique unconditional basis. The same is true in nonlocally convex Lorentz sequence spaces ([@KLW1990; @AlbiacLeranoz2008]), and uniqueness holds (UTAP) in the Hardy spaces $H_{p}({\ensuremath{\mathbb{T}}})$ for $0<p<1$ ([@Woj1997]). This section is geared towards Theorem \[thm:keytechniquebis\], which tells us that, under three straightforwardly verified conditions on a space and a basis, the unconditional bases of the space are all permutatively equivalent. The techniques used in the proof of this theorem are a development of the methods introduced by Casazza and Kalton in [@CasKal1998; @CasKal1999] to investigate the problem of uniqueness of unconditional basis in a class of Banach lattices that they called *anti-Euclidean*. The subtle but crucial role played by the lattice structure of the space in the proof of Theorem \[thm:keytechniquebis\] lies in that it permits untangling the intricate way in which the vectors of one basis can be written in terms of the other. 
These techniques have been extended to the nonlocally convex setting and efficiently used in the literature to establish the uniqueness of unconditional basis up to permutation of the spaces $\ell_{p}(\ell_{q})=(\ell_{q}\oplus \ell_{p}\oplus\dots\oplus \ell_{p}\dots)_{q}$ for $p\in (0,1]\cup \{\infty\}$ and $q\in (0,1]\cup \{2,\infty\}$ (see [@AKL2004; @AlbiacLeranoz2008; @AlbiacLeranoz2010; @AlbiacLeranoz2011; @AlbiacLeranoz2011b]), with the convention that $\ell_{\infty}$ here means $c_{0}$. Before moving on, recall that an unconditional basic sequence ${\ensuremath{\mathcal{B}}}_u=({\ensuremath{\bm u}}_{m})_{m\in {\ensuremath{\mathcal{M}}}}$ in a quasi-Banach space ${\ensuremath{{X}}}$ is said to be *complemented* if its closed linear span ${\ensuremath{U}}= [{\ensuremath{\mathcal{B}}}_u]$ is a complemented subspace of ${\ensuremath{{X}}}$, i.e., there is a bounded linear map $P\colon{\ensuremath{{X}}}\to{\ensuremath{U}}$ with $P|_{\ensuremath{U}}={\ensuremath{\mathrm{Id}}}_{\ensuremath{U}}$. Notice that the unconditional basic sequence ${\ensuremath{\mathcal{B}}}_u=({\ensuremath{\bm u}}_m)_{m\in {\ensuremath{\mathcal{M}}}}$ is complemented in ${\ensuremath{{X}}}$ if and only if there exists a sequence $({\ensuremath{\bm u}}_m^*)_{m\in {\ensuremath{\mathcal{M}}}}$ in ${\ensuremath{{X}}}^*$ such that ${\ensuremath{\bm u}}_m^*({\ensuremath{\bm u}}_n)=\delta_{m,n}$ for every $(m,n)\in {\ensuremath{\mathcal{M}}}^2$ and there is a bounded linear map $P_u\colon{\ensuremath{{X}}}\to {\ensuremath{{X}}}$ given by $$\label{eq:projCUBS} P_u(f)=\sum_{m\in {\ensuremath{\mathcal{M}}}} {\ensuremath{\bm u}}_m^*(f) \, {\ensuremath{\bm u}}_m, \quad f\in{\ensuremath{{X}}}.$$ We will refer to $({\ensuremath{\bm u}}_m^*)_{m\in {\ensuremath{\mathcal{M}}}}$ as a sequence of *projecting functionals* for ${\ensuremath{\mathcal{B}}}_u$. 
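A basic example (ours, for illustration): if ${\ensuremath{\mathcal{B}}}=({\ensuremath{\bm x}}_n)_{n\in{\ensuremath{\mathcal{N}}}}$ is a $K$-unconditional basis and ${\ensuremath{\bm u}}_m={\ensuremath{\bm x}}_{n_m}$ is a subbasis, given by a one-to-one map $m\mapsto n_m$, then the coordinate functionals ${\ensuremath{\bm u}}_m^*={\ensuremath{\bm x}}_{n_m}^*$ are projecting functionals and yield the coordinate projection $$P_u(f)=\sum_{m\in{\ensuremath{\mathcal{M}}}} {\ensuremath{\bm x}}_{n_m}^*(f)\,{\ensuremath{\bm x}}_{n_m}, \qquad \Vert P_u\Vert\le K,$$ since suppressing the coordinates off $\{n_m\colon m\in{\ensuremath{\mathcal{M}}}\}$ only shrinks the coefficients. Thus every subbasis of an unconditional basis is a complemented unconditional basic sequence.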
A family ${\ensuremath{\mathcal{B}}}_u=({\ensuremath{\bm u}}_m)_{m\in {\ensuremath{\mathcal{M}}}}$ in ${\ensuremath{{X}}}$ with mutually disjoint supports with respect to a given unconditional basis ${\ensuremath{\mathcal{B}}}$ is an unconditional basic sequence. In the case when, moreover, ${\operatorname{supp}}({\ensuremath{\bm u}}_m)$ is finite for every $m\in {\ensuremath{\mathcal{M}}}$ we say that ${\ensuremath{\mathcal{B}}}_u$ is a block basic sequence (with respect to ${\ensuremath{\mathcal{B}}}$). We say that the block basic sequence ${\ensuremath{\mathcal{B}}}_u$ is *well complemented* (with respect to ${\ensuremath{\mathcal{B}}}$) if we can choose a sequence of projecting functionals ${\ensuremath{\mathcal{B}}}_{u}^*=({\ensuremath{\bm u}}_m^*)_{m\in {\ensuremath{\mathcal{M}}}}$ with ${\operatorname{supp}}({\ensuremath{\bm u}}_m^*)\subseteq {\operatorname{supp}}({\ensuremath{\bm u}}_m)$ for all $m\in {\ensuremath{\mathcal{M}}}$. In this case, ${\ensuremath{\mathcal{B}}}_{u}^*$ is called a sequence of *good projecting functionals* for ${\ensuremath{\mathcal{B}}}_{u}$. The following definition identifies and highlights an unstated feature shared by some unconditional bases. Examples of such bases can be found, e.g., in [@Kalton1977; @CasKal1998; @AlbiacLeranoz2008], where the property naturally arises in connection with the problem of uniqueness of unconditional basis. 
An unconditional basis ${\ensuremath{\mathcal{B}}}=({\ensuremath{\bm x}}_n)_{n\in {\ensuremath{\mathcal{N}}}}$ of a quasi-Banach space will be said to be *universal for well complemented block basic sequences* if for every semi-normalized well complemented block basic sequence ${\ensuremath{\mathcal{B}}}_u=({\ensuremath{\bm u}}_m)_{m\in {\ensuremath{\mathcal{M}}}}$ of ${\ensuremath{\mathcal{B}}}$ there is a one-to-one map $\pi\colon {\ensuremath{\mathcal{M}}}\to {\ensuremath{\mathcal{N}}}$ such that $\pi(m)\in{\operatorname{supp}}({\ensuremath{\bm u}}_m)$ for every $m\in {\ensuremath{\mathcal{M}}}$, and ${\ensuremath{\mathcal{B}}}_u$ is equivalent to the rearranged subbasis $({\ensuremath{\bm x}}_{\pi(m)})_{m\in {\ensuremath{\mathcal{M}}}}$ of ${\ensuremath{\mathcal{B}}}$. The ideas in the following definition and proposition are implicit in [@Kalton1977]. An unconditional basis ${\ensuremath{\mathcal{B}}}=({\ensuremath{\bm x}}_n)_{n\in {\ensuremath{\mathcal{N}}}}$ of a quasi-Banach space ${\ensuremath{{X}}}$ will be said to have the *peaking property* if every semi-normalized well complemented block basic sequence ${\ensuremath{\mathcal{B}}}_u=({\ensuremath{\bm u}}_m)_{m\in {\ensuremath{\mathcal{M}}}}$ with respect to ${\ensuremath{\mathcal{B}}}$ satisfies $$\label{eq:gh} \inf_{m\in {\ensuremath{\mathcal{M}}}} \sup_{n\in {\ensuremath{\mathcal{N}}}} |{\ensuremath{\bm u}}_m^*({\ensuremath{\bm x}}_n)| \, |{\ensuremath{\bm x}}_n^*({\ensuremath{\bm u}}_m)|>0$$ for some sequence $({\ensuremath{\bm u}}_m^*)_{m\in {\ensuremath{\mathcal{M}}}}$ of good projecting functionals for ${\ensuremath{\mathcal{B}}}_u$. \[prop:k2one\] Suppose ${\ensuremath{\mathcal{B}}}=({\ensuremath{\bm x}}_n)_{n \in {\ensuremath{\mathcal{N}}}}$ is an unconditional basis of a quasi-Banach space $X$. If ${\ensuremath{\mathcal{B}}}$ has the peaking property then it is universal for well complemented block basic sequences. 
Let ${\ensuremath{\mathcal{B}}}_u=({\ensuremath{\bm u}}_m)_{m\in {\ensuremath{\mathcal{M}}}}$ be a semi-normalized well complemented block basic sequence and ${\ensuremath{\mathcal{B}}}_u^*=({\ensuremath{\bm u}}_m^*)_{m\in {\ensuremath{\mathcal{M}}}}$ be a sequence of good projecting functionals for ${\ensuremath{\mathcal{B}}}_u$ such that \[eq:gh\] holds. There is $\pi\colon {\ensuremath{\mathcal{M}}}\to {\ensuremath{\mathcal{N}}}$ one-to-one with $$\inf_{m\in {\ensuremath{\mathcal{M}}}} |{\ensuremath{\bm u}}_m^*({\ensuremath{\bm x}}_{\pi(m)})| \, |{\ensuremath{\bm x}}_{\pi(m)}^*({\ensuremath{\bm u}}_m)|>0.$$ For $m\in {\ensuremath{\mathcal{M}}}$ let us put $$\lambda_m= {\ensuremath{\bm x}}_{\pi(m)}^*({\ensuremath{\bm u}}_m),\quad \mu_m={\ensuremath{\bm u}}_m^*({\ensuremath{\bm x}}_{\pi(m)}),$$ and set $${\ensuremath{\bm v}}_m=\lambda_m \, {\ensuremath{\bm x}}_{\pi(m)},\quad {\ensuremath{\bm v}}_m^*= \mu_m \, {\ensuremath{\bm x}}^*_{\pi(m)}.$$ By [@AlbiacAnsorena2020]\*[Lemma 3.1]{}, ${\ensuremath{\mathcal{B}}}_v=({\ensuremath{\bm v}}_m)_{m \in {\ensuremath{\mathcal{M}}}}$ is equivalent to ${\ensuremath{\mathcal{B}}}_u$. In particular, ${\ensuremath{\mathcal{B}}}_v$ is semi-normalized, so that $\inf_m |\lambda_m|>0$ and $\sup_m |\lambda_m|<\infty$. It follows that ${\ensuremath{\mathcal{B}}}_v$ is equivalent to $({\ensuremath{\bm x}}_{\pi(m)})_{m \in {\ensuremath{\mathcal{M}}}}$. The last ingredient in the deconstruction process we are carrying out is the following feature of the lattice structure of a quasi-Banach space.
A quasi-Banach space (respectively, a quasi-Banach lattice) ${\ensuremath{{X}}}$ is said to be *sufficiently Euclidean* if $\ell_2$ is crudely finitely representable in ${\ensuremath{{X}}}$ as a complemented subspace (respectively, complemented sublattice), i.e., there is a positive constant $C$ such that for every $n\in{\ensuremath{\mathbb{N}}}$ there are bounded linear maps (respectively, lattice homomorphisms) $I_n \colon\ell_2^n \to {\ensuremath{{X}}}$ and $P_n\colon {\ensuremath{{X}}}\to \ell_2^n$ with $P_n\circ I_n ={\ensuremath{\mathrm{Id}}}_{\ell_2^n}$ and $ \Vert I_n\Vert \, \Vert P_n \Vert\le C$. We say that ${\ensuremath{{X}}}$ is *anti-Euclidean* (resp. *lattice anti-Euclidean*) if it is not sufficiently Euclidean. Any (semi-normalized) unconditional basis of a quasi-Banach space ${\ensuremath{{X}}}$ is equivalent to the unit vector system of a sequence space, and so it induces a lattice structure on ${\ensuremath{{X}}}$. In general, we will say that an unconditional basis has a property relevant to lattices if its associated sequence space has it; conversely, we will say that a sequence space enjoys a property relevant to bases if its unit vector system does. A quasi-Banach lattice ${\ensuremath{{X}}}$ is said to be *L-convex* if there is $\varepsilon>0$ so that whenever $f$ and $(f_i)_{i=1}^k$ in ${\ensuremath{{X}}}$ satisfy $0\le f_i\le f$ for every $i=1$, …, $k$, and $(1-\varepsilon)kf\ge \sum_{i=1}^k f_i$, we have $\varepsilon \Vert f \Vert \le \max_{1\le i \le k} \Vert f_i\Vert$. Kalton [@Kalton1984b] showed that a quasi-Banach lattice is L-convex if and only if it is $p$-convex for some $p>0$. So, most quasi-Banach lattices (and unconditional bases) occurring naturally in analysis are L-convex. The space $\ell_1$ is the simplest and most important example of an anti-Euclidean space (see e.g. [@AlbiacAnsorena2020]\*[Comments previous to Remark 2.9]{}).
It is therefore helpful to have conditions that guarantee that the Banach envelope of a given quasi-Banach space is $\ell_1$. \[lem:BEl1\]Suppose ${\ensuremath{{X}}}$ is a quasi-Banach space with an unconditional basis ${\ensuremath{\mathcal{B}}}$ that dominates the unit vector basis of $\ell_1$. Then the Banach envelope of ${\ensuremath{{X}}}$ is $\ell_1$ via the coefficient transform. The following lemma is useful when dealing with unconditional bases that dominate the canonical basis of $\ell_1$. Given an unconditional basis ${\ensuremath{\mathcal{B}}}=({\ensuremath{\bm x}}_n)_{n \in {\ensuremath{\mathcal{N}}}}$ with coordinate functionals $({\ensuremath{\bm x}}_n^*)_{n \in {\ensuremath{\mathcal{N}}}}$ and $A\subseteq {\ensuremath{\mathcal{N}}}$ finite we will put $${\ensuremath{\mathbf{1}}}_A[{\ensuremath{\mathcal{B}}}]=\sum_{n\in A} {\ensuremath{\bm x}}_n\quad \text{ and }\quad {\ensuremath{\mathbf{1}}}_A^*[{\ensuremath{\mathcal{B}}}]=\sum_{n\in A} {\ensuremath{\bm x}}_n^*.$$ If ${\ensuremath{\mathcal{B}}}$ is clear from context we simply write ${\ensuremath{\mathbf{1}}}_A={\ensuremath{\mathbf{1}}}_A[{\ensuremath{\mathcal{B}}}]$ and ${\ensuremath{\mathbf{1}}}_A^*={\ensuremath{\mathbf{1}}}_A^*[{\ensuremath{\mathcal{B}}}]$. \[lem:k2two\] Let ${\ensuremath{\mathcal{B}}}=({\ensuremath{\bm x}}_n)_{n \in {\ensuremath{\mathcal{N}}}}$ be an unconditional basis of a quasi-Banach space ${\ensuremath{{X}}}$. Suppose that ${\ensuremath{\mathcal{B}}}$ dominates the canonical basis of $\ell_1$. Then every semi-normalized well complemented block basic sequence of ${\ensuremath{{X}}}$ with respect to ${\ensuremath{\mathcal{B}}}$ is equivalent to a well complemented block basic sequence $({\ensuremath{\bm u}}_m)_{m \in {\ensuremath{\mathcal{M}}}}$ for which $({\ensuremath{\mathbf{1}}}^*_{{\operatorname{supp}}({\ensuremath{\bm u}}_m)})_{m \in {\ensuremath{\mathcal{M}}}}$ is a sequence of projecting functionals.
Let $C_1$ be such that $\sum_{n \in {\ensuremath{\mathcal{N}}}} |{\ensuremath{\bm x}}_n^*(f)|\le C_1 \Vert f \Vert$ for every $f\in{\ensuremath{{X}}}$. Set $$C_2=\sup_{m \in {\ensuremath{\mathcal{M}}}} \Vert {\ensuremath{\bm u}}_m\Vert,\quad C_3=\sup_{m \in {\ensuremath{\mathcal{M}}}} \Vert {\ensuremath{\bm u}}_m^*\Vert,\;\quad \text{and}\; C_4=\sup_{n \in {\ensuremath{\mathcal{N}}}} \Vert {\ensuremath{\bm x}}_n\Vert.$$ Fix $m \in {\ensuremath{\mathcal{M}}}$ and put $$A_m=\left\{n \in {\ensuremath{\mathcal{N}}}\colon |{\ensuremath{\bm u}}_m^*({\ensuremath{\bm x}}_n)| > \frac{1}{2C_1 C_2}\right\}.$$ We have $$\sum_{n \in {\ensuremath{\mathcal{N}}}\setminus A_m} |{\ensuremath{\bm x}}_n^*({\ensuremath{\bm u}}_m) \, {\ensuremath{\bm u}}_m^*({\ensuremath{\bm x}}_n)| \le\frac{1}{2C_1 C_2} \sum_{n \in {\ensuremath{\mathcal{N}}}\setminus A_m} | {\ensuremath{\bm x}}_n^*({\ensuremath{\bm u}}_m)|\le \frac{1}{2}.$$ Hence, $$\begin{aligned} \lambda_m:&=\sum_{n\in A_m} |{\ensuremath{\bm x}}_n^*({\ensuremath{\bm u}}_m) \, {\ensuremath{\bm u}}_m^*({\ensuremath{\bm x}}_n)|\\ &\ge -\frac{1}{2}+\sum_{n \in {\ensuremath{\mathcal{N}}}} |{\ensuremath{\bm x}}_n^*({\ensuremath{\bm u}}_m) \, {\ensuremath{\bm u}}_m^*({\ensuremath{\bm x}}_n)|\\ &\ge -\frac{1}{2}+{\ensuremath{\bm u}}_m^*({\ensuremath{\bm u}}_m)=\frac{1}{2}.\end{aligned}$$ Let $${\ensuremath{\bm v}}_m =\lambda_m^{-1} \sum_{n\in A_m} |{\ensuremath{\bm x}}_n^*({\ensuremath{\bm u}}_m) \, {\ensuremath{\bm u}}_m^*({\ensuremath{\bm x}}_n)| \, {\ensuremath{\bm x}}_n$$ and ${\ensuremath{\bm v}}_m^*={\ensuremath{\mathbf{1}}}_{A_m}^*$. For every $n \in {\ensuremath{\mathcal{N}}}$ we have $${\ensuremath{\bm v}}_m^*({\ensuremath{\bm v}}_m)=1, \quad \lambda_m^{-1} |{\ensuremath{\bm u}}_m^*({\ensuremath{\bm x}}_n)|\le 2 C_3 C_4,$$ and for every $n\in A_m$, $$1\le 2 C_1 C_2 |{\ensuremath{\bm u}}_m^*({\ensuremath{\bm x}}_n)|.$$ Hence, the result follows from [@AlbiacAnsorena2020]\*[Lemma 3.1]{}. 
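The thresholding in the proof above can be traced on a toy example. The following sketch works in $\ell_{1/2}$, where $\Vert f\Vert=(\sum_n|a_n|^{1/2})^{2}$ dominates $\Vert f\Vert_1$ with $C_1=1$; the block $u$ and the projecting functional $u^*$ below are hypothetical choices made only to illustrate how the set $A_m$ and the retained mass $\lambda_m$ are computed.

```python
# Illustration of the proof's construction in l_p with p = 1/2,
# where ||f|| = (sum_n |a_n|^p)^(1/p) and sum_n |a_n| <= ||f||, so C1 = 1.
p = 0.5

def quasi_norm(a):
    return sum(abs(x) ** p for x in a) ** (1 / p)

# A normalized block u, given by its coefficients on its support ...
u = [0.16, 0.04, 0.04, 0.04]     # sum |a_n|^0.5 = 0.4 + 0.2 + 0.2 + 0.2 = 1

# ... and a good projecting functional u* with u*(u) = 1, same support.
ustar = [5.55, 2.0, 0.4, 0.4]

C1 = 1.0                         # inclusion constant of l_{1/2} into l_1
C2 = quasi_norm(u)               # = 1 for this block
threshold = 1 / (2 * C1 * C2)

# A_m collects the coordinates where u* is large ...
A = [n for n in range(len(u)) if abs(ustar[n]) > threshold]

# ... and lambda_m, the mass retained on A_m, is at least 1/2.
lam = sum(abs(u[n] * ustar[n]) for n in A)
```

Here the two small coordinates of $u^*$ fall below the threshold $1/(2C_1C_2)=1/2$ and are discarded, yet $\lambda_m=0.968\ge 1/2$, exactly as the estimate in the proof predicts.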
We will use the full force of the lattice structure induced by the basis in the following reduction lemma. \[lem:keytechniquebis\]Let ${\ensuremath{{X}}}$ be a quasi-Banach space whose Banach envelope is anti-Euclidean. Suppose that ${\ensuremath{\mathcal{B}}}$ is an L-convex unconditional basis of $X$ which is universal for well complemented block basic sequences. Then, if ${\ensuremath{\mathcal{B}}}_u$ is another unconditional basis of ${\ensuremath{{X}}}$, there are positive integers $s$ and $t$ such that ${\ensuremath{\mathcal{B}}}_u$ is permutatively equivalent to a subbasis of ${\ensuremath{\mathcal{B}}}^s$ and ${\ensuremath{\mathcal{B}}}$ is permutatively equivalent to a subbasis of ${\ensuremath{\mathcal{B}}}_u^t$. Since ${\ensuremath{\mathcal{B}}}_u$ is lattice anti-Euclidean, [@AKL2004]\*[Theorem 3.4]{} yields that ${\ensuremath{\mathcal{B}}}_u$ is permutatively equivalent to a well complemented block basic sequence of ${\ensuremath{\mathcal{B}}}^s$ for some $s\in{\ensuremath{\mathbb{N}}}$. By [@AlbiacAnsorena2020]\*[Proposition 3.4]{}, ${\ensuremath{\mathcal{B}}}^s$ is universal for well complemented block basic sequences, so that ${\ensuremath{\mathcal{B}}}_u$ is permutatively equivalent to a subbasis of ${\ensuremath{\mathcal{B}}}^s$. Since ${\ensuremath{\mathcal{B}}}^s$ inherits L-convexity from ${\ensuremath{\mathcal{B}}}$, the basis ${\ensuremath{\mathcal{B}}}_u$ is L-convex and universal for well complemented block basic sequences. Switching the roles of ${\ensuremath{\mathcal{B}}}$ and ${\ensuremath{\mathcal{B}}}_u$ yields the conclusion of the lemma. A remark on the inherited order structure in a quasi-Banach lattice is in order here. Kalton showed in [@Kalton1984b]\*[Theorem 4.2]{} that every unconditional basic sequence ${\ensuremath{\mathcal{B}}}_0$ of a quasi-Banach space with an L-convex unconditional basis ${\ensuremath{\mathcal{B}}}$ is L-convex. This argument would indeed have simplified the proof of Lemma \[lem:keytechniquebis\].
However, we wanted to make the point that the validity of the lemma does not depend on such a deep theorem as Kalton’s. We are ready to prove the main result of this section. \[thm:keytechniquebis\] Let ${\ensuremath{{X}}}$ be a quasi-Banach space whose Banach envelope is anti-Euclidean. Suppose ${\ensuremath{\mathcal{B}}}$ is an unconditional basis for ${\ensuremath{{X}}}$ such that: 1. The lattice structure induced by ${\ensuremath{\mathcal{B}}}$ in ${\ensuremath{{X}}}$ is L-convex; 2. ${\ensuremath{\mathcal{B}}}$ is universal for well complemented block basic sequences; and 3. ${\ensuremath{\mathcal{B}}}\sim {\ensuremath{\mathcal{B}}}^2$. Then ${\ensuremath{{X}}}$ has a unique unconditional basis up to permutation. Let ${\ensuremath{\mathcal{B}}}_u$ be another unconditional basis of ${\ensuremath{{X}}}$. Since ${\ensuremath{\mathcal{B}}}^r \sim {\ensuremath{\mathcal{B}}}$ for every $r\in{\ensuremath{\mathbb{N}}}$, applying Lemma \[lem:keytechniquebis\] yields that ${\ensuremath{\mathcal{B}}}_u$ is permutatively equivalent to a subbasis of ${\ensuremath{\mathcal{B}}}$ and that ${\ensuremath{\mathcal{B}}}^t$ is permutatively equivalent to a subbasis of ${\ensuremath{\mathcal{B}}}_u^t$ for some $t\in{\ensuremath{\mathbb{N}}}$. Combining Theorem \[lem:SQRTSubases\] with Theorem \[thm:SBUB\] yields ${\ensuremath{\mathcal{B}}}_u\sim{\ensuremath{\mathcal{B}}}$. Theorem \[lem:SQRTSubases\] becomes instrumental in reaching the conclusion of the previous theorem. Indeed, without it, and under the same hypotheses as in Theorem \[thm:keytechniquebis\], we would only have been able to guarantee that, given another unconditional basis ${\ensuremath{\mathcal{B}}}_u$ of ${\ensuremath{{X}}}$, ${\ensuremath{\mathcal{B}}}_u$ is permutatively equivalent to a subbasis of ${\ensuremath{\mathcal{B}}}$ and that ${\ensuremath{\mathcal{B}}}$ is permutatively equivalent to a subbasis of some $s$-fold product of ${\ensuremath{\mathcal{B}}}_u$.
Thanks to Theorem \[lem:SQRTSubases\] we can close the “gap” between ${\ensuremath{\mathcal{B}}}$ and ${\ensuremath{\mathcal{B}}}_u$ and arrive at the permutative equivalence of the two bases. Although this gap might seem small, we would like to emphasize that, in the absence of Theorem \[thm:keytechniquebis\], specialists were forced to use additional properties of ${\ensuremath{\mathcal{B}}}$ to infer that ${\ensuremath{\mathcal{B}}}$ is the unique unconditional basis of ${\ensuremath{{X}}}$. For instance, in the proof that $\ell_1(\ell_p)$, $0<p<1$, has a unique unconditional basis up to permutation, the authors used that all subbases of the canonical basis of $\ell_1(\ell_p)$ are permutatively equivalent to their square (see [@AKL2004]). Applicability of our scheme to anti-Euclidean spaces {#Sec:ApplAnti-Euclidean} ==================================================== Most anti-Euclidean spaces scattered through the literature with a unique unconditional basis (up to permutation) fulfil the hypotheses of Theorem \[thm:keytechniquebis\]. This can be checked by looking up the corresponding references contained herein. However, with the aim of being as self-contained as possible, and for the convenience of the reader, we next survey how to verify the hypotheses of Theorem \[thm:keytechniquebis\] in all known spaces (Banach and non-Banach) with a unique unconditional basis and some other new ones. The spaces in this section and the next will be the protagonists of Section \[DirectSumsUnc\], where we will combine them to get the uniqueness of unconditional basis up to permutation of their finite direct sums. In what follows, the symbol $\alpha_i\lesssim \beta_i$ for $i\in I$ means that the families of positive real numbers $(\alpha_i)_{i\in I}$ and $(\beta_i)_{i\in I}$ satisfy $\sup_{i\in I}\alpha_i/\beta_i <\infty$.
If $\alpha_i\lesssim \beta_i$ and $\beta_i\lesssim \alpha_i$ for $i\in I$ we say that $(\alpha_i)_{i\in I}$ and $(\beta_i)_{i\in I}$ are equivalent, and we write $\alpha_i\approx \beta_i$ for $i\in I$. The space $\bm{\ell_{1}}$ ------------------------- The simplest example of an anti-Euclidean space is $\ell_1$. Since the canonical basis is perfectly homogeneous, it is universal for well complemented block basic sequences. Finally, since it is symmetric, it is equivalent to its square. Orlicz sequence spaces {#ex:Orlicz} ---------------------- An *Orlicz function* will be a right-continuous increasing function $\varphi\colon[0,\infty)\to[0,\infty)$ such that $\varphi(0)=0$, $\varphi(1)=1$ and $\varphi(s+t)\le C (\varphi(s) + \varphi(t))$ for some constant $C$ and every $s$, $t\ge 0$. The *Orlicz space* $\ell_\varphi$ is the space associated to the Luxemburg quasi-norm defined from the modular $(a_n)_{n=1}^\infty \mapsto \sum_{n=1}^\infty \varphi(|a_n|)$. Our assumptions on $\varphi$ yield that $\ell_\varphi$ is a symmetric sequence space. Kalton proved in [@Kalton1977] that if $\varphi$ satisfies $$\label{eq:Orlicz1} t\lesssim \varphi(t), \quad 0\le t \le 1,$$ and $$\label{eq:OrliczK21} \Lambda_\varphi:=\lim_{\varepsilon\to 0^+} \inf_{0<s<1}\frac{-1}{\log \varepsilon}\int_\varepsilon^1 \frac{\varphi(sx)}{sx^2}\, dx=\infty,$$ then $\ell_\varphi$ has a unique unconditional basis up to permutation. It is easy to show that \[eq:Orlicz1\] implies that the Banach envelope of $\ell_\varphi$ is anti-Euclidean, and it is implicit in [@Kalton1977] that if \[eq:Orlicz1\] and \[eq:OrliczK21\] hold, then the unit vector system of $\ell_\varphi$ is universal for well complemented block basic sequences. For the sake of completeness and further reference, we record these results and sketch a proof of them. Let $\varphi$ be an Orlicz function such that both \[eq:Orlicz1\] and \[eq:OrliczK21\] hold. Then: - The inclusion map from $\ell_\varphi$ into $\ell_1$ is an envelope map. - The unit vector system of $\ell_\varphi$ has the peaking property.
Since $\ell_1$ is the Orlicz sequence space associated to the function $t\mapsto t$, we have $\ell_\varphi\subseteq\ell_1$. Then, (i) follows from Lemma \[lem:BEl1\]. Assume by contradiction that ${\ensuremath{\mathcal{B}}}_u=({\ensuremath{\bm u}}_m)_{m \in {\ensuremath{\mathcal{M}}}}$ is a well complemented block basic sequence of $\ell_\varphi$, that $({\ensuremath{\bm u}}_m^*)_{m \in {\ensuremath{\mathcal{M}}}}$ is a family of good projecting functionals for ${\ensuremath{\mathcal{B}}}_u$, but that $$\inf_{m \in {\ensuremath{\mathcal{M}}}} \sup_{n\in{\ensuremath{\mathbb{N}}}} |{\ensuremath{\bm u}}_m^*({\ensuremath{\bm e}}_n)| \, |{\ensuremath{\bm e}}_n^*({\ensuremath{\bm u}}_m)|=0.$$ Then, by [@Kalton1977]\*[Theorem 6.5]{}, $\ell_\varphi$ has a complemented basic sequence ${\ensuremath{\mathcal{B}}}_y$ such that ${\ensuremath{{Y}}}=[{\ensuremath{\mathcal{B}}}_y]$ is locally convex. Using (i) and [@AlbiacAnsorena2020]\*[Lemma 2.1]{}, it follows that the restriction of the inclusion map of $\ell_\varphi$ in $\ell_1$ to ${\ensuremath{{Y}}}$ is an isomorphism. Therefore, by [@Kalton1977]\*[Theorem 5.3]{}, we reach the absurdity that $\Lambda_\varphi<\infty$. Lorentz sequence spaces {#ex:Lorentz} ----------------------- Let ${\ensuremath{\bm w}}=(w_n)_{n=1}^\infty$ be a *weight*, i.e., a sequence of positive scalars, and $0<p<\infty$. Suppose that ${\ensuremath{\bm w}}$ decreases to zero. The *Lorentz space* $d({\ensuremath{\bm w}},p)$ is the quasi-Banach space consisting of all $f=(a_n)_{n=1}^\infty\in{\ensuremath{\mathbb{F}}}^{\ensuremath{\mathbb{N}}}$ such that $$\Vert f \Vert_{d({\ensuremath{\bm w}},p)} =\sup_{\pi\in\Pi} \left(\sum_{n=1}^\infty | a_{\pi(n)}|^p \, w_n \right)^{1/p}<\infty,$$ where $\Pi$ is the set of all permutations of ${\ensuremath{\mathbb{N}}}$. The unit vector system is a symmetric basis of $d({\ensuremath{\bm w}},p)$.
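Since ${\ensuremath{\bm w}}$ is decreasing, the supremum over permutations in the definition of $\Vert\cdot\Vert_{d({\ensuremath{\bm w}},p)}$ is attained by pairing the decreasing rearrangement of $|f|$ with ${\ensuremath{\bm w}}$ (rearrangement inequality). A minimal computational sketch, using the hypothetical weight $w_n=n^{p}-(n-1)^{p}$, whose partial sums are exactly $k^{p}$ (this weight and the sample vector are illustrations only):

```python
def lorentz_norm(f, w, p):
    """d(w, p) quasi-norm: pair the decreasing rearrangement of |f|
    with the decreasing weight w (this attains the sup over permutations)."""
    a = sorted((abs(x) for x in f), reverse=True)
    return sum(x ** p * wn for x, wn in zip(a, w)) ** (1 / p)

p = 0.5
# Weight with telescoping partial sums: sum_{n<=k} w_n = k^p, decreasing to 0.
w = [n ** p - (n - 1) ** p for n in range(1, 101)]

f = [3.0, -1.0, 2.0]
norm_f = lorentz_norm(f, w, p)

# The quasi-norm is symmetric: any permutation of f gives the same value.
assert abs(lorentz_norm([-1.0, 2.0, 3.0], w, p) - norm_f) < 1e-12
```

For this particular weight, $\sum_{n\le k}w_n/k^{p}=1$ for every $k$, so the lower bound on the partial sums discussed below is met with constant $1$.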
It was proved in [@AlbiacLeranoz2008] that if the weight fulfils the condition $$\label{eq:lorentk2} \inf_{k\in {\ensuremath{\mathbb{N}}}} \frac{\displaystyle\sum_{n=1}^k w_n}{k^{p}}>0,$$ then $d({\ensuremath{\bm w}},p)$ has a unique unconditional basis up to permutation. Next, we deduce this result by combining Theorem \[thm:keytechniquebis\] with arguments from [@AlbiacLeranoz2008]. Let $0<p<1$ and ${\ensuremath{\bm w}}=(w_n)_{n=1}^\infty$ be a weight decreasing to zero. Then $d({\ensuremath{\bm w}},p)\subseteq\ell_1$ if and only if \[eq:lorentk2\] holds. Moreover, if \[eq:lorentk2\] holds, then - the Banach envelope of $d({\ensuremath{\bm w}},p)$ is $\ell_1$ via the inclusion map, and - the unit vector system of $d({\ensuremath{\bm w}},p)$ has the peaking property. For $k\in{\ensuremath{\mathbb{N}}}$ write $s_k=\sum_{n=1}^k w_n$. Assume that $d({\ensuremath{\bm w}},p)\subseteq\ell_1$ and let $C$ be the norm of the inclusion map. If $|A|=k$ we have $$\Vert {\ensuremath{\mathbf{1}}}_A\Vert_1=k,\quad \text{and} \quad \Vert {\ensuremath{\mathbf{1}}}_A\Vert_{d({\ensuremath{\bm w}},p)}=s_k^{1/p}.$$ Thus $k\le C s_k^{1/p}$ for every $k\in{\ensuremath{\mathbb{N}}}$. The weak-Lorentz space $d_\infty({\ensuremath{\bm u}},p)$ associated to a weight ${\ensuremath{\bm u}}=(u_n)_{n=1}^\infty$ and $0<p<\infty$ consists of all sequences $f\in c_0$ whose non-increasing rearrangement $(a_k^*)_{k=1}^\infty$ satisfies $$\Vert f \Vert_{d_\infty({\ensuremath{\bm u}},p)} =\sup_k\left(\sum_{n=1}^k u_n\right)^{1/p} a_k^*<\infty.$$ We have $d({\ensuremath{\bm u}},p)\subseteq d_\infty({\ensuremath{\bm u}},p)$ for every $0<p<\infty$ and every weight ${\ensuremath{\bm u}}$.
If ${\ensuremath{\bm u}}_p=(n^p-(n-1)^p)_{n=1}^\infty$, the rearrangement inequality and the very definition of the spaces yield $$[d({\ensuremath{\bm u}}_p,p)]^p \cdot [d_\infty({\ensuremath{\bm u}}_p,p)]^{1-p} \subseteq \ell_1.$$ We also have the obvious inclusion $$d({\ensuremath{\bm u}}_p,p) \subseteq [d({\ensuremath{\bm u}}_p,p)]^p \cdot [d({\ensuremath{\bm u}}_p,p)]^{1-p}.$$ Since $d({\ensuremath{\bm u}}_p,p)\subseteq d_\infty({\ensuremath{\bm u}}_p,p)$, summing up we obtain $d({\ensuremath{\bm u}}_p,p)\subseteq\ell_1$. Assume that ${\ensuremath{\bm w}}$ fulfils \[eq:lorentk2\]. We deduce that $ d({\ensuremath{\bm w}},p) \subseteq d({\ensuremath{\bm u}}_p,p)$. Therefore, $d({\ensuremath{\bm w}},p) \subseteq \ell_1$. Then, (i) follows from Lemma \[lem:BEl1\]. To prove (ii), we pick a semi-normalized well complemented block basic sequence $({\ensuremath{\bm u}}_m)_{m \in {\ensuremath{\mathcal{M}}}}$ with good projecting functionals $({\ensuremath{\bm u}}_m^*)_{m \in {\ensuremath{\mathcal{M}}}}$. By Lemma \[lem:k2two\], we can suppose that ${\ensuremath{\bm u}}_m^*= {\ensuremath{\mathbf{1}}}^*_{{\operatorname{supp}}({\ensuremath{\bm u}}_m)}$ so that $$\sup_{n\in {\ensuremath{\mathbb{N}}}} | {\ensuremath{\bm u}}_m^*({\ensuremath{\bm e}}_n)| \, |{\ensuremath{\bm e}}_n^*({\ensuremath{\bm u}}_m)|= \sup_{n\in {\ensuremath{\mathbb{N}}}} |{\ensuremath{\bm e}}_n^*({\ensuremath{\bm u}}_m)|.$$ Finally, note that the proof of [@AlbiacLeranoz2008]\*[Theorem 2.4]{} gives $$\inf_{m \in {\ensuremath{\mathcal{M}}}} \sup_{n\in {\ensuremath{\mathbb{N}}}} |{\ensuremath{\bm e}}_n^*({\ensuremath{\bm u}}_m)|>0.\qedhere$$ Tsirelson’s space ----------------- Casazza and Kalton established in [@CasKal1998] the uniqueness of unconditional basis up to permutation of Tsirelson’s space ${\ensuremath{\mathcal{T}}}$ and its complemented subspaces with unconditional basis as a byproduct of their study of complemented basic sequences in lattice anti-Euclidean Banach spaces. Their result answered a question by Bourgain et al.
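The comparison between the strong and weak Lorentz quasi-norms used for the weight ${\ensuremath{\bm u}}_p$ is the pointwise estimate $\Vert f\Vert_{d_\infty({\ensuremath{\bm u}}_p,p)}\le \Vert f\Vert_{d({\ensuremath{\bm u}}_p,p)}$, which follows from $(a_k^*)^p\sum_{n=1}^k u_n\le\sum_{n=1}^k u_n (a_n^*)^p$. A numerical sanity check on randomly drawn vectors (an illustration only, not part of the proof):

```python
import random

p = 0.5
w = [n ** p - (n - 1) ** p for n in range(1, 101)]   # the weight u_p

def lorentz_norm(f, w, p):
    a = sorted((abs(x) for x in f), reverse=True)
    return sum(x ** p * wn for x, wn in zip(a, w)) ** (1 / p)

def weak_lorentz_norm(f, w, p):
    # sup_k (sum_{n<=k} w_n)^(1/p) * a_k^*, a* the decreasing rearrangement
    a = sorted((abs(x) for x in f), reverse=True)
    s, best = 0.0, 0.0
    for wk, ak in zip(w, a):
        s += wk
        best = max(best, s ** (1 / p) * ak)
    return best

random.seed(1)
samples = [[random.uniform(-1, 1) for _ in range(random.randint(1, 60))]
           for _ in range(200)]
weak_le_strong = all(
    weak_lorentz_norm(f, w, p) <= lorentz_norm(f, w, p) + 1e-9
    for f in samples
)
```

Note that for constant sequences the two quasi-norms coincide: with $f={\ensuremath{\mathbf{1}}}_A$, $|A|=k$, both equal $s_k^{1/p}$, so the inclusion constant $1$ is sharp.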
([@BCLT1985]), who had proved the uniqueness of unconditional basis up to permutation of the $2$-convexification ${\ensuremath{\mathcal{T}}}^{(2)}$ of Tsirelson’s space ${\ensuremath{\mathcal{T}}}$ (see Example \[pconvTsi\] in § \[SectStrongAbs\] for the definition). Unlike ${\ensuremath{\mathcal{T}}}^{(2)}$, which is “highly” Euclidean, the space ${\ensuremath{\mathcal{T}}}$ is anti-Euclidean. To see the latter requires the notion of dominance, introduced in [@CasKal1998]. Let ${\ensuremath{\mathcal{B}}}=({\ensuremath{\bm x}}_n)_{n=1}^\infty$ be a (semi-normalized) unconditional basis of a quasi-Banach space ${\ensuremath{{X}}}$. Given $f$, $g\in{\ensuremath{{X}}}$, we write $f\prec g$ if $m<n$ for all $m\in{\operatorname{supp}}(f)$ and $n\in{\operatorname{supp}}(g)$. The basis ${\ensuremath{\mathcal{B}}}$ is said to be *left (resp. right) dominant* if there is a constant $C$ such that whenever $(f_i)_{i=1}^N$ and $(g_i)_{i=1}^N$ are disjointly supported families with $f_i\prec g_i$ (resp. $g_i\prec f_i$) and $\Vert f_i\Vert \le \Vert g_i\Vert$ for all $i\in{\ensuremath{\mathbb{N}}}[N]$, then $\Vert \sum_{i=1}^N f_i\Vert \le C \Vert \sum_{i=1}^N g_i\Vert$. If ${\ensuremath{{X}}}$ is a Banach space with a left (resp. right) dominant unconditional basis ${\ensuremath{\mathcal{B}}}$ there is a unique $r=r({\ensuremath{\mathcal{B}}})\in[1,\infty]$ such that $\ell_r$ is finitely block representable in ${\ensuremath{{X}}}$. In the case when $r({\ensuremath{\mathcal{B}}})\in\{1,\infty\}$, ${\ensuremath{{X}}}$ is anti-Euclidean (see [@CasKal1998]\*[Proposition 5.3]{}). The canonical basis of the *Tsirelson space* ${\ensuremath{\mathcal{T}}}$ is right dominant [@CasKal1998]\*[Proposition 5.12]{}, and $r({\ensuremath{\mathcal{T}}})=1$. Moreover, by [@CasKal1998]\*[Proposition 5.5]{} and [@CasShu1989]\*[page 14]{}, the canonical basis (as well as each of its subbases) is equivalent to its square. In our language, [@CasKal1998]\*[Theorem 5.6]{} says that every left (resp.
right) dominant unconditional basis is universal for well complemented block basic sequences. Finally, since it is locally convex, ${\ensuremath{\mathcal{T}}}$ is trivially an L-convex lattice. Bourgin-Nakano spaces ---------------------- Let ${\ensuremath{\mathcal{N}}}$ be a countable set. A *Bourgin-Nakano index* is a family $(p_n)_{n \in {\ensuremath{\mathcal{N}}}}$ in $(0,\infty)$ with $p=\inf_n p_n>0$. The *Bourgin-Nakano space* $\ell(p_{n})$ is the quasi-Banach space built from the modular $$m_{(p_n)}\colon{\ensuremath{\mathbb{F}}}^{{\ensuremath{\mathcal{N}}}} \to[0,\infty), \quad (a_n)_{n \in {\ensuremath{\mathcal{N}}}} \mapsto \sum_{n \in {\ensuremath{\mathcal{N}}}} |a_n|^{p_n}.$$ Note that, by the Monotone Convergence Theorem, the closed unit ball of $\ell(p_{n})$ is the set $$B_{\ell(p_{n})}=\{ f \in{\ensuremath{\mathbb{F}}}^{{\ensuremath{\mathcal{N}}}} \colon m_{(p_n)}(f)\le 1\}.$$ If we endow $\ell(p_{n})$ with the natural ordering, it becomes a $p$-convex quasi-Banach lattice. The separable part $h(p_{n})=[{\ensuremath{\bm e}}_n \colon n \in {\ensuremath{\mathcal{N}}}]$ of $\ell(p_{n})$ is a sequence space. We have $\ell(p_{n})=h(p_{n})$ if and only if $\sup_n p_n<\infty$. These spaces were introduced by Bourgin [@Bourgin1943] in the particular case when $p_n\le 1$ for every $n\in{\ensuremath{\mathcal{N}}}$. Nakano [@Nakano1950] studied the case when $p_n\ge 1$ for every $n \in {\ensuremath{\mathcal{N}}}$, so that the arising spaces are locally convex, i.e., Banach spaces. Let us record some results on Bourgin-Nakano spaces of interest for the purposes of this paper. \[lem:dominancyBN\]Let $(p_{n})_{n \in {\ensuremath{\mathcal{N}}}}$ and $(q_m)_{m \in {\ensuremath{\mathcal{M}}}}$ be Bourgin-Nakano indexes. Let ${\ensuremath{\mathcal{B}}}_u=({\ensuremath{\bm u}}_j)_{j=1}^\infty$ and ${\ensuremath{\mathcal{B}}}_v=({\ensuremath{\bm v}}_j)_{j=1}^\infty$ be normalized block basic sequences in $\ell(p_{n})$ and $\ell(q_{m})$, respectively.
Suppose that $p_n \le q_m$ for all $(n,m)\in{\operatorname{supp}}({\ensuremath{\bm u}}_j)\times{\operatorname{supp}}({\ensuremath{\bm v}}_j)$ and all $j\in{\ensuremath{\mathbb{N}}}$. Then ${\ensuremath{\mathcal{B}}}_u$ $1$-dominates ${\ensuremath{\mathcal{B}}}_v$. Let $j\in {\ensuremath{\mathbb{N}}}$. Pick $r_j\in(0,\infty)$ such that $p_n\le r_j \le q_m$ for every $n\in A_j:={\operatorname{supp}}({\ensuremath{\bm u}}_j)$ and $m\in B_j:={\operatorname{supp}}({\ensuremath{\bm v}}_j)$. Put ${\ensuremath{\bm u}}_j=\sum_{n\in A_j} a_n\, {\ensuremath{\bm e}}_n$ and ${\ensuremath{\bm v}}_j=\sum_{m\in B_j} b_m\, {\ensuremath{\bm e}}_m$. Since $\Vert{\ensuremath{\bm u}}_j\Vert=\Vert {\ensuremath{\bm v}}_j\Vert=1$, we have $$\sum_{n\in A_j} |a_n|^{p_n}=1=\sum_{m\in B_j} |b_m|^{q_m}.$$ Let $f=\sum_{j=1}^\infty c_j \, {\ensuremath{\bm u}}_j\in B_{\ell(p_n)}$. We have $|c_j|\le 1$ for every $j\in{\ensuremath{\mathbb{N}}}$. Hence, $$\begin{aligned} m_{(q_m)}\left(\sum_{j=1}^\infty c_j \, {\ensuremath{\bm v}}_j\right)&=\sum_{j=1}^\infty \sum_{m\in B_j} |c_j|^{q_m} |b_m|^{q_m}\\ &\le \sum_{j=1}^\infty |c_j|^{r_j} \sum_{m\in B_j} |b_m|^{q_m}\\ &=\sum_{j=1}^\infty |c_j|^{r_j} \sum_{n\in A_j} |a_n|^{p_n}\\ &\le \sum_{j=1}^\infty \sum_{n\in A_j} |c_j|^{p_n} |a_n|^{p_n}\\ &\le 1.\end{aligned}$$ Therefore, $\sum_{j=1}^\infty c_j \, {\ensuremath{\bm v}}_j\in B_{\ell(q_m)}$. \[prop:BNAE\]Let $(p_n)_{n=1}^\infty$ be a non-increasing (resp. non-decreasing) Bourgin-Nakano index. Then, the unit vector system of $\ell(p_{n})$ is right (resp. left) dominant. Moreover, $r(\ell(p_{n}))=\lim_n p_n$. It is a consequence of Lemma \[lem:dominancyBN\]. Given $(p_n)_{n \in {\ensuremath{\mathcal{N}}}}$ we put $(\widehat{p_{n}})_{n\in {\ensuremath{\mathcal{N}}}}= (\max\{1,p_n\})_{n \in {\ensuremath{\mathcal{N}}}}$. \[prop:BNEnv\]Let $(p_n)_{n \in {\ensuremath{\mathcal{N}}}}$ be a Bourgin-Nakano index. Then the Banach envelope of $\ell(p_{n})$ is $\ell(\widehat{p_{n}})$ via the inclusion map.
Put ${\ensuremath{\mathcal{N}}}_{b}=\{ n \in {\ensuremath{\mathcal{N}}}\colon p_n<1\}$, ${\ensuremath{\mathcal{N}}}_{k}=\{n \in {\ensuremath{\mathcal{N}}}\colon p_n\ge 1\}$. The obvious map from ${\ensuremath{\mathbb{F}}}^{{\ensuremath{\mathcal{N}}}}$ onto ${\ensuremath{\mathbb{F}}}^{{\ensuremath{\mathcal{N}}}_b} \times {\ensuremath{\mathbb{F}}}^{{\ensuremath{\mathcal{N}}}_k}$ restricts to a lattice isomorphism from $\ell(p_{n})$ onto $\ell(p_{n})_{n\in {\ensuremath{\mathcal{N}}}_{b}} \oplus \ell(p_{n})_{n\in {\ensuremath{\mathcal{N}}}_{k}}$. Hence, by [@AlbiacAnsorena2020]\*[Lemma 2.3]{}, we can assume without loss of generality that ${\ensuremath{\mathcal{N}}}_k=\emptyset$. In this particular case, since $\sum_{n \in {\ensuremath{\mathcal{N}}}} |a_n|\le 1$ for every $(a_n)_{n \in {\ensuremath{\mathcal{N}}}} \in B_{\ell(p_{n})}$ and ${\ensuremath{\bm e}}_n\in B_{\ell(p_{n})}$ for every $n \in {\ensuremath{\mathcal{N}}}$, the closed convex hull of $B_{\ell(p_{n})}$ in $\ell_1({\ensuremath{\mathcal{N}}})$ is the closed unit ball of $\ell_1({\ensuremath{\mathcal{N}}})$. Since $\ell(\widehat{p_{n}})=\ell_1({\ensuremath{\mathcal{N}}})$ isometrically, we infer that the Banach envelope of $\ell(p_{n})$ is $\ell(\widehat{p_{n}})$ isometrically under the inclusion map. \[cor:BNAE\]Let $(p_n)_{n \in {\ensuremath{\mathcal{N}}}}$ be a Bourgin-Nakano index. Suppose that $\limsup_n p_n \le 1$. Then, the Banach envelope of $\ell(p_{n})$ is anti-Euclidean. Just combine Propositions \[prop:BNAE\] and \[prop:BNEnv\]. \[prop:BNUWC\]Let $(p_n)_{n=1}^\infty$ be a Bourgin-Nakano index. Then, the unit vector system of $\ell(p_{n})$ is universal for well complemented block basic sequences. Let ${\ensuremath{\mathcal{B}}}_y=({\ensuremath{\bm y}}_m)_{m \in {\ensuremath{\mathcal{M}}}}$ be a semi-normalized well complemented block basic sequence and let $({\ensuremath{\bm y}}_m^*)_{m \in {\ensuremath{\mathcal{M}}}}$ be a sequence of good projecting functionals.
Since $$\sum_{n \in {\ensuremath{\mathcal{N}}}} {\ensuremath{\bm e}}_n^*({\ensuremath{\bm y}}_m) \, {\ensuremath{\bm y}}_m^*({\ensuremath{\bm e}}_n)={\ensuremath{\bm y}}_m^*({\ensuremath{\bm y}}_m)=1$$ for every $m \in {\ensuremath{\mathcal{M}}}$, there are families $(A_m)_{m \in {\ensuremath{\mathcal{M}}}}$ and $(B_m)_{m \in {\ensuremath{\mathcal{M}}}}$ of subsets of ${\ensuremath{\mathcal{N}}}$ and $\pi\colon {\ensuremath{\mathcal{M}}}\to {\ensuremath{\mathcal{N}}}$ such that, if $$\lambda_m=\sum_{n\in A_m} {\ensuremath{\bm e}}_n^*({\ensuremath{\bm y}}_m) \, {\ensuremath{\bm y}}_m^*({\ensuremath{\bm e}}_n) \text{ and } \mu_m=\sum_{n\in B_m} {\ensuremath{\bm e}}_n^*({\ensuremath{\bm y}}_m) \, {\ensuremath{\bm y}}_m^*({\ensuremath{\bm e}}_n),$$ then $\min\{| \lambda_m|,|\mu_m|\}\ge 1/2$, $A_m\cup B_m={\operatorname{supp}}({\ensuremath{\bm y}}_m)$, $A_m\cap B_m=\{\pi(m)\}$, and $$\max_{n\in A_m} p_n = \min_{n\in B_m} p_n =p_{\pi(m)}$$ for every $m \in {\ensuremath{\mathcal{M}}}$. Let ${\ensuremath{\bm u}}_m=S_{A_m}({\ensuremath{\bm y}}_m)$, ${\ensuremath{\bm u}}_m^*=S_{A_m}^*({\ensuremath{\bm y}}_m^*)$, ${\ensuremath{\bm v}}_m=S_{B_m}({\ensuremath{\bm y}}_m)$, and ${\ensuremath{\bm v}}_m^*=S_{B_m}^*({\ensuremath{\bm y}}_m^*)$ for $m \in {\ensuremath{\mathcal{M}}}$. Since ${\ensuremath{\bm u}}_m^*({\ensuremath{\bm u}}_m)=\lambda_m$ and ${\ensuremath{\bm v}}_m^*({\ensuremath{\bm v}}_m)=\mu_m$ for every $m \in {\ensuremath{\mathcal{M}}}$, applying [@AlbiacAnsorena2020]\*[Lemma 3.1]{} yields that both ${\ensuremath{\mathcal{B}}}_u=({\ensuremath{\bm u}}_m)_{m \in {\ensuremath{\mathcal{M}}}}$ and ${\ensuremath{\mathcal{B}}}_v=({\ensuremath{\bm v}}_m)_{m \in {\ensuremath{\mathcal{M}}}}$ are well complemented block basic sequences equivalent to ${\ensuremath{\mathcal{B}}}_y$.
By Lemma \[lem:dominancyBN\], ${\ensuremath{\mathcal{B}}}_u$ dominates ${\ensuremath{\mathcal{B}}}:=({\ensuremath{\bm e}}_{\pi(m)})_{m \in {\ensuremath{\mathcal{M}}}}$ and, in turn, ${\ensuremath{\mathcal{B}}}$ dominates ${\ensuremath{\mathcal{B}}}_v$. We infer that ${\ensuremath{\mathcal{B}}}_y$ and ${\ensuremath{\mathcal{B}}}$ are equivalent. \[prop:BNSQ\]Let $(p_n)_{n \in {\ensuremath{\mathcal{N}}}}$ be a Bourgin-Nakano index. The unit vector system of $\ell(p_{n})$ is equivalent to its square if and only if there is a partition $({\ensuremath{\mathcal{N}}}_1,{\ensuremath{\mathcal{N}}}_2)$ of ${\ensuremath{\mathcal{N}}}$ and bijections $\pi_i\colon {\ensuremath{\mathcal{N}}}\to {\ensuremath{\mathcal{N}}}_i$, $i=1$, $2$, such that, for some $0<c<1$, $$\sum_{n \in {\ensuremath{\mathcal{N}}}} c^{\frac{ p_n q_{i,n} }{ |p_n - q_{i,n}| }}<\infty, \quad i=1,2,$$ where $q_{i,n} = p_{\pi_i(n)}$. This result follows from [@Nakano1951]\*[Theorem 1]{}, which characterizes when two (a priori different) Bourgin-Nakano spaces are identical. We remark that, in certain cases, we can give a simpler characterization of the Nakano spaces that are lattice isomorphic to their squares. For instance, if $(p_n)_{n=1}^\infty$ is a monotone sequence, then $\ell(p_{n})$ is lattice isomorphic to its square if and only if $$\left| \frac{1}{p_n} - \frac{1}{p_{2n}} \right|\lesssim \frac{1}{1+\log(n)}, \quad n\in{\ensuremath{\mathbb{N}}}$$ (see [@CasKal1998]\*[Proof of Theorem 5.8]{}). \[Nakanouncthm\] Suppose that the Bourgin-Nakano index $(p_n)_{n \in {\ensuremath{\mathcal{N}}}}$ satisfies $\limsup_n p_n \le 1$. Suppose also that there exist a partition $({\ensuremath{\mathcal{N}}}_1,{\ensuremath{\mathcal{N}}}_2)$ of ${\ensuremath{\mathcal{N}}}$, and bijections $\pi_i\colon {\ensuremath{\mathcal{N}}}\to {\ensuremath{\mathcal{N}}}_i$, $i=1$, $2$, so that $$\sum_{n \in {\ensuremath{\mathcal{N}}}} c^{1/ | { p_n } -{ p_{\pi_i(n)}}|}<\infty, \quad i=1,2,$$ for some $0<c<1$.
Then $\ell(p_{n})$ has a unique unconditional basis up to permutation. Just combine Corollary \[cor:BNAE\], Proposition \[prop:BNUWC\], Proposition \[prop:BNSQ\] and Theorem \[thm:keytechniquebis\]. An important class of anti-Euclidean spaces arises from a special type of bases called strongly absolute. We tackle this case separately in the next section. Applicability to spaces with strongly absolute bases {#SectStrongAbs} ==================================================== In the category of bases one could say that strongly absolute bases are “purely nonlocally convex” bases, in the sense that if a quasi-Banach space $X$ has a strongly absolute basis, then its unit ball is far from being a convex set and so $X$ is far from being a Banach space. The term strongly absolute for a basis was coined in [@KLW1990]. Here we give a slightly different, but equivalent, definition. We say that a (semi-normalized) unconditional basis ${\ensuremath{\mathcal{B}}}=({\ensuremath{\bm x}}_n)_{n \in {\ensuremath{\mathcal{N}}}}$ of a quasi-Banach space ${\ensuremath{{X}}}$ is *strongly absolute* if for every $\varepsilon>0$ there is a constant $0<C(\varepsilon)$ such that $$\label{eq:sb} \sum_{n \in {\ensuremath{\mathcal{N}}}} |{\ensuremath{\bm x}}_n^*(f)| \le \max\left\{ C(\varepsilon) \sup_{n \in {\ensuremath{\mathcal{N}}}} |{\ensuremath{\bm x}}_n^*(f)| , \varepsilon \Vert f \Vert \right\}, \quad f\in{\ensuremath{{X}}}.$$ In the following lemma we record a key property of strongly absolute bases. The proof is straightforward and so we omit it. \[lem:k2three\]Let ${\ensuremath{\mathcal{B}}}=({\ensuremath{\bm x}}_n)_{n \in {\ensuremath{\mathcal{N}}}}$ be a strongly absolute unconditional basis of a quasi-Banach space ${\ensuremath{{X}}}$. Suppose that $V\subseteq{\ensuremath{{X}}}$ is such that $\inf_{f\in V}{\Vert f \Vert^{-1}} \Vert {\ensuremath{\mathcal{F}}}(f)\Vert_1 >0$. Then, $\inf_{f\in V}{\Vert f \Vert^{-1}} \Vert {\ensuremath{\mathcal{F}}}(f)\Vert_\infty>0$. 
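The defining inequality can be checked by hand for the canonical basis of $\ell_{1/2}$: since $\sum_n|a_n| \le (\sup_n|a_n|)^{1/2}\,\Vert f\Vert^{1/2}$ with $\Vert f\Vert = (\sum_n|a_n|^{1/2})^2$, and a geometric mean never exceeds the larger of its factors, the inequality holds with $C(\varepsilon)=1/\varepsilon$. A randomized spot check of this instance (ours, purely illustrative):

```python
import random

def l_half_norm(a):
    """The l_{1/2} quasi-norm: (sum |a_n|^{1/2})^2."""
    return sum(abs(x) ** 0.5 for x in a) ** 2

random.seed(0)
for _ in range(1000):
    f = [random.uniform(-1.0, 1.0) for _ in range(random.randint(1, 30))]
    l1 = sum(abs(x) for x in f)
    sup = max(abs(x) for x in f)
    # l1 <= sqrt(sup * ||f||) = sqrt((sup/eps) * (eps*||f||))
    #    <= max(sup/eps, eps*||f||), i.e. the inequality with C(eps) = 1/eps
    for eps in (0.1, 0.5, 1.0):
        assert l1 <= max(sup / eps, eps * l_half_norm(f)) + 1e-9
```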
\[prop:K2SA\]Let ${\ensuremath{\mathcal{B}}}$ be a strongly absolute unconditional basis of a quasi-Banach space ${\ensuremath{{X}}}$. Then: - The Banach envelope of ${\ensuremath{{X}}}$ is $\ell_1$ via the coefficient transform. - ${\ensuremath{\mathcal{B}}}$ has the peaking property. It is clear that ${\ensuremath{\mathcal{B}}}$ dominates the unit vector system of $\ell_1$, so that (i) follows from Lemma \[lem:BEl1\]. Let ${\ensuremath{\mathcal{B}}}_u=({\ensuremath{\bm u}}_m)_{m \in {\ensuremath{\mathcal{M}}}}$ be a semi-normalized well complemented block basic sequence. By Lemma \[lem:k2two\] we may assume that $({\ensuremath{\bm u}}_m^*)_{m \in {\ensuremath{\mathcal{M}}}}=({\ensuremath{\mathbf{1}}}^*_{{\operatorname{supp}}({\ensuremath{\bm u}}_m)})_{m \in {\ensuremath{\mathcal{M}}}}$ is a sequence of good projecting functionals for ${\ensuremath{\mathcal{B}}}_u$. Using (i) and [@AlbiacAnsorena2020]\*[Lemma 2.1]{} we deduce that the sequence $({\ensuremath{\mathcal{F}}}({\ensuremath{\bm u}}_m))_{m=1}^\infty$ is semi-normalized in $\ell_1$. Therefore, $$\inf_m \Vert {\ensuremath{\bm u}}_m\Vert^{-1} \Vert {\ensuremath{\mathcal{F}}}({\ensuremath{\bm u}}_m)\Vert_1>0.$$ Lemma \[lem:k2three\] yields $$\begin{aligned} \inf_{m \in {\ensuremath{\mathcal{M}}}} \sup_{n \in {\ensuremath{\mathcal{N}}}} |{\ensuremath{\bm u}}_m^*({\ensuremath{\bm x}}_n)| \, |{\ensuremath{\bm x}}_n^*({\ensuremath{\bm u}}_m)| &= \inf_{m \in {\ensuremath{\mathcal{M}}}} \Vert {\ensuremath{\mathcal{F}}}({\ensuremath{\bm u}}_m)\Vert_\infty\\ &\ge \inf_{m \in {\ensuremath{\mathcal{M}}}} \Vert {\ensuremath{\bm u}}_m \Vert \inf_{m \in {\ensuremath{\mathcal{M}}}} \frac{\Vert {\ensuremath{\mathcal{F}}}({\ensuremath{\bm u}}_m)\Vert_\infty}{\Vert {\ensuremath{\bm u}}_m \Vert} >0.\qedhere\end{aligned}$$ Combining Proposition \[prop:K2SA\] with Theorem \[thm:keytechniquebis\] immediately yields the following general result. 
\[weakWoj\] Let ${\ensuremath{{X}}}$ be a quasi-Banach space with a strongly absolute unconditional basis ${\ensuremath{\mathcal{B}}}$ which induces an L-convex structure on $X$. If ${\ensuremath{\mathcal{B}}}$ is equivalent to its square, then ${\ensuremath{{X}}}$ has a unique unconditional basis up to permutation. Wojtaszczyk obtained in [@Woj1997] the uniqueness of unconditional basis of a quasi-Banach space ${\ensuremath{{X}}}$ under the same hypotheses as in Corollary \[weakWoj\], but replacing ${\ensuremath{\mathcal{B}}}^2\sim {\ensuremath{\mathcal{B}}}$ with the weaker assumption that ${\ensuremath{{X}}}^s\approx{\ensuremath{{X}}}$ for some $s\ge 2$. For the sake of completeness, we next show how to combine the techniques from [@Woj1997] to pass from the condition “${\ensuremath{{X}}}^s\approx{\ensuremath{{X}}}$ for some $s\ge 2$” to “${\ensuremath{\mathcal{B}}}^2\sim {\ensuremath{\mathcal{B}}}$". \[thm:WoUltimate\] Let $X$ be a quasi-Banach space with a strongly absolute unconditional basis ${\ensuremath{\mathcal{B}}}$ that induces an $L$-convex lattice structure on $X$. If ${\ensuremath{{X}}}^s\approx{\ensuremath{{X}}}$ for some $s\ge 2$, then ${\ensuremath{\mathcal{B}}}^2\sim {\ensuremath{\mathcal{B}}}$; in particular, ${\ensuremath{{X}}}^2\approx{\ensuremath{{X}}}$. Put ${\ensuremath{\mathcal{B}}}^s=({\ensuremath{\bm y}}_m)_{m \in {\ensuremath{\mathcal{M}}}}$. We have that ${\ensuremath{\mathcal{B}}}^{s^2}=({\ensuremath{\bm y}}_{i,m})_{(i,m)\in{\ensuremath{\mathbb{N}}}[s]\times {\ensuremath{\mathcal{M}}}}$ is permutatively equivalent to a basis of ${\ensuremath{{X}}}\approx{\ensuremath{{X}}}^{s^2}$. Hence, by [@Woj1997]\*[Proposition 2.10]{}, there is $\alpha\colon {\ensuremath{\mathcal{M}}}\to {\ensuremath{\mathbb{N}}}[s]$ such that ${\ensuremath{\mathcal{B}}}'=({\ensuremath{\bm y}}_{\alpha(m),m})_{m \in {\ensuremath{\mathcal{M}}}}$ is permutatively equivalent to a subbasis of ${\ensuremath{\mathcal{B}}}$. 
By Lemma \[lem:1\], ${\ensuremath{\mathcal{B}}}^s$ is equivalent to ${\ensuremath{\mathcal{B}}}'$. Since ${\ensuremath{\mathcal{B}}}$ is permutatively equivalent to a subbasis of ${\ensuremath{\mathcal{B}}}^2$ and ${\ensuremath{\mathcal{B}}}^2$ is permutatively equivalent to a subbasis of ${\ensuremath{\mathcal{B}}}^s$, applying Theorem \[thm:SBUB\] yields ${\ensuremath{\mathcal{B}}}^s\sim{\ensuremath{\mathcal{B}}}^2\sim{\ensuremath{\mathcal{B}}}$. As we said before, a strongly absolute unconditional basis can be thought of as a basis that dominates the canonical basis of $\ell_1$ but is far from it. This intuition is substantiated by the following elementary result, whose proof we omit. \[lem:SADom\]Let ${\ensuremath{\mathcal{B}}}_x$ and ${\ensuremath{\mathcal{B}}}_y$ be unconditional bases of quasi-Banach spaces ${\ensuremath{{X}}}$ and ${\ensuremath{{Y}}}$, respectively. Suppose that ${\ensuremath{\mathcal{B}}}_x$ dominates ${\ensuremath{\mathcal{B}}}_y$ and that ${\ensuremath{\mathcal{B}}}_y$ is strongly absolute. Then ${\ensuremath{\mathcal{B}}}_x$ is strongly absolute. To complement the theoretical contents of this section, we shall introduce a quantitative tool from approximation theory that measures how far an unconditional basis is from the canonical $\ell_{1}$-basis. Given an unconditional basis ${\ensuremath{\mathcal{B}}}$ of a quasi-Banach space $X$, its *lower democracy function* is defined as $$\varphi_m^l[{\ensuremath{\mathcal{B}}}]=\inf_{|A|\ge m} \Vert {\ensuremath{\mathbf{1}}}_A[{\ensuremath{\mathcal{B}}}]\Vert, \quad m\in{\ensuremath{\mathbb{N}}}.$$ Note that if ${\ensuremath{\mathcal{B}}}$ is strongly absolute then $$\lim\limits_{m\to \infty} \frac{1}{m} \varphi_m^l[{\ensuremath{\mathcal{B}}}]=\infty.$$ The following result establishes that, conversely, if $(\varphi_m^l[{\ensuremath{\mathcal{B}}}])_{m=1}^\infty$ is sufficiently far away from the sequence $(m)_{m=1}^\infty$, then the basis ${\ensuremath{\mathcal{B}}}$ is strongly absolute. 
\[prop:SADem\]Let ${\ensuremath{\mathcal{B}}}=({\ensuremath{\bm x}}_n)_{n=1}^\infty$ be an unconditional basis of a quasi-Banach space ${\ensuremath{{X}}}$. Suppose that there exists $0<p<1$ such that for some constant $0<C$ we have $$m^{1/p} \le C \varphi_m^l[{\ensuremath{\mathcal{B}}}],\quad m\in{\ensuremath{\mathbb{N}}}.$$ Then ${\ensuremath{\mathcal{B}}}$ is strongly absolute. We may regard $X$ as a sequence space whose basis ${\ensuremath{\mathcal{B}}}$ is just the unit vector system. Pick $r\in(p,1)$. By [@AADK2016]\*[Lemma 6.1]{}(a), $${\ensuremath{{X}}}\subseteq \ell_{p,\infty} \subseteq \ell_r$$ continuously. Since the canonical basis of $\ell_r$ is strongly absolute (see [@Leranoz1992]\*[Lemma 2.2]{}), by Lemma \[lem:SADom\] the proof is complete. We will use Proposition \[prop:SADem\] to readily deduce that the following important examples of bases, which are permutatively equivalent to their squares, are strongly absolute. Given $0<p_i<1$ for $i\in{\ensuremath{\mathbb{N}}}[n]$, the canonical basis of the mixed norm space $\ell_{p_1}(\cdots\ell_{p_i}(\cdots(\ell_{p_n})))$ is unconditional, strongly absolute, and induces an L-convex lattice structure on the whole space. Let $d\in{\ensuremath{\mathbb{N}}}$. The canonical basis ${\ensuremath{\mathcal{B}}}$ of the Hardy spaces $H_p({\ensuremath{\mathbb{T}}}^d)$, $0<p<1$ (see [@KLW1990]) satisfies $$m^{1/p} \approx \varphi_m^l[{\ensuremath{\mathcal{B}}}, H_p({\ensuremath{\mathbb{T}}}^d)],\quad m\in{\ensuremath{\mathbb{N}}}.$$ Hence, ${\ensuremath{\mathcal{B}}}$ is strongly absolute. 
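The hypothesis of Proposition \[prop:SADem\] is immediate for $\ell_p$ itself: for any set $A$ with $|A|=m$ one has $\Vert {\ensuremath{\mathbf{1}}}_A\Vert_{\ell_p} = m^{1/p}$, so $\varphi_m^l = m^{1/p}$ exactly, and $\varphi_m^l/m = m^{1/p-1}$ is unbounded precisely when $p<1$. A minimal numerical confirmation of this elementary computation (with $p=1/2$):

```python
def lp_quasinorm(a, p):
    """The l_p quasi-norm (sum |a_n|^p)^(1/p); a quasi-norm for 0 < p < 1."""
    return sum(abs(x) ** p for x in a) ** (1.0 / p)

p = 0.5
for m in range(1, 25):
    # the indicator 1_A with |A| = m has quasi-norm m^{1/p} (= m^2 here)
    assert abs(lp_quasinorm([1.0] * m, p) - m ** (1.0 / p)) < 1e-8
```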
\[TribelLizEx\] Given a dimension $d\in{\ensuremath{\mathbb{N}}}$, let $\Theta_d=\{0,1\}^d \setminus\{ 0\}$ and consider the set of indices $$\Lambda_d= {\ensuremath{\mathbb{Z}}}\times {\ensuremath{\mathbb{Z}}}^d \times \Theta_d.$$ The homogeneous Triebel-Lizorkin sequence space $\ring{{\ensuremath{{\bm{t}}}}}_{p,q}^{s,d}$ of indices $p\in (0,\infty)$ and $q\in (0,\infty]$ and smoothness $s\in{\ensuremath{\mathbb{R}}}$ consists of all scalar sequences $f=(a_\lambda)_{\lambda\in\Lambda_d}$ for which $$\Vert f \Vert_{{\ensuremath{{\bm{t}}}}_{p,q}^{s}}= \left\Vert \left(\sum_{j=-\infty}^\infty \sum_{\delta\in\Theta_d} \sum_{n\in{\ensuremath{\mathbb{Z}}}^d} 2^{jq(s+d/2)} |a_{j,n,\delta}|^q \chi_{Q(j,n)}\right)^{1/q}\right\Vert_p<\infty,$$ where $Q(j,n)$ denotes the cube of side length $2^{-j}$ whose lower vertex is $2^{-j}n$. If we restrict ourselves to non-negative “levels” $j$ and we add $\ell_p$ as a component, we obtain the inhomogeneous Triebel-Lizorkin sequence spaces. To be precise, set $$\Lambda_d^+=\{(j,n,\delta)\in\Lambda_d \colon j\ge 0\},$$ and define $${\ensuremath{{\bm{t}}}}_{p,q}^{s,d} =\ell_p({\ensuremath{\mathbb{Z}}}^d) \oplus \{ f=(a_\lambda)_{\lambda\in\Lambda_d^+} \colon \Vert f \Vert_{{\ensuremath{{\bm{t}}}}_{p,q}^{s}}<\infty \}.$$ It is known that the wavelet transforms associated to certain wavelet bases normalized in the $L_{2}$-norm are isomorphisms from $F_{p,q}^s({\ensuremath{\mathbb{R}}}^d)$ (respectively, $\ring{F}_{p,q}^s({\ensuremath{\mathbb{R}}}^d)$) onto ${\ensuremath{{\bm{t}}}}_{p,q}^{s,d}$ (respectively, $\ring{{\ensuremath{{\bm{t}}}}}_{p,q}^{s,d}$). See [@FrJaWe1991]\*[Theorem 7.20]{} for the homogeneous case and [@TriebelIII2006]\*[Theorem 3.5]{} for the inhomogeneous case. 
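To make the sequence norm concrete, the following sketch evaluates $\Vert f\Vert_{{\ensuremath{{\bm{t}}}}_{p,q}^{s}}$ for a finitely supported sequence in dimension $d=1$ (so $\Theta_1=\{1\}$ and the index $\delta$ is trivial) and $q<\infty$: the inner sum is a piecewise-constant function of $x$, which we integrate exactly over its breakpoints. The function name `tl_norm` and the dictionary encoding of the coefficients are our own illustrative choices.

```python
def tl_norm(coeffs, p, q, s):
    """Triebel-Lizorkin sequence norm in d = 1 for a finitely supported
    family {(j, n): a_{j,n}}, with Q(j, n) = [2^-j n, 2^-j (n+1))."""
    # breakpoints of the piecewise-constant integrand
    pts = sorted({2.0 ** -j * n for j, n in coeffs} |
                 {2.0 ** -j * (n + 1) for j, n in coeffs})
    total = 0.0
    for x0, x1 in zip(pts, pts[1:]):
        xm = 0.5 * (x0 + x1)                      # sample point of the piece
        inner = sum(2.0 ** (j * q * (s + 0.5)) * abs(a) ** q
                    for (j, n), a in coeffs.items()
                    if 2.0 ** -j * n <= xm < 2.0 ** -j * (n + 1))
        total += inner ** (p / q) * (x1 - x0)     # integrate the piece
    return total ** (1.0 / p)
```

A single coefficient $a_{j,n}=1$ gives $\Vert f\Vert = 2^{j(s+1/2)}\,2^{-j/p}$, and $m$ disjoint unit cubes at level $j=0$ give $m^{1/p}$, consistent with the democracy estimate quoted below.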
Thus, Triebel-Lizorkin spaces are isomorphic to the corresponding sequence spaces, and the aforementioned wavelet bases (regarded as distributions on Triebel-Lizorkin spaces) are equivalent to the unit vector systems of the corresponding sequence spaces. A technique similar to the one used by Temlyakov in [@Temlyakov1998] to prove that the Haar system is a democratic basis for $L_p$ when $1<p<\infty$ allows us to prove that the unit vector system ${\ensuremath{\mathcal{E}}}$ of $\ring{{\ensuremath{{\bm{t}}}}}_{p,q}^{s,d}$ satisfies $$m^{1/p} \approx \varphi_m^l[{\ensuremath{\mathcal{E}}}, \ring{{\ensuremath{{\bm{t}}}}}_{p,q}^{s,d}],\quad m\in{\ensuremath{\mathbb{N}}}.$$ Consequently, if $p<1$, the unit vector system of both $\ring{{\ensuremath{{\bm{t}}}}}_{p,q}^{s,d}$ and ${\ensuremath{{\bm{t}}}}_{p,q}^{s,d}$ is a strongly absolute unconditional basis. \[pconvTsi\] Given $0<p<\infty$, the *$p$-convexified Tsirelson’s space*, denoted ${{\ensuremath{\mathcal{T}}}}^{(p)}$, is obtained from ${\ensuremath{\mathcal{T}}}$ by putting $$\label{Tsirelsonnorm} \Vert x\Vert_{{{\ensuremath{\mathcal{T}}}}^{(p)}} = \Vert (|a_{n}|^{p})_{n=1}^{\infty}\Vert_{{\ensuremath{\mathcal{T}}}}^{1/p}$$ for those sequences of real numbers $x= (a_{n})_{n=1}^{\infty}$ such that $(|a_{n}|^{p})_{n=1}^{\infty}\in {{\ensuremath{\mathcal{T}}}}$. Equation  defines a norm for $1\le p$ and a $p$-norm when $0<p<1$. Obviously, the space $({{\ensuremath{\mathcal{T}}}}^{(1)}, \Vert \cdot\Vert_{{{\ensuremath{\mathcal{T}}}}^{(1)}})$ is simply $({{\ensuremath{\mathcal{T}}}}, \Vert \cdot\Vert_{{\ensuremath{\mathcal{T}}}})$. 
For $0<p<\infty$, the canonical basis ${\ensuremath{\mathcal{E}}}$ of ${{\ensuremath{\mathcal{T}}}}^{(p)}$ is $1$-unconditional, permutatively equivalent to its square, and satisfies $$m^{1/p} \approx \varphi_m^l[{\ensuremath{\mathcal{E}}}, {{\ensuremath{\mathcal{T}}}}^{(p)}],\quad m\in{\ensuremath{\mathbb{N}}}.$$ Hence, in particular, if $0<p<1$ then ${\ensuremath{\mathcal{E}}}$ is strongly absolute. Uniqueness of unconditional basis of sums of anti-Euclidean spaces {#DirectSumsUnc} ================================================================== Our last application of Theorem \[thm:keytechniquebis\] establishes that the uniqueness of unconditional bases up to permutation of anti-Euclidean quasi-Banach spaces is preserved by *finite* direct sums. \[thm:DirecSums\] Let $({\ensuremath{{X}}}_i)_{i\in F}$ be a finite family of quasi-Banach spaces whose Banach envelopes are anti-Euclidean. Suppose that for each $i\in F$, ${\ensuremath{\mathcal{B}}}_i$ is an unconditional basis of ${\ensuremath{{X}}}_{i}$ such that 1. The lattice structure induced by ${\ensuremath{\mathcal{B}}}_{i}$ in ${\ensuremath{{X}}}_{i}$ is L-convex; 2. ${\ensuremath{\mathcal{B}}}_{i}$ is universal for well complemented block basic sequences; and 3. ${\ensuremath{\mathcal{B}}}_{i}\sim {\ensuremath{\mathcal{B}}}_{i}^2$. Then the space $\bigoplus_{i\in F} {\ensuremath{{X}}}_i$ has a unique unconditional basis up to permutation. Combining [@CasKal1999]\*[Proposition 2.4]{} and [@AlbiacAnsorena2020]\*[Lemma 2.3]{} we see that the Banach envelope of ${\ensuremath{{X}}}=\bigoplus_{i\in F} {\ensuremath{{X}}}_i$ is anti-Euclidean. It is clear that the basis ${\ensuremath{\mathcal{B}}}=\bigoplus_{i\in F} {\ensuremath{\mathcal{B}}}_i$ is L-convex and permutatively equivalent to its square. By [@AlbiacAnsorena2020]\*[Proposition 3.4]{}, ${\ensuremath{\mathcal{B}}}$ is universal for well complemented block basic sequences. So, the result follows from Theorem \[thm:keytechniquebis\]. 
Merging the results from Sections \[Sec:ApplAnti-Euclidean\] and \[SectStrongAbs\] with Theorem \[thm:DirecSums\] provides new additions to the list of spaces with a unique unconditional basis up to a permutation. Let $F$ be a finite set of indices. Suppose that for each $i\in F$, $X_{i}$ is one of the following spaces: - $\ell_{\varphi}$, where $\varphi$ verifies and , in particular $\ell_{p}$ for $p\le 1$; - $d({\ensuremath{\bm w}}, p)$, where ${\ensuremath{\bm w}}$ verifies ; - ${\ensuremath{\mathcal{T}}}$; - $\ell(p_{n})$, where $(p_{n})_{n=1}^{\infty}$ verifies the hypothesis of Theorem \[Nakanouncthm\]; - $\ell_{p_1}(\cdots\ell_{p_i}(\cdots(\ell_{p_n})))$, where $0<p_i<1$ for $i\in{\ensuremath{\mathbb{N}}}[n]$; - $H_p({\ensuremath{\mathbb{T}}}^{d})$ for $d\in {\ensuremath{\mathbb{N}}}$ and $0<p<1$; - $\ring{{\ensuremath{{\bm{t}}}}}_{p,q}^{s,d}$ or ${\ensuremath{{\bm{t}}}}_{p,q}^{s,d}$ as in Example \[TribelLizEx\]; - ${{\ensuremath{\mathcal{T}}}}^{(p)}$ for $0<p<1$. Then ${\ensuremath{{X}}}=\bigoplus_{i\in F} {\ensuremath{{X}}}_i$ has a unique unconditional basis up to permutation. [^1]: Both authors supported by the Spanish Ministry for Science, Innovation, and Universities, Grant PGC2018-095366-B-I00 for *Análisis Vectorial, Multilineal y Approximación*. The first-named author also acknowledges the support from the Spanish Ministry for Economy and Competitiveness, Grant MTM2016-76808-P for *Operators, lattices, and structure of Banach spaces*.
--- abstract: 'Within the framework of a Beyond Standard Model (BSM) with a local $SU(3)$ family symmetry, we report an updated fit of parameters which account for the known spectrum of quark and charged lepton masses and the quark mixing in a $4\times 4$ non-unitary $V_{CKM}$. In this scenario, the ordinary heavy fermions, the top and bottom quarks and the tau lepton, become massive at tree level from Dirac See-saw mechanisms implemented by the introduction of a new set of $SU(2)_L$ weak singlet vector-like fermions, $U,D,E,N$, with $N$ a sterile neutrino. The $N_{L,R}$ sterile neutrinos allow the implementation of an $8\times 8$ general See-saw Majorana neutrino mass matrix with four massless eigenvalues at tree level. Hence, the light fermions, including neutrinos, obtain masses from loop radiative corrections mediated by the massive $SU(3)$ gauge bosons. The $SU(3)$ family symmetry is broken spontaneously in two stages, whose hierarchy of scales yields an approximate $SU(2)$ global symmetry associated with the $Z_1, Y_1^\pm$ gauge boson masses of the order of 2 TeV. A global fit of parameters to include neutrino masses and lepton mixing is in progress.' author: - 'Albino Hernández-Galeana' title: 'Charged Fermion Masses and Mixing from a $SU(3)$ Family Symmetry Model' --- Introduction ============== The origin of the hierarchy of fermion masses and mixing is one of the most important open problems in particle physics. Any attempt to account for this hierarchy introduces a mass generation mechanism which distinguishes among the different Standard Model (SM) quarks and leptons. Since the discovery of the scalar Higgs boson in 2012, the LHC has not found conclusive evidence of new physics. However, there are theoretical motivations to look for new particles in order to answer some open questions, such as neutrino oscillations, dark matter, the stability of the Higgs mass against radiative corrections, etc. 
In this report, we address the problem of charged fermion masses and quark mixing within the framework of an extension of the SM introduced by the author in [@albinosu32004]. This BSM proposal includes a vector gauged $SU(3)$ family symmetry[^1] commuting with the SM group and introduces a hierarchical mass generation mechanism in which the light fermions obtain masses through loop radiative corrections, mediated by the massive bosons associated with the spontaneously broken $SU(3)$ family symmetry, while the masses of the top and bottom quarks and that of the tau lepton are generated at tree level from “Dirac See-saw”[@SU3MKhlopov] mechanisms through the introduction of a new set of $SU(2)_L$ weak singlet vector-like fermions $U,D,E$ and $N$, which do not couple to the $W$ boson, such that the mixing of the $U$ and $D$ vector-like quarks with the SM quarks gives rise to an extended $4\times4$ non-unitary CKM quark mixing matrix [@vectorlikepapers]. Model with $SU(3)$ flavor symmetry ================================== Fermion content --------------- Before “Electroweak Symmetry Breaking” (EWSB) all ordinary SM fermions remain massless, and the global symmetry in this limit, including R-handed neutrinos, is: $$\begin{aligned} SU(3)_{q_L}\otimes SU(3)_{u_R}\otimes SU(3)_{d_R}\otimes SU(3)_{l_L}\otimes SU(3)_{\nu_R}\otimes SU(3)_{e_R} \nonumber\\ \nonumber\\ \supset SU(3)_{q_L+u_R+d_R+l_L+e_R+\nu_R} \equiv SU(3) \label{su3symmetry}\end{aligned}$$ We define the gauge symmetry group $$G\equiv SU(3) \otimes G_{SM} \label{gaugegroup}$$ where $SU(3)$ is the gauged family symmetry among families, eq.(\[su3symmetry\]), and $G_{SM}= SU(3)_C \otimes SU(2)_L \otimes U(1)_Y $ is the “Standard Model” gauge group, with $g_H$, $g_s$, $g$ and $g^\prime$ the corresponding coupling constants. 
The fermion content assigns the ordinary quarks and leptons under $G$ as: $q_{iL}^o=\begin{pmatrix} u_{iL}^o \\ d_{iL}^o \end{pmatrix} \;,\; l_{iL}^o=\begin{pmatrix} \nu_{iL}^o \\ e_{iL}^o \end{pmatrix} \; , \; Q = T_{3L} + \frac{1}{2} Y$ $$\Psi_q^o = ( 3 , 3 , 2 , \frac{1}{3} )_L=\begin{pmatrix} q_{1L}^o \\ q_{2L}^o \\ q_{3L}^o \end{pmatrix} \quad , \quad \Psi_l^o= ( 3 , 1 , 2 , -1 )_L=\begin{pmatrix} l_{1L}^o \\ l_{2L}^o \\ l_{3L}^o \end{pmatrix}$$ $$\Psi_u^o = ( 3 , 3, 1 , \frac{4}{3} )_R=\begin{pmatrix} u_R^o \\ c_R^o \\ t_R^o \end{pmatrix} \quad , \quad \Psi_d^o =(3, 3 , 1 , -\frac{2}{3} )_R=\begin{pmatrix} d_R^o \\ s_R^o \\ b_R^o \end{pmatrix}$$ $$\Psi_e^o = (3 , 1 , 1,-2)_R=\begin{pmatrix} e_R^o \\ \mu_R^o \\ \tau_R^o \end{pmatrix} \: ,$$ where the last entry corresponds to the hypercharge $Y$. The model also includes two types of extra $SU(2)_L$ weak singlet fermions: [**Right Handed Neutrinos:**]{} $ \Psi_{\nu_R}^o = ( 3 , 1 , 1 , 0 )_R= \begin{pmatrix} \nu_{e_R} \\ \nu_{\mu_R} \\ \nu_{\tau_R} \end{pmatrix} $ , and the vector-like fermions: [**Sterile Neutrinos:** ]{} $\quad N_L^o, N_R^o = ( 1 , 1 , 1 , 0 ) $ , [**The Vector Like quarks:**]{} $$U_L^o, U_R^o = ( 1 , 3 , 1 , \frac{4}{3} ) \quad , \quad D_L^o, D_R^o = ( 1 , 3 , 1 ,- \frac{2}{3} ) \label{vectorquarks}$$ and [**The Vector Like electron:**]{} $\quad E_L^o, E_R^o = ( 1 , 1 , 1 , -2 ) $ .\ The transformation of these vector-like fermions allows the gauge invariant mass terms $$M_U \:\bar{U}_L^o \:U_R^o \,+\, M_D \:\bar{D}_L^o \:D_R^o \,+\, M_E \:\bar{E}_L^o \:E_R^o + h.c. \;,$$ and $$m_D \,\bar{N}_L^o \,N_R^o \,+\, m_L \,\bar{N}_L^o\, (N_L^o)^c \,+\, m_R \,\bar{N}_R^o\, (N_R^o)^c \,+\, h.c.$$ The above fermion content makes the model anomaly free. 
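The cancellation of the cubic $[SU(3)]^3$ family anomaly can be verified by simple bookkeeping: every left-handed family triplet contributes $+1$ per color and weak-isospin component, every right-handed one $-1$, and family singlets (including the vector-like fermions, which come in left-right pairs) contribute nothing. A counting sketch of this tally (the data layout is our own):

```python
# (name, chirality, SU(3)-family rep dimension, SU(3)_C x SU(2)_L multiplicity)
fields = [
    ("Psi_q",  "L", 3, 3 * 2),  # quark doublets: 3 colors x 2 isospin states
    ("Psi_l",  "L", 3, 1 * 2),  # lepton doublets
    ("Psi_u",  "R", 3, 3),      # up-type weak singlets: 3 colors
    ("Psi_d",  "R", 3, 3),      # down-type weak singlets
    ("Psi_e",  "R", 3, 1),      # charged-lepton weak singlets
    ("Psi_nu", "R", 3, 1),      # right-handed neutrinos
]

def family_anomaly(fs):
    """Cubic [SU(3)]^3 family anomaly: +1 per left-handed fundamental
    triplet, -1 per right-handed one; family singlets contribute 0."""
    return sum((+1 if ch == "L" else -1) * mult
               for _, ch, dim, mult in fs if dim == 3)
```

Removing the right-handed neutrinos from the list leaves a nonzero residue, in line with the remark below that their introduction is required to cancel anomalies.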
After the definition of the gauge symmetry group and the assignment of the ordinary fermions in the usual form under the standard model group and in the fundamental $3$-representation under the $SU(3)$ family symmetry, the introduction of the right-handed neutrinos is required to cancel anomalies [@T.Yanagida1979]. The $SU(2)_L$ weak singlet vector-like fermions have been introduced to give masses at tree level only to the third family of known fermions via Dirac See-saw mechanisms. These vector-like fermions, together with the radiative corrections, play a crucial role in implementing a hierarchical spectrum for ordinary quark and charged lepton masses. $SU(3)$ family symmetry breaking ================================ To implement a hierarchical spectrum for charged fermion masses, and simultaneously to achieve the SSB of $SU(3)$, we introduce the flavon scalar fields: $\eta_i,\;i=2,3$, $$\eta_i=(3 , 1 , 1 , 0)=\begin{pmatrix} \eta_{i1}^o\\ \eta_{i2}^o\\ \eta_{i3}^o \end{pmatrix} \;, \quad i=2,3,$$ acquiring the “Vacuum Expectation Values” (VEV’s): $$\langle \eta_2 \rangle^T = ( 0 , \Lambda_2 , 0) \quad , \quad \langle \eta_3 \rangle^T = ( 0 , 0, \Lambda_3) \:. \label{veveta2eta3}$$ The corresponding $SU(3)$ gauge bosons are defined in Eq. through their couplings to fermions. Thus, the contributions to the horizontal gauge boson masses from Eq.(\[veveta2eta3\]) read - $\eta_2:\quad \frac{g_{H_2}^2 \Lambda_2^2}{2} ( Y_1^+ Y_1^- + Y_3^+ Y_3^-) + \frac{g_{H_2}^2 \Lambda_2^2}{4} ( Z_1^2 + \frac{Z_2^2}{3} - 2 Z_1 \frac{Z_2}{ \sqrt{3}} ) $ - $\eta_3:\quad \frac{g_{H_3}^2 \Lambda_3^2}{2} ( Y_2^+ Y_2^- + Y_3^+ Y_3^-) + g_{H_3}^2 \Lambda_3^2 \frac{Z_2^2}{3} $ [*These two scalars in the fundamental representation are the minimal set of scalars needed to break down completely the $SU(3)$ family symmetry*]{}. 
Therefore, neglecting tiny contributions from electroweak symmetry breaking, Eq., we obtain the gauge boson mass terms: $$M_2^2 \,Y_1^+ Y_1^- + M_3^2 \,Y_2^+ Y_2^- + ( M_2^2 + M_3^2) \,Y_3^+ Y_3^- + \frac{1}{2} M_2^2 \,Z_1^2 +\frac{1}{2} \frac{M_2^2 + 4 M_3^2}{3} \,Z_2^2 - \frac{1}{2}( M_2^2 ) \frac{2}{\sqrt{3}} \,Z_1 \,Z_2$$ $$M_2^2= \frac{g_{H}^2 \Lambda_2^2}{2} \quad , \quad M_3^2=\frac{g_{H}^2 \Lambda_3^2}{2} \quad , \quad y \equiv \frac{M_3}{M_2}= \frac{\Lambda_3}{\Lambda_2} \label{M23}$$ In the $(Z_1, Z_2)$ basis, the squared mass matrix reads $$\begin{pmatrix} M_2^2 & - \frac{ M_2^2}{\sqrt{3}} \\ - \frac{M_2^2}{\sqrt{3}} & \frac{M_2^2+4 M_3^2}{3} \end{pmatrix} \:.$$ Diagonalization of the $Z_1-Z_2$ squared mass matrix yields the eigenvalues $$\begin{aligned} M_-^2=\frac{2}{3} \left( M_2^2 + M_3^2 - \sqrt{ (M_3^2 - M_2^2)^2 + M_2^2 M_3^2 } \right)=M_2^2 \,y_-\label{Mm} \\ \nonumber\\ M_+^2=\frac{2}{3} \left( M_2^2 + M_3^2 +\sqrt{ (M_3^2 - M_2^2)^2 + M_2^2 M_3^2 } \right)=M_2^2 \,y_+ \label{Mp}\end{aligned}$$ and the gauge boson mass eigenvalues $$M_2^2 \,Y_1^+ Y_1^- + M_3^2\,Y_2^+ Y_2^- + ( M_2^2 + M_3^2) \,Y_3^+ Y_3^- + M_-^2 \,\frac{Z_-^2}{2} + M_+^2 \,\frac{Z_+^2}{2}$$ or $$M_2^2 \,Y_1^+ Y_1^- + M_2^2\,y^2\,Y_2^+ Y_2^- + M_2^2 ( 1 + y^2) \,Y_3^+ Y_3^- + M_2^2\,y_- \,\frac{Z_-^2}{2} + M_2^2\,y_+ \,\frac{Z_+^2}{2}\, ,$$ where $$\begin{pmatrix} Z_1 \\ Z_2 \end{pmatrix} = \begin{pmatrix} \cos\phi & - \sin\phi \\ \sin\phi & \cos\phi \end{pmatrix} \begin{pmatrix} Z_- \\ Z_+ \end{pmatrix} \label{z1z2mixing}$$ $$\cos\phi \, \sin\phi=\frac{\sqrt{3}}{4} \,\frac{M_2^2}{\sqrt{ M_2^4 + M_3^2 (M_3^2 - M_2^2) } }$$ *Notice that in the limit $y =\frac{M_3}{M_2} \gg 1$, $\sin\phi \rightarrow 0$, $\cos\phi \rightarrow 1$, and we get an approximate $SU(2)$ global symmetry for the $Z_1, Y_1^\pm$ almost degenerate gauge boson masses of order $M_2$. Thus, the hierarchy of scales in the SSB yields an approximate $SU(2)$ global symmetry in the spectrum of $SU(3)$ gauge boson masses.*
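A quick numerical cross-check of Eqs.(\[Mm\]-\[Mp\]) against a direct diagonalization of the $Z_1-Z_2$ squared mass matrix, including the hierarchy limit $y\gg 1$ (the sample value $M_3^2/M_2^2 = 7.3$ is arbitrary, chosen only for illustration):

```python
import numpy as np

def z_masses(M2sq, M3sq):
    """Ascending eigenvalues of the Z1-Z2 squared-mass matrix, together with
    the mixing product cos(phi)*sin(phi) quoted in the text."""
    m = np.array([[M2sq, -M2sq / np.sqrt(3.0)],
                  [-M2sq / np.sqrt(3.0), (M2sq + 4.0 * M3sq) / 3.0]])
    evals = np.sort(np.linalg.eigvalsh(m))
    cs = (np.sqrt(3.0) / 4.0) * M2sq / np.sqrt(M2sq**2 + M3sq * (M3sq - M2sq))
    return evals, cs

# arbitrary sample point: M_2^2 = 1, M_3^2 = 7.3
evals, cs = z_masses(1.0, 7.3)
root = np.sqrt((7.3 - 1.0) ** 2 + 7.3)
Mm2 = (2.0 / 3.0) * (1.0 + 7.3 - root)     # M_-^2 from Eq. (Mm)
Mp2 = (2.0 / 3.0) * (1.0 + 7.3 + root)     # M_+^2 from Eq. (Mp)
assert np.allclose(evals, [Mm2, Mp2])

# hierarchy limit y >> 1: mixing -> 0 and M_-^2 -> M_2^2 (approximate SU(2))
evals_h, cs_h = z_masses(1.0, 1.0e6)
assert cs_h < 1.0e-3 and abs(evals_h[0] - 1.0) < 1.0e-2
```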
*Actually, this approximate $SU(2)$ symmetry may play the role of a custodial symmetry to properly suppress the tree-level $\Delta F=2$ “Flavour Changing Neutral Currents” (FCNC) processes mediated by the lower scale of horizontal gauge bosons with masses of a few TeV’s.* Electroweak symmetry breaking ============================= Recently, ATLAS [@ATLAS] and CMS [@CMS] at the Large Hadron Collider announced the discovery of a Higgs-like particle, whose properties and couplings to fermions and gauge bosons will determine whether it is the SM Higgs or a member of an extended Higgs sector associated with a BSM theory. The Electroweak Symmetry Breaking (EWSB) in the $SU(3)$ family symmetry model involves the introduction of two triplets of $SU(2)_L$ Higgs doublets, namely $$\Phi^u=(3,1,2,-1)=\begin{pmatrix} \begin{pmatrix} \phi^o\\ \phi^- \end{pmatrix}_1^u \\\\ \begin{pmatrix} \phi^o\\ \phi^- \end{pmatrix}_2^u \\\\ \begin{pmatrix} \phi^o\\ \phi^- \end{pmatrix}_3^u \end{pmatrix} \qquad , \qquad \Phi^d=(3,1,2,+1)=\begin{pmatrix} \begin{pmatrix} \phi^+\\ \phi^o \end{pmatrix}_1^d \\\\ \begin{pmatrix} \phi^+\\ \phi^o \end{pmatrix}_2^d \\\\ \begin{pmatrix} \phi^+\\ \phi^o \end{pmatrix}_3^d \end{pmatrix} \, ,$$ with the VEV’s $$\langle \Phi^u \rangle = \begin{pmatrix} \langle \Phi_1^u \rangle \\ \langle \Phi_2^u \rangle \\ \langle \Phi_3^u \rangle \end{pmatrix} \quad , \quad \langle \Phi^d \rangle= \begin{pmatrix} \langle \Phi_1^d \rangle \\ \langle \Phi_2^d \rangle \\ \langle \Phi_3^d \rangle \end{pmatrix} \;,$$ where $$\langle \Phi_i^u \rangle = \frac{1}{\sqrt[]{2}} \begin{pmatrix} v_{ui} \\ 0 \end{pmatrix} \quad , \quad \langle \Phi_i^d \rangle = \frac{1}{\sqrt[]{2}} \begin{pmatrix} 0 \\ v_{di} \end{pmatrix} \:.$$ The contributions from $\langle \Phi^u \rangle$ and $\langle \Phi^d \rangle$ yield the $W$ and $Z$ gauge boson masses and mixing with the $SU(3)$ gauge bosons $$\begin{gathered} \frac{g^2 }{4} \,(v_u^2+v_d^2)\, W^{+} W^{-} + \frac{ (g^2 + {g^\prime}^2) }{8} \,(v_u^2+v_d^2)\,Z_o^2 \\ \\ + 
\frac{1}{4} \sqrt{g^2 + {g^\prime}^2} \,g_H\,Z_o \, \left[ \,(v_{1u}^2-v_{2u}^2 -v_{1d}^2+v_{2d}^2)\,Z_1 + (v_{1u}^2+v_{2u}^2 -2v_{3u}^2 -v_{1d}^2-v_{2d}^2+2v_{3d}^2)\,\frac{Z_2}{\sqrt{3}} \right. \\ \\ \left. + 2\,(v_{1u} v_{2u}-v_{1d} v_{2d})\,\frac{Y_1^+ + Y_1^-}{\sqrt{2}} + 2\,(v_{1u} v_{3u}-v_{1d} v_{3d})\,\frac{Y_2^+ + Y_2^-}{\sqrt{2}} + 2\,(v_{2u} v_{3u}-v_{2d} v_{3d})\,\frac{Y_3^+ + Y_3^-}{\sqrt{2}} \right] \\ \\ + \frac{g_H^2}{4} \, \left\{\, \frac{1}{2} \,(v_{1u}^2+v_{2u}^2+v_{1d}^2+v_{2d}^2)\, Z_1^2 + \frac{1}{2} \,(v_{1u}^2+v_{2u}^2+4 v_{3u}^2+v_{1d}^2+v_{2d}^2+4 v_{3d}^2)\, \frac{Z_2^2}{3} \right. \\ \\ + (v_{1u}^2+v_{2u}^2+v_{1d}^2+v_{2d}^2)\, Y_1^+ Y_1^- + (v_{1u}^2+v_{3u}^2+v_{1d}^2+v_{3d}^2)\, Y_2^+ Y_2^- +(v_{2u}^2+v_{3u}^2+v_{2d}^2+v_{3d}^2) \,Y_3^+ Y_3^- \\ \\ + (v_{1u}^2-v_{2u}^2 + v_{1d}^2-v_{2d}^2)\,Z_1 \, \frac{Z_2}{\sqrt{3}} + (v_{2u} v_{3u}+v_{2d} v_{3d})\,(Y_1^+ Y_2^- + Y_1^- Y_2^+) \\ \\ + (v_{1u} v_{2u}+v_{1d} v_{2d})\,(Y_2^+ Y_3^- + Y_2^- Y_3^+) +(v_{1u} v_{3u}+v_{1d} v_{3d})\,(Y_1^+ Y_3^+ + Y_1^- Y_3^-) \\ \\ \left. + 2\,(v_{1u} v_{2u}+v_{1d} v_{2d})\, \frac{Z_2}{\sqrt{3}}\, \frac{Y_1^+ + Y_1^-}{\sqrt{2}} + (v_{1u} v_{3u}+v_{1d} v_{3d})\, (Z_1 - \frac{Z_2}{\sqrt{3}} )\, \frac{Y_2^+ + Y_2^-}{\sqrt{2}} \right. \\ \left. - (v_{2u} v_{3u}+v_{2d} v_{3d})\, (Z_1 + \frac{Z_2}{\sqrt{3}} )\, \frac{Y_3^+ + Y_3^-}{\sqrt{2}} \right\} \label{ewyimixcont} \end{gathered}$$ $v_u^2=v_{1u}^2+v_{2u}^2+v_{3u}^2$ , $v_d^2= v_{1d}^2+v_{2d}^2+v_{3d}^2$. Hence, if we define as usual $M_W=\frac{1}{2} g v$, we may write $ v=\sqrt{v_u^2+v_d^2 } \thickapprox 246$ GeV. 
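As a numerical sanity check of this normalization, with approximate SM input values for the couplings (these numbers are illustrative, not parameters fitted in this model), $M_W = \frac{1}{2} g v$ and $M_Z = \frac{1}{2}\sqrt{g^2 + g'^2}\, v$ land close to the measured masses:

```python
import math

g, gp = 0.652, 0.357      # approximate SM SU(2)_L and U(1)_Y couplings
v = 246.22                # GeV, v = sqrt(v_u^2 + v_d^2)

MW = 0.5 * g * v                          # M_W = g v / 2
MZ = 0.5 * math.sqrt(g**2 + gp**2) * v    # M_Z from the Z_o^2 term above

assert abs(MW - 80.3) < 1.0               # GeV, near the measured W mass
assert abs(MZ - 91.2) < 1.0               # GeV, near the measured Z mass
```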
$$Y_j^1=\frac{Y_j^+ + Y_j^-}{\sqrt{2}} \quad , \quad Y_j^\pm=\frac{Y_j^1 \mp i Y_j^2}{\sqrt{2}}$$ *The mixing of the $Z_o$ neutral gauge boson with the $SU(3)$ gauge bosons modifies the couplings of the Standard Model $Z$ boson with the ordinary quarks and leptons.* Fermion masses =============== Dirac See-saw mechanisms ------------------------ Now we briefly describe the procedure to obtain the fermion masses. The analysis is presented explicitly for the charged lepton sector, with a completely analogous procedure for the $u$ and $d$ quarks and Dirac neutrinos. With the fields of particles introduced in the model, we may write the gauge invariant Yukawa couplings, as $$h\:\bar{\psi}_l^o \:\Phi^d \:E_R^o \;+\; h_2 \:\bar{\psi}_e^o \:\eta_2 \:E_L^o \;+\; h_3 \:\bar{\psi}_e^o \:\eta_3 \:E_L^o \;+\; M \:\bar{E}_L^o \:E_R^o \;+ h.c. \label{DiracYC}$$ where $M$ is a free mass parameter, because its mass term is gauge invariant, and $h$, $h_2$ and $h_3$ are Yukawa coupling constants. When the involved scalar fields acquire VEV’s we get, in the gauge basis ${\psi^{o}_{L,R}}^T = ( e^{o} , \mu^{o} , \tau^{o}, E^o )_{L,R}$, the mass terms $\bar{\psi}^{o}_L {\cal{M}}^o \psi^{o}_R + h.c. $, where $${\cal M}^o = \begin{pmatrix} 0 & 0 & 0 & h \:v_1\\ 0 & 0 & 0 & h \:v_2\\ 0 & 0 & 0 & h \:v_3\\ 0 & h_2 \Lambda_2 & h_3 \Lambda_3 & M \end{pmatrix} \equiv \begin{pmatrix} 0 & 0 & 0 & a_1\\ 0 & 0 & 0 & a_2\\ 0 & 0 & 0 & a_3\\ 0 & b_2 & b_3 & M \end{pmatrix} \;. \label{tlmassmatrix}$$ Notice that ${\cal{M}}^o$ has the same structure as a See-saw mass matrix, here for Dirac fermion masses. So, we call ${\cal{M}}^o$ a [**“Dirac See-saw”**]{} mass matrix. ${\cal{M}}^o$ is diagonalized by applying a biunitary transformation $\psi^{o}_{L,R} = V^{o}_{L,R} \;\chi_{L,R}$. The orthogonal matrices $V^{o}_L$ and $V^{o}_R$ are obtained explicitly in Appendix A. 
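The key feature of the texture in Eq.(\[tlmassmatrix\]) is that ${\cal M}^o$ has rank 2: restricting ${\cal M}^o{{\cal M}^o}^T$ to the two-dimensional subspace it acts on gives $\lambda_3^2+\lambda_4^2=a^2+b^2+M^2$ and $\lambda_3^2\lambda_4^2=a^2 b^2$, with $a^2=a_1^2+a_2^2+a_3^2$ and $b^2=b_2^2+b_3^2$. A numerical check of this reduction with arbitrary sample entries (the values below are illustrative, not fitted parameters):

```python
import numpy as np

a1, a2, a3 = 0.7, 1.1, 2.0          # sample Yukawa-times-VEV entries
b2, b3, M = 0.9, 1.6, 10.0          # sample entries, M >> a, b

Mo = np.array([[0.0, 0.0, 0.0, a1],
               [0.0, 0.0, 0.0, a2],
               [0.0, 0.0, 0.0, a3],
               [0.0, b2,  b3,  M ]])

sv = np.sort(np.linalg.svd(Mo, compute_uv=False))   # ascending singular values
asq = a1**2 + a2**2 + a3**2
bsq = b2**2 + b3**2

assert np.allclose(sv[:2], 0.0)                     # two massless states
l3sq, l4sq = sv[2]**2, sv[3]**2
assert np.isclose(l3sq + l4sq, asq + bsq + M**2)
assert np.isclose(l3sq * l4sq, asq * bsq)
# see-saw hierarchy: lambda_3 is approximately a b / M when M >> a, b
assert abs(sv[2] - np.sqrt(asq * bsq) / M) / sv[2] < 0.1
```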
From $V_L^o$ and $V_R^o$, and using the relationships defined there, one computes $$\begin{aligned} {V^{o}_L}^T {\cal{M}}^{o} \;V^{o}_R =Diag(0,0,- \lambda_3,\lambda_4) \label{tleigenvalues}\\ \nonumber \\ {V^{o}_L}^T {\cal{M}}^{o} {{\cal{M}}^{o}}^T \;V^{o}_L = {V^{o}_R}^T {{\cal{M}}^{o}}^T {\cal{M}}^{o} \;V^{o}_R = Diag(0,0,\lambda_3^2,\lambda_4^2) \:.\label{tlLReigenvalues}\end{aligned}$$ where $\lambda_3^2$ and $\lambda_4^2$ are the nonzero eigenvalues defined in Eqs.(\[nonzerotleigenvalues\]-\[paramtleigenvalues\]), $\lambda_4$ being the fourth heavy fermion mass, and $\lambda_3$ of the order of the top, bottom and tau mass for the u, d and e fermions, respectively. We see from Eqs.(\[tleigenvalues\],\[tlLReigenvalues\]) that at tree level the See-saw mechanism yields two massless eigenvalues associated to the light fermions. One loop contribution to fermion masses ======================================= Subsequently, the masses for the light fermions arise through one loop radiative corrections. After the breakdown of the electroweak symmetry we can construct the generic one loop mass diagram of Fig. 1. The internal fermion line in this diagram represents the Dirac see-saw mechanism implemented by the couplings in Eq.(\[DiracYC\]). The vertices are read from the $SU(3)$ flavor symmetry interaction Lagrangian $$\begin{gathered} i {\cal{L}}_{int} = \frac{g_{H}}{2} \left( \bar{e^{o}} \gamma_{\mu} e^{o}- \bar{\mu^{o}} \gamma_{\mu} \mu^{o} \right) Z_1^\mu + \frac{g_{H}}{2 \sqrt{3}} \left( \bar{e^{o}} \gamma_{\mu} e^{o}+ \bar{\mu^{o}} \gamma_{\mu} \mu^{o} - 2 \bar{\tau^{o}} \gamma_{\mu} \tau^{o} \right) Z_2^\mu \\ + \frac{g_{H}}{\sqrt{2}} \left( \bar{e^{o}} \gamma_{\mu} \mu^{o} Y_1^{+} + \bar{e^{o}} \gamma_{\mu} \tau^{o} Y_2^{+} + \bar{\mu^{o}} \gamma_{\mu} \tau^{o} Y_3^{+} + h.c. 
\right) \:,\label{SU3lagrangian} \end{gathered}$$ ![ Generic one loop diagram contribution to the mass term $m_{ij} \:{\bar{e}}_{iL}^o e_{jR}^o$](oneloope.pdf){width=".7\textwidth"} where $g_H$ is the $SU(3)$ coupling constant, and $Z_1$, $Z_2$ and $Y_i^j\;,i=1,2,3\;,j=1,2,$ are the eight gauge bosons. The crosses in the internal fermion line denote the tree level mixing and the mass $M$ generated by the Yukawa couplings in Eq.(\[DiracYC\]) after the scalar fields acquire VEV’s. The one loop diagram of Fig. 1 gives the generic contribution to the mass term $m_{ij} \:{\bar{e}}_{iL}^o e_{jR}^o$ $$c_Y \frac{\alpha_H}{\pi} \sum_{k=3,4} m_k^o \:(V_L^o)_{ik}(V_R^o)_{jk} f(M_Y, m_k^o) \qquad , \qquad \alpha_H \equiv \frac{g_H^2}{4 \pi}$$ where $M_Y$ is the gauge boson mass, $c_Y$ is a coupling factor read off from Eq.(\[SU3lagrangian\]), $m_3^o=-\lambda_3$ and $m_4^o=\lambda_4$ are the See-saw mass eigenvalues, Eq.(\[tleigenvalues\]), and $f(x,y)=\frac{x^2}{x^2-y^2} \ln{\frac{x^2}{y^2}}$. Using the results of Appendix A, we compute $$\sum_{k=3,4} m_k^o \:(V_L^o)_{ik}(V_R^o)_{jk} f(M_Y, m_k^o)= \frac{a_i \:b_j \:M}{\lambda_4^2 - \lambda_3^2}\:F(M_Y) \:,$$ $i=1,2,3$ , $j=2,3$, and $F(M_Y)\equiv \frac{M_Y^2}{M_Y^2 - \lambda_4^2} \ln{\frac{M_Y^2}{\lambda_4^2}} - \frac{M_Y^2}{M_Y^2 - \lambda_3^2} \ln{\frac{M_Y^2}{\lambda_3^2}}$.
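For reference, the loop functions above can be coded directly (a sketch; the function names are mine, not the paper's). Since $F(M_Y)=f(M_Y,\lambda_4)-f(M_Y,\lambda_3)$, the radiatively generated masses vanish in the degenerate limit $\lambda_3\to\lambda_4$:

```python
# Sketch of the one-loop functions entering the radiative mass terms.
import numpy as np

def f(x, y):
    # f(x, y) = x^2/(x^2 - y^2) * ln(x^2/y^2); requires x != y, both nonzero.
    return x**2 / (x**2 - y**2) * np.log(x**2 / y**2)

def F(MY, lam3, lam4):
    # F(M_Y) = f(M_Y, lam4) - f(M_Y, lam3): the combination that multiplies
    # a_i b_j M / (lam4^2 - lam3^2) in every one-loop mass entry.
    return f(MY, lam4) - f(MY, lam3)
```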
Adding up all the one loop $SU(3)$ gauge boson contributions, we get the mass terms $\bar{\psi^{o}_L} {\cal{M}}_1^o \:\psi^{o}_R + h.c.$, $${\cal{M}}_1^o = \left( \begin{array}{ccrc} D_{11} & D_{12} & D_{13} & 0\\ 0 & D_{22} & D_{23} & 0\\ 0 & D_{32} & D_{33} & 0\\ 0 & 0 & 0 & 0 \end{array} \right) \:\frac{\alpha_H}{\pi}\; ,$$ $$\begin{aligned} D_{11}&=&\frac{1}{2} ( \mu_{22} F_1+\mu_{33} F_2 ) \\ D_{12}&=&\mu_{12} (- \frac{F_{Z_1}}{4}+\frac{F_{Z_2}}{12}) \\ D_{13}&=&- \mu_{13} ( \frac{F_{Z_2}}{6}+ F_m ) \\ D_{22}&=&\mu_{22} (\frac{F_{Z_1}}{4}+\frac{F_{Z_2}}{12} - F_m )+\frac{1}{2} \mu_{33} F_3 \\ D_{23}&=&- \mu_{23} ( \frac{F_{Z_2}}{6} - F_m ) \\ D_{32}&=&- \mu_{32} ( \frac{F_{Z_2}}{6} - F_m ) \\ D_{33}&=& \mu_{33} \frac{F_{Z_2}}{3}+\frac{1}{2} \mu_{22} F_3 \:, \end{aligned}$$ $$F_1 \equiv F(M_{Y_1}) \quad,\quad F_2 \equiv F(M_{Y_2}) \quad,\quad F_3 \equiv F(M_{Y_3})$$ $$M_{Y_1}^2=M_2^2 \quad,\quad M_{Y_2}^2=M_3^2 \quad,\quad M_{Y_3}^2=M_2^2+M_3^2 \,$$ $$F_m=\frac{\cos\phi \sin\phi}{2 \sqrt{3}}\, [\, F(M_-)-F(M_+)\,]$$ with $M_2, M_3, M_-$ and $M_+$ the boson masses defined in Eqs.(\[M23\]-\[Mp\]). 
Due to the $Z_1 - Z_2$ mixing, we diagonalize the propagators involving $Z_1$ and $Z_2$ gauge bosons according to Eq.(\[z1z2mixing\]) $$Z_1 = \cos\phi \;Z_- - \sin\phi \;Z_+ \quad , \quad Z_2 = \sin\phi \;Z_- + \cos\phi \;Z_+$$ $$\begin{aligned} \langle Z_1 Z_1 \rangle &=& \cos^2\phi\; \langle Z_- Z_- \rangle + \sin^2\phi\; \langle Z_+ Z_+ \rangle \\\\ \langle Z_2 Z_2 \rangle &=& \sin^2\phi\; \langle Z_- Z_- \rangle + \cos^2\phi\; \langle Z_+ Z_+ \rangle \\\\ \langle Z_1 Z_2 \rangle &=& \sin\phi \, \cos\phi \;( \langle Z_- Z_- \rangle - \langle Z_+ Z_+ \rangle )\end{aligned}$$ So, in the one loop diagram contributions: $$F_{Z_1}=\cos^2\phi \,F(M_-) + \sin^2\phi \,F(M_+) \qquad , \qquad F_{Z_2}=\sin^2\phi \,F(M_-) + \cos^2\phi \,F(M_+) \, ,$$ $$\mu_{ij}=\frac{a_i \:b_j \:M}{\lambda_4^2 - \lambda_3^2} = \frac{a_i \:b_j}{a \:b} \:\lambda_3\:c_{\alpha} \:c_{\beta} \:,$$ and $c_{\alpha} \equiv \cos\alpha \:,\;c_{\beta} \equiv \cos\beta \:,\; s_{\alpha} \equiv \sin\alpha \:,\;s_{\beta} \equiv \sin\beta$, as defined in the Appendix, Eq.(\[Seesawmixing\]). Therefore, up to one loop corrections we obtain the fermion masses $$\bar{\psi}^{o}_L {\cal{M}}^{o} \:\psi^{o}_R + \bar{\psi^{o}_L} {\cal{M}}_1^o \:\psi^{o}_R = \bar{\chi_L} \:{\cal{M}} \:\chi_R \:,$$ with ${\cal{M}} \equiv \left[ Diag(0,0,-\lambda_3,\lambda_4)+ {V_L^o}^T {\cal{M}}_1^o\:V_R^o \right]$. 
Using $V_L^o$, $V_R^o$ from Eqs.(\[VoL\]-\[VoR\]) we get the mass matrix $${\cal{M}}= \begin{pmatrix} m_{11}&m_{12}&c_\beta \:m_{13}&s_\beta \:m_{13} \\ \\ m_{21}& m_{22} & c_\beta \:m_{23} & s_\beta \:m_{23}\\ \\ c_\alpha \:m_{31}& c_\alpha \:m_{32} & (-\lambda_3+c_\alpha c_\beta \:m_{33}) & c_\alpha s_\beta \:m_{33} \\ \\ s_\alpha \:m_{31}& s_\alpha \:m_{32} & s_\alpha c_\beta \:m_{33} & (\lambda_4+s_\alpha s_\beta \:m_{33}) \end{pmatrix} \;,\label{massVI}$$ where $$\begin{aligned} m_{11}=\frac{1}{2} \frac{a_2}{a^\prime} \Pi_1 \quad ,& \quad m_{12}= - \frac{1}{2} \frac{a_1 b_3}{a^\prime b} ( \Pi_2 -6 \mu_{22} F_m ) \\ \nonumber \\ m_{21}= \frac{1}{2} \frac{a_1 a_3}{a^\prime a}\Pi_1 \quad ,& \quad m_{31}=\frac{1}{2} \frac{a_1}{a} \Pi_1 \end{aligned}$$ $$m_{13}=- \frac{1}{2} \frac{a_1 b_2}{a^\prime b} [\Pi_2 +2(2\frac{b_3^2}{b_2^2}-1) \mu_{22}F_m ]$$ $$m_{22}=\frac{1}{2} \frac{a_3 b_3}{a \, b} \left[\frac{a_2}{a^\prime} ( \Pi_2 -6 \mu_{22} F_m )+ \frac{a^\prime b_2}{a_3 b_3} ( \Pi_3 + \Delta ) \right]$$ [$$m_{23}=\frac{1}{2} \frac{a_3 b_3}{a \, b} \left[\frac{a_2 b_2}{a^\prime b_3} ( \Pi_2 +2(2\frac{b_3^2}{b_2^2}-1 ) \mu_{22} F_m ) - \frac{a^\prime}{a_3} ( \Pi_3 -\frac{b_2^2}{b_3^2} \Delta +2\frac{b^2}{b_3^2}\mu_{33} F_m ) \right]$$ ]{} $$m_{32}=\frac{1}{2} \frac{a_3 b_3}{a \, b} \left[\frac{a_2}{a_3} ( \Pi_2 -6 \mu_{22} F_m)-\frac{b_2}{b_3} ( \Pi_3 -\frac{{a^\prime}^2 }{a_3^2} \Delta -2\frac{a^2}{a_3^2}\mu_{33} F_m ) \right]$$ $$m_{33}=\frac{1}{2} \frac{a_3 b_3}{a \, b} \left[\frac{a_2 b_2}{a_3 b_3} ( \Pi_2 - 2 \mu_{22} F_m ) + \Pi_3+ \frac{ {a^\prime}^2 b_2^2}{a_3^2 b_3^2} \Delta - \frac{1}{3} \frac{a^2 b^2}{a_3^2 b_3^2}\mu_{33} F_{Z_2} + 2 ( \frac{b_2^2}{b_3^2} + 2\frac{a_2^2}{a_3^2}-\frac{{a^\prime}^2}{a_3^2} )\mu_{33} F_m \right]$$ $$\begin{aligned} \Pi_1 = \mu_{22} F_1 + \mu_{33} F_2 \quad ,& \quad \Pi_2 = \mu_{22} F_{Z_1} + \mu_{33} F_3 \nonumber \\ \nonumber \\ \Pi_3 = \mu_{22} F_3 + \mu_{33} F_{Z_2} \quad ,& \quad \Delta = \frac{1}{2}\mu_{33}(F_{Z_2} - 
F_{Z_1} )\end{aligned}$$ *Notice that the $m_{ij}$ mass terms depend only on the ratios $\frac{a_i}{a_j}$ and $\frac{b_i}{b_j}$ of the tree level parameters.* Here $$a^\prime=\sqrt{a_1^2+a_2^2}\;\; , \;\;a=\sqrt{{a^\prime}^2+a_3^2} \;\; , \;\; b=\sqrt{{b_2^2+b_3^2}} \;.$$ The diagonalization of ${\cal{M}}$, Eq.(\[massVI\]), gives the physical masses for the u, d, and e charged fermions. Using a new biunitary transformation $\chi_{L,R}=V_{L,R}^{(1)} \;\Psi_{L,R}$, we write $\bar{\chi}_L \;{\cal{M}} \;\chi_R= \bar{\Psi}_L \:{V_L^{(1)}}^T {\cal{M}} \; V_R^{(1)} \:\Psi_R $, with ${\Psi_{L,R}}^T = ( f_1 , f_2 , f_3 , F )_{L,R}$ the mass eigenfields, that is $${V^{(1)}_L}^T {\cal{M}} \:{\cal M}^T \;V^{(1)}_L = {V^{(1)}_R}^T {\cal M}^T \:{\cal{M}} \;V^{(1)}_R = Diag(m_1^2,m_2^2,m_3^2,M_F^2) \:,$$ with $m_1^2=m_e^2$, $m_2^2=m_\mu^2$, $m_3^2=m_\tau^2$ and $M_F^2=M_E^2$ for the charged leptons. Quark $( V_{CKM} )_{4\times 4}$ mixing matrix ---------------------------------------------- Within this $SU(3)$ family symmetry model, the transformations from the massless to the physical-mass fermion eigenfields for quarks and charged leptons are $$\psi_L^o = V_L^{o} \:V^{(1)}_L \:\Psi_L \qquad \mbox{and} \qquad \psi_R^o = V_R^{o} \:V^{(1)}_R \:\Psi_R \,.$$ Recall that the vector-like quarks, Eq.(\[vectorquarks\]), are $SU(2)_L$ weak singlets, and hence they do not couple to the $W$ boson in the interaction basis. In this way, the interaction of the L-handed up and down quarks, ${f_{uL}^o}^T=(u^o,c^o,t^o)_L$ and ${f_{dL}^o}^T=(d^o,s^o,b^o)_L$, with the $W$ charged gauge boson may be written as $$\frac{g}{\sqrt{2}} \,\bar{f^o}_{u L} \gamma_\mu f_{d L}^o {W^+}^\mu = \frac{g}{\sqrt{2}} \,\bar{\Psi}_{u L}\; [(V_{u L}^o\,V_{u L}^{(1)})_{3\times 4}]^T \;(V_{d L}^o\,V_{d L}^{(1)})_{3\times 4}\; \gamma_\mu \Psi_{d L} \;{W^+}^\mu \:,$$ where $g$ is the $SU(2)_L$ gauge coupling.
Therefore, the non-unitary $V_{CKM}$ of dimension $4\times4$ is identified as $$(V_{CKM})_{4\times 4} = [(V_{u L}^o\,V_{u L}^{(1)})_{3\times 4}]^T \;(V_{d L}^o\,V_{d L}^{(1)})_{3\times 4}$$ Numerical results {#numerical} ================= *To illustrate the spectrum of masses and mixings, let us consider the following fit in parameter space at the $M_Z$ scale [@xingzhang].* Taking the input values $$M_2 = 2\,\text{TeV} \quad , \quad M_3 = 2000\,\text{TeV} \quad , \quad \frac{\alpha_H}{\pi}=0.2$$ for the $M_2$, $M_3$ horizontal boson masses, Eq.(\[M23\]), and the $SU(3)$ coupling constant, respectively, and the ratios of the electroweak VEV’s: $v_{iu}$ from $\Phi^u\;$ ($v_{id}$ from $\Phi^d$) $$v_{1u}=0 \quad , \quad \frac{ v_{2u}}{ v_{3u}} = 0.1 \quad , \quad \frac{ v_{1d}}{ v_{2d}} = 0.23257 \quad , \quad \frac{ v_{2d}}{ v_{3d}}=0.08373 \: ,$$ we obtain the following mass and mixing matrices and mass eigenvalues: Quark masses and mixing ----------------------- [**u-quarks:**]{} Tree level see-saw mass matrix: $${\cal M}_u^o= \left( \begin{array}{cccc} 0 & 0 & 0 & 0. \\ 0 & 0 & 0 & 29834. \\ 0 & 0 & 0 & 298340. \\ 0 & 1.49495\times 10^7 & -730572. & 1.58511\times 10^7 \\ \end{array} \right) \,\text{MeV} \,,$$ the mass matrix up to one loop corrections: $${\cal M}_u= \left( \begin{array}{cccc} 1.38 & 0. & 0. & 0. \\ 0. & -532.587 & -2587.14 & -2442.42 \\ 0. & 7064.64 & -172017. & 31927.1 \\ 0. & 70.6499 & 338.204 & 2.18023\times 10^7 \\ \end{array} \right)\,\text{MeV} \, ,$$ and the u-quark masses $$(\,m_u \;,\; m_c \;,\; m_t \;,\; M_U\,)= (\,1.38\;,\; 638.22 \;,\;172181\;,\;2.18023\times 10^7\,)\,\text{MeV}$$ [**d-quarks:**]{} $${\cal M}_d^o= \left( \begin{array}{cccc} 0 & 0 & 0 & 13375.7 \\ 0 & 0 & 0 & 57510.3 \\ 0 & 0 & 0 & 686796. \\ 0 & 723708.
& -37338.1 & 6.89219\times 10^7 \\ \end{array} \right)\;\text{MeV}$$ $${\cal M}_d= \left( \begin{array}{cccc} 2.82461 & 0.0338487 & -0.656039 & -0.00689715 \\ 0.65453 & -25.1814 & -217.369 & -2.28527 \\ 0.0562685 & 423.166 & -2820.62 & 46.5371 \\ 0.000562713 & 4.23187 & 44.2671 & 6.89291\times 10^7 \\ \end{array} \right) \;\text{MeV}$$ $$(\,m_d \;,\; m_s \;,\; m_b \;,\; M_D\,)= (\, 2.82368 \;,\; 57.0005\;,\; 2860 \;,\; 6.89291\times 10^7 \,)\;\text{MeV}$$ and the quark mixing $$V_{CKM}= \left( \begin{array}{cccc} 0.97362 & 0.225277 & -0.0362485 & 0.000194044 \\ -0.226684 & 0.973105 & -0.040988 & -0.000310055 \\ 0.0260403 & 0.0481125 & 0.998387 & -0.00999333 \\ -0.000234396 &- 0.000826552 & -0.011432 & 0.000114632 \\ \end{array}\right) \label{vckm}$$ Charged leptons: ---------------- $${\cal M}_e^o= \left( \begin{array}{cccc} 0 & 0 & 0 & 37956.9 \\ 0 & 0 & 0 & 189784. \\ 0 & 0 & 0 & 1.93543\times 10^6 \\ 0 & 548257. & -30307.4 & 1.94497\times 10^8 \\ \end{array}\right)\;\text{MeV}$$ $${\cal M}_e= \left( \begin{array}{cccc} -0.486368 & -0.00536888 & 0.0971221 & 0.000274163 \\ -0.0967909 & -34.7536 & -250.305 & -0.706579 \\ -0.0096786 & 485.768 & -1661.27 & 10.8107 \\ -0.0000967909 & 4.85792 & 38.2989 & 1.94507\times 10^8 \\ \end{array} \right)\;\text{MeV}$$ which fit the charged lepton masses: $$( m_e \,,\, m_\mu \,,\, m_\tau \,,\, M_E ) = ( 0.486095 \,,\,102.7\,,\,1746.17\,,\, 3.15956\times 10^8\, )\,\text{MeV}$$ and the charged lepton mixing $$V_{e \,L}^o\, V_{e \,L}^{(1)}= \left( \begin{array}{cccc} 0.973942 & 0.221206 & 0.050052 & 0.000194 \\ -0.226798 & 0.949931 & 0.214927 & 0.0008342 \\ -2.90427\times 10^{-6} & -0.220675 & 0.975296 & 0.009963 \\ 2.62189\times 10^{-7} & 0.0013632 & -0.009906& 0.99995 \\ \end{array} \right) \label{emix}$$ Conclusions =========== We reported a recent numerical analysis of charged fermion masses and mixings within a BSM model with a local $SU(3)$ family symmetry, which combines tree level “Dirac See-saw” mechanisms and radiative corrections to
implement a successful hierarchical mass generation mechanism for quarks and charged leptons. In section \[numerical\] we show a parameter space region where this scenario accounts for the known hierarchical spectrum of ordinary quark and charged lepton masses, and for the quark mixing in a non-unitary $(V_{CKM})_{4\times 4}$ within the allowed values[^2] reported in PDG 2014 [@PDG2014]. *Let me point out here that the solutions for fermion masses and mixing reported in section \[numerical\] suggest that the dominant contribution to EWSB comes from the weak doublets which couple to the third family.* *It is also worth commenting that the fermion content, the scalar fields, and their transformation under the gauge group, Eq. , all together forbid tree level Yukawa couplings between ordinary standard model fermions. Consequently, the flavon scalar fields introduced to break the symmetries, $\Phi^u$, $\Phi^d$, $\eta_2$ and $\eta_3$, couple ordinary fermions only to their corresponding vector-like fermions at tree level. Thus, FCNC scalar couplings to ordinary fermions are suppressed by light-heavy mixing angles, which, as shown in the quark mixing $(V_{CKM})_{4 \times 4}$, Eq.(\[vckm\]), and the charged lepton mixing, Eq. , may be small enough to properly suppress the FCNC mediated by the scalar fields within this scenario.* Acknowledgements {#acknowledgements .unnumbered} ================ It is a pleasure to thank the organizers N.S. Mankoc-Borstnik, H.B. Nielsen, M. Y. Khlopov, and the participants for the stimulating Workshop at Bled, Slovenia. This work was partially supported by the “Instituto Politécnico Nacional”, (Grants from EDI and COFAA) and “Sistema Nacional de Investigadores” (SNI) in Mexico. [99]{} A. Hernandez-Galeana, Rev. Mex. Fis. [**Vol. 50(5)**]{}, (2004) 522. hep-ph/0406315. A. Hernandez-Galeana, Bled Workshops in Physics, (ISSN:1580-4992), [**Vol. 15, No. 2**]{}, (2014) Pag. 93; arXiv:1412.6708\[hep-ph\]; [**Vol. 14, No. 2**]{}, (2013) Pag.
82; arXiv:1312.3403\[hep-ph\]; [**Vol. 13, No. 2**]{}, (2012) Pag. 28; arXiv:1212.4571\[hep-ph\]; [**Vol. 12, No. 2**]{}, (2011) Pag. 41; arXiv:1111.7286\[hep-ph\]; [**Vol. 11, No. 2**]{}, (2010) Pag. 60; arXiv:1012.0224\[hep-ph\]; Bled Workshops in Physics,[**Vol. 10, No. 2**]{}, (2009) Pag. 67; arXiv:0912.4532\[hep-ph\]; Z.G.Berezhiani and M.Yu.Khlopov, [*Sov.J.Nucl.Phys.*]{} 51 (1990) 739; 935; [*Sov.J.Nucl.Phys.*]{} 52 (1990) 60; [*Z.Phys.C- Particles and Fields*]{} 49 (1991) 73; Z.G.Berezhiani, M.Yu.Khlopov and R.R.Khomeriki, [*Sov.J.Nucl.Phys.*]{} 52 (1990) 344; A.S.Sakharov and M.Yu.Khlopov [*Phys.Atom.Nucl.*]{} 57 (1994) 651; M.Yu. Khlopov: *Cosmoparticle physics*, World Scientific, New York -London-Hong Kong - Singapore, 1999; M.Yu. Khlopov: *Fundamentals of Cosmoparticle physics*, CISP-Springer, Cambridge, 2011; Z.G. Berezhiani, J.K. Chkareuli, [*JETP Lett.*]{} [**35**]{} (612) 1982; [*JETP Lett.*]{} [**37**]{} (338) 1983; Z.G. Berezhiani, [*Phys. Lett. B*]{} [**129**]{} (99) 1983. J.A. Aguilar-Saavedra, R. Benbrik, S. Heinemeyer, and M. Pérez-Victoria, arXiv:1306.0572; J.A. Aguilar-Saavedra, arXiv:1306.4432; Jonathan M. Arnold, Bartosz Fornal and Michael Trott, JHEP 1008:059, 2010, arXiv:1005.2185 and references therein. T. Yanagida, Phys. Rev. D [**20**]{}, 2986 (1979). G. Aad *et. al.*, ATLAS Collaboration, Phys. Lett. [**B 716**]{}, 1(2012), arXiv: 1207.7214. S. Chatrchyan *et. al.*, CMS Collaboration, Phys. Lett. [**B 716**]{}, 30(2012), arXiv: 1207.7235. Zhi-zhong Xing, He Zhang and Shun Zhou, Phys. Rev. D [**86**]{}, 013013 (2012). K.A. Olive et al.(Particle Data Group), Chinese Physics C[**38**]{}, 090001 (2014). 
Diagonalization of the generic Dirac See-saw mass matrix ======================================================== $${\cal M}^o= \begin{pmatrix} 0 & 0 & 0 & a_1\\ 0 & 0 & 0 & a_2\\ 0 & 0 & 0 & a_3\\ 0 & b_2 & b_3 & c \end{pmatrix}$$ Using the biunitary transformations $\psi^{o}_L = V_L^o \:\chi_L$ and $\psi^{o}_R = V_R^o \:\chi_R $ to diagonalize ${\cal{M}}^o$, the orthogonal matrices $V^{o}_L$ and $V^{o}_R$ may be written explicitly as $$V^{o}_L = \begin{pmatrix} \frac{a_2}{a^\prime}& \frac{a_1 a_3}{a^\prime a} & \frac{a_1}{a} \cos\alpha & \frac{a_1}{a} \sin\alpha\\ \\ - \frac{a_1}{a^\prime} & \frac{a_2 a_3}{a^\prime a} & \frac{a_2}{a} \cos\alpha & \frac{a_2}{a} \sin\alpha\\ \\ 0 & - \frac{a^\prime}{a} & \frac{a_3}{a} \cos{\alpha} & \frac{a_3}{a} \sin{\alpha}\\ \\ 0 & 0 & -\sin{\alpha} & \cos{\alpha} \end{pmatrix} \label{VoL}$$ $$V^{o}_R = \begin{pmatrix} 1 & 0 & 0 & 0 \\ \\ 0 & \frac{b_3}{b} & \frac{b_2}{b} \cos{\beta} & \frac{b_2}{b} \sin{\beta}\\ \\ 0& - \frac{b_2}{b} & \frac{b_3}{b} \cos{\beta} & \frac{b_3}{b} \sin{\beta}\\ \\ 0 & 0 & -\sin{\beta} & \cos{\beta} \end{pmatrix} \label{VoR}$$ where $$\lambda_3^2 = \frac{1}{2} \left( B - \sqrt{B^2 -4D} \right) \quad , \quad \lambda_4^2 = \frac{1}{2} \left( B + \sqrt{B^2 -4D} \right) \label{nonzerotleigenvalues}$$ are the nonzero eigenvalues of ${\cal{M}}^{o} {{\cal{M}}^{o}}^T$ (${{\cal{M}}^{o}}^T {\cal{M}}^{o}$), and $$\begin{aligned} B = a^2 + b^2 + c^2 = \lambda_3^2+\lambda_4^2\quad &, \quad D= a^2 b^2=\lambda_3^2\lambda_4^2 \;,\label{paramtleigenvalues} \end{aligned}$$ $$\cos{\alpha} =\sqrt{\frac{\lambda_4^2 - a^2}{\lambda_4^2 - \lambda_3^2}} \quad , \quad \sin{\alpha} = \sqrt{\frac{a^2 - \lambda_3^2}{\lambda_4^2 - \lambda_3^2}} \quad , \quad \cos{\beta} =\sqrt{\frac{\lambda_4^2 - b^2}{\lambda_4^2 - \lambda_3^2}} \quad , \quad \sin{\beta} = \sqrt{\frac{b^2 - \lambda_3^2}{\lambda_4^2 - \lambda_3^2}} \label{Seesawmixing}$$ [^1]: See [@albinosu32004; @albinosu3bled] and references therein for some other $SU(3)$ 
family symmetry model proposals. [^2]: except $(V_{CKM})_{13}$ and $(V_{CKM})_{31}$
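The appendix diagonalization above can be checked numerically. The sketch below (illustrative parameter values; NumPy assumed) builds $V_L^o$ and $V_R^o$ from Eqs.(\[VoL\])-(\[Seesawmixing\]) and verifies that ${V_L^o}^T {\cal M}^o V_R^o = Diag(0,0,-\lambda_3,\lambda_4)$:

```python
# Numerical check of the appendix closed forms (illustrative values only).
import numpy as np

a1, a2, a3, b2, b3, c = 1.0, 2.0, 3.0, 2.0, 1.5, 50.0
ap = np.hypot(a1, a2)                 # a'
a = np.hypot(ap, a3)                  # a
b = np.hypot(b2, b3)                  # b

B, D = a**2 + b**2 + c**2, a**2 * b**2
lam3 = np.sqrt((B - np.sqrt(B**2 - 4*D)) / 2)
lam4 = np.sqrt((B + np.sqrt(B**2 - 4*D)) / 2)

den = lam4**2 - lam3**2
ca = np.sqrt((lam4**2 - a**2) / den)  # cos(alpha)
sa = np.sqrt((a**2 - lam3**2) / den)  # sin(alpha)
cb = np.sqrt((lam4**2 - b**2) / den)  # cos(beta)
sb = np.sqrt((b**2 - lam3**2) / den)  # sin(beta)

VL = np.array([[ a2/ap, a1*a3/(ap*a), a1/a*ca, a1/a*sa],
               [-a1/ap, a2*a3/(ap*a), a2/a*ca, a2/a*sa],
               [ 0.0,   -ap/a,        a3/a*ca, a3/a*sa],
               [ 0.0,    0.0,         -sa,     ca     ]])
VR = np.array([[1.0,  0.0,   0.0,     0.0    ],
               [0.0,  b3/b,  b2/b*cb, b2/b*sb],
               [0.0, -b2/b,  b3/b*cb, b3/b*sb],
               [0.0,  0.0,   -sb,     cb     ]])
Mo = np.array([[0, 0, 0, a1],
               [0, 0, 0, a2],
               [0, 0, 0, a3],
               [0, b2, b3, c]])
diag = VL.T @ Mo @ VR                 # should equal Diag(0, 0, -lam3, lam4)
```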
[**[MONTE CARLO : BASICS]{}**]{} [**K. P. N. Murthy\ Theoretical Studies Section,\ Materials Science Division,\ Indira Gandhi Centre for Atomic Research,\ Kalpakkam 603 102, Tamil Nadu\ INDIA\ e-mail: kpn@igcar.ernet.in**]{} Indian Society for Radiation Physics\ Kalpakkam Chapter\ January 9, 2000.\ This is a monograph released to mark the occasion of the Workshop on “Monte Carlo: Radiation Transport”, held at the Indira Gandhi Centre for Atomic Research (IGCAR), Kalpakkam, from February 7, 2000 to February 18, 2000, sponsored by the Safety Research Institute (SRI), Atomic Energy Regulatory Board (AERB) and jointly organized by the Indian Society for Radiation Physics (Kalpakkam Chapter) and IGCAR, Kalpakkam. This monograph can be obtained at a nominal cost of Rs. 50/= + Rs. 10/= (handling charges), from the Head, HASD, IGCAR, Kalpakkam 603 102, Tamilnadu, India. [**[Foreword]{}**]{} The linearized Boltzmann transport equation is extensively used in nuclear engineering to assess the transport behaviour of radiation in bulk materials. The solution to this equation provides detailed knowledge of the spatial and temporal distribution of the particle flux (of neutrons, photons, [*etc.*]{}). All engineering quantities of practical interest, such as heat produced, reaction rates, effective multiplication factors, radiation dose [*etc.*]{}, are derived therefrom. This equation is not solvable in closed form except in the simplest of situations. Therefore, numerical methods of solution are used in almost all applications. Many standardized computer codes have been developed and validated against data from benchmark experiments. Numerical techniques of solving the equation also turn out to be inadequate if the geometry of the problem is complex, as happens very often in real life situations. In these instances, Monte Carlo simulation of the physical processes contained in the equation is possibly the only way out.
This technique, started around the late 1940’s, has developed considerably over the years and has now reached a high level of sophistication. As a general technique, it finds application in almost all branches of science as well. There is a tendency among the many practitioners of the Monte Carlo technique to use the method (particularly the readily available computer codes) like a black box, little realising that care and caution are called for in the choice of random number generators, in ensuring sampling adequacy, and in interpreting the statistical results obtained. There are excellent books dealing with the underlying theory of Monte Carlo games; studying these books requires some depth in mathematics, scaring away the beginner. This monograph is intended to present the basics of the Monte Carlo method in simple terms. The mathematical equipment required is no more than college level calculus and notational familiarity with primitive set theory. All the essential ideas required for an appreciation of the Monte Carlo technique - its merits, pitfalls, limitations - are presented in a lucid pedagogic style. Dr. K. P. N. Murthy, the author of this monograph, has well over twenty five years of research experience in the application of the Monte Carlo technique to problems in physical sciences. He has also taught this subject on a number of occasions and has been a source of inspiration to young and budding scientists. The conciseness and clarity of the monograph give an indication of his mastery over the subject. It is my great pleasure, as President of the Indian Society for Radiation Physics, Kalpakkam Chapter, to be associated with the publication of this monograph by the society. Like all other popular brochures and technical documents brought out earlier, I am sure, this monograph will be well received. A. Natarajan Monte Carlo is a powerful numerical technique useful for solving several complex problems.
The method has gained in importance and popularity owing to the easy availability of high-speed computers. In this monograph I shall make an attempt to present the theoretical basis for the Monte Carlo method in very simple terms: sample space, events, probability of events, random variables, mean, variance, covariance, characteristic function, moments, cumulants, Chebyshev inequality, law of large numbers, central limit theorem, generalization of the central limit theorem through Lévy stable law, random numbers, generation of pseudo random numbers, randomness tests, random sampling techniques: inversion, rejection and Metropolis rejection; sampling from a Gaussian, analogue Monte Carlo, variance reduction techniques with reference to importance sampling, and optimization of importance sampling. I have included twenty-one assignments, which are given as boxed items at appropriate places in the text. While learning Monte Carlo, I benefited from discussions with many of my colleagues. Some are P. S. Nagarajan, M. A. Prasad, S. R. Dwivedi, P. K. Sarkar, C. R. Gopalakrishnan, M. C. Valsakumar, T. M. John, R. Indira, and V. Sridhar. I thank all of them and several others not mentioned here. I thank V. Sridhar for his exceedingly skillful and enthusiastic support to this project, in terms of critical reading of the manuscript, checking explicitly the derivations, correcting the manuscript in several places to improve its readability and for several hours of discussions. I thank M. C. Valsakumar for sharing his time and wisdom; for his imagination, sensitivity and robust intellect which have helped me in my research in general and in this venture in particular. Indeed M. C. Valsakumar has been and shall always remain a constant source of inspiration to all my endeavours. I thank R. Indira for several hours of discussions and for a critical reading of the manuscript. 
The first draft of this monograph was prepared on the basis of the talks I gave at the Workshop on Criticality calculations using KENO, held at Kalpakkam during September 7-18, 1998. I thank A. Natarajan and C. R. Gopalakrishnan for the invitation. Subsequently, I spent a month from October 5, 1998, at the School of Physics, University of Hyderabad, as a UGC visiting fellow. During this period I gave a course on Monte Carlo: Theory and Practice. This course was given in two parts. In the first part, I covered the basics and in the second part I discussed several applications. This monograph is essentially based on the first part of the Hyderabad course. I thank A. K. Bhatnagar for the invitation. I thank V. S. S. Sastri, K. Venu and their students for the hospitality. This monograph in its present form was prepared after several additions and extensive revisions I made during my stay as a guest scientist at the Institut für Festkörperforschung, Forschungszentrum Jülich, for three months starting from 22 July 1999. I thank Klaus W. Kehr for the invitation. I thank Forschungszentrum Jülich for the hospitality. I thank Klaus W. Kehr, Michael Krenzlin, Kiaresch Mussawisade, Ralf Sambeth, Karl-Heinz Herrmann, D. Basu, M. Vijayalakshmi and several others, for the wonderful time I had in Jülich. I thank Awadesh Mani, Michael Krenzlin, Ralf Sambeth, Achille Giacometti, S. Rajasekar, Subodh R. Shenoy and S. Kanmani for a critical reading of the manuscript and for suggesting several changes to improve its readability. I owe a special word of thanks to A. Natarajan; he was instrumental in my taking up this project.
He not only encouraged me to write this monograph but also undertook the responsibility of getting it published through the Indian Society for Radiation Physics (ISRP), Kalpakkam Chapter, on the occasion of the Workshop on Monte Carlo: Radiation Transport, February 7 - 18, 2000, at Kalpakkam, conducted by the Safety Research Institute of the Atomic Energy Regulatory Board (AERB). I am very pleased that A. Natarajan has written the foreword to this monograph. I have great pleasure in dedicating this monograph to two of the wonderful scientists I have met and whom I hold in very high esteem: [**Prof. Dr. Klaus W. Kehr**]{}, Jülich, who retired formally in July 1999, and [**Dr. M. A. Prasad**]{}, Mumbai, who is retiring formally in March 2000. I take this opportunity to wish them both the very best.\ Kalpakkam,\ January 9, 2000. K. P. N. Murthy INTRODUCTION ============ Monte Carlo is a powerful numerical technique that makes use of random numbers to solve a problem. I assume we all know what random numbers are. However, this issue is by no means trivial, and I shall have something to say on it later. Historically, the first large scale Monte Carlo work carried out dates back to the middle of the twentieth century. This work pertained to studies of neutron multiplication, scattering, propagation and eventual absorption in a medium or leakage from it. Ulam, von Neumann and Fermi were the first to propose and employ the Monte Carlo method as a viable numerical technique for solving practical problems. There were, of course, several isolated and perhaps not fully developed instances earlier when Monte Carlo had been used in some form or the other. An example is the experiment performed in the middle of the nineteenth century, consisting of throwing a needle randomly on a board notched with parallel lines and inferring the value of $\pi$ from the number of times the needle intersects a line; this is known as Buffon’s needle problem, see for example [@HALL].
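Buffon's experiment is easy to reproduce on a computer. The sketch below (my illustration, not from the monograph) drops a needle of length $2a$ on lines a distance $2b$ apart ($a\le b$): a crossing occurs when the distance of the needle's centre from the nearest line does not exceed $a\sin\theta$, and inverting the hit probability $2a/\pi b$ yields an estimate of $\pi$:

```python
# Buffon's needle simulation (illustrative sketch).
import math
import random

def buffon(n, a=1.0, b=2.0, seed=1):
    """Estimate pi from n random drops of a needle of half-length a (a <= b)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        y = rng.uniform(0.0, b)            # centre's distance to nearest line
        theta = rng.uniform(0.0, math.pi)  # needle orientation
        if y <= a * math.sin(theta):       # the needle crosses a line
            hits += 1
    return 2.0 * a * n / (b * hits)        # invert P = 2a/(pi*b)

print(buffon(100_000))   # close to 3.14 for large n
```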
The Quincunx constructed by Galton [@GALTON] toward the end of the nineteenth century consisted of balls rolling down an array of pins (which deflect the balls randomly to their left or right) and getting collected in the vertical compartments placed at the bottom. The heights of the balls in the compartments approximate the binomial distribution. This Monte Carlo experiment is a simple demonstration of the Central Limit Theorem. In the nineteen twenties, Karl Pearson perceived the use of random numbers for solving complex problems in probability theory and statistics that were not amenable to exact solutions. Pearson encouraged L. H. C. Tippett to produce a table of random numbers to help in such studies, and a book of random sampling numbers [@lhctippet] was published in the year 1927. This was followed by another publication of random numbers by R. A. Fisher and F. Yates. Pearson and his students used this method to obtain the distributions of several complex statistics. In India, P. C. Mahalanobis [@pcm] exploited the random sampling technique to solve a variety of problems, like the choice of optimum sampling plans in survey work, the choice of optimum size and shape of plots in experimental work, [*etc.*]{}; see [@crrao]. Indeed, descriptions of several modern Monte Carlo techniques appear in a paper by Kelvin [@KELVIN], written nearly a hundred years ago, in the context of a discussion on the Boltzmann equation. But Kelvin was more interested in the results than in the technique, which to him was [*obvious*]{}! The Monte Carlo technique derives its name from a game very popular in Monaco. The children get together at the beach and throw pebbles at random on a square which has a circle inscribed in it. From the fraction of the pebbles that fall inside the circle, one can estimate the value of $\pi$. This way of estimating the value of $\pi$ goes under the name rejection technique.
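The pebble game is perhaps the simplest Monte Carlo experiment to code. In the sketch below (an illustration of mine, not from the text), points are thrown uniformly on a unit square with an inscribed circle of radius $1/2$; four times the fraction falling inside the circle estimates $\pi$:

```python
# Hit-or-miss ("pebble") estimate of pi (illustrative sketch).
import random

def pi_pebbles(n, seed=7):
    """Throw n pebbles on the unit square; count those inside the circle."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n)
                 if (rng.random() - 0.5)**2 + (rng.random() - 0.5)**2 <= 0.25)
    return 4.0 * inside / n       # area ratio is pi/4
```

The statistical error of this estimate decreases only as $1/\sqrt{n}$, which is the generic rate for analogue Monte Carlo discussed later in the monograph.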
A delightful variant of the game is discussed by Krauth [@krauth]; this variant brings out in a simple way the essential principle behind the Metropolis rejection method. We shall discuss the rejection technique as well as the Metropolis rejection technique later. In this monograph I shall define certain minimal statistical terms and invoke some important results in mathematical statistics to lay a foundation for the Monte Carlo methods. I shall try to make the presentation as simple and as complete as possible. SAMPLE SPACE, EVENTS AND PROBABILITIES ====================================== Consider an experiment, real or imagined, which leads to more than one outcome. We collect all the possible outcomes of the experiment in a set $\Omega$ and call it the [**sample space**]{}. An outcome is often denoted by the symbol $\omega$. Certain subsets of $\Omega$ are called [**events**]{}. The class of all events is denoted by ${\cal F}$. For every pair of events ${\cal A}_1\in {\cal F}$ and ${\cal A}_2\in {\cal F}$, if ${\cal A}_1\cup {\cal A}_2$, ${\cal A}_1\cap {\cal A}_2$, and ${\cal A}_1 -{\cal A}_2$ are also events $\in {\cal F}$, then ${\cal F}$ is called a [**field**]{}. To each event ${\cal A}$ we assign a real number $0\le {\cal P}({\cal A})\le 1$ called the [**probability**]{} of the event. If $\Omega$ consists of infinitely many outcomes, we demand that $\cup_{i=1}^{\infty} {\cal A}_i$ and $\cap_{i=1}^{\infty}{\cal A}_i $ be also events and form a [**Borel field**]{} of all possible events which includes the null event $\phi$. We have ${\cal P}(\phi)=0$, ${\cal P}(\Omega) =1$ and ${\cal P}( {\cal A}_1 \cup {\cal A}_2 )={\cal P} ( {\cal A}_1 ) + {\cal P}({\cal A}_2 )-{\cal P} ( {\cal A}_1 \cap {\cal A}_2)$. Events ${\cal A}_1$ and ${\cal A}_2$ are [**disjoint**]{} (or [**mutually exclusive**]{}) if ${\cal A}_1 \cap {\cal A}_2 = \phi$. Not all the subsets of $\Omega $ shall be events. 
One reason for this is that we may wish to assign probabilities to only some of the subsets. Another reason is of mathematical nature: we may not be able to assign probabilities to some of the subsets of $\Omega$ at all. The distinction between subsets of $\Omega$ and events and the consequent concept of Borel field are stated here simply for completeness. In applications, all [*reasonably defined*]{} subsets of $\Omega$ would be events.\ [**How do we attach probabilities to the events ?**]{}\ Following the classical definition, we say that ${\cal P}( {\cal A})$ is the ratio of the number of outcomes in the event ${\cal A}$ to that in $\Omega$, provided all [*the outcomes are equally likely*]{}. To think of it, this is the method we adopt intuitively to assign probabilities - like when we say the probability for heads in a toss of a coin is half; the probability for the number, say one, to show up in a roll of a die is one-sixth; or when we say that all micro states are equally probable while formulating statistical mechanics. An interesting problem in this context is the Bertrand paradox [@bertrand]; you are asked to find the probability for a randomly chosen chord in a circle of radius $r$ to have a length exceeding $r\sqrt{3}$. You can get three answers $1/2, 1/3,$ and $1/4$, depending upon the experiment you design to select a chord randomly. An excellent discussion of the Bertrand paradox can be found in [@PAP]. [**Assignment 1**]{}\ (A) A fine needle of length $2a$ is dropped at random on a board covered with parallel lines with distance $2b$ apart. Show that the probability that the needle intersects one of the lines equals $2a/\pi b$. See page 131 of [@PAP].\ (B) Devise an experiment to select randomly a chord in a circle of radius $r$. What is the probability that its length exceeds $r\sqrt{3}$ ? See page 9-10 of [@PAP]. What is the probability distribution of the chord length ?\ Alternately, we can take an operational approach. 
${\cal P}({\cal A})$ is obtained by observing the frequency of occurrence of ${\cal A}$: repeat the experiment some $N$ times and let $N_{ {\cal A}}$ be the number of times the event ${\cal A}$ occurs. Then $N_{ {\cal A}}/N$ in the limit of $N\to\infty$ gives the probability of the event. As seen above, the formal study of probability theory requires three distinct notions, namely the sample space $\Omega$, the (Borel) field ${\cal F}$ and the probability measure ${\cal P}$. See Papoulis [@PAP] and Feller [@FEL]. Physicists, however, use a different, single notion, namely the [**ensemble**]{}. Consider for example a sample space that contains discrete outcomes. An ensemble is a collection whose members are the elements of the sample space, each repeated as many times as would reflect its probability. Every member of $\Omega$ finds a place in the ensemble and every member of the ensemble is some element of the sample space. The number of times a given element (outcome) of the sample space occurs in the ensemble is such that the ratio of this number to the total number of members in the ensemble is, exactly, the probability associated with the outcome. The number of elements in the ensemble is, strictly, infinite. The molecules of a gas in equilibrium provide a simple example of an ensemble; the speeds of the molecules at any instant of time have the Maxwell-Boltzmann distribution, denoted by $p(v)$. Each molecule can then be thought of as a member of the ensemble; the number of molecules with speeds between $v_1$ and $v_2 $, divided by the total number of molecules $N$ in the gas, is $\int_{v_1}^{v_2} p(v)dv$.

RANDOM VARIABLES
================

The next important concept in the theory of probability is the definition of a (real) [**random variable**]{}. A random variable, denoted by $X(\omega)$, is a function on the sample space: it attaches a real number $x$ to an outcome $\omega$. A random variable thus maps the abstract outcomes to numbers on the real line.
It stamps each outcome, so to say, with a real number. Consider an experiment whose outcomes are [**discrete**]{} and are denoted by $\{ \omega_i\}$, with $i$ running from $1$ to say $N$. Let $X(\omega)$ be the random variable that maps the abstract outcomes $\{ \omega _i \}$ to real numbers $\{ x_i \}$. Then $p_i ={\cal P}(\omega\vert X(\omega)=x_i)$ gives the probability that the random variable $X(\omega )$ takes the real value $x_i$. The probabilities $\{ p_i \}$ obey the conditions, $$\begin{aligned} 0\ \ \le\ \ p_i \ \ & \le \ \ & 1,\ \ \forall\ \ i ,\nonumber\\ \sum_{i=1}^{N} p_{i}^{} & =\ \ 1.\end{aligned}$$ Let me illustrate the above ideas through a few examples.\
[**Tossing of a single coin**]{}\
The simplest example is the tossing of a single coin. The sample space is $\Omega = \{ H,T\} $, where $H$ denotes the outcome Heads and $T$ the Tails. There are four possible events: $$\begin{aligned} {\cal F}&=&\left\{ \left\{ H \right\}, \left\{ T \right\}, \left\{ H, T \right\}, \left\{ \phi\right\} \right\} .\nonumber\end{aligned}$$ The corresponding probabilities are $1/2,\ 1/2,\ 1$ and $0$ for a fair coin. We can define a random variable $X(\omega)$ by attaching $+1$ to $H$ and $-1$ to $T$, see Table (1). Then we say that the probability for the random variable $X$ to take the value $+1$ is ${\cal P} [ \omega\vert X (\omega )=1]=1/2$, and that for $-1$ is likewise half. This defines the discrete probability distribution: $p(1)=1/2,\ p(-1)=1/2$, see Table (1).

\[onetoss\_tab\]

----------- ---------------- --------------------------------------------------
 $\omega$    $x= X(\omega)$   ${\cal P} \left[ \omega\vert X(\omega)=x\right]$
 H           $+1$             $1/2$
 T           $-1$             $1/2$
----------- ---------------- --------------------------------------------------

: Random variable and probability for the toss of a single coin.

The physicists’ ensemble containing ${\cal N}$ members would be such that half of ${\cal N}$ are Heads and half Tails.
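The frequency interpretation of these probabilities is easy to demonstrate with a short simulation (a sketch in Python; the sample size and the seed are arbitrary choices, not anything prescribed in the text):

```python
import random

def toss_coins(n_tosses, p_heads=0.5, seed=1):
    """Simulate coin tosses; the random variable X is +1 for Heads, -1 for Tails."""
    rng = random.Random(seed)
    return [+1 if rng.random() < p_heads else -1 for _ in range(n_tosses)]

def estimate_probability(samples, value):
    """Frequency estimate N_A / N of the event X = value."""
    return samples.count(value) / len(samples)

x = toss_coins(100_000)
p_plus = estimate_probability(x, +1)
print(p_plus)  # close to 1/2 for a fair coin
```

For large $N$ the estimate settles near $1/2$, in accordance with the operational definition of probability given earlier.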
If the probability of Heads is $3/4$ and of Tails is $1/4$, then the corresponding ensemble would contain $3{\cal N}/4$ Heads and ${\cal N}/4$ Tails.\
[**Tossing of several coins**]{}\
Consider now the tossing of $N$ independent fair coins. Let $X_1$, $X_2$, $\cdots$ $X_N$ be the corresponding random variables. $X_i (H) = +1$ and $X_i (T) = -1\ \ \forall\ i=1,2,\cdots , N$. For each coin there are two possible (equally likely) outcomes. Therefore, for $N$ coins there are $2^N$ possible (equally likely) outcomes. For the $N$-coin-tossing experiment each outcome is a distinct string of $H$ and $T$, the string length being $N$. An example with $N=3$ is shown in Table (2), where the random variable $Y_3 (\omega)$ is defined by the sum: $Y_3 = X_1 + X_2 +X_3 $.

\[sum3rv\_tab\]

---------- ------------------ ---------------------------------------------------
 $\omega$   $x=Y_3 (\omega)$   ${\cal P}\left[ \omega\vert Y_3 (\omega)=x\right]$
 HHH        $+3$               $1/8$
 HHT
 HTH        $+1$               $3/8$
 THH
 TTH
 THT        $-1$               $3/8$
 HTT
 TTT        $-3$               $1/8$
---------- ------------------ ---------------------------------------------------

: Random variable and probability for the three-coin-tossing experiment.

The probability for each outcome (string) in the throw of $N$ coins is thus $2^{-N}$. Let us define a random variable $Y_N =X_1 + X_2 + \cdots + X_N$. The probability for $n$ Heads in a toss of $N$ coins, which is the same as the probability for the random variable $Y_N $ to take the value $n-(N-n)=2n-N$, is given by the [**binomial**]{} distribution, $$\label{binomial_eq} P(n, N) \equiv {\cal P}[Y_N =2n-N]={{1}\over{2^N}}\ {{N!}\over{n! (N-n)!}}.$$ Fig. \[BINOMIAL\_PS\] depicts the binomial distribution for $N=6$. For the general case of a loaded coin, with probability $p$ for Heads and $q=1-p$ for Tails, we have, $$\label{loadbin} P(n,N)\equiv {\cal P}[Y_N =2n-N]= {{N!}\over{n!\ (N-n)!}}\ p^n\ q^{N-n}.$$ [**Rolling of a fair die**]{}\
Another simple example consists of rolling a die. There are six possible outcomes.
The sample space is given by, $$\begin{aligned} \Omega =\hspace{10cm}\nonumber \end{aligned}$$ $$\begin{aligned} \left\{ \begin{array}{cccccccccccccccccccccc} & & &\bullet& & & &\bullet& & & &\bullet& &\bullet & &\bullet& &\bullet& &\bullet&\bullet&\bullet \\ \bullet & & & & & & & &\bullet & & & & & & & &\bullet & & & & & \\ & &, & & &\bullet &, & & &\bullet &, &\bullet & &\bullet &, &\bullet & &\bullet &, &\bullet &\bullet &\bullet \end{array} \right\} .\nonumber\end{aligned}$$ The random variable $X(\omega)$ attaches the numbers $\{ x_i : i=1,2,\cdots ,6\}$ $ \equiv $ $ \{1,\ 2,\ 3,\ 4,\ 5,\ 6\}$ to the outcomes $\{ \omega _i :i=1,2,\cdots ,6\}$, in the same order. If the die is not loaded we have $p(x_i)=1/6\ \forall\ i=1,2,\cdots ,6$. An example of an event, which is a subset of $\Omega$, is $$\begin{aligned} {\cal A}& =& \begin{tiny} \left\{ \begin{array}{ccccccccc} & &\bullet & & & & \bullet & &\bullet \\ \bullet & & &\bullet & & & &\bullet & \\ &, & & &\bullet &, &\bullet & &\bullet \end{array} \right\} ,\nonumber \end{tiny}\end{aligned}$$ and ${\cal P}({\cal A})=1/2$. This event corresponds to the roll of an odd number. Consider the event $$\begin{aligned} {\cal B}= \begin{tiny} \left\{ \begin{array}{ccccccccccc} \bullet & & & &\bullet & &\bullet & &\bullet &\bullet & \bullet\\ & & & & & & & & & & \\ & & \bullet &, &\bullet & &\bullet &, & \bullet &\bullet&\bullet \end{array} \right\} , \nonumber \end{tiny}\end{aligned}$$ which corresponds to the roll of an even number. It is clear that the events ${\cal A}$ and ${\cal B}$ cannot happen simultaneously; in other words, a single roll of the die cannot lead to both the events ${\cal A}$ and ${\cal B}$. The events ${\cal A}$ and ${\cal B}$ are [**disjoint**]{} (or [**mutually exclusive**]{}).
If we define another event $$\begin{aligned} {\cal C} & = & \begin{tiny} \left\{ \begin{array}{ccccccccc} & &\bullet & & & &\bullet & & \\ \bullet & & & & & & &\bullet & \\ &, & & &\bullet &, & & &\bullet \end{array} \right\} , \nonumber \end{tiny}\end{aligned}$$ then it is clear that ${\cal A}$ and ${\cal C}$ are not disjoint. We have $$\begin{aligned} {\cal A} \cap {\cal C}= \begin{tiny} \left\{ \begin{array}{ccccc} & & \bullet & & \\ \bullet & & &\bullet & \\ & , & & &\bullet \end{array} \right\} . \nonumber \end{tiny}\end{aligned}$$ Similarly, the events ${\cal B} $ and ${\cal C}$ are not disjoint: $$\begin{aligned} {\cal B} \cap {\cal C}= \begin{tiny} \left\{ \begin{array}{ccc} \bullet & & \\ & & \\ & &\bullet \end{array} \right\} .\nonumber \end{tiny}\end{aligned}$$ [**Assignment 2**]{}\
Consider rolling a fair die $N$ times. Let $n_k $ denote the number of times the number $k$ shows up, where $k$ runs from $1$ to $6$. Find an expression for the probability $P(n_1 , n_2 , n_3 , n_4 , n_5 , n_6 , N)$. What is the probability distribution of the random variable $n=\sum_{k=1}^{6} k\ n_k$? [**Poisson distribution**]{}\
An important discrete distribution, called the [**Poisson**]{} distribution, arises in the context of random occurrences in time. In an interval $\Delta t\to 0$ positioned at any time $t$, there is either one occurrence, with probability $\lambda\Delta t$, or no occurrence, with probability $1-\lambda\Delta t$. Here $\lambda^{-1}$ is the characteristic time constant of the Poisson process. Let $P(n,t)$ denote the probability that there are $n$ occurrences in the interval $[0,t]$. A [**master equation**]{} can be readily written down as, $$\begin{aligned} \label{MEP} P(n,t) & = & \lambda\Delta t\ P(n-1,t-\Delta t) + [ 1-\lambda\Delta t ] P(n, t-\Delta t), \nonumber\\ P(n,t=0)&=&\delta_{n,0}.\end{aligned}$$ To solve for $P(n,t)$, define the [**generating function**]{}, $$\tilde{P}(z,t) = \sum_{n=0}^{\infty}z^n P(n,t).$$ Multiplying both sides of Eq.
(\[MEP\]) by $z^n$ and summing over $n$ from $0$ to $\infty$, we get, $$\begin{aligned} \tilde{P}(z,t)&=&\lambda z\Delta t\tilde{P} (z,t-\Delta t) + [ 1-\lambda\Delta t ] \tilde{P}(z, t-\Delta t),\nonumber\\ \tilde{P}(z,t=0)&=& 1.\end{aligned}$$ In the limit $\Delta t\to 0$, we get from the above $${{ \partial{\tilde{P } }}\over{\partial{t} } } =- \lambda (1-z)\tilde{P}(z,t),$$ whose solution is, $$\label{poigen} \tilde{P}(z,t)=\exp\left[ -\lambda\left( 1-z\right) t\right],$$ consistent with the initial condition, $$\begin{aligned} \tilde{P}(z,t=0) &=&\sum_{n=0}^{\infty}z^n P(n,t=0)\nonumber\\ & & \nonumber\\ &=&\sum_{n=0}^{\infty}z^n \delta _{n,0}\nonumber\\ & & \nonumber\\ &=&1.\end{aligned}$$ Taylor expanding the right hand side of Eq. (\[poigen\]), we get $P(n,t)$ as the coefficient of $z^n$, $$\label{poissondist} P(n,t)={{ (\lambda t)^n}\over{n!}} \exp (-\lambda t) .$$ Fig. \[POISSON\_PS\] depicts the Poisson distribution.\
[**Continuous distributions**]{}\
What we have seen above are a few examples of discrete random variables, which take either a finite or at most a countably infinite number of possible values. Often we have to consider random variables that take values in an interval; the sample space is then a [**continuum**]{}. In this case we say that $f(x)dx$ is the probability of the event for which the random variable $X(\omega)$ takes a value between $x$ and $x+dx$, [*i.e.*]{} $f(x)dx={\cal P}[\omega\vert x\le X(\omega) \le x+dx]$. We call $f(x)$ the [**probability density function**]{} or the [**probability distribution function**]{}.\
[**Uniform distribution**]{}\
An example is the [**uniform**]{} random variable, $U(a,b)$, defined in the interval $[a,b]$. The probability density function $f(x)$ of the random variable $U(a,b)$ is defined by $f(x)dx=dx/(b-a)$.
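As a minimal sketch (assuming Python's `random.random()` as the source of $U(0,1)$ numbers), a $U(a,b)$ variate is obtained from $U(0,1)$ by a linear map:

```python
import random

def uniform_ab(a, b, rng):
    """Sample U(a,b) from U(0,1): x = a + (b - a) * u."""
    return a + (b - a) * rng.random()

rng = random.Random(7)
samples = [uniform_ab(2.0, 5.0, rng) for _ in range(10_000)]
print(min(samples), max(samples))  # all samples lie in [2, 5)
```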
We shall come across the random variable $U(0,1)$ later in the context of pseudo random numbers and their generators.\ [**Gaussian distribution**]{}\ A probability density function we would often come across is the [**Gaussian**]{}. This density function is of fundamental importance in several physical and mathematical applications. It is given by, $$\label{gauss_eq} G(x)={{1}\over{\sigma \sqrt{ 2\pi}}}\exp\left[- {{ \left( x-\mu\right)^2}\ \over{2\sigma^2}}\right], \ \ -\infty\ < x < +\infty , $$ where $\mu$ and $\sigma$ are the parameters, called the mean and standard deviation. Fig. \[GAUSS\_PS\] depicts the Gaussian density for $\mu =0$ and $\sigma = 0.5, 1.0$ and $1.5$. The Gaussian density function plays a central role in the estimation of statistical errors in Monte Carlo simulation, as we would see later.\ [**Exponential distribution:**]{}\ The [**exponential**]{} density arises in the context of several physical phenomena. The time taken by a radioactive nucleus to decay is exponentially distributed. The distance a gamma ray photon travels in a medium before absorption or scattering is exponentially distributed. The exponential distribution is given by, $$\begin{aligned} \label{exp} f(x)=\cases{ \alpha e^{-\alpha x},\ & for\ $x\ \ge\ 0$ \cr & \cr 0, & for\ $x\ < \ 0$ \cr }\end{aligned}$$ where $\alpha\ >\ 0 $ is a parameter of the exponential distribution. Fig. \[exp\_ps\] depicts the exponential distribution, with $\alpha =1$.\ [**Cauchy distribution**]{}\ An interesting distribution named after Cauchy is given by, $$\label{cauchy} f(x) ={{1}\over{\pi }} {{D}\over{D^2 + x^2}},\ -\infty\ < x\ <\ +\infty ,$$ where $D > 0$ is a scale factor. Fig. \[cau\_ps\] depicts the [**Cauchy**]{} distribution with $D=1$. If a random variable $\theta$ is uniformly distributed in the range $-\pi/2$ to $+\pi /2$, then $x=\tan (\theta)$ follows a Cauchy distribution ($D=1$). 
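The tangent construction can be checked numerically (a sketch; the sample size and the test point are arbitrary choices). For the Cauchy distribution with $D=1$ the cumulative distribution is $F(x)=1/2+\arctan(x)/\pi$, so the fraction of samples not exceeding $x=1$ should approach $F(1)=3/4$:

```python
import math
import random

def cauchy_from_tan(rng):
    """Sample a Cauchy variate (D = 1) as tan(theta), theta ~ U(-pi/2, pi/2)."""
    theta = -math.pi / 2 + math.pi * rng.random()
    return math.tan(theta)

rng = random.Random(42)
samples = [cauchy_from_tan(rng) for _ in range(100_000)]
frac_below_one = sum(s <= 1.0 for s in samples) / len(samples)
print(frac_below_one)  # should be close to F(1) = 3/4
```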
If $X_1$ and $X_2$ are independent Gaussian random variables, each with mean zero and standard deviations $\sigma _1$ and $\sigma_2$ respectively, then the ratio $X=X_1 /X_2 $ is Cauchy distributed with $D= \sigma_1 /\sigma_2$. The Cauchy distribution is also known in the literature as the [**Lorentz**]{} distribution or the [**Breit-Wigner**]{} distribution and arises in the context of resonance line shapes.

MEAN, VARIANCE AND COVARIANCE
=============================

[**Mean**]{}\
The [**mean**]{} (also called the expectation value, the average, or the first moment) of a random variable $X(\omega)$ with a probability density $f(x)$ is denoted by the symbol $\mu$ and is defined as, $$\mu(X)=\int_{-\infty}^{+\infty}x\ f(x)dx.$$ The mean is the single most important parameter in the theory of probability; indeed, in Monte Carlo calculations we are invariably interested in calculating the mean of some random variable, defined explicitly or implicitly. Let us repeat the experiment $N$ times and observe the outcomes $\omega_1, \omega_2, \cdots ,\omega_N$. Let the values of the random variable $X(\omega)$ corresponding to these outcomes be $x_1, x_2, \cdots x_N$ respectively. If $N$ is sufficiently large then $(\sum_{k=1}^{N}x_k)/N$ approximately equals $\mu$. In the limit $N\to\infty$ the arithmetic mean of the results of the $N$ experiments converges to $\mu$.\
[**Statistical convergence**]{}\
At this point it is worthwhile discussing the meaning of [**statistical convergence**]{} [*vis-a-vis*]{} the deterministic convergence we are familiar with. A given sequence $\{ A(\nu );\ \nu\ =\ 1,\ 2,\cdots\}$ is said to converge to a value, say $B$, as $\nu\to\infty$, if for (an arbitrarily small) $\delta > 0$, we can find an $M$ such that for all $k\ge M$, $A(k)$ is [*guaranteed*]{} to be within $\delta$ of $B$.
In the statistical context the term [*guarantee*]{} is replaced by a statement of probability, and the corresponding definition of convergence becomes: $A(\nu )$ is said to converge to $B$ as $\nu\to\infty$ if, given a probability $0<P<1$ and (an arbitrarily small) $\delta > 0$, we can find an $M$ such that for all $k\ge M$, the probability that $A(k)$ is within $\delta$ of $B$ is greater than $P$. [*This risk, that convergence can be assured only with a certain probability, is an inherent feature of all Monte Carlo calculations.*]{} We shall have more to say on this issue when we take up discussions of the Chebyshev inequality, the law of large numbers and eventually the central limit theorem, which help us appreciate the nature and content of Monte Carlo errors. For the moment it suffices to say that it is possible to make a quantitative probabilistic statement about how close the arithmetic mean $A(\nu)$ of $\nu$ experiments is to the actual mean $\mu$, for large $\nu$; such a statistical error estimate depends on the number of experiments $(\nu )$ and on the value of the second most important parameter in the theory of probability, namely the [**variance**]{} of the random variable $X$ underlying the experiment.\
[**Variance**]{}\
The variance, $\sigma^2$, is defined as the expectation value of $(X-\mu)^2$. Formally we have, $$\sigma^2 (X) = \int_{-\infty}^{+\infty}(x-\mu)^2 f(x) dx .$$ The square root of the variance is called the [**standard deviation**]{}.\
[**Moments**]{}\
We can also define what are called the moments of the random variable.
The [**K**]{}$^{{\rm {\bf th}}}$ [**moment**]{} is denoted by $M_K$, and is defined as, $$M_K = \int_{-\infty}^{+\infty} x^K f(x) dx .$$ It is clear that $M_0=1$, which simply states that $f(x)$ is normalized to unity, $M_1 =\mu$, and $\sigma^2 = M_2\ -\ M_{1}^2$.\
[**Cumulative probability distribution**]{}\
The [**cumulative probability distribution function**]{}, denoted by $F(x)$, is defined as $$\label{cdf} F(x) = {\cal P}\left[ \omega\vert X(\omega)\le x\right] = \int_{-\infty}^x f(x')dx' .$$ $F(x)$ is a monotonic non-decreasing function of $x$; $F(-\infty)=0$ and $F(\infty)=1$.\
[**Sum of two random variables**]{}\
Let us consider two random variables $X_1$ and $X_2$, and let $Y_2=X_1 + X_2$. We have, $$\begin{aligned} \label{meanoftwo} \mu(Y_2)&=&\int_{-\infty}^{+\infty}dx_1\int_{-\infty}^{+\infty}dx_2 (x_1 + x_2 ) f(x_1 , x_2),\end{aligned}$$ where $f(x_1 , x_2 )$ denotes the [**joint density**]{} of the random variables $X_1$ and $X_2$. Let $f_1 (x)$ and $f_2 (x)$ denote the probability density functions of $X_1$ and $X_2$ respectively. These are called [**marginal densities**]{} and are obtained from the joint density as follows: $$\begin{aligned} \label{marginal} f_1 (x_1) & = & \int_{-\infty}^{+\infty}dx_2 f(x_1 , x_2) ,\nonumber\\ f_2 (x_2) & = & \int_{-\infty}^{+\infty}dx_1 f(x_1 , x_2) .\end{aligned}$$ The integral in Eq. (\[meanoftwo\]) can be evaluated and we get, $$\begin{aligned} \mu (Y_2) & = & \int_{-\infty}^{+\infty}dx_1\ x_1\int_{-\infty}^{+\infty}dx_2 \ f(x_1 , x_2) + \nonumber\\ & \ & \int_{-\infty}^{+\infty}dx_1\ \int_{-\infty}^{+\infty} dx_2\ x_2\ f(x_1,x_2)\nonumber\\ & \ & \nonumber\\ & = & \int_{-\infty}^{+\infty}dx_1 \ x_1 \ f_1 (x_1) + \int_{-\infty}^{+\infty}dx_2 \ x_2 f_2 (x_2)\nonumber\\ & & \nonumber\\ & =& \mu (X_1) + \mu (X_2).\end{aligned}$$ The means thus add up. The variances, however, do not simply add, since the variance involves the square of the random variable.
We have, $$\sigma^2 (Y_2) = \sigma^2 (X_1) + \sigma^2 (X_2)+ 2\times {\rm cov} [X_1,X_2] .$$ The last term in the above is the [**covariance**]{} of $X_1$ and $X_2$, given by $$\label{covariance} {\rm cov}(X_1 , X_2)=\int_{-\infty}^{+\infty}dx_1\ \int_{-\infty}^{+\infty}dx_2\ \left[ x_1 -\mu_1\right]\left[x_2 - \mu_2\right] f(x_1,x_2),$$ where $\mu_1 =\mu (X_1 )$ and $\mu_2 =\mu (X_2 )$. One can define the [**conditional density**]{} of $X_1$, given that $X_2$ takes a value, say $x_2$, as $$\begin{aligned} f_c (x_1\vert x_2)={{f(x_1,x_2)}\over{f_2 (x_2)}} .\end{aligned}$$ If $f_c (x_1\vert x_2)=f_1 (x_1)$, then we find that $f(x_1,x_2)=f_1 (x_1)\times f_2(x_2)$. The random variables $X_1$ and $X_2$ are then [**independent**]{}. In that case we find from Eq. (\[covariance\]) that ${\rm cov}(X_1,X_2)=0$. Thus the covariance is zero if the two random variables are independent. If the covariance is positive, we say that the two random variables are positively correlated; if negative, the random variables are negatively correlated. Note that [*two random variables may be uncorrelated (i.e.
covariance is zero) but they need not be independent; however, if they are independent, they must be uncorrelated.*]{} One usually considers the dimensionless normalized quantity called the correlation coefficient, defined as, $$\label{correcoeff} C(X_1 ,X_2 )={{ {\rm cov} (X_1, X_2)}\over{ \sigma (X_1) \sigma (X_2) }} .$$ It is easily verified that $-1 \le C(X_1 , X_2 )\le +1$.\
Let us calculate the mean and variance of the distributions we have seen so far.\
[**Binomial distribution**]{}\
For the toss of a single fair coin, $$\begin{aligned} \label{binmean} \mu & = & {{1}\over{2}}\times (+1) + {{1}\over{2}}\times (-1)=0 , \nonumber\\ \sigma^2& =& {{1}\over{2}}\times (+1)^2 + {{1}\over{2}}\times (-1)^2 =1.\end{aligned}$$ For the toss of $N$ independent fair coins, the random variable $Y_N =(X_1 + X_2 +\cdots + X_N)$ has $$\mu _N={{1}\over{2^N}} \sum_{n=0}^{N}{{N!}\over{n!(N-n)!}}(2n-N)=0 ,$$ and $$\sigma^2 _N ={{1}\over{2^N}}\sum_{n=0}^{N}{{N!}\over{n!(N-n)!}}(2n-N)^2=N .$$ The random variable $Y_N$ has a simple physical interpretation in terms of a random walk on a one-dimensional lattice. The lattice sites are on, say, the $x$ axis, at unit intervals. You start at the origin and toss a fair coin. If it shows Heads, step to the right, to the lattice site $x=1$; if it shows Tails, step to the left, to the lattice site $x=-1$. At the new site, toss the coin again to decide whether to go to the left site or the right site. After $N$ tosses, find out where you are on the axis; your position defines the random variable $Y_N$. We can replace the number of coins or tosses by time $t$ and say $x(t)$ is the position of the random walker after time $t$, given that it started at the origin at time zero. The average of $x(t)$ is zero for all times: on the average, you do not move. The variance of $x(t)$, denoted by $\sigma ^2 (t)$, increases linearly with time $t$, and the proportionality constant is often denoted by $2D$, where $D$ is called the diffusion constant.
Thus, for the random walk generated by the tossing of fair coins, the diffusion constant is one-half. On the other hand, if we consider the random variable $\bar{Y}_N =Y_N /N$, we find its mean is zero and standard deviation is $1/\sqrt{N}$. Notice that the standard deviation of $\bar{Y}_N$ becomes smaller and smaller as $N$ increases. Consider now the random variable $\bar{Y}_{\sqrt{N}}=Y_N / \sqrt{N}$; it is easily verified that its mean is zero and variance is unity, the same as that for the single toss. Thus $\sqrt{N}$ seems to provide a natural scale for the sum of $N$ independent and identically distributed random variables with finite variance. The reason for this would become clear in the sequel, see section 8. For the case of a single loaded coin (with $p$ as the probability for Heads and $q=1-p$ as that for the Tails) we have, $$\mu = p\times (+1) + q\times (-1) = p-q= 2p-1,$$ and $$\sigma ^2 = p\times (+1)^2 + q\times (-1)^2 - (p-q)^2=4p(1-p).$$ For the toss of $N$ independent and identically loaded coins, the random variable $Y_N = X_1 + X_2 +\cdots X_N $ has, $$\mu _N= \sum_{n=0}^{N}{{N!}\over{n!(N-n)!}}p^n q^{N-n}(2n-N)=N( 2p-1),$$ and $$\sigma ^2 _N = \sum_{n=0}^{N}{{N!}\over{n!(N-n)!}}p^n q^{N-n} (2n-N-\mu _N )^2= 4Np(1-p).$$ We notice that both the mean and the variance increase linearly with $N$. The relevant quantity is the standard deviation relative to the mean; this quantity decreases as $1/\sqrt{N}$. The tossing of $N$ loaded coins defines a biased random walk: there is a drift with velocity $2p-1$ and a diffusion with a diffusion constant $D\equiv \sigma^2/2N =2p(1-p)$. We can also consider the random variable $\bar{Y}_N = Y_N / N$. Its mean is $2p-1$ and is independent of $N$. Its standard deviation, however, decreases with $N$ as $1/\sqrt{N}$. Consider now the natural scaling: $\bar{Y}_{\sqrt{N}} =Y_N / \sqrt{N}$; we find its mean is $(2p-1)\sqrt{N}$ and variance $4p(1-p)$. 
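The drift and diffusion just described can be sketched in a short simulation (Python; the number of steps, the number of walkers, and the seed are arbitrary choices). For a fair coin ($p=1/2$) the mean endpoint stays near zero while the variance grows like $N$, consistent with a diffusion constant of one-half:

```python
import random

def walk_endpoints(n_steps, n_walkers, p_heads=0.5, seed=3):
    """Endpoints Y_N of independent +/-1 random walks driven by coin tosses."""
    rng = random.Random(seed)
    return [
        sum(+1 if rng.random() < p_heads else -1 for _ in range(n_steps))
        for _ in range(n_walkers)
    ]

n_steps, n_walkers = 100, 20_000
ys = walk_endpoints(n_steps, n_walkers)
mean = sum(ys) / n_walkers
var = sum((y - mean) ** 2 for y in ys) / n_walkers
print(mean, var)  # mean near 0; variance near n_steps, i.e. 2 D N with D = 1/2
```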
The variance is independent of $N$.\
[**Poisson distribution**]{}\
The mean of the Poisson distribution is given by $$\mu (t) = \sum_{n=0}^{\infty} n {{ (\lambda t)^n }\over{n!}} e^{-\lambda t} = \lambda t,$$ and the variance is given by, $$\sigma ^2 (t) = \sum_{n=0}^{\infty} (n - \lambda t )^2 {{(\lambda t)^n}\over{n!}}e^{-\lambda t} = \lambda t .$$ Thus, for the Poisson distribution the mean and the variance are equal.\
[**Exponential distribution**]{}\
The mean of the exponential density is given by $$\mu = \int_0 ^{\infty}x\alpha\exp(-\alpha x)\ dx\ =\ {{1}\over{\alpha}} ,$$ and the variance is, $$\sigma^2=\int_0 ^{\infty}\left( x-{{1}\over{\alpha}}\right) ^2\alpha \exp(-\alpha x)\ dx= \left( {{1}\over{\alpha}}\right) ^2 .$$ The standard deviation of the exponential density is $\sigma = 1/\alpha$. The standard deviation may be thought of as the typical deviation from the mean of a number sampled randomly from the distribution. We shall take up the issue of random sampling from a given distribution later. For the present, let us assume that we have a large set of numbers sampled independently and randomly from the exponential distribution. The expected fraction of the numbers that fall between $\mu - \sigma = 0$ and $\mu +\sigma =2/\alpha$ can be calculated and is given by $$\begin{aligned} \label{expprob} {\cal P}(\mu -\sigma \le x \le \mu +\sigma )& =& \int_{\mu-\sigma}^{\mu +\sigma} \alpha e^{-\alpha x}\ dx\nonumber\\ & & \nonumber\\ & = & 1 - e^{-2}\ =\ 0.8647 .\end{aligned}$$ Thus, nearly $86\%$ of the sampled numbers are expected to lie within one sigma of the mean. Of course, there will also be numbers much larger than the mean, since the range of $x$ extends up to infinity. The distribution of the sum of $N$ independent exponential random variables, and the same scaled by $N$ and $\sqrt{N}$, will be considered separately later.
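Anticipating the sampling techniques taken up later, the $86\%$ figure is easy to check numerically; the sketch below assumes the standard inverse-transform recipe $x=-\ln(1-u)/\alpha$, with $u$ drawn from $U(0,1)$:

```python
import math
import random

def sample_exponential(alpha, rng):
    """Inverse-transform sampling: x = -ln(1 - u) / alpha, with u ~ U(0,1)."""
    return -math.log(1.0 - rng.random()) / alpha

alpha = 1.0
rng = random.Random(11)
samples = [sample_exponential(alpha, rng) for _ in range(100_000)]
# Fraction within one sigma of the mean, i.e. in [mu - sigma, mu + sigma] = [0, 2/alpha]
frac = sum(x <= 2.0 / alpha for x in samples) / len(samples)
print(frac)  # theory: 1 - exp(-2) = 0.8647
```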
In fact I am going to use this example to illustrate the approach to the Gaussian as dictated by the Central Limit Theorem.\
[**Cauchy distribution**]{}\
Let us consider the Cauchy distribution discussed earlier. We shall try to evaluate the mean by carrying out the integration, $$\label{cauchyint} \mu ={{1}\over{\pi }} \int_{-\infty}^{+\infty}x {{D}\over{D^2 + x^2}} \ dx\ .$$ Strictly, the above integral does not exist. The reason is as follows: if we carry out the integration from $-\infty$ to $0$ and then from $0$ to $\infty$, neither of these two integrals exists. Notice, however, that the integrand is an odd function. Hence, if we allow something like a [*principal value*]{} integration, where the limits are taken simultaneously, the integral from $-\infty$ to $0$ cancels that from $0$ to $+\infty$, and we can say the mean $\mu$ is zero, consistent with the graphical interpretation of the density, see Fig. \[cau\_ps\]. If we now try to evaluate the variance, we find, $$\label{cauvar} \sigma ^2 = {{1}\over{\pi}} \int_{-\infty}^{+\infty}(x-\mu)^2 {{D}\over{D^2 + x^2}}\ dx.$$ The above is an unbounded integral. Hence, if we sample numbers randomly and independently from the Cauchy distribution and attempt to predict the extent to which these numbers fall close to the mean, we would fail. Nevertheless, the Cauchy distribution is a legitimate probability distribution, since its integral from $-\infty$ to $+\infty$ is unity and the distribution function is greater than or equal to zero for all values of $x$. But its variance is infinite, and its mean calls for a generous interpretation of the integration. Since the standard deviation of the Cauchy distribution is infinite, the width of the distribution is usually characterized by the [**Full Width at Half Maximum (FWHM)**]{}, which is $2D$. Consider the sum $Y_N = X_1 + X_2 + \cdots + X_N$ of $N$ independent Cauchy random variables. What is the probability distribution of the random variable $Y_N$?
Does it tend to a Gaussian as $N\to\infty$? If not, why not? What is the natural scaling for the sum $Y_N$? These and related questions we shall take up later. Bear in mind that the Cauchy distribution has unbounded variance, and this property will set it apart from the others, which have finite variance.\
[**Gaussian distribution**]{}\
For the Gaussian distribution the two parameters $\mu$ and $\sigma$ are the mean and standard deviation respectively. Let us calculate the probability that a number sampled from a Gaussian falls within a one-sigma interval around the mean. This is obtained by integrating the Gaussian from $\mu -\sigma$ to $\mu +\sigma$. We get, $$\begin{aligned} \label{gaussint} {\cal P}(\mu -\sigma \le x\le \mu +\sigma )&=& {{1}\over{\sigma\sqrt{2\pi}}}\int_{\mu -\sigma }^{\mu +\sigma }\exp \left[- {{ \left( x-\mu\right) ^2}\over{2\sigma ^2}}\right] \ dx , \nonumber\\ & & \nonumber\\ & = & 0.6826.\end{aligned}$$ Thus $68\%$ of the numbers sampled independently and randomly from a Gaussian distribution are expected to fall within a one-sigma interval around the mean. This interval is usually called the [*one-sigma confidence*]{} interval. The sum $Y_N$ of $N$ independent Gaussian random variables is also a Gaussian, with mean $N\mu$ and variance $N\sigma^2$. When you scale the sum by $N$, the mean is $\mu$ and the variance is $\sigma^2 /N$; on the other hand, the random variable $Y_N /\sqrt{N}$ has mean $\mu\sqrt{N}$ and variance $\sigma^2$. These results will become clear when we take up the discussion of the Central Limit Theorem later. In fact, we shall see that the arithmetic mean of $N$ independent random variables, each with finite variance, tends to have a Gaussian distribution for large $N$. The standard deviation of the limiting Gaussian distribution is proportional to the inverse of the square root of $N$.
We shall interpret the one-sigma confidence interval of the Gaussian as the statistical error associated with the estimated mean of the $N$ independent random variables.\
[**Uniform distribution**]{}\
For the uniform random variable $U(a,b)$, the mean is, $$\label{meanuni} \mu = {{1}\over{b-a}}\int_a ^b x\ \ dx = {{a+b}\over{2}},$$ and the variance is, $$\label{varuni} \sigma ^2 ={{1}\over{b-a}} \int_a ^b \left( x-{{a+b}\over{2}}\right)^2 \ dx = {{(b-a)^2}\over{12}}.$$ For the random variable $U(0,1)$, the mean and variance are respectively $1/2$ and $1/12$. [**Assignment 3**]{}\
Consider $N$ independent uniform random variables $U(0,1)$. Let $Y_N$ denote their sum.\
(A) Find the mean and variance of (a) $Y_N$, (b) $Y_N /N$ and (c) $Y_N /\sqrt{N}$ as functions of $N$.\
(B) Derive an expression for the probability distribution of $Y_N$ for $N=2$.

CHARACTERISTIC FUNCTION
=======================

The [**characteristic function**]{} of a random variable $X$ is defined as the expectation value of $\exp(ikX)$: $$\Phi_X (k) = \int_{-\infty}^{+\infty}\exp (ikx) f(x)\ dx .$$ Expanding the exponential we get, $$\Phi_X (k)=\int_{-\infty}^{+\infty}f(x)\left[1+(ikx)+{{(ikx)^2}\over{2!}}+ \cdots + {{(ikx)^n }\over{n!}}+\cdots\right]\ dx .$$ Assuming that term-wise integration is valid, we find, $$\label{MGF_EQ} \Phi_X (k) = 1+ikM_1+{{(ik)^2}\over{2!}}M_2+\cdots+{{(ik)^n}\over{n!}}M_n +\cdots ,$$ from which we get, $$i^n M_n =\left. {{d^n}\over{dk^n }} \Phi_X (k)\right\vert _{k=0} .$$ Thus we can generate all the moments from the characteristic function. For a discrete random variable, the characteristic function is defined similarly as, $$\Phi (k) = \sum_{n} \exp (ikx_n)p_n .$$ The logarithm of the characteristic function generates what are called the [**cumulants**]{} or the [**semi-invariants**]{}.
We have, $$\label{CGF_EQ} \Psi_X (k) = \ln\left[ \Phi_X (k)\right] \equiv \sum_{n=1}^{\infty} {{ (ik)^n}\over{n!}} C_n,$$ where $C_n$ is the $n$-th cumulant, given by, $$i^n C_n =\left.{{d^n}\over{dk^n}} \Psi_X (k)\right\vert_{k=0} .$$\ [**How are the cumulants related to the moments ?**]{}\ This can be found by considering, $$\begin{aligned} \ln \left[ 1+ikM_1 + {{(ik)^2}\over{2!}}M_2 + {{ (ik)^3 }\over{3!}}M_3+ \cdots\right] =\quad\quad\quad\quad\quad\nonumber\\ ikC_1 + {{(ik)^2}\over{2!}} C_2 + {{(ik)^3}\over{3!}} C_3 + \cdots . \quad\quad\quad\quad\quad\quad\quad\quad\end{aligned}$$ Taylor-expanding the logarithm on the left hand side of the above and equating the coefficients of the same powers of $k$, we get, $$\begin{aligned} C_1 & = & M_1\\ C_2 & = & M_2 - M_1 ^2 = \sigma ^2\\ C_3 & =& M_3 - 3M_2 M_1 +2 M_1 ^3 ,\\ C_4 & = & M_4 - 4M_3 M_1 - 3M_2 ^2 + 12M_2 M_1 ^2 - 6 M_1 ^4 .\end{aligned}$$ The first cumulant is the same as the first moment (mean). The second cumulant is the variance. A general and simple expression relating the moments to the cumulants can be obtained [@VALSA] as follows. We have, $${{d\Phi(k)}\over{dk}}=\Phi(k){{d\Psi(k)}\over{dk}},$$ where $\Phi(k)$, see Eq. (\[MGF\_EQ\]) and $\Psi(k)$, see Eq. (\[CGF\_EQ\]) are the moments and cumulants generating functions respectively; we have dropped the suffix $X$ for convenience. 
We see that, $$\begin{aligned} {{d^{n+1}\Phi(k)}\over{dk^{n+1} }}&=&{{d^n}\over{dk^n}}\left[ \Phi(k) {{d\Psi(k)}\over{dk}}\right]\nonumber\\ & = & \sum_{m=0}^{n} {{n!}\over{m!\ (n-m)!}} {{d^{n-m} \Phi(k)}\over{dk^{n-m}}} {{d^{m+1}\Psi (k)}\over{dk^{m+1}}},\end{aligned}$$ from which it follows that, $$\begin{aligned} C_1 & = & M_1 \nonumber\\ C_{n+1}&=&M_{n+1} - \sum_{m=0}^{n-1} {{n!}\over{m!\ (n-m)!}} M_{n-m}C_{m+1} \ \ \forall\ n\ >\ 0.\end{aligned}$$\ Let us calculate the characteristic functions of the several random variables considered so far:\ [**Binomial distribution**]{}\ The characteristic function of the random variable defined for the toss of a single fair coin is $$\begin{aligned} \Phi_X (k)& = & {{1}\over{2}} e^{+ik}+{{1}\over{2}} e^{-ik}\nonumber\\ \ & = & \cos (k).\end{aligned}$$ The characteristic function of $Y_N = X_1 + X_2 +\cdots X_N$, defined for the toss of $N$ independent fair coins is given by, $$\begin{aligned} \Phi_{Y_N} (k)& = & {{1}\over{2^N }} \sum_{n=0}^{N} {{N!}\over{n!(N-n)!}} e^{ik(2n-N)}\nonumber\\ & & \nonumber\\ &=&{{1}\over{2^N}}\sum_{n=0}^{N} {{N!}\over{n! (N-n)!}} \left( e^{ik}\right) ^n \left( e^{-ik}\right) ^{N-n}\nonumber\\ & & \nonumber\\ &=&\left( {{e^{ik} + e^{-ik} }\over{2}}\right) ^N \nonumber\\ & &\nonumber\\ \ &=& \left[ \cos (k)\right] ^N \nonumber\\ \ &=& \left[ \Phi_X (k)\right]^N .\end{aligned}$$ [**Sum of independent random variables**]{}\ A general result is that the characteristic function of the sum of independent random variables is given by the product of the characteristic functions of the random variables. Let $X_1 , X_2 , \cdots X_N$ be independent and not necessarily identically distributed. Let $Y_N=X_1 + X_2 + \cdots +X_N$. 
Formally, the characteristic function of $Y_N$, denoted by $\Phi_{Y_N} (k)$ is given by $$\begin{aligned} \label{phiy} \Phi_{Y_N} (k)=\int dx_1\ \int dx_2\ \cdots\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\nonumber\\ \cdots\int dx_N\ \exp \left[ ik\left( x_1 +x_2 + \cdots +x_N \right) \right] f(x_1,x_2,\cdots ,x_N).\end{aligned}$$ Since $X_1, X_2, \cdots , X_N $ are independent, the joint density can be written as the product of the densities of the random variables. [*i.e.*]{}, $$f(x_1 ,x_2 ,\cdots x_N )=f_1 (x_1 )f_2 (x_2 )\cdots f_N (x_N ) ,$$ where $f_i(x_i)$ is the probability density of $X_i$. Therefore, $$\begin{aligned} \Phi_{Y_N}(k)&=&\prod_{i=1}^{N}\int dx_i\ \exp\left( ikx_i \right) f_i(x_i )\nonumber\\ & & \nonumber\\ \ &=&\prod_{i=1}^{N}\Phi_{X_{i}} (k ),\end{aligned}$$ where $\Phi_{X_{i}}(k)$ denotes the characteristic function of the random variable $X_i$. If these $N$ random variables are also identical, then $\Phi_{Y_N} (k)= [\Phi_X (k)]^N$.\ [**Arithmetic mean of independent and identically distributed random variables**]{}\ Let $$\bar{Y}_N = {{ X_1 + X_2+ \cdots + X_N}\over{N}},$$ define the arithmetic mean of $N$ independent and identically distributed random variables. In Eq. (\[phiy\]), replace $k$ by $k/N$; the resulting equation defines the characteristic function of $\bar{Y}_N$. Thus we have, $$\Phi_{\bar{Y}_N} (k)= \Phi_{Y_N} (k/N) = \left[ \Phi_X (k/N)\right]^N.$$ [**Sum of $N$ independent and identically distributed random variables scaled by $\sqrt{N}$**]{}\ Later, we shall have occasions to consider the random variable $\bar{Y}_{\sqrt{N}}$, defined by, $$\bar{Y}_{\sqrt{N} }={{ X_1 + X_2 + \cdots X_N}\over{\sqrt{N} }},$$ whose characteristic function can be obtained by replacing $k$ by $k/\sqrt{N}$ in Eq. (\[phiy\]). Thus, $$\Phi_{\bar{Y}_{\sqrt{N} } }(k) = \Phi_{Y_N}(k/\sqrt{N})= \left[ \Phi_X (k/\sqrt{N})\right] ^N .$$\ [**Exponential distribution**]{}\ For the exponential distribution, see Eq. 
(\[exp\]), with mean unity ([*i.e.*]{} $ \alpha~=~1$), the characteristic function is given by, $$\begin{aligned} \Phi_X (k) & = & \int_{0}^{\infty} e^{ikx-x}\ dx\nonumber\\ \ & = & {{1}\over{1-ik}}\ \ .\end{aligned}$$ The sum $Y_N$ of $N$ exponential random variables has a characteristic function given by, $$\label{sumexp} \Phi_{Y_N} (k) = {{1}\over{ (1-ik)^N}}\quad .$$ The random variable $\bar{Y}_N$ has a characteristic function given by, $$\label{amexp} \Phi_{\bar{Y}_N}(k) = {{1} \over{ \left( 1- {{ik}\over{N}} \right)^N}}\quad ,$$ which has been obtained by replacing $k$ by $k/N$ in Eq. (\[sumexp\]).\ [**Poisson distribution**]{}\ For the Poisson distribution, see Eq. (\[poissondist\]), the characteristic function is given by, $$\label{poissonch} \Phi (k,t)=\sum_{n=0}^{\infty} {{e^{ikn} (\lambda t)^n e^{-\lambda t}}\over{n!}} = \exp\left[ -\lambda t \left( 1-e^{ik}\right)\right],$$ which is the same as Eq. (\[poigen\]) if we set $z=\exp (ik)$.\ [**Gaussian distribution**]{}\ For the Gaussian distribution, the characteristic function is given by, $$\label{ftgauss} \Phi_X (k) = \exp(i\mu k-{{1}\over{2}}\sigma^2 k^2) .$$ Following the rules described above, we have, $$\begin{aligned} \Phi_{Y_N} (k) & =& \exp \left[ iN\mu k - {{1}\over{2}}N\sigma^2 k^2\right] \label{sumgauss1}\\ \Phi_{\bar{Y}_N} (k) & = & \exp\left[ i\mu k - {{1}\over{2}}{{\sigma^2 k^2}\over{N}}\right] \label{sumgauss2}\\ \Phi_{\bar{Y}_{\sqrt{N}}} (k) & = & \exp\left[ i\sqrt{N}\mu k - {{1}\over{2}}\sigma^2 k^2\right] . \label{sumgauss3}\end{aligned}$$ For a Gaussian, only the first two cumulants are non-zero; all the higher cumulants are identically zero. From Eq. (\[sumgauss3\]) we see that the sum of $N$ Gaussian random variables scaled by $\sqrt{N}$, is again a Gaussian with variance independent of $N$. [**Assignment 4**]{}\ Derive expressions for the characteristic functions of (a) the Gaussian and (b) the Cauchy distributions.
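The product rule for characteristic functions of sums is easy to check numerically. A small Python sketch (the choice of distribution, the sample size and the value of $k$ are all arbitrary): estimate the empirical characteristic function of a sum of unit-mean exponentials and compare it with $(1-ik)^{-N}$ of Eq. (\[sumexp\]):

```python
import numpy as np

rng = np.random.default_rng(0)
N, samples, k = 5, 200_000, 0.7

# Y_N = X_1 + ... + X_N, each X_i exponential with mean unity
y = rng.exponential(scale=1.0, size=(samples, N)).sum(axis=1)

# empirical characteristic function: sample average of exp(ik Y_N)
phi_empirical = np.exp(1j * k * y).mean()

# exact result: Phi_X(k)^N = (1 - ik)^(-N)
phi_exact = (1.0 - 1j * k) ** (-N)

print(abs(phi_empirical - phi_exact))   # small; shrinks like 1/sqrt(samples)
```

The same check works for any of the distributions discussed above; only the analytical $\Phi_X(k)$ changes.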
CHEBYSHEV INEQUALITY ==================== Let $X$ be an arbitrary random variable with a probability density function $f(x)$, and finite variance $\sigma^2$. The [**Chebyshev inequality**]{}, see Papoulis [@PAP], says that, $${\cal P}\left\{ \vert X-\mu\vert \ge k\sigma \right\} \le {{1}\over{k^2}}\ \ ,$$ for $k\ge 1$. Thus, regardless of the nature of the density function $f(x)$, the probability that the random variable $X$ takes a value between $\mu-\epsilon$ and $\mu+\epsilon$ is greater than $1-(\sigma^2 / \epsilon^2)$, for $\epsilon\ge\sigma$. The Chebyshev inequality follows directly from the definition of the variance, as shown below. $$\begin{aligned} \sigma^2 & = & \int_{-\infty}^{+\infty}(x-\mu)^2 f(x)\ dx\nonumber\\ \ & \ge & \int_{-\infty}^{\mu-k\sigma}(x-\mu)^2 f(x)\ dx + \int_{\mu +k\sigma}^{+\infty}(x-\mu)^2 f(x)\ dx\nonumber\\ \ & \ge & k^2 \sigma^2\left[ \int_{-\infty}^{\mu-k\sigma}f(x)\ dx + \int_{\mu+k\sigma}^{+\infty}f(x)\ dx\ \right]\nonumber\\ \ & = & k^2\sigma^2{\cal P} \left\{ \left\vert X-\mu\right\vert \ge k\sigma\right\}\ \ .\end{aligned}$$ The Chebyshev inequality is simple; it is easily adapted to sums of random variables, and this is precisely what concerns us in Monte Carlo simulation. Take, for example, $N$ independent realizations of a random variable $X$ with mean zero and variance $\sigma^2$. Let $\bar{Y}_N$ be the arithmetic mean of these realizations. $\bar{Y}_N$ is a random variable. The mean of $\bar{Y}_N$ is zero and its variance is $\sigma^2 /N$. The Chebyshev inequality can now be applied to the random variable $\bar{Y}_N$. Accordingly, a particular realization of the random variable $\bar{Y}_N$ will lie outside the interval $(-\epsilon , +\epsilon )$ with a probability less than or equal to $\sigma^2/(N\epsilon^2)$. Thus, as $\epsilon$ becomes smaller, by choosing $N$ adequately large we find that a realization of $\bar{Y}_N$ can be made as close to the mean as we desire, with a probability very close to unity.
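The bound is easy to watch in action. A Python sketch, using unit-mean exponential random variables (any distribution with finite variance would do; the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=500_000)     # mu = 1, sigma = 1
mu, sigma = 1.0, 1.0

for k in (1.5, 2.0, 3.0):
    tail = np.mean(np.abs(x - mu) >= k * sigma)  # P{|X - mu| >= k sigma}
    print(f"k = {k}: tail probability {tail:.4f} <= Chebyshev bound {1 / k**2:.4f}")
```

For the exponential the actual tail probability, $e^{-(1+k)}$, lies far below the bound; Chebyshev trades sharpness for complete generality.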
This leads us naturally to the [**laws of large numbers**]{}. Of the several laws of large numbers, discovered over a period of two hundred years, we shall see perhaps the earliest version, see Papoulis [@PAP], which is, in a sense, already contained in the Chebyshev inequality. LAW OF LARGE NUMBERS ==================== Consider the random variables $X_1 , X_2 , \cdots , X_N$ which are independent and identically distributed. The common probability density has a mean $\mu$ and a [*finite*]{} variance. Let $\bar{Y}_N$ denote the sum of the random variables divided by $N$. The law of large numbers says that for a given $\epsilon > 0$, as $N\to\infty$, $$\begin{aligned} {\cal P} \left\{ \left\vert\bar{Y}_N -\mu\right\vert\ > \epsilon\right\} & \to & 0 .\nonumber\end{aligned}$$ It is easy to see that a realization of the random variable $\bar{Y}_N$ is just the Monte Carlo estimate of the mean $\mu$ from a sample of size $N$. The law of large numbers assures us that the [**sample mean**]{} converges to the [**population mean**]{} as the sample size increases. I must emphasize that the convergence we are talking about here is in a probabilistic sense. Also notice that the law of large numbers does not make any statement about the nature of the probability density of the random variable $\bar{Y}_N$. It simply assures us that in the limit of $N\to\infty$, the sample mean converges to the right answer. $\bar{Y}_N$ is called a [**consistent estimator**]{} of $\mu$. The central limit theorem, on the other hand, goes a step further and tells us about the nature of the probability density function of $\bar{Y}_N $, as we shall see below.
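A minimal sketch of the law of large numbers at work, with $U(0,1)$ variables (the sample sizes are arbitrary choices): the sample mean settles down to the population mean $1/2$ as $N$ grows, the error shrinking like $1/\sqrt{N}$ by the Chebyshev argument above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo estimate of the mean of U(0,1), for growing sample size N
for N in (100, 10_000, 1_000_000):
    estimate = rng.uniform(size=N).mean()
    print(f"N = {N:>9}: sample mean = {estimate:.5f}   |error| = {abs(estimate - 0.5):.5f}")
```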
CENTRAL LIMIT THEOREM (CLT) =========================== Let $X_1 , X_2 , \cdots X_N$ be $N$ independent and identically distributed random variables having a common Gaussian probability density with mean zero and variance $\sigma^2$, given by, $$G_{X} (x)={{1}\over{\sigma\sqrt{2\pi}}}\exp \left[ - {{x^2}\over{2\sigma ^2}}\right].$$ Let $Y_N =X_1 + X_2 + \cdots X_N $. It is clear that the characteristic function of $Y_N$ is the characteristic function of $X$ raised to the power $N$. Thus, from Eq. (\[ftgauss\]), $$\begin{aligned} \Phi_{Y_N} (k) & \equiv & \left[ \Phi_X (k)\right] ^N \nonumber\\ \ & = & \exp \left[ -{{1}\over{2}}\sigma ^2 k^2 N\right] .\end{aligned}$$ Fourier inverting the above, we find that the probability density of the random variable $Y_N$ is a Gaussian given by, $$G_{Y_N} (x)={{1}\over{\sigma\sqrt{2\pi N} }}\exp\left[- {{x^2 }\over{2N\sigma^2}}\right] ,$$ with mean zero and variance $N\sigma^2$. Thus when you add Gaussian random variables, the sum is again a Gaussian random variable. Under addition, the Gaussian is stable. The scaling behaviour of the distribution is evident: $$G_{Y_N} (x) = N^{-1/2}G_X \left( {{x}\over{N^{1/2} }}\right) \ .$$ The variance of the sum of $N$ independent and identically distributed Gaussian random variables increases (only) linearly with $N$. On the other hand the variance of $\bar{Y}_N$ is $\sigma^2 /N$, and thus decreases with $N$. Therefore the probability density of $\bar{Y}_N$ is, $$\label{clt} G_{\bar{Y}_N} (x) ={{\sqrt{N} }\over{\sigma\sqrt{2\pi} }}\exp \left[ - {{Nx^2}\over{2\sigma^2}}\right] .$$ [**The Central Limit Theorem asserts that even if the common probability density of the $N$ random variables is not Gaussian, but some other arbitrary density with (zero mean and) finite variance, Eq.
(\[clt\]) is still valid but in the limit of $N\to\infty$.**]{} To see this, write the characteristic function of the common probability density as $$\begin{aligned} \Phi_X (k) & = & \int_{-\infty}^{+\infty}e^{ikx} f(x)\ dx\nonumber\\ \ & = & 1-{{1}\over{2}}\sigma^2 k^2 - {{1}\over{6}}iM_3 k^3 +{{1}\over{24}}M_4 k^4 \cdots\end{aligned}$$ Hence the characteristic function of $\bar{Y}_N$ is, $$\begin{aligned} \Phi_{\bar{Y}_N} (k)&=& \left[ \Phi_X (k/N)\right] ^N\nonumber\\ \ &\approx& \left[ 1-{{1}\over{2}} {{\sigma^2 k^2}\over{N^2}} \left( 1+i{{M_3 k}\over{3\sigma^2}} {{1}\over{N}} -{{M_4 k^2}\over{12\sigma^2}}{{1}\over{N^2}}\cdots \right) \right]^N\nonumber\\ \ &\sim & \exp\left[- {{1}\over{2}} {{\sigma^2 k^2}\over{N}}\right] \quad (\rm{for}\quad N\to\infty) , \end{aligned}$$ whose inverse is the density given by Eq. (\[clt\]), see van Kampen [@VANK]. The above can be expressed in terms of a cumulant expansion. We have for the random variable $X$, $$\begin{aligned} \Phi_X (k)&=&\exp [ \Psi_X (k)]\nonumber\\ &=& \exp\left[ ikC_1 + {{(ik)^2}\over{2!}} C_2 + {{ (ik)^3}\over{3!}}C_3 +\cdots\right] ,\end{aligned}$$ where $C_1 , C_2 , C_3\cdots $ are the cumulants of $X$. Then, the characteristic function of the random variable $\bar{Y}_N$ is given by, $$\begin{aligned} \Phi_{\bar{Y}_N} (k)=\left[ \Phi_X (k/N)\right]^N\quad\hskip 6 cm\nonumber\\ =\exp\left[ ikC_1 - {{1}\over{2}} {{k^2 C_2}\over{N}}\left( 1+{{ik}\over{3}}{{C_3 }\over{C_2}}{{1}\over{N}} -{{k^2 C_4}\over{12C_2}}{{1}\over{N^2}} \cdots \right) \right] .\end{aligned}$$ We immediately see that for $N\to\infty$, $$\Phi_{\bar{Y}_N} \sim \exp\left[ i\ k\ C_1 - {{1}\over{2}} {{k^2 C_2}\over{N}}\right],$$ whose Fourier-inverse is a Gaussian with mean $C_1$, and variance $C_2 /N$.
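The decay of the higher cumulants can be watched numerically. A Python sketch (unit-mean exponentials again; the sample size is an arbitrary choice) estimating the skewness, i.e. the scaled third cumulant, of the centred and scaled sum $(X_1+\cdots+X_N - N)/\sqrt{N}$; for the exponential this skewness is $2/\sqrt{N}$, falling to zero as the Gaussian limit is approached:

```python
import numpy as np

rng = np.random.default_rng(3)
samples = 100_000

def skew_of_scaled_sum(N):
    # (X_1 + ... + X_N - N)/sqrt(N) for unit-mean exponentials:
    # zero mean, unit variance, third cumulant 2/sqrt(N)
    s = (rng.exponential(1.0, size=(samples, N)).sum(axis=1) - N) / np.sqrt(N)
    return np.mean(s**3) / np.mean(s**2) ** 1.5

skew = {N: skew_of_scaled_sum(N) for N in (4, 25, 100)}
for N, g in skew.items():
    print(f"N = {N:>4}: sample skewness {g:+.3f}   theory 2/sqrt(N) = {2 / np.sqrt(N):.3f}")
```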
For the random variable $\bar{Y}_{\sqrt{N} }$, defined as the sum of $N$ random variables divided by $\sqrt{N}$, the characteristic function can be written as, $$\begin{aligned} \Phi_{\bar{Y}_{\sqrt{N} }} = \left[ \Phi_X (k/\sqrt{N})\right]^N\hskip 65 mm \nonumber\\ = \exp\left[ ik\sqrt{N}C_1 - {{k^2}\over{2}} C_2 \left\{ 1 + {{ ikC_3 }\over{3C_2}}{{1}\over{ \sqrt{N} }} - {{k^2 C_4}\over{12C_2}}{{1}\over{N}} +\cdots\right\}\right]\nonumber\\ \sim \exp\left[ ik\sqrt{N}C_1- {{k^2 C_2}\over{2}}\right] \quad (\rm{ for}\quad N\to\infty),\hskip 27 mm \end{aligned}$$ which upon Fourier inversion gives a Gaussian with mean $C_1\sqrt{N}$ and variance $C_2$. The variance of $\bar{Y}_{\sqrt{N}} $ is independent of $N$. Lévy Distributions ------------------ In the beginning of the discussion on the Central Limit Theorem, I said that when you add Gaussian random variables, you again get a Gaussian. Thus the Gaussian is [*stable*]{} under addition. This is a remarkable property. Not all distributions have this property. For example, when you add two uniform random variables, you get a random variable with a triangular distribution. When two exponential random variables are added, we get one with a Gamma distribution, as we shall see in the next section, where we investigate the behaviour of the sum of several independent exponential random variables. (In fact we went a step further and found that when we add $N$ identically distributed independent random variables with finite variance, the resulting distribution tends to a Gaussian when $N\to\infty$. This is what we called the Central Limit Theorem.) A natural question that arises in this context is: are there any other distributions that are stable under addition? The answer is yes, and they are called the [**Lévy stable distributions**]{}, discovered by Paul Lévy [@levy] in the mid-twenties. Unfortunately the general form of the Lévy distribution is not available.
What is available is its characteristic function. Restricting ourselves to the symmetric case, the characteristic function of a Lévy distribution is given by, $${\tilde L}(k;\alpha )=\exp (-D\vert k\vert ^{\alpha} ),$$ where $D > 0 $ is a scale factor, $k$ is the transform variable corresponding to $x$, and $\alpha $ is the Lévy index. The Fourier inverse of ${\tilde L}(k;\alpha )$ is the Lévy distribution $L(x;\alpha )$. Lévy showed that for $L(x;\alpha )$ to be non-negative, $0<\alpha\le2$. In fact we have, $$\begin{aligned} L(x;\alpha )\sim\cases{ D^{-1/\alpha}, & for $x=0$;\cr D\vert x\vert^{-\alpha-1}, & for $\vert x\vert \to\infty$,\ $\alpha < 2$;\cr D^{-1/2}\exp (-x^2 / 4D), & for all $x$,\ $\alpha =2$.\cr}\end{aligned}$$ The prefactor in the above can be fixed by normalization. We see that the Gaussian is a special case of the Lévy distributions, obtained when $\alpha =2$. In fact the Gaussian is the only Lévy distribution with finite variance. Earlier we encountered the Cauchy distribution; it is again a special case of the Lévy distribution, obtained by setting $\alpha =1$. Consider $N$ independent and identically distributed Lévy random variables, with the common distribution denoted by $L(x;\alpha)$. The sum $Y_N$ again has a Lévy distribution, denoted by $L_N (x; \alpha)$ and obtained from $L(x;\alpha)$ by replacing $D$ by $ND$. Let us scale $Y_N$ by $N^{1/\alpha}$ and consider the random variable $Y_N / N^{1/\alpha}$. Its distribution is $L(x; \alpha)$. Thus $N^{1/\alpha}$ provides a natural scaling for Lévy distributions. We have, $$L_{N} (x; \alpha)=N^{- 1/ \alpha } L \left( {{x}\over{ N^{1/\alpha}} } ; \alpha\right) .$$ The scaling behaviour fits into a general scheme. The Gaussian corresponds to $\alpha =2$; the natural scaling for the Gaussian is thus $\sqrt{N}$. The Cauchy distribution corresponds to $\alpha =1$; the natural scaling is given by $N$.
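The stability of the Cauchy case ($\alpha =1$) is striking to see numerically: the arithmetic mean of $N$ Cauchy variables is spread exactly as widely as a single one. A Python sketch (sample sizes arbitrary) comparing interquartile ranges, which for the standard Cauchy equal $2$ (quartiles at $\mp 1$):

```python
import numpy as np

rng = np.random.default_rng(4)
samples, N = 100_000, 100

single = rng.standard_cauchy(samples)
mean_of_N = rng.standard_cauchy((samples, N)).mean(axis=1)

def iqr(a):
    q1, q3 = np.percentile(a, [25, 75])
    return q3 - q1

# both IQRs come out close to 2: averaging does not narrow a Cauchy
print(f"IQR, single Cauchy      : {iqr(single):.3f}")
print(f"IQR, mean of {N} Cauchys: {iqr(mean_of_N):.3f}")
```

Contrast this with the finite-variance case, where the spread of the mean shrinks like $1/\sqrt{N}$.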
Thus the Lévy stable law generalizes the central limit theorem: Lévy distributions are the only possible limiting distributions of sums of independent, identically distributed random variables. The conventional Central Limit Theorem is the special case in which each random variable has finite variance, and the limiting distribution of the sum is the Gaussian. Lévy’s more general Central Limit Theorem applies to sums of random variables with finite or infinite variance. Stable distributions arise in a natural way when (1) a physical system evolves under the influence of additive random factors, and (2) the result of an experiment is determined by the sum of a large number of independent random variables. Illustration of the CLT ----------------------- I shall illustrate how the distribution of the mean approaches a Gaussian by considering exponential random variables, for which all the relevant quantities can be obtained analytically. We start with the identity, $$\int_{0}^{\infty}\ dx\ e^{-\alpha x}\ e^{ikx} = {{1}\over{\alpha -ik}}.$$ We differentiate both sides of the above $(N-1)$ times with respect to $\alpha$ and set $\alpha =1$. We get, $$\label{gfsum} \int_{0}^{\infty}dx\ e^{ikx}\ \left[ {{e^{-x}x^{N-1}}\over{(N-1)!}}\right] ={{1}\over{(1-ik)^N}}.$$ We immediately identify the right hand side of the above as the characteristic function of the sum of $N$ independent and identically distributed exponential random variables with mean unity, see Eq. (\[sumexp\]). In Eq. (\[gfsum\]) above, we replace, on both sides, $k$ by $k/N$, $x$ by $Nx$, and $dx$ by $N\,dx$, to get, $$\label{phiam} \int_0 ^{\infty} dx\ e^{ikx}\left[ {{ e^{-Nx}N^N x^{N-1} }\over{(N-1)!}} \right] = {{1}\over{\left( 1- i{{k}\over{N}}\right)^N }}.$$ The right hand side of the above is the characteristic function of $\bar{Y}_N$, see Eq. (\[amexp\]).
Thus we get an exact analytical expression for the probability density function of $\bar{Y}_N$, as $$f(x)={{N^N}\over{(N-1)!}}\ e^{-Nx}\ x^{N-1}, \quad\quad 0\le x < \infty ,$$ which, for all $N$, is a [**gamma density function**]{}, and not a Gaussian! The cumulants of the gamma density can be obtained by power series expansion of the logarithm of the characteristic function given by Eq. (\[amexp\]). We get, $$\begin{aligned} \ln \Phi_{\bar{Y}_N}(k)&=&-N\ln \left( 1- {{ik}\over{N}}\right)\nonumber\\ &=&N\sum_{n=1}^{\infty} {{ (ik)^n }\over{ N^n n}}\nonumber\\ & \nonumber\\ &=&\sum_{n=1}^{\infty} {{(ik)^n}\over{n!}} \left[ {{ (n-1)!}\over{N^{n-1} }}\right],\nonumber\\ & \nonumber\\ & =&ik+ {{ (ik)^2}\over{2!}} {{1}\over{N}} \left[ 1+ {{ ik}\over{3}}{{2!}\over{N} }\cdots \right] .\end{aligned}$$ The $n$-th cumulant is thus, $$C_n = {{ (n-1)!}\over{N^{n-1} }}.$$ We find $C_1 =1$ and $C_2 = 1/N$, as expected. The third cumulant is $C_3 = 2/N^2$, and it goes to zero as $N\to\infty$. In fact all the higher cumulants vanish in the asymptotic limit. (Indeed even the second cumulant goes to zero in the limit $N\to\infty$, which means that the probability density of the arithmetic mean is asymptotically a delta function, something very satisfying from the point of view of a Monte Carlo practitioner!) The important point is that the Gaussian with mean unity and variance $1/N$ becomes a very good approximation to the gamma density, for large $N$. Replacing $k$ by $k/\sqrt{N}$, $x$ by $x\sqrt{N}$ and $dx$ by $dx\sqrt{N}$ in Eq. (\[gfsum\]), we get, $$\int_0 ^{\infty} dx\ e^{ikx}\left[ {{(\sqrt{N})^N}\over{(N-1)!}}\ x^{N-1} \ e^{ -\sqrt{N}x }\right] = {{1}\over{ \left( 1- i {{k}\over{\sqrt{N} }} \right) ^N}} .$$ The right hand side of the above equation is the characteristic function of the random variable $\bar{Y}_{\sqrt{N}} = (X_1 + X_2 + \cdots + X_N )/\sqrt{N}$.
Thus, the probability density function of $\bar{Y}_{\sqrt{N} }$ is given by the gamma density, $$\label{gammarootn} f(x) = {{ (\sqrt{N})^N}\over{(N-1)!}}\ x^{N-1} \ \exp (-\sqrt{N}x ),$$ whose mean is $\sqrt{N}$ and whose variance is unity (independent of $N$). Fig. \[GAMMA\_PS\] depicts the gamma density given above for $N=1$, $10$, $20$, $40$, $60$ and $80$, along with the Gaussian of mean $\sqrt{N}$ and variance unity. For large $N$ we find that the Gaussian is a good fit to the gamma density for $\vert x-\sqrt{N}\vert \ll 1$. In the discussions above on the central limit theorem, we have assumed the random variables to be independent and identically distributed. The central limit theorem holds well under much more general conditions. First, the random variables need not be identical. To see this, let $N_1$ of the $N$ random variables have one distribution with, say, mean $\mu _1 $ and variance $\sigma _1 ^2 $. Let $Y_1$ denote their sum. Also let $N_2$ of the $N$ random variables have another distribution with mean $\mu_2$ and variance $\sigma _2 ^2$, and let their sum be denoted by $Y_2$. We take the limit $N_1 \to\infty$ and $N_2\to\infty$ such that $N_1 / N_2 $ remains a constant. Asymptotically ($N_1 \to\infty$) the random variable $Y_1$ tends to a Gaussian and the characteristic function is $\phi_1 (k) = \exp (ikN_1\mu_1 - k^2 N_1 \sigma_1 ^2 /2 )$. Similarly the random variable $Y_2$ tends to a Gaussian and the characteristic function is $\phi_2 (k)= \exp (ikN_2\mu_2 - k^2 N_2 \sigma_2 ^2 /2 )$. Their sum $Y=Y_1 + Y_2$ has the characteristic function $\phi (k) = \phi _1 (k) \times \phi _2 (k) $, since $Y_1$ and $Y_2$ are independent. Thus $\phi (k) = \exp \left\{ ik \left( N_1\mu_1 + N_2\mu_2\right) - k^2 \left( N_1\sigma_1 ^2 + N_2\sigma _2 ^2 \right) /2\right\}$. The Fourier inverse of $\phi (k) $ is a Gaussian with mean $N_1 \mu _1 + N_2 \mu _2$ and variance $N_1 \sigma_1 ^2 +N_2 \sigma_2 ^2$.
Second, the random variables can be weakly dependent, see, [*e.g.*]{}, Gardiner [@GAR]. An example is the correlated random walk, described in Assignment 6. Essentially, adjacent steps of the random walk are correlated. Despite this, asymptotically the distribution of the position of the random walk is Gaussian, see for example [@tmj]. [**Assignment 5**]{}\ Consider a random variable $X$ with probability density $f(x)=\exp (-\sqrt{2}\vert x\vert )/\sqrt{2}$, for $-\infty < x < +\infty$.\ (A) What is the characteristic function of $X$?\ Let $X_1 , X_2 , \cdots X_N $ be independent and identically distributed random variables with the common density $f(x)$ given above. Let $Y_N$ be their sum, [*i.e.*]{}, $Y_N = \sum_{i=1}^{N} X_i $.\ (B) What is the characteristic function of the random variable $Y_N$?\ Let $\bar{Y}_N = Y_N /N $.\ (C) What is the characteristic function of $\bar{Y}_N$?\ Let $\bar{Y}_{\sqrt{N}}=Y_N / \sqrt{N}$.\ (D-a) What is the characteristic function of $\bar{Y}_{\sqrt{N}}$?\ (D-b) Taylor-expand the characteristic function of $\bar{Y}_{\sqrt{N}}$ in terms of moments and in terms of cumulants.\ (D-c) Employing the moment and cumulant expansions, demonstrate the approach to a Gaussian as $N\to\infty$.\ (D-d) What is the mean of the asymptotic Gaussian?\ (D-e) What is the variance of the asymptotic Gaussian? [**Assignment 6**]{}\ Consider a one-dimensional lattice with lattice sites indexed by $(\cdots -3, -2, -1, 0, +1, +2, +3, \cdots)$; a random walk starts at the origin and in its first step jumps to the left site at $-1$ or to the right site at $+1$ with equal probability. In all subsequent steps the probability for the random walk to continue in the same direction is $\alpha$ and to reverse its direction is $1-\alpha$. Let $x(n)$ denote the position of the random walk after $n$ steps. Show that asymptotically ($n\to\infty$) the distribution of $x(n)$ is Gaussian. What is the mean of the limiting Gaussian? What is its variance?
See [@tmj]. Thus we find that for the central limit theorem to hold, it is adequate that each of the random variables has a finite variance, so that none of them dominates the sum excessively. They need not be identical; they can also be weakly dependent. Therefore, whenever a large number of random variables additively determine a parameter, the parameter tends to have a Gaussian distribution. Upon addition, the individual random variables lose their character and the sum acquires a Gaussian distribution. It is due to this remarkable fact that the Gaussian enjoys a pre-eminent position in statistics and statistical physics. Having equipped ourselves with the above preliminaries, let us now turn our attention to random and pseudo random numbers, which form the backbone of all Monte Carlo simulations. RANDOM NUMBERS ============== [**What are random numbers?**]{}\ We call a sequence of numbers [*random*]{} if it is generated by a random physical process. How does randomness arise in a (macroscopic) physical system while its microscopic constituents obey deterministic and time-reversal-invariant laws? This issue concerns the search for a microscopic basis for the second law of thermodynamics and the nature and content of entropy (randomness). I shall not discuss these issues here; those interested should consult [@SECLAW] and the references therein. Instead we shall say that physical processes such as radioactive decay, thermal noise in electronic devices, cosmic-ray arrival times, [*etc.*]{}, give rise to what we shall call a sequence of truly random numbers. There are, however, several problems associated with generating random numbers from physical processes. The major one concerns the elimination of the apparatus bias. See, for example, [@fc], which describes the generation of truly random numbers from a radioactive alpha-particle source, along with the bias-removal techniques employed.
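One classical bias-removal scheme, due to von Neumann, is worth recording here as a sketch (it need not be the particular technique used in [@fc]): read the raw bits in non-overlapping pairs, output $0$ for the pair $01$, output $1$ for $10$, and discard $00$ and $11$. Since $P(01)=P(10)=p(1-p)$ for any fixed bias $p$, the surviving bits are unbiased:

```python
import random

def von_neumann_debias(bits):
    """Pairwise bias removal: keep the first bit of each unequal pair."""
    it = iter(bits)
    return [a for a, b in zip(it, it) if a != b]

random.seed(5)
p = 0.8                                              # a heavily biased "coin"
raw = [1 if random.random() < p else 0 for _ in range(200_000)]
clean = von_neumann_debias(raw)

print(f"raw   mean: {sum(raw) / len(raw):.3f}")      # close to 0.8
print(f"clean mean: {sum(clean) / len(clean):.3f}")  # close to 0.5
print(f"kept {len(clean)} of {len(raw)} bits")       # a pair survives with probability 2p(1-p)
```

The price is throughput, and the scheme assumes the raw bits are independent with a constant bias.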
Generate, once and for all, a fairly long sequence of random numbers from a random physical process, and employ it in all your Monte Carlo calculations. This is a safe procedure. Indeed tables of random numbers were generated and employed in the early days of Monte Carlo practice, see for example [@lhctippet]. The most comprehensive of them was published by the Rand Corporation [@RAND] in the mid-fifties. The table contained one million random digits. Tables of random numbers are useful if the Monte Carlo calculations are carried out manually. However, for computer calculations, use of these tables is impractical. The reason is simple. A computer has a relatively small internal memory; it cannot hold a large table. One can keep the table of random numbers in an external storage device like a disk or a tape, but constant retrieval from these peripherals would considerably slow down the computations. Hence, it is often desirable to generate a random number as and when required, employing a simple algorithm that takes very little time and very little memory. This means one should come up with a practical definition of the randomness of a sequence of numbers. At the outset we recognize that there is no proper and satisfactory definition of randomness. Chaitin [@chaitin] has made an attempt to capture the intuitive notion of randomness in a precise definition. Following Chaitin, consider the two sequences of binary random digits given below: $$\begin{aligned} \begin{array}{cccccccccccccc} 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0\cr & & & & & & & & & & & & & \cr 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 1 \end{array}\nonumber\end{aligned}$$ I am sure you would say that the first is not a random sequence, because there is a pattern in it: the doublet 1, 0 repeats. The second is perhaps a random sequence, as there is no recognizable pattern.
Chaitin goes on to propose that [**a sequence of numbers can be called random if the smallest algorithm capable of specifying it to the computer has about the same number of bits of information as the sequence itself.**]{} Let us get back to the two sequences of binary numbers given above. We recognize that tossing a fair coin fourteen times independently can generate both these sequences. Coin tossing is undoubtedly a truly random process. Also, the probability for the first sequence is exactly the same as that for the second, and equals $2^{-14}$ for a fair coin. How then can we say that the first sequence is not random while the second is? Is not a segment with a discernible pattern part and parcel of an infinitely long sequence of truly random numbers? See [@brokendice] for an exposition of randomness along these lines. We shall, however, take a practical attitude and consider numbers generated by a deterministic algorithm. These numbers are therefore predictable and reproducible; the algorithm itself occupies very little memory. Hence by no stretch of imagination can they be called random. We shall call them [**pseudo random numbers**]{}, and we shall be content to employ them in Monte Carlo calculations. We shall reserve the name pseudo random numbers for numbers generated by a deterministic algorithm that are supposed to be distributed independently and uniformly in the range $0$ to $1$. We shall denote the sequence of pseudo random numbers by $\{ \xi_1 , \xi_2 , \cdots\}$. For our purpose it is quite adequate if [*one ensures that the sequence of pseudo random numbers is statistically indistinguishable from a sequence of truly random numbers.*]{} This is a tall order! How can anybody ensure this? Tests for Randomness -------------------- Usually we resort to what are called [**tests of randomness**]{}.
A test, in a general sense, consists of constructing a function $\psi (r_1, r_2, \cdots )$, where $r_1 , r_2 , \cdots $ are independent variables. Calculate the value of the function for a sequence of pseudo random numbers by setting $r_i = \xi_i\ \forall \ i=1,\ 2,\ \cdots$. Compare this value with the value that $\psi$ is expected to have if $\{ r_i : i=1,2,\cdots \}$ were truly random numbers distributed independently and uniformly in the range $0$ to $1$. For example, the simplest test one can think of is to set $$\psi (r_1, r_2, \cdots )={{1}\over{N}}\sum_{i=1}^{N} r_i ,$$ which defines the average of $N$ numbers. For a sequence of $N$ truly random numbers we expect $\psi$ to lie between $0.5-\epsilon$ and $0.5+\epsilon$ with a certain probability $p(\epsilon)$. Notice that for $N$ large, from the Central Limit Theorem, $\psi$ is Gaussian with mean $0.5$ and variance $\sigma^2 = (12N)^{-1}$. If we take $\epsilon = 2\sigma=1/\sqrt{3N}$, then $p(\epsilon)$ is the area under the Gaussian between $0.5-\epsilon$ and $0.5+\epsilon$, and is approximately $0.95$. This is called the two-sigma confidence interval. Thus for a sequence of $N$ truly random numbers, we expect its mean to be within $\pm\epsilon$ around $0.5$ with probability $0.95$, for large $N$. If a sequence of $N$ pseudo random numbers has an average that falls outside the interval $(0.5-\epsilon ,\ 0.5+\epsilon)$, then we say that it fails the test at the $5\%$ level. [**Assignment 7**]{}\ Carry out the above test on the random numbers generated by one of the generators in your computer. Does it pass the test at the $5\%$ level? Another example consists of setting, $$\psi (r_1 , r_2 , \cdots )\equiv C(k) = {{N \sum_{i=1}^{N} r_i r_{i+k} - \left( \sum_{i=1}^{N} r_i\right)^2} \over{ N\sum_{i=1}^{N}r_i ^2 - \left( \sum_{i=1}^{N} r_i \right)^2 }},$$ with $k=0,1,2,\cdots ,N-1$.
For a sequence of truly random numbers, the function $\psi$ above, which is the two-point autocorrelation function, is expected to be unity for $k=0$ and zero for $k\ne 0$. [**Assignment 8**]{}\ Calculate the autocorrelation of the random numbers generated by one of the random number generators in your computer. In practice, one employs more complicated tests, by making $\psi$ a more complicated function of its arguments. An example is the [**run test**]{}, which is more sensitive to correlations. The idea is to calculate the length of a run of increasing (run-up) or decreasing (run-down) size. We say the run-down length is $l$ if we have a sequence of random numbers such that $\xi_1 > \xi_2 > \cdots > \xi_l$ and $\xi_{l+1} > \xi_l$. We can similarly define the run-up length. Let me illustrate the meaning of run-down length by considering a sequence of integers between $0$ and $9$, given below. $$\begin{aligned} \begin{array}{ccccccccccccccccccccccccccc} &6 & &4 & & 1 & &3 & &4 & &5 & &3 & & 2 & &7 & &4 & &8 & \\ & & & & & & & & & & & & & & & & & & & & & & \\ | &6 & &4 & & 1 & &3 & | &4 & &5 & | &3 & & 2 & &7 &| &4 & &8 &| \\ & & & & & & & & & & & & & & & & & & & & & & \\ | & & & & & & &3 & | & & &1 & | & & & & &2 &| & & & 1 &| \end{array}\nonumber\end{aligned}$$ The first row displays the sequence; the second depicts the same sequence with the numbers separated into groups by vertical lines. Each group contains $n+1$ numbers, the first $n$ of which are in descending order, [*i.e.*]{}, these numbers are running down. The descent is broken by the $(n+1)^{{\rm th}}$ number, which is greater than the $n^{{\rm th}}$ number. The third row gives the run-down length for each group, which is $n$. We can calculate the probability distribution of the run-down length as follows. Let $P(L\ge l)$ be the probability that the run-down length $L$ is greater than or equal to $l$. To calculate $P(L\ge l)$ we consider a sequence of $l$ distinct random numbers.
There are $l!$ ways of arranging these numbers. For a sequence of truly random numbers, each of these arrangements is equally likely. Of these, there is only one arrangement which has all the $l$ numbers in descending order. Hence $P(L\ge l) = 1/l!$. Since $P(L= l)= P(L\ge l)- P(L\ge l+1)$, we get $$\label{rundown_eq} P(l) = {{1}\over{l!}} - {{1}\over{(l+1)!}}\quad.$$ Alternately, for the probability of run-down length we have, $$\begin{aligned} P(L\ge l) & = & \int_{0}^{1}d\xi_1\int_{0}^{\xi_1}d\xi_2 \cdots \int_{0}^{\xi_{l-1}}d\xi_{l}\nonumber\\ & = & \int_0 ^1 {{\xi_1 ^{l-1} }\over{(l-1)!}} d\xi_1\nonumber\\ & = & {{1}\over{l!}}\end{aligned}$$ In the test, we determine numerically the distribution of the run length for the sequence of pseudo random numbers and compare it with the exact distribution given by Eq. (\[rundown\_eq\]). Fig. \[RUNUP\_PS\] depicts the results of a run-up test. Descriptions of several of the most common tests for randomness can be found in [@TESTS]. One issue becomes obvious from the above discussion. There is indeed an indefinitely large number of ways of constructing the function $\psi$. Hence, in principle, a pseudo random number generator can never be tested thoroughly for the randomness of the sequence of random numbers it generates. Whatever the number of tests we employ and however complicated they may be, there can be, and always shall be, surprises: surprises like the Marsaglia planes discovered in the late sixties [@MARS] or the parallel lines discovered some three years ago by Vattulainen and co-workers [@VAT]. We shall discuss these issues briefly a bit later. [**Assignment 9** ]{}\ Carry out (a) run-up and (b) run-down tests on the random numbers generated by one of the random number generators in your computer and compare your results with the exact analytical results. It is adequate if you consider run lengths up to six or so.
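As a sketch of how such a test is programmed (Python, for illustration; the grouping convention is that of the worked example above, where the breaker closes a group and the next group starts just after it), one can collect the empirical run-down lengths and set them against Eq. (\[rundown\_eq\]):

```python
import math
import random

def run_down_lengths(xs):
    """Run-down lengths of xs: each group holds n+1 numbers, the first
    n descending; the breaker ends the group and the next group starts
    after it, as in the worked example."""
    lengths, i, n = [], 0, len(xs)
    while i < n - 1:
        l = 1
        while i + 1 < n and xs[i + 1] < xs[i]:
            l += 1
            i += 1
        lengths.append(l)
        i += 2                      # skip the breaker
    return lengths

def exact_p(l):
    """Exact distribution P(l) = 1/l! - 1/(l+1)! for truly random numbers."""
    return 1.0 / math.factorial(l) - 1.0 / math.factorial(l + 1)

random.seed(3)
lengths = run_down_lengths([random.random() for _ in range(200000)])
freq = [lengths.count(l) / len(lengths) for l in (1, 2, 3)]
```

Because each run starts from a fresh, independent uniform number, the empirical frequencies should agree with $P(l)$ to within statistical fluctuations.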
The struggle to develop better and better random number generators and the simultaneous efforts to unravel the hidden order in the pseudo random numbers generated constitute an exciting and continuing enterprise; see for example [@GUT]. See also Bonelli and Ruffo [@BORU] for a recent interesting piece of work on this subject. Pseudo Random Number Generators ------------------------------- The earliest pseudo random number generator was the [**mid-squares**]{} method proposed by von Neumann. Start with an $m$ digit integer $\nu_1$. Take the middle $m/2$ digits and call it $\eta_1$. Square $\eta_1$ and call the resulting $m$ digit integer $\nu_2$. Take the middle $m/2$ digits of $\nu_2$ and call it $\eta_2$. Proceed in the same fashion and obtain a sequence of integers $\eta_1, \eta_2, \cdots$. These are then converted to real numbers between zero and one by dividing each by $10^{m/2}$. For a properly chosen seed $\nu_1$, this method yields a long sequence of [*apparently*]{} good random numbers. But on closer scrutiny, it was found that the mid-squares is not a good random number generator. I shall not spend any more time on this method since it is not in vogue. Instead we shall turn our attention to the most widely used class of random number generators, called the [**linear congruential generators**]{}, discovered by Lehmer [@LEH]. Start with a seed $R_1$ and generate successive integer random numbers by $$\label{cg} R_{i+1} = a R_i + b\ \ \ (\rm{mod}\ m) ,$$ where $a$, $b$, and $m$ are integers. $a$ is called the [**generator**]{} or [**multiplier**]{}; $b$, the [**increment**]{}; and $m$, the [**modulus**]{}. Equation (\[cg\]) means the following. Start with an integer $R_1$. This is your choice. Calculate $a\times R_1 + b$. This is an integer. Divide this by $m$ and find the remainder. Call it $R_2$. Calculate $a\times R_2 + b$; divide the result by $m$; find the remainder and call it $R_3$.
Proceed in the same fashion and calculate the sequence of integers $\{ R_1 , R_2 , R_3, R_4, \cdots\}$, initiated by the seed $R_1$. Thus $\{ R_i\ :\ i=1,2,\cdots\} $ is a sequence of random integers. The above can be expressed as, $$R_{i+1} = (a\times R_i + b ) - \left[ {{ (a\times R_i ) +b }\over{m}}\right]\times m ,$$ where the symbol $[\ \eta\ ]$ represents the largest integer less than or equal to $\eta$; [*e.g.*]{}, $[ \pi ]=3,\ [\sqrt{2}]=1,\ [ 4.999]=4,\ $[*etc.*]{}. The random integers $\{ R_j \}$ can be converted to floating point numbers by dividing each by $m$; [*i.e.*]{}, $\xi _j = R_j /m$. Then $\{ \xi_i : i=1,2,\cdots\}$ is the desired sequence of pseudo random numbers in the range $0$ to $1$. The linear congruential generator is robust (over forty years of heavy-duty use!), simple and fast; the theory is reasonably well understood. It is undoubtedly an excellent technique and gives a fairly long sequence of reasonably good random numbers, when properly used. We must exercise great care in the use of congruential generators, lest we fall into [*deterministic*]{} traps. Let me illustrate: Consider Eq. (\[cg\]). Let us take $a=5,\ b=1,\ R_1 =1,$ and $m=100$. The results of the recursion are shown in Table (3). \[lcga5b1m100\_tab\]

------------------------------------------------------------------------

  ------- ----- ------------ ------ ------- -------------- ------- --------------- -------
  $R_1$   $=$   $1$
  $R_2$   $=$   $(5\times$   $1$    $+1)$   (mod $100)=$   $6$     (mod $100)=$    $6$
  $R_3$   $=$   $(5\times$   $6$    $+1)$   (mod $100)=$   $31$    (mod $100)=$    $31$
  $R_4$   $=$   $(5\times$   $31$   $+1)$   (mod $100)=$   $156$   (mod $100)=$    $56$
  $R_5$   $=$   $(5\times$   $56$   $+1)$   (mod $100)=$   $281$   (mod $100)=$    $81$
  $R_6$   $=$   $(5\times$   $81$   $+1)$   (mod $100)=$   $406$   (mod $100)=$    $6$
  ------- ----- ------------ ------ ------- -------------- ------- --------------- -------

  : Linear congruential recursion with $a=5$, $b=1$, $m=100$, and $R_1 = 1$.
------------------------------------------------------------------------ We see from Table (3) that $R_6 = R_2 ,\ R_7 = R_3 , \cdots $, and the cycle repeats. The period is four. We just get four random(?) integers! Consider another example with $a=5,\ b=0,\ m=100, $ and $R_1 = 1$. Table (4) depicts the results of the linear congruential recursion. \[lcga5b0m100\_tab\]

------------------------------------------------------------------------

  ------- ----- ------------ ------- ------------- ----- ------- ------------- ----- ------
  $R_1$   $=$   $1$
  $R_2$   $=$   $(5\times$   $1)$    (mod $100)$   $=$   $5$     (mod $100)$   $=$   $5$
  $R_3$   $=$   $(5\times$   $5)$    (mod $100)$   $=$   $25$    (mod $100)$   $=$   $25$
  $R_4$   $=$   $(5\times$   $25)$   (mod $100)$   $=$   $125$   (mod $100)$   $=$   $25$
  ------- ----- ------------ ------- ------------- ----- ------- ------------- ----- ------

  : Linear congruential recursion with $a=5$, $b=0$, $m=100$, and $R_1 = 1$

------------------------------------------------------------------------ Starting with the seed $1$, we get $5$ followed by an endless array of $25$, as seen from Table (4). These examples illustrate dramatically how important it is that we make a proper choice of $a$, $b$ and $m$ for decent results. Clearly, whatever the choice of $a$, $b$ and $m$, the sequence of pseudo random integers generated by the linear congruential technique shall repeat itself after at most $m$ steps. The sequence is therefore periodic. Hence, in applications we must ensure that the number of random numbers required for any single simulation is much less than the period. Usually $m$ is taken very large to permit this. For the linear congruential generator, we can always get a sequence with full period, $m$, if we ensure that:\ 1. $m$ and $b$ are relatively prime to each other; [*i.e.*]{}, $\gcd(m,b)=1$;\ 2. $a \equiv\ 1\ (\rm{mod}\ p)$ for every prime factor $p$ of $m$; and\ 3. $a\equiv\ 1\ (\rm{mod}\ 4)$, if $m\equiv\ 0\ \rm{(mod}\ 4)$.\ For example, let $a=7$, $b=13$ and $m=18$.
Check that this choice satisfies the above conditions. The results of the linear congruential recursion with the above parameters are depicted in Table (5). \[lcga7b13m18\_tab\] ------------------------------------------------------------------------ $$\begin{aligned} \begin{array}{lclrllcrr} R_1 & & & & & & & = & 1 \\ R_2 &=& (7\times & 1& +13) & (\rm{mod}\ 18) =& 20 & (\rm{mod}\ 18) =& 2\\ R_3 &=& (7\times & 2 & +13) & (\rm{mod}\ 18) =& 27 & (\rm{mod}\ 18) =& 9\\ R_4 &=& (7\times & 9 & +13) & (\rm{mod}\ 18) =& 76 & (\rm{mod}\ 18) =& 4\\ R_5 &=& (7\times & 4 & +13) & (\rm{mod}\ 18) =& 41 & (\rm{mod}\ 18) =& 5\\ R_6 &=& (7\times & 5 & +13) & (\rm{mod}\ 18) =& 48 & (\rm{mod}\ 18) =& 12\\ R_7 &=& (7\times & 12 & +13) & (\rm{mod}\ 18) =& 97 & (\rm{mod}\ 18) =& 7\\ R_8 &=& (7\times & 7 & +13) & (\rm{mod}\ 18) =& 62 & (\rm{mod}\ 18) =& 8\\ R_9 &=& (7\times & 8 & +13) & (\rm{mod}\ 18) =& 69 & (\rm{mod}\ 18) =& 15\\ R_{10} &=& (7\times & 15 & +13) & (\rm{mod}\ 18) =& 118 & (\rm{mod}\ 18) =& 10\\ R_{11} &=& (7\times & 10 & +13) & (\rm{mod}\ 18) =& 83 & (\rm{mod}\ 18) =& 11\\ R_{12} &=& (7\times & 11 & +13) & (\rm{mod}\ 18) =& 90 & (\rm{mod}\ 18) =& 0\\ R_{13} &=& (7\times & 0 & +13) & (\rm{mod}\ 18) =& 13 & (\rm{mod}\ 18) =& 13\\ R_{14} &=& (7\times & 13 & +13) & (\rm{mod}\ 18) =& 104 & (\rm{mod}\ 18) =& 14\\ R_{15} &=& (7\times & 14 & +13) & (\rm{mod}\ 18) =& 111 & (\rm{mod}\ 18) =& 3\\ R_{16} &=& (7\times & 3 & +13) & (\rm{mod}\ 18) =& 34 & (\rm{mod}\ 18) =& 16\\ R_{17} &=& (7\times & 16 & +13) & (\rm{mod}\ 18) =& 125 & (\rm{mod}\ 18) =& 17\\ R_{18} &=& (7\times & 17 & +13) & (\rm{mod}\ 18) =& 132 & (\rm{mod}\ 18) =& 6\\ & & & & & & & \\ R_{19} &=& (7\times & 6 & +13) & (\rm{mod}\ 18) =& 55 & (\rm{mod}\ 18) =& 1 \end{array}\nonumber\end{aligned}$$ ------------------------------------------------------------------------ Thus we get the sequence $$\{ 1,2,9,4,5,12,7,8,15,10,11,0,13,14,3,16,17,6\}$$ of eighteen distinct integers between $0$ and $17$, with full period $18$. 
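The recursion and the full-period conditions above are easy to check numerically. A small Python sketch (illustrative only; the parameters are those of Tables (3) and (5)):

```python
from math import gcd

def lcg(a, b, m, seed, n):
    """First n terms of the recursion R_{i+1} = (a R_i + b) mod m."""
    seq = [seed]
    for _ in range(n - 1):
        seq.append((a * seq[-1] + b) % m)
    return seq

def prime_factors(m):
    """Set of prime factors of m, by trial division."""
    factors, p = set(), 2
    while p * p <= m:
        while m % p == 0:
            factors.add(p)
            m //= p
        p += 1
    if m > 1:
        factors.add(m)
    return factors

def full_period_conditions(a, b, m):
    """The three sufficient conditions quoted above for full period m."""
    return (gcd(m, b) == 1
            and all(a % p == 1 for p in prime_factors(m))
            and (m % 4 != 0 or a % 4 == 1))

# Table (3): a=5, b=1, m=100 cycles with period four; indeed
# condition 2 fails (5 is not congruent to 1 mod the prime factor 5).
short = lcg(5, 1, 100, 1, 10)
# Table (5): a=7, b=13, m=18 satisfies all three conditions and
# visits all eighteen states before the seed recurs.
full = lcg(7, 13, 18, 1, 19)
```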
Divide each number in this sequence by $18$ and get a sequence of real numbers between $0$ and $1$. That the period is the maximum, $m$, ensures only that the numbers are uniform in the range $0$ to $1$. The numbers may be correlated or may have a hidden order. Let us embed the above sequence in a two dimensional phase space. This is carried out as follows. Form two-dimensional embedding vectors: $(\xi_1 ,\xi_2)$, $(\xi_2 , \xi_3 )$, $\cdots$, $(\xi_m , \xi_1 )$. We thus have $m$ vectors, and in our example $m=18$. Each of these vectors can be represented by a point in two dimensional phase space. Fig. \[MARSAGLIA\_PS\]a depicts these eighteen points. We observe that the points fall neatly on parallel lines. Take another example with $a=137$, $b=187$, and $m=256$. We get a sequence of $256$ distinct integers between $0$ and $255$ in some order, starting from the chosen seed. Let us convert the numbers into real numbers between zero and one by dividing each by $256$. Let $\{ \xi_i :i=1,2,\cdots , 256\}$ denote the sequence. Embed the sequence in a two-dimensional phase space, by forming two-dimensional vectors as discussed above. Fig. \[MARSAGLIA\_PS\]b depicts the results of this exercise. The vectors clearly fall on parallel lines. That the random numbers generated by linear congruential generators form neat patterns when embedded in two and higher dimensional phase space has been known since as early as the beginning of the sixties; see for example [@greenberger]. But no one recognized this as an inherent flaw of the linear congruential generators. It was generally thought that if one takes a linear congruential generator with a very long period, one would perhaps not see any patterns [@chambers]. Long periods can be obtained by choosing the modulus $m$ large and by choosing appropriate values of the multiplier $a$ and the increment $b$ to get the full period.
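The line structure can in fact be counted without any plotting. For successive pairs $(R_i , R_{i+1})$ of a linear congruential generator we have $R_{i+1} = aR_i + b - km$ for some non-negative integer $k$, so all pairs sharing a value of $k$ lie on one straight line of slope $a$; counting the distinct values of $k$ counts the lines. A Python sketch (illustrative), using the $a=7$, $b=13$, $m=18$ example:

```python
def lcg(a, b, m, seed, n):
    """First n terms of R_{i+1} = (a R_i + b) mod m."""
    seq = [seed]
    for _ in range(n - 1):
        seq.append((a * seq[-1] + b) % m)
    return seq

def count_lines(a, b, m, seq):
    """Number of distinct lines R_{i+1} = a R_i + b - k m on which the
    successive pairs of the sequence fall (a R_i + b - R_{i+1} is an
    exact multiple of m, so k is recovered by integer division)."""
    return len({(a * r + b - s) // m for r, s in zip(seq, seq[1:])})

# All 18 pairs of the full-period sequence fall on just a handful
# of parallel lines.
seq = lcg(7, 13, 18, 1, 19)
n_lines = count_lines(7, 13, 18, seq)
```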
[**Assignment 10**]{}\ Construct a linear congruential generator that gives rise to a long sequence of pseudo random numbers with full period. Embed them in two or higher dimensional phase space and see if there are any patterns. Test for the randomness of the sequence. The modulus $m$ is usually taken as $2^{t-1}-1$, where $t$ is the number of bits used to store an integer and hence is machine specific. One of the $t$ bits is used up for storing the sign of the integer. The choice $a=7^{5}=16807$, $b=0$ and $m=2^{31}-1$, for a $32$ bit machine, has been shown to yield good results; see [@PTVF]. The Ahrens generator, which specifies $a=2^{t-2}(\sqrt{5}-1)/2$, $b=0$, has also been shown to yield good random numbers. The linear congruential generators have been successful, and most of the present day pseudo random number generators are of this class. In the late sixties, Marsaglia [@MARS] established unambiguously that the formation of lattice structure is an inherent feature of a linear congruential generator:\ [**If $d$-tuples $(\xi_1 ,\ \xi_2,\ \cdots ,\ \xi_d )$, $(\xi_2 ,\ \xi_3,\ \cdots ,\ \xi_{d+1} )$, $(\xi_3 ,\ \xi_4,\ \cdots ,\ \xi_{d+2} )$, $\cdots$, of the random numbers produced by linear congruential generators are viewed as points in the unit hypercube of $d$ dimensions, then all the points will be found to lie in a relatively small number of parallel hyperplanes. Such structures occur with any linear congruential generator and in any dimension.**]{}\ The number of such [**Marsaglia planes**]{} does not exceed $(d!2^t)^{1/d}$. Besides, the points on several Marsaglia planes form regular patterns. Thus it is clear now that the existence of Marsaglia planes is undoubtedly [**a serious defect**]{} inherent in the linear congruential generators. Then there is the other matter of the presence of correlations, hopefully weak, in the sequence of pseudo random numbers.
If your Monte Carlo algorithm is sensitive to the subtle correlations present in the sequence, then you are in trouble. For example, Ferrenberg, Landau and Wong [@flj] found that so-called high quality random number generators led to subtle but dramatic errors in algorithms that are sensitive to correlations. See also [@badrn]. One should perhaps conclude that the [**linear congruential generators are not suitable for Monte Carlo work!**]{} Before we come to such a harsh conclusion, let us look at the issues in perspective. If your Monte Carlo program does not require more than a small fraction (the smaller the better) of the full period of the random numbers, then most likely the presence of the Marsaglia lattice structure will not affect your results. All the linear congruential generators in use today have long periods. Also, one can think of devices to increase the number of Marsaglia planes. For example, the Dieter-Ahrens solution to the Marsaglia problem is the following algorithm, $$R_{i+2} = a R_{i+1} + b R_i \quad\quad (\rm{mod}\ m),$$ which requires two seeds. The modulus $m=2^{t-1}-1$. For a proper choice of $a$ and $b$, the number of Marsaglia planes can be increased by a factor of $2^{t/d}$. Thus over a period of thirty years we have learnt to live with the lattice defect of the linear congruential generators. However, if at any time you get the feeling that there is something wrong with your simulation results and you suspect that this is caused by the Marsaglia lattice structure of the linear congruential generator, then you should think of employing some other random number generator, like the [**inversive congruential generator (ICG)**]{} proposed in 1986 by Eichenauer and Lehn [@eichenauer] or the [**explicit inversive congruential generator (EICG)**]{}, proposed in 1993 by Eichenauer-Herrmann [@eichher]. Neither of these generators suffers from the lattice or hyperplane structure defect.
I shall not get into the details of these new inversive generators, except to say that the inversive congruential generator also employs recursion, just as the linear congruential generators do, but the recursion is based on a nonlinear function. More recently, in the year 1995, Vattulainen and co-workers [@VAT] found, in the context of linear congruential generators, that the successive pairs $(\xi_t , \xi_{t+l})$ exhibit a pattern of parallel lines on a unit square. This pattern is observed for all $(\xi_t , \xi_{t+l})$ with $l=1,2,\cdots $, where the modulus $m=2^{31}-1$ for a 32 bit machine. The number of parallel lines depends strongly on $l$. For $l=(m-1)/2$ the pairs fall on a single line, and for $l=\{ (m-1)/2\} \pm 1$ the pairs fall on so large a number of parallel lines that they can be considered as space filling. This curious phenomenon of switching from pairs falling on one or a few parallel lines to pairs falling on several lines upon tuning the parameter $l$ is termed a transition from regular behaviour to chaotic behaviour; see [@MATTIS]. Thus there seems to be a connection between pseudo random number generators and chaotic maps. [**Chaos**]{}, we know, permits only short time predictability; no long term forecasting is possible. The predictability gets lost at long times, at times greater than the inverse of the largest Lyapunov exponent. It is rather difficult to distinguish chaotic behaviour from random behaviour. Thus Chaos can provide an effective source of randomness. This is definitely a meaningful idea since chaotic and random behaviours have many things in common; see for example [@luscher], where Chaos has been proposed as a source of pseudo random numbers. But then we know there is an order, strange (fractal) or otherwise, in Chaos. What exactly is the connection between the Marsaglia order found in the context of linear congruential generators and the fractal order in Chaos?
If answers to these and several related questions can be found, then perhaps we can obtain some insight into the otherwise occult art of pseudo random number generation. A safe practice is the following. Consider the situation when you are planning to use a standard Monte Carlo code on a new problem; or consider the situation when you are developing a new Monte Carlo algorithm. In either case, carefully test your Monte Carlo code, along with the random number generator, on several standard problems for which the results are reasonably well known. This you should do irrespective of how famous the random number generator is, and how many randomness tests it has been put through already. After all, your Monte Carlo programme itself can be thought of as a new test of randomness of the pseudo random number generator. I shall stop here the discussion on pseudo random number generators and get on to Monte Carlo. In the next section I shall discuss random sampling techniques that transform the pseudo random numbers, $\{ \xi _i \}$, independent and uniformly distributed in the range $0$ to $1$, to $\{ x_i\}$, having the desired distribution in the desired range. RANDOM SAMPLING TECHNIQUES ========================== The [**random sampling techniques**]{} help us convert a sequence of random numbers $\{ \xi_i \}$, uniformly distributed in the interval $(0,1)$, to a sequence $\{ x_i \}$ having the desired density, say $f(x)$. There are several techniques that do this conversion: direct inversion, rejection methods, transformations, composition, sums, products, ratios, table look-up of the cumulative density with interpolation, construction of an equi-probability table, and a whole host of other techniques. In what follows we shall illustrate random sampling by considering a few of these techniques in detail. Inversion Technique ------------------- The simplest is based on direct [**inversion**]{} of the cumulative distribution function of the random variable $X$.
We shall first consider sampling from a discrete distribution employing the inversion technique. Let $\{ p_i\ :\ i=1,2,\cdots ,N\}$ be the discrete probabilities for the random variable $X$ to take values $x_i(=i)$. Fig. \[DISCINV\_PS\]a depicts an example with $N=5$. We first construct the cumulative probabilities $\{P_i\ :\ i=0,1,\cdots ,N\}$, where $P_0 = 0;\ P_1 = p_1;\ P_2 = p_1 + p_2;\ \cdots ;\ P_k =p_1 + p_2 + \cdots + p_k;\ P_N =1$. Fig. \[DISCINV\_PS\]b depicts the cumulative probabilities as a staircase of non-decreasing heights. The procedure is simple. Generate $\xi$, a random number uniformly distributed in the range $(0,1)$. Find $k$ for which $P_{k-1} \le \xi < P_k$. Then $x_k (=k)$ is the sampled value of $X$. This is equivalent to the following. The value of $\xi$ defines a point on the $y$ axis of Fig. \[DISCINV\_PS\]b between zero and one. Draw a line parallel to the $x$ axis at $(0,\xi)$. Find the vertical line segment intersected by this line. The vertical line segment is then extended downwards to cut the $x$ axis at, say, $x_1$. Then $x_1$ is the (desired) sampled value of $X$. [**Assignment 11**]{}\ Employing the inversion technique, sample from (a) binomial and (b) Poisson distributions. Compare the frequency distribution you obtain with the exact distribution. The above can be generalized to continuous distributions. We have the cumulative probability distribution $F(x)=\int_{-\infty}^{x}f(x')\ dx'$. Note that $F(-\infty)=0$ and $F(+\infty)=1$. Given a random number $\xi_i$, we have $x_i = F^{-1}(\xi_i)$. An example with the exponential distribution is shown in Fig. \[EXPINV\_PS\]. We have plotted in Fig. \[EXPINV\_PS\]a the exponential distribution for $x\ge 0$. Fig. \[EXPINV\_PS\]b depicts the cumulative distribution. Select a point on the $y$ axis of Fig. \[EXPINV\_PS\]b randomly between zero and one. Draw a line parallel to the $x$ axis passing through this point. Find where it intersects the curve $F(x)$. Read off the $x$ coordinate of the intersection point, and call this $x_1$.
Repeat the above several times and get $\{ x_i\ :\ i=1,2,\cdots\}$. These numbers will be exponentially distributed. [**Assignment 12**]{}\ Devise a technique to sample from the distribution $f(x)=\exp (-\sqrt{2}\vert x\vert )/\sqrt{2}\ {\rm for}\ -\infty < x < +\infty$. Generate $\{ x_i : i=1,2,\cdots ,N\} $. Sum up these numbers and divide by $\sqrt{N}$. Call this $y$. Generate several values of $y$ and plot their frequency distribution in the form of a histogram. Carry out this exercise with $N=1,2,\cdots$ and demonstrate the approach to a Gaussian as $N$ becomes larger and larger. In fact, for the exponential distribution we can carry out the inversion analytically. We set $\xi = F(x)$. It is clear that the number $F(x)$ is uniformly distributed between $0$ and $1$: the probability that it falls between $F(x)$ and $F(x)+dF(x)$ is $dF(x)$, which is equal to $f(x)dx$. Hence $x=F^{-1}(\xi)$ is distributed as per $f(x)$. For the exponential distribution we have $F(x)=1-\exp (-x)$. Hence $x=-\log (1-\xi)$. Since $1-\xi$ is also uniformly distributed, we can set $x=-\log (\xi)$. [**Assignment 13**]{}\ Generate $N$ independent random numbers from an exponential distribution. Sum them up and divide by $\sqrt{N}$; call the result $y$. Generate a large number of values of $y$ and plot their frequency distribution. Plot on the same graph the corresponding gamma density and Gaussian and compare. [**Assignment 14**]{}\ Start with $N$ particles, indexed by integers $i=1,2,\cdots , N$ ([*e.g.*]{} $N=1024$). Initialize $x(j)=1/\lambda\ \forall\ j=1,2,\cdots ,N$, where $\lambda $ is the desired exponential decay constant ([*e.g.,*]{} $\lambda =1$). The algorithm conserves $\sum_{i=1}^{N}x(i) = N/\lambda$. Select independently and randomly two particles, say with indices $i$ and $j$, $i\ne j$. Let $S= x(i) + x(j)$. Split $S$ randomly into two parts. Set $x(i)$ to one part and $x(j)$ to the other part. Repeat the above for a warm-up time of say $4\times N$ iterations.
Then every subsequent time you select two particles ($k$ and $l$), the corresponding $x(k)$ and $x(l)$ are two independent random numbers with exponential distribution $\lambda\exp (-\lambda x)\ {\rm for}\ 0\le x< \infty$.\ (a) Implement the above algorithm and generate a large number of random numbers. Plot their frequency distribution and check if they follow the exponential distribution.\ (b) Prove analytically that the above algorithm leads to independent and exponentially distributed random numbers in the limit $N\to\infty$. Analytic inversion to generate exponentially distributed random numbers is not necessarily the most robust and fastest of techniques. There are several alternative procedures for sampling from the exponential distribution without involving a logarithmic transformation; see for example [@sampexp]. The subject of developing ingenious algorithms for generating exponentially distributed random numbers continues to attract the attention of Monte Carlo theorists. For example, recently Fernández and Rivero [@ferriv] have proposed a simple algorithm to generate independent and exponentially distributed random numbers. They consider a collection of $N$ particles, each having a certain quantity of energy to start with. Two distinct particles are chosen at random; their energies are added up; the sum is divided randomly into two portions and assigned to the two particles. When this procedure is repeated several times, over what is called the warming-up time, the distribution of energies amongst the particles becomes exponential. Note that this algorithm conserves the total energy. It has been shown that about one thousand particles are adequate; the warming-up time is $4N$ or so. For details see [@ferriv]; see also [@wallace]. Rejection Technique ------------------- Another useful random sampling technique is the so-called [**rejection technique**]{}. The basic idea is simple. From a set of random numbers, discard those that do not follow the desired distribution.
What is left out must be distributed the way we want. Let $f(x)$ be the desired distribution, defined over the range $(\alpha , \beta)$. First select a suitable bounding function $g(x)$ such that $C\times g(x) \ge f(x)$ for all values of $\alpha \le x \le \beta$. Also, $g(x)$ must be such that it is easy to sample from. Sample a set $\{ x_i : i=1,2,\cdots , N\}$ independently and randomly from $g(x)$. For each value of $x_i$, select randomly a number between $0$ and $C\times g(x_i)$. Call this set $\{ y(x_i)\}$. From the set $\{ x_i : i=1,2,\cdots ,N\}$ discard those for which $y(x_i) > f(x_i)$. The remaining $x_i$'s shall have the desired distribution $f(x)$. The efficiency of a rejection technique is the fraction of the attempts that get accepted, and is given by the inverse of the area under the curve $C\times g(x)$. Usually $g(x)$ is taken as a constant for all $x$, and $C$ as the maximum value that $f(x)$ takes. The rejection technique would be inefficient if $C\times g(x)$ and $f(x)$ do not have similar shapes and ranges. Let me illustrate the rejection technique by considering the circular probability density function $$f(x)={{4}\over{\pi}}\sqrt{1-x^2}\quad {\rm for} \quad 0\le x\le 1.$$ A convenient choice of the bounding function is $g(x)=1\ \forall\ 0\le x\le 1$. Thus sampling from $g(x)$ is equivalent to setting $x_i= \xi _i$. The value of $C$ is $4/ \pi$. Fig. \[REJECTION\_PS\]a depicts the density function $f(x)$ and the function $C\times g(x)$. Generate a pair of random numbers $( \xi_i , \xi _j )$ from $U(0,1)$. Set $x_i = \xi _i$. Calculate $f_i = f(x_i)$ and $y_i = C\times \xi_j$. If $y_i \le f_i$, accept $x_i$. Repeat the above procedure several times. Fig. \[REJECTION\_PS\]b depicts $\{ (x_i,y_i);i=1,2,\cdots ,1000\}$ and these points are seen to fill up the rectangle bounded by the lines $y=0$, $y=4/\pi$, $x=0$ and $x=1$. Fig. \[REJECTION\_PS\]c depicts the accepted pairs $\{ (x_i, y_i)\}$. Fig.
\[REJECTION\_PS\]d depicts the histogram of the distribution of the accepted values of $x_i$. In the introduction, I mentioned the pebble-throwing game, popular in the principality of Monaco, from which the name Monte Carlo came. What has been described above is a straightforward adaptation of the same (Monte Carlo) game to sampling from a (normalized) distribution. I said earlier that one can make an estimate of the value of $\pi$ from the Monte Carlo game. How do we do this? In the rejection technique described above, we sample a point randomly from the rectangular region; see Fig. \[REJECTION\_PS\]a. In the next step we either accept the point or reject it. Thus there are only two outcomes to the experiment. The point either falls inside the circular distribution curve (and hence gets accepted) with a probability $p=\pi /4$, or outside (and hence gets rejected) with a probability $q=1-p=1-\pi /4$. We are essentially tossing a loaded coin; the probability of Heads is $p$ and of Tails $q=1-p$. Let $n$ denote the number of Heads in $N$ independent throws of the loaded coin. The distribution of the random variable $n$ is binomial. The quantity of interest to us is $n/N$, whose mean and standard deviation can easily be calculated, and are given by $p$ and $\sqrt{p(1-p)}/\sqrt{N}$ respectively. Thus the Monte Carlo estimate of $\pi$ is $4n/N \pm (4/N^{3/2})\sqrt{n(N-n)}$, where the error term is the one-sigma confidence interval. Fig. \[pi\_ps\] depicts the estimated value of $\pi$ with the one-sigma confidence error bar, as a function of the logarithm (to the base 2) of the number of trials $N=2^{10} , 2^{11}, \cdots 2^{18}$. The statistical convergence to the right answer as the sample size increases is evident. [**Assignment 15**]{}\ Employ the rejection technique to sample from $f(x)={{2}\over{\pi}}\sqrt{1-x^2}\ \ \ {\rm for}\ -1 \le x \le 1$. Plot the distribution of the accepted sequence of numbers in the form of a histogram and compare with the exact.
Calculate the value of $\pi$ from the above experiment. What is the statistical error? Sampling from a Gaussian ------------------------ The Gaussian is the most important distribution in statistics and statistical physics. It is also one of the [*richest*]{}, in the sense that a very large number of techniques have been proposed to generate Gaussian random numbers. The simplest perhaps is the one that makes use of the Central Limit Theorem: take the sum of $N$ random numbers, uniformly distributed in the range $(0,1)$. Let $Y _N$ denote the sum. By the Central Limit Theorem, the distribution of $Y_N$ will tend to a Gaussian in the limit $N\to\infty$. The mean of the Gaussian is $N/2$ and its variance is $N/12$. Usually one wants a standard Gaussian with mean zero and variance unity. Therefore we define the quantity $q$, $$q={{Y _N -(N/2)}\over{\sqrt{N/12}}},$$ which has a Gaussian distribution of zero mean and unit variance for large $N$. A convenient choice is $N=12$, which reduces $q$ to $Y_{12} -6$: take twelve pseudo random numbers and add them up; subtract six from the sum and you have a mean-zero, variance-unity Gaussian random number. This method of generating Gaussian random numbers is not exact, since the tails are absent: the value of $q$ is restricted to the interval $(-6,+6)$. There is a clever technique that transforms two independent uniform random variables $\xi_1$ and $\xi_2$ to two independent Gaussian random variables $q_1$ and $q_2$, as follows: $$\begin{aligned} q_1 & = & \sqrt{-2\ \ln \ \xi_1 }\ \cos (2\pi \xi_2) ,\nonumber\\ q_2 & = & \sqrt{-2\ \ln \ \xi_1 }\ \sin (2\pi \xi_2) .\end{aligned}$$ This method, called the [**Box-Muller algorithm**]{} [@bm], is easy to program and is quite popular. However, it has a few drawbacks. The Box-Muller algorithm uses several multiplications, one logarithmic function, one trigonometric function and one square root function; hence it is rather slow.
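Both recipes fit in a few lines. A Python sketch (illustrative; the underlying uniforms are taken from Python's built-in `random` module):

```python
import math
import random

def gauss_clt(n=12):
    """Approximate standard Gaussian via the Central Limit Theorem:
    sum n uniforms, centre and scale.  For n = 12 this reduces to
    (sum - 6); tails beyond (-6, 6) are absent."""
    s = sum(random.random() for _ in range(n))
    return (s - n / 2.0) / math.sqrt(n / 12.0)

def gauss_box_muller():
    """Box-Muller: two independent standard Gaussians from two
    independent uniforms."""
    xi1 = 1.0 - random.random()        # keep xi1 > 0, avoiding log(0)
    xi2 = random.random()
    r = math.sqrt(-2.0 * math.log(xi1))
    return (r * math.cos(2.0 * math.pi * xi2),
            r * math.sin(2.0 * math.pi * xi2))

random.seed(4)
clt_sample = [gauss_clt() for _ in range(50000)]
bm_sample = []
for _ in range(25000):
    bm_sample.extend(gauss_box_muller())
```

Timing the two recipes against each other, and comparing their histograms with the exact Gaussian, is a useful exercise.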
Besides, the tails differ markedly from those of the true Gaussian when a linear congruential generator is used. Muller [@muller] gives an account of several methods of sampling from a Gaussian distribution. [**Assignment 16**]{}\ Generate $N$ random numbers from $U(0,1)$. Add them up and subtract $N/2$. Divide the result by $\sqrt{N/12}$. Call this $y$. Generate a large number of values of $y$ and plot their frequency distribution. Take $N=2,3,4,\cdots$, and demonstrate the approach to a Gaussian (of mean zero and variance unity). Recently, Fernández and Criado [@fercri] have proposed a fast algorithm to generate Gaussian random numbers. Their algorithm is based on a closed system of $N$ particles interacting two at a time, conserving energy. Start with $N$ particles, each having the same velocity, unity; [*i.e.*]{} $\{ v_i =1\ \forall\ i=1,2,\cdots ,N\} $. Pick two particles at random; let them be $i$ and $j$, with $i\ne j$. Reset their velocities as per the following iteration rule, $$\begin{aligned} v_i ({\rm new} ) & = & {{ v_j ( {\rm old} ) + v_i ( {\rm old})}\over{ \sqrt{2} }}\nonumber\\ v_j ( {\rm new} )& = & {{ v_j ( {\rm old} ) - v_i ( {\rm old})}\over{ \sqrt{2} }}.\end{aligned}$$ Repeat the above several times. After an initial [*warm-up time*]{} of say $4N$ iterations, the velocities of the pair of particles you pick in all subsequent iterations are the desired pairs of independent Gaussian random numbers with mean zero and variance unity. A Gaussian of desired mean $\mu$ and standard deviation $\sigma$ can be obtained by the transformation $x=\sigma v + \mu$. Note that $\sum_{i=1}^{N}v_i ^2 = N$ at all stages of the iteration. This algorithm is found to be ten times faster than the Box-Muller algorithm. For most applications it is adequate if ten thousand to a hundred thousand particles are considered; see [@fercri] for details. [**Assignment 17**]{}\ (a) Prove analytically that the Fernández-Criado algorithm leads to independent Gaussian random numbers.
Prove that the sampling is ergodic. See [@fercri].\ (b) Generate a large number of Gaussian random numbers employing the Fernández-Criado algorithm and plot their frequency distribution. Compare with the exact Gaussian. We have implemented the algorithm proposed by Fernández and Criado and a sample result is depicted in Fig. \[fergauss\_ps\]. Ten thousand particles were considered. The first $40,000$ iterations were treated as warm up time. We generated twenty thousand numbers and collected them in thirty equal bins. The statistical error (one-sigma confidence interval) on the counts in each bin was also calculated from the sample fluctuations. These are also depicted in Fig. \[fergauss\_ps\]. Metropolis Sampling -------------------- An interesting technique to sample from a probability distribution is the one proposed by Metropolis and his collaborators [@metropolis], in the context of generating microstates belonging to a canonical ensemble. The [**Metropolis algorithm**]{} is widely used in Monte Carlo simulation of models in statistical physics. Here I shall illustrate the technique for sampling from an arbitrary discrete distribution $f(i)$, where the random variable takes discrete integer values $i$ between $1$ and $N$. We call $1,2,\cdots ,N$ states and denote the set of all states by $\Omega=\{ 1,2,\cdots N\}$. Start with an initial arbitrary state $x_0$ belonging to $\Omega$. The Metropolis sampling technique generates a [**Markov chain**]{} of states $x_1 (\in \Omega), x_2 (\in \Omega) , \cdots x_{m-1}(\in \Omega)$. For $m\to\infty$, $\{ x_{m+1}(\in\Omega),x_{m+2}(\in\Omega),\cdots\}$ shall have the distribution $ \{ f(i):i=1,2,\cdots ,N\}$. A Markov chain is a stochastic process whose “past” has no influence on the future if its “present” is specified.
In other words, for a Markov chain, we have, $$P\left( x_k \le x\vert x_{k-1},x_{k-2},\cdots , x_0\right) = P\left( x_k \le x\vert x_{k-1}\right).$$ We call $\{ x_{m+1},x_{m+2},\cdots\} $ the desired ensemble of states. If $N_i$ is the number of elements in the ensemble whose state index is $i$, then $N_i$ divided by the total number of members in the ensemble is the probability $f(i)$. For such asymptotic convergence to the desired ensemble, it is sufficient [*though not necessary*]{} that $$\label{ONSAGER_EQ} f(x_i)\ W(x_j \leftarrow x_i)=f(x_j)\ W(x_i\leftarrow x_j ) ,$$ where $W(x_j\leftarrow x_i)$ is the probability of transition from state $x_i$ to state $x_j$. Equation (\[ONSAGER\_EQ\]) is called the [**detailed balance**]{} condition. For conservation of probability, the transition probabilities should obey the following condition, $$\sum_{i} W(i \leftarrow j)=1\ \forall\ j .$$ The detailed balance condition does not specify uniquely the transition probabilities from one state to the other. A simple choice of the $N\times N$ [**transition matrix**]{} $W$ with elements $W_{i,j}=W(i\leftarrow j)$ which is consistent with detailed balance is given by, $$\begin{aligned} \label{WMATRIX_EQ} W_{i,j} & =& W^{\star}_{i,j}\ \ {\rm min}\ \left( 1,{{f(i)}\over{f(j)}}\right) \ {\rm for} \ \ i\ne j,\nonumber\\ W_{j,j} &=& W^{\star}_{j,j} + \sum_{i=1}^{N} W^{\star}_{i,j}\ \ {\rm max}\ \left( 0,1-{{f(i)}\over{ f(j)}}\right),\end{aligned}$$ where the matrix $W^{\star}$ with elements $W^{\star}_{i,j}$ is an arbitrary symmetric stochastic matrix with positive elements. We call $W^{\star}$ the [**trial**]{} matrix. The sum of the elements in each row as well as each column of the trial matrix $W^{\star}$ is unity. As we shall see below, we need the $W^{\star}$ matrix to select a trial state from the current state. Hence $W^{\star}$ is chosen conveniently to make the selection of the trial state simple.
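The prescription of Eq. (\[WMATRIX\_EQ\]) is easy to code. Here is a sketch in Python, using exact rational arithmetic and the simplest flat trial matrix $W^{\star}_{i,j}=1/N$ (the function name is mine):

```python
from fractions import Fraction

def metropolis_matrix(f):
    """Transition matrix W of Eq. (WMATRIX_EQ) for a target
    distribution f (a list of Fractions summing to one),
    with the flat trial matrix W*_{ij} = 1/N."""
    n = len(f)
    ws = Fraction(1, n)                  # W*_{ij} = 1/N for all i, j
    W = [[Fraction(0)] * n for _ in range(n)]
    for j in range(n):
        for i in range(n):
            if i != j:
                W[i][j] = ws * min(Fraction(1), f[i] / f[j])
        # diagonal: W*_{jj} plus the total rejected probability
        W[j][j] = ws + sum(ws * max(Fraction(0), 1 - f[i] / f[j])
                           for i in range(n))
    return W
```

By construction each column of $W$ sums to unity, and $f_j\,W_{i,j}=f_i\,W_{j,i}$, so detailed balance holds exactly.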
[**Assignment 18**]{}\ Verify that the transition matrix $W$ whose elements are calculated as per the prescriptions in Eqns. (\[WMATRIX\_EQ\]) obeys the detailed balance given by Eq. (\[ONSAGER\_EQ\]). The implementation of the Metropolis sampling procedure proceeds as follows. Let $x_i(\in \Omega)$ be the current state (or value). We select a trial state $x_t$, randomly and with equal probability, from amongst the $N$ states of $\Omega= \{ 1,2,\cdots N\}$. In other words, $W_{i,j}^{\star}=1/N\ \forall \ i,j =1,2,\cdots ,N$. Calculate the ratio $w=f(x_t)/f(x_i)$. If $w\ge 1$, accept the trial state and set $x_{i+1} = x_t $. If $w < 1 $, then generate a random number $\xi $ uniformly distributed in the range $0$ to $1$. If $\xi \le w $, then accept the trial state and set $x_{i+1} =x_t$. If $\xi > w $, reject the trial state, set $x_{i+1} = x_i $ and proceed. It may be necessary to generate several values of $x$, starting from an initial choice $x_0$, before the sequence acquires the desired distribution. A good choice of $x_0$ is that state for which the probability is maximum. Let us consider a simple example with three states $\Omega = \{ 1,2,3\}$ and $\vert f\rangle =(f_1,f_2,f_3)'=(1/4,\ 5/12,\ 1/3)'$. The matrix $W^{\star}$ is taken as $W^{\star}_{i,j}=1/3\ \forall\quad i,j$. The transition matrix constructed by the prescription, see Eq. (\[WMATRIX\_EQ\]), is given by, $$W=\left( \begin{array}{ccc} 1/3 & 1/5 & 1/4\\ 1/3 & 8/15 & 1/3\\ 1/3 & 4/15 & 5/12 \end{array}\right).$$ As can be readily seen, the matrix $W$ above has the following properties. 1. $W_{i,j}> 0\ \forall\ i,j$; this is called the [**strong ergodicity**]{} condition. 2. $\sum_{i=1}^3 W_{i,j} =1\ \ \forall\ \ j$. As mentioned earlier, this condition ensures the conservation of probability. The transpose of the matrix $W$ is usually called a stochastic matrix. The eigenvalues of $W$ are $1.0,\ 0.2,\ {\rm and}\ 0.0833$; the largest eigenvalue is unity and is non-degenerate.
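The accept/reject walk just described is equally short in Python; a sketch (my own code, states labelled $0,\dots,N-1$ for convenience):

```python
import random

def metropolis_sample(f, n_steps, x0=0):
    """Metropolis chain over states 0..N-1 with a flat trial matrix:
    propose a state uniformly, accept with probability min(1, w)."""
    n = len(f)
    x = x0
    counts = [0] * n
    for _ in range(n_steps):
        xt = random.randrange(n)         # trial state, W*_{ij} = 1/N
        w = f[xt] / f[x]
        if w >= 1.0 or random.random() <= w:
            x = xt                       # accept the trial state
        counts[x] += 1                   # a rejection keeps the old state
    return [c / n_steps for c in counts]
```

For the three-state example with $f=(1/4,\ 5/12,\ 1/3)$, the relative frequencies of visits converge to the target distribution as the chain grows.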
The corresponding right eigenvector is $\vert f\rangle=(1/4,\ 5/12,\ 1/3)'$, [i.e.,]{} $W\vert f\rangle = \vert f\rangle$. [**Assignment 19**]{}\ Construct a matrix $W$: $W_{i,j} > 0\ \forall\ i,j;$ and $\sum_{i} W_{i,j} = 1\ \forall \ j$. Calculate its eigenvalues and the corresponding left and right eigenvectors. Operate $W$ on an arbitrary vector several times and show that the resulting vector converges to a unique vector which is the eigenvector of the matrix $W$ corresponding to the largest eigenvalue, unity. Thus repeated application of $W$ on any arbitrary vector $\vert u\rangle$ with $\langle f\vert u \rangle\ne 0$ will asymptotically take the vector to $\vert f\rangle$. We say $\vert f\rangle$ is the equilibrium probability vector representing the equilibrium ensemble. Any initial ensemble represented by $\vert u\rangle$ with non-zero overlap with the equilibrium ensemble will evolve to the equilibrium ensemble. The above results are true in general for any positive stochastic matrix $W$, by virtue of the [**Perron theorem**]{} [@Peron]. Perron’s theorem states that [*a positive matrix $W$ has an eigenvalue $\lambda$, which is real, positive and non-degenerate and which exceeds in modulus all the other eigenvalues. To this dominant eigenvalue there corresponds an eigenvector with all its elements positive.*]{} In our example, the transpose of the positive matrix $W$ is stochastic, [*i.e.,*]{} $\sum_i W_{i,j}=1$, and hence the dominant eigenvalue is unity. Perron’s theorem can be seen as follows. Consider the following left eigenvalue equation, $$\label{LEIG_EQ} \langle L\vert\ W = \langle L \vert\ \lambda\ ,$$ where $\langle L\vert$ is the left eigenvector corresponding to the eigenvalue $\lambda$. We can obtain the upper and lower bounds of the eigenvalue as follows.
$$\begin{aligned} \label{BOUND_EQ} \lambda L_j & = & \sum_{i=1}^{N} W_{i,j}L_i\ ; \ \ \\ L_{min}\sum_{i=1}^N W_{i,j} \le \lambda L_j & \le & L_{max}\sum_{i=1}^{N}W_{i,j},\end{aligned}$$ where $L_{max}={\rm max}\{ L_k\}$ and $L_{min}={\rm min}\{ L_k \}$. Since $\sum_{i=1}^{N} W_{i,j} =1$, it follows, $$\label{BOUND2_EQ} L_{min} \le \lambda L_j \le L_{max} \ .$$ Consider the space of all positive vectors $\left\{ \langle L\vert ; L_j > 0 \ \forall\ j=1,2,\cdots ,N\right\}$. Then $${{L_{min} }\over{L_j }} \ \le\ \lambda\ \le\ {{L_{max} }\over{L_j }}\ \ \forall\ j.$$ The minimum value that $L_{max}/L_j$ can take as $j$ runs from $1$ to $N$ is unity; similarly, the maximum value that $L_{min}/L_j$ can take is unity. Thus we get $1\le\lambda\le 1$, which implies that $\lambda =1$. Thus if the left eigenvector is positive then the corresponding eigenvalue is unity. Consider now the space of vectors $\{ \langle L\vert\}$ such that each vector has some of its components positive and some negative. We note that $L_{max}>0$ and $L_{min}<0$. The bounds can be easily worked out and are given by, $${\rm max}\left\{ \ {{L_{ {\rm max}} }\over{ L_{{\rm min}} }}\ ,\ {{L_{{\rm min}} }\over{ L_{{\rm max}} }}\ \right\}\ \le \ \lambda\ \le\ 1.$$ Thus we see that $\lambda =1$ is the largest eigenvalue and the corresponding left eigenvector is positive. It is easily verified that $\langle U\vert = c\ (1\ 1\ 1\ \cdots \ 1)$, the constant vector, is the left eigenvector corresponding to the eigenvalue $\lambda =1$. The constant $c$ can be fixed by choosing a suitable norm. We can easily show that the eigenvalue $\lambda =1$ is non-degenerate. To this end, assume the contrary. Let $\langle V\vert$ be another positive eigenvector corresponding to the eigenvalue $\lambda =1$. Then a linear combination of $\langle U\vert$ and $\langle V\vert$ given by $\langle \eta\vert=\alpha \langle U\vert + \beta\langle V\vert$ is also an eigenvector.
We can choose $\alpha$ and $\beta$ such that $\langle \eta\vert$ has some components positive and some negative, which contradicts the fact that the eigenvector is positive. This completes the proof. Let us now consider the right eigenvector $\vert R\rangle$, corresponding to $\lambda =1$. It is easily proved that $\vert R\rangle$ is positive. Consider the eigenvalue equation $$\label{righteigv_eq} (W_{1,1} -1)R_1 + W_{1,2}R_2 +\cdots +W_{1,N} R_N =0.$$ Let us assume $R_1$ is negative. The first term in Eq. (\[righteigv\_eq\]) is positive since $W_{1,1} < 1$. Hence at least one of the other components of $\vert R\rangle$ must be negative. In other words one of the elements of the set $\{ R_2 ,\ R_3 ,\ \cdots R_N \}$ must be negative to render the sum in Eq. (\[righteigv\_eq\]) zero. Without loss of generality we take $R_2$ to be negative. Consider the eigenvalue equation, $$\label{righteigv2_eq} W_{2,1}R_1 + (W_{2,2}-1)R_2 +W_{2,3}R_3 + \cdots + W_{2,N}R_N =0.$$ Add Eq. (\[righteigv\_eq\]) to Eq. (\[righteigv2\_eq\]) and get, $$\label{sumonetwo} (W_{1,1} + W_{2,1}-1)R_1 + (W_{1,2} + W_{2,2} -1)R_2 + \sum_{j=3}^{N} (W_{1,j}+W_{2,j})R_j =0.$$ In the above, $(W_{1,1}+W_{2,1})\ <\ 1$, $R_1 < 0$, $(W_{1,2} + W_{2,2}) < 1$ and $R_2 < 0$. Therefore we find that the first two terms in the above equation are positive. Hence at least one of the elements of the set $\{ R_3 ,\ R_4 \ \cdots R_N \}$ must be negative to render the sum zero. Without loss of generality we take this as $R_3$. Arguing along the same lines we show that if $R_1$ is negative then all the other components of the vector $\vert R\rangle$ are also negative; since an eigenvector is defined only up to an overall sign, $\vert R\rangle$ can then be chosen positive. In other words, the right eigenvector corresponding to the largest eigenvalue $\lambda =1$ is positive. We call $\vert R\rangle$ the equilibrium eigenvector. Numerically the equilibrium ensemble for the three-state model is constructed as follows.
Start with an arbitrary state $x_0\in\Omega$; select a trial state $x_t\in\Omega$ with equal probability. Note that we have taken $W^{\star}_{i,j}=1/3\ \forall\ i,j$. Calculate $w=f(x_t)/f(x_0)$. Accept the trial state as the next state with a probability given by the minimum of $(1,w)$. If accepted set $x_1=x_t$; if rejected set $x_1=x_0$; and proceed. Thus we get an ensemble of states. Perron’s theorem is a special case of a more general theorem on the eigenvectors and eigenvalues of non-negative matrices proved later by Frobenius. A proof of the [**Frobenius theorem**]{} can be found in the book on matrices by Gantmacher [@GANTMACHER]. As per this theorem, asymptotic convergence to the equilibrium vector is possible even under a weaker condition. Some or several elements of the transition matrix $W$ can be zero. In other words $W_{i,j} \ge 0\ \forall\ i,j$ and of course $\sum_i W_{i,j}= 1$. In such cases it is adequate that $\left( W^m \right)_{i,j}> 0\ \forall \ i,j $ for some $m< \infty$; $m$ may depend on $i$ and $j$. Physically this means that any state is accessible from any other state in a finite number of steps. This weaker condition helps us make a better choice of the trial matrix $W^{\star}$. Some or several elements of this matrix can be zero. In practical terms it means that the trial state $x_t$ can be selected randomly from a small neighbourhood of the current state $x_i$. Thus we set $x_t = x_i + \eta_i $ where $\eta_i$ is a random integer from $-\epsilon $ to $+\epsilon$. In this approach, it may so happen that the trial value lies outside the range of $x$. In such a situation, keep generating new trial values until you get one within the range. The choice of $\epsilon$ is crucial. If $\epsilon $ is too large, then the fraction of the accepted trials would be too small and the sampling poor and inefficient.
On the other hand if $\epsilon $ is too small, then even though a large number of trials would get accepted, the value of $x$ would remain close to the starting point over several trials and hence not span the entire range of $x$ quickly and efficiently. A good criterion is to fix $\epsilon$ such that the acceptance is between $30\%$ and $50\%$. [**Assignment 20**]{}\ Employ the Metropolis sampling technique to sample from a) the binomial distribution and b) the Poisson distribution. In the above we have discussed the Metropolis technique with respect to discrete distributions. The method can be readily extended to sample from a continuous distribution. Fig. \[METROPOLIS\_PS\] depicts the results of sampling from an exponential density employing Metropolis sampling. A very large number of transformations, tricks, and formulae have been discovered for sampling from a variety of non-uniform distributions. I shall not discuss them here. These techniques are scattered over several publications on Monte Carlo and its applications to different disciplines. [**Assignment 21**]{}\ Employ the Metropolis technique to sample from the distribution $f(\theta)=\exp\left[ a\cos(\theta)\right]$, for $0\le \theta \le \pi$, where $a$ is a parameter. The optimal value of $\epsilon$ that would give $30\%$ to $50\%$ acceptance would depend on the value of $a$. Compare your results with the exact distribution. ANALOGUE MONTE CARLO ==================== Consider the evaluation of the expectation value (also known as the mean) of a function $h$ of the random variable $X$. Formally we have, $$\mu=\int_{-\infty}^{+\infty} h(x)f(x)dx,$$ where $f(x)$ is the probability density function of the random variable $X$; $h(x)$ is usually called the score function. How do we estimate $\mu$? Let us first consider [**analogue simulation**]{}. A Monte Carlo simulation is called analogue simulation if it does not employ any variance reduction devices.
Simulations that employ variance reduction techniques we shall term biased simulations. In analogue Monte Carlo, we sample randomly a sequence $\{ x_i\ :i=1,2,\cdots ,N\}$ from the density $f(x)$ and write, $$\bar{h}_N = {{1}\over{N}}\sum_{i=1}^{N} h(x_i) .$$ In the limit $N\to\infty$, $\bar{h}_N \to\mu$. Also, by the Central Limit Theorem, in the limit $N\to\infty$, the probability density of $\bar{h}_N$ tends to a Gaussian with mean $\mu$ and variance $\sigma^2 /N$, where $$\sigma^2 = \int_{-\infty}^{+\infty}\left[ h(x)-\mu\right]^2f(x)dx .$$ Thus we say that the [**analogue Monte Carlo**]{} estimate of $\mu$ is given by $\bar{h}_N \pm \sigma/\sqrt{N}$, where $\pm\sigma/\sqrt{N}$ defines the one-sigma confidence interval. This means that with a probability $p$ given by, $$\begin{aligned} p &=& {{\sqrt{N} }\over{\sigma\sqrt{2\pi} } } \int_{\mu-(\sigma/\sqrt{N})}^{\mu +(\sigma/\sqrt{N})} \exp\left[ -{{N(x-\mu)^2}\over{2\sigma^2}}\right] dx\nonumber\\ \ &=& {{1}\over{\sqrt{2\pi}}} \int_{-1}^{+1}\exp \left[ -{{x^2}\over{2}}\right] dx\nonumber\\ \ &=& 0.68268 ,\end{aligned}$$ we expect $\bar{h}_N$ to lie within $\pm \sigma/\sqrt{N}$ of $\mu$, if $N$ is sufficiently large. First we note that we do not know $\sigma$. Hence we approximate it by its Monte Carlo estimate $S_N$, given by $$S^2 _N = {{1}\over{N}}\sum_{i=1}^{N}h^2 (x_i ) -\left[ {{1}\over{N}} \sum_{i=1}^{N} h(x_i )\right] ^2 .$$ The quantity $\pm S_N /\sqrt{N}$ is called the [**statistical error**]{}. Notice that the sample size $N$ must be large for the above estimate of the error (and of course the mean) to hold well. Sometimes it would be worth the effort to test whether the sample mean has acquired a normal distribution; see for example the test devised by Shapiro and Wilk [@SHAP]. Normality tests are useful in a biased Monte Carlo simulation. The statistical error decreases only as the inverse of the square root of the sample size, which is rather slow.
For example, if we want to decrease the statistical error by a factor of two we must increase the sample size by a factor of four. The computing time increases linearly with the sample size. Hence one often finds that analogue Monte Carlo is practically impossible. But there is a way out: resort to techniques that reduce the variance without altering the mean. These are called [**variance reduction techniques**]{}, and in what follows I shall describe the basic principle behind variance reduction techniques through what is called [**importance sampling**]{}. IMPORTANCE SAMPLING ==================== Importance sampling helps us sample from the important regions of the sample space. Consider the problem described in the last section. We sample $\{ x_i :i=1,2,\cdots ,N\}$ from an importance density $g(x)$ instead of the analogue density $f(x)$. To preserve the mean we define a modified score function $H(x)$ given by, $$H(x)=h(x){{f(x)}\over{g(x)}}.$$ The expectation value of $H$ is evaluated over the importance density $g(x)$; this is identically equal to the expectation value of the original score function $h$ over the analogue density $f(x)$: $$\begin{aligned} \mu (H)&=&\int_{-\infty}^{+\infty} H(x)g(x) dx\nonumber\\ \ &=&\int_{-\infty}^{+\infty} h(x) {{f(x)}\over{g(x)}} g(x) dx\nonumber\\ \ &=&\int_{-\infty}^{+\infty} h(x)f(x)dx\nonumber\\ \ &=&\mu(h) .\end{aligned}$$ Thus we sample $\{ x_i :i=1,2,\cdots ,N\}$ from $g(x)$ and calculate the ratios $\{ f(x_i)/g(x_i)\} $. The biased Monte Carlo estimate of $\mu$ is given by $$\bar{H}_N={{1}\over{N}}\sum_{i=1}^{N}h(x_i ){{f(x_i )}\over{g(x_i )}} .$$ In the limit $N\to\infty$, $\bar{H}_N\to\mu$. Let us now calculate the statistical error associated with $\bar{H}_N$. It is adequate if we consider the second moment, since we have formally shown that the mean is preserved under importance sampling.
We have $$\begin{aligned} \label{m2bh} M_{2}^{B} (H) &=& \int_{-\infty}^{+\infty} H^2 (x)g(x) dx\nonumber\\ \ &=& \int_{-\infty}^{+\infty} {{h(x) f(x)}\over{g(x)}} {{h(x)f(x)}\over{g(x)}} g(x) dx\nonumber\\ \ &=&\int_{-\infty}^{+\infty} \left[ {{f(x)}\over{g(x)}}\right] h^2 (x) f(x) dx .\end{aligned}$$ For the analogue simulation, $$M_{2}^{A} (h)=\int_{-\infty}^{+\infty} h^2 (x)f(x) dx .$$ Thus if we choose the importance density $g(x)$ properly and ensure that the ratio $f(x)/g(x)$ is, on the average, substantially less than unity, then we can make $M_{2}^{B} (H)$ much less than $M_{2}^{A}(h)$. This in essence is the basic principle of variance reduction techniques. Thus, sampling from an importance density helps us estimate the mean with a much better statistical accuracy for a given sample size. In fact it is due to variance reduction techniques that Monte Carlo simulation of many problems has become possible. Exponential Biasing ------------------- Let me illustrate the use of an importance function on a simple problem where all the relevant quantities can be calculated analytically. The problem has its origin in radiation transport through thick shields. The actual problem of Monte Carlo simulation of the transport is relatively complex and a detailed discussion of it is not relevant for the purpose here. Hence, I shall be brief. Consider particles, neutrons or gammas, incident normally on the left face of a slab of material of thickness $T$. The particle penetrates a distance $x$ into the shield with probability $\exp (-\Sigma x)$ and enters into a collision event at a point between $x$ and $x+dx$ with probability $\Sigma\, dx$; the free path length is thus distributed as $\Sigma \exp (-\Sigma x)$. Here $\Sigma^{-1}$ is the mean free path (mfp). We can measure all distances in units of mfp and hence set $\Sigma =1$. The collision can lead to absorption, in which case the particle history is terminated. On the other hand, if the collision is a scattering event, the history continues.
In a simple-minded picture, the scattering is taken as isotropic. In a one-dimensional model, this means that the particle is directed toward the right or the left face of the slab with equal probability. The particle travels along the scattered direction and has a collision event at some distance. The history continues and is constituted by a series of alternating free flights and collisions. The history ends when the particle is absorbed inside the shield or escapes the shield. When the particle escapes through the right face it scores unity; otherwise the score is zero. The scores from a large number of particle histories are accumulated. The average of these scores is the Monte Carlo estimate of the mean transmission, defined as the fraction of the incident particles that escape through the right face of the shield. The sample fluctuations give an estimate of the statistical error. If the thickness of the shield exceeds $20$ mfp or so, the problem becomes intractable by analogue Monte Carlo. Variance reduction techniques become imperative. An importance sampling technique often used in this context is called [**exponential biasing**]{}. In the simplest version of exponential biasing, the inter-collision distance is sampled from the importance density $b\exp(-bx)$, where the biasing parameter $b$ has to be optimized for minimum variance. For more details on exponential biasing see [@clark]. In this simple problem, we shall be interested in calculating the fraction of the incident particles that escape through the right face of the shield without undergoing any collision whatsoever. This amounts to the following: in analogue simulation, sample a random number from the exponential density. If this exceeds $T$, score unity. Otherwise score zero. Collect the scores from $N$ histories and calculate the mean and the statistical error.
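The analogue recipe just described fits in a few lines of Python (a sketch; distances are in mean free paths):

```python
import math
import random

def analogue_transmission(T, n_histories):
    """Analogue estimate of the uncollided transmission exp(-T):
    sample a free path from the unit exponential density and
    score 1 if it exceeds the slab thickness T, else 0."""
    hits = 0
    for _ in range(n_histories):
        x = -math.log(1.0 - random.random())   # free path length
        if x > T:
            hits += 1
    mean = hits / n_histories
    # one-sigma statistical error of a zero/one (Bernoulli) score
    err = math.sqrt(max(mean * (1.0 - mean), 0.0) / n_histories)
    return mean, err
```

For thick slabs almost every history scores zero, which is precisely why the relative error blows up and the analogue approach fails.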
In the biased simulation we sample $x$ from the importance density function $\hat{b}\exp(-\hat{b}x)$, where we have assumed that we know the value of $b=\hat{b}$, for which the variance is minimum. Let us denote this sampled value of $x$ as $x_i$. If $x_i$ exceeds $T$, then score $w_i=f(x_i)/g(x_i , \hat{b})= \exp [ -x_i (1-\hat{b})]/\hat{b}$. Otherwise score zero. Accumulate the scores from a large number of histories and calculate the mean and the statistical error. Fig. \[IMP\_PS\]a depicts the probability density function $f(x)$ and the score function $h(x)$ for the case with $T=5$ mfp. Fig. \[IMP\_PS\]b depicts the importance density function $g(x,b=\hat{b})$ and the modified score function $H(x)=f(x)h(x)/g(x,\hat{b})$ for the case with $T=5$ mfp. For this problem, all the quantities can be calculated analytically. The expressions for the average and variance are, $$\begin{aligned} \mu &=& \int _0 ^{\infty} h(x)\exp (-x)\ dx=\exp(-T) ,\\ \sigma^2 & = & ( 1-\mu)\ \mu.\end{aligned}$$ Table (6) gives the mean, relative standard deviation and the number of histories that would be required to estimate the mean in an analogue simulation, within $\pm 10\%$ of the exact. \[anasimexact\_tab\] ------ ---------------------- ------------------- ---------------------- $T$ $\mu$ $\sigma/\mu$ $N$ $3$ $4.98\times 10^{-2}$ $4.37$ $1.91\times 10^3 $ $5$ $6.74\times 10^{-3}$ $1.21\times 10^1$ $1.47\times 10^4 $ $10$ $4.54\times 10^{-5}$ $1.48\times 10^2$ $2.20\times 10^6 $ $20$ $2.06\times 10^{-9}$ $2.20\times 10^4$ $4.85\times 10^{10}$ ------ ---------------------- ------------------- ---------------------- : Analogue simulation: exact analytical results We see from Table (6) that as the thickness increases the mean transmission decreases and the relative fluctuation increases. We find that the number of histories required to predict the mean within $\pm 10\%$ statistical error is over $48$ billion for $T=20$ mfp, a task which is clearly impossible on any computer. 
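The biased estimator described above, with the weight $w_i=\exp [ -x_i (1-\hat{b})]/\hat{b}$, can be sketched as follows (Python; the biasing parameter is simply an input here, its optimal value being a separate question):

```python
import math
import random

def biased_transmission(T, b, n_histories):
    """Importance-sampled estimate of exp(-T): sample the path
    length from b*exp(-b*x) and, whenever x exceeds T, score the
    weight f/g = exp(-x*(1-b))/b; otherwise score zero."""
    total = 0.0
    total_sq = 0.0
    for _ in range(n_histories):
        x = -math.log(1.0 - random.random()) / b   # from b*exp(-b*x)
        score = math.exp(-x * (1.0 - b)) / b if x > T else 0.0
        total += score
        total_sq += score * score
    mean = total / n_histories
    err = math.sqrt(max(total_sq / n_histories - mean * mean, 0.0)
                    / n_histories)
    return mean, err
```

With $b<1$ the sampled paths are stretched, so crossings of the slab are frequent and each carries a small weight; the estimator stays unbiased while its variance drops sharply.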
Let us see how the use of importance sampling renders possible the impossible. Under importance sampling, the variance can be calculated analytically and is given by, $$\sigma ^2 (b)= {{ e^{-T(2-b)} }\over{b(2-b)}}-e^{-2T}.$$ It is quite straightforward to calculate the value of $\hat{b}$ for which the variance is minimum. We get, $$\hat{b} = 1+ {{1}\over{T}}-\sqrt{1+{{1}\over{T^2}} }.$$ Table (7) presents $T$, $\hat{b}$, $\sigma /\mu$ and the number of histories required to estimate the mean, under biased simulation, within $\pm 10\%$ of the exact. \[bse\_tab\]

------ ------------- ------------------ ----------
$T$    $\hat{b}$     $\sigma / \mu$     $N$
$3$    $0.28$        $1.95$             $381$
$5$    $0.18$        $2.55$             $651$
$10$   $0.09$        $3.65$             $1329$
$20$   $0.048$       $5.18$             $2687$
------ ------------- ------------------ ----------

: Biased simulation: exact analytical results

We find from Table (7) that the use of importance sampling leads to a considerable reduction of variance, especially for large $T$. Take the case with $T=5$ mfp. The use of importance sampling reduces the statistical error by a factor of five or so. As a consequence a sample of size $651$ is adequate to estimate the mean, whereas analogue simulation would require a sample of size $14,700$. The results for $T=20$ mfp are more dramatic and bring home the need for and power of importance sampling. The statistical error gets reduced by a factor of $4247$. We need to simulate only $2687$ histories to estimate the mean within $\pm 10\%$ of the exact, as compared to the $48$ billion histories required under analogue simulation. We have simulated explicitly $10000$ histories under importance sampling, as follows.
We sample $x_i$ from the importance density $\hat{b}\exp (-\hat{b}x)$, and calculate the mean of the modified score function as, $$\bar{H}_N = {{1}\over{N}}\sum_{i=1}^N H(x_i),$$ where, $$H(x_i)=\left\{ \begin{array}{lll} {{1}\over{\hat{b}}}\exp \left[ -\left( 1-\hat{b}\right) x_i\right] , \ & {\rm if }& x_i\ \ge \ T,\\ & \\ 0, & {\rm if} & x_i\ < \ T. \end{array}\right.$$ The statistical error is calculated as, $$\Delta \bar{H}_N = \pm {{1}\over{\sqrt{N}}}\sqrt{ {{1}\over{N}}\sum_{i=1}^{N}H^2 (x_i)-\bar{H}_N ^2 }.$$ Table (8) gives the estimated mean transmission $\bar{H}_N$, the relative statistical error, and the actual deviation of the Monte Carlo estimate from the exact transmission. \[tenthousand\_tab\]

------ ---------------------- --------------------------------------------------- ------------------------------------------
$T$    $\bar{H}_N$            ${{\Delta \bar{H}_N}\over{\bar{H}_N}}\times 100$    ${{\bar{H}_N -\mu}\over{\mu}}\times 100$
$3$    $4.94\times 10^{-2}$   $\pm 2.0\%$                                         $-0.8\%$
$5$    $6.76\times 10^{-3}$   $\pm 2.6\%$                                         $+0.3\%$
$10$   $4.52\times 10^{-5}$   $\pm 3.7\%$                                         $-0.4\%$
$20$   $1.96\times 10^{-9}$   $\pm 5.4\%$                                         $-4.9\%$
------ ---------------------- --------------------------------------------------- ------------------------------------------

: Results of Monte Carlo simulation of $10,000$ histories with importance sampling and comparison with exact analytical calculations

We observe from Table (8) that we are able to make a good estimate of the mean transmission employing importance sampling. The corresponding results obtained by analogue simulation of $50000$ histories (five times more than what we have considered for the biased simulation) are given in Table (9).
\[fiftythousand\_tab\]

------ ---------------------- --------------------------------------------- -----------------------------------------
$T$    $\bar{h}_N$            ${{\Delta \bar{h}_N}\over{\mu}}\times 100$    ${{\bar{h}_N-\mu}\over{\mu}}\times 100$
$3$    $5.03\times 10^{-2}$   $\pm 1.9\%$                                   $+1.8\%$
$5$    $6.68\times 10^{-3}$   $\pm 5.5\%$                                   $-0.9\%$
$10$   $6.16\times 10^{-5}$   $\pm 57.7\%$                                  $+35.7\%$
$20$   $\cdot$                $\cdot$                                       $\cdot$
------ ---------------------- --------------------------------------------- -----------------------------------------

: Results of analogue simulation of $50,000$ histories and comparison with exact analytical results

We find from Table (9) that analogue simulation of the problem with $T=20$ mfp is impossible. On the average, only one in about $485$ million numbers sampled from the exponential density has a value greater than $20$. The chance of getting a score in a simulation of $50000$ histories is practically nil. Spanier Technique ------------------ In several situations, like the one discussed in section 12.1, it is possible to make a good guess of the shape of the importance function $g(x,b)$, where $b$ is an unknown parameter to be optimized for minimum variance. Spanier [@spanier] has proposed a technique for optimizing the parameter $b$ through processing of the Monte Carlo results from relatively small samples of histories. The technique consists of expressing the second moment for the biased problem, see Eq. (\[m2bh\]), as, $$M_2 (b_i) = \int {{ h(x)f(x)}\over{g(x,b_i)}}\ {{ h(x)f(x)}\over{g(x,\tilde{b})}} \ g(x,\tilde{b})dx\ ,$$ where $\tilde{b}$ is the guess value of the parameter $b$ and $\{ b_i : i=1,2,\cdots ,M\}$ are the prescribed values of $b$ spanning its full range or a conveniently chosen interval in its range. The implementation of [**Spanier’s technique**]{} proceeds as follows. Start with a guess value $\tilde{b}$. Generate a set of $N$ values of $x$ by random sampling from the importance density $g(x,\tilde{b})$.
Calculate, $$M_2 (b_j) = {{1}\over{N}} \sum_{i=1}^{N} {{ h(x_i)f(x_i)}\over{g(x_i,b_j)}}\ {{ h(x_i)f(x_i)}\over{g(x_i,\tilde{b})}} \quad {\rm for} \quad j=1,2,\cdots ,M.$$ Thus we get $M_2$ at $M$ distinct values of $b$. Find the value of $b$ at which the second moment is minimum; use this as $\tilde{b}$ in the next iteration and generate a second set of $N$ histories. Repeat the procedure until $\tilde{b}$ converges. $N$ can be small for the purpose of optimizing $b$. Fig. \[SPANIER\_PS\]A presents the results for the problem with $T=20$ mfp at three stages of iteration, starting with an initial guess of $\tilde{b}=0.1$. The number of histories generated at each stage of iteration is $500$. Fig. \[SPANIER\_PS\]B depicts the second moment as a function of $b$ generated with the converged value of $\tilde{b}=0.049$, along with the exact analytical results. Spanier’s technique is quite general and powerful. Applications of this technique to radiation transport problems can be found in [@spanier; @murthy]. Several modifications and improvements to this technique have been recommended; those interested can refer to [@mac]. CLOSING REMARKS ================ In this monograph I have made an attempt to describe the fundamental aspects of Monte Carlo theory in very general terms, without reference to any particular topics like neutron transport or problems in statistical mechanics. An understanding of the fundamental ideas of probability theory: sample space, events, probability of events, random variables, probability distribution, moments, cumulants, characteristic function, [*etc.*]{}, is a must for appreciating the basis of Monte Carlo techniques. I have given a brief introduction to these topics. The Central Limit Theorem and its use in estimating Monte Carlo error bars is an important topic and has been dealt with in detail, preceded by a brief discussion of the Chebyshev inequality and the law of large numbers.
I have also very briefly touched upon the generalization of the Central Limit Theorem, namely the Lévy stable law. A long sequence of pseudo random numbers constitutes the backbone of any Monte Carlo simulation. One often finds that Monte Carlo practitioners remain ignorant of how these numbers are produced, treating the random number generator routine on their personal computer, workstation or supercomputer as a black box. This tendency must go, and the reason is simple. There is no random number generator which is flawless and which would yield random numbers useful for all kinds of simulations. Take, for example, the most popular and widely used of the random number generators, based on linear congruential recursion: it suffers from a very serious defect, namely the lattice and Marsaglia hyperplane structures that appear when you embed the random numbers in two- and higher-dimensional phase space. If your problem is such that the Marsaglia lattice structure would influence your results significantly, then you should seriously consider alternative means of generating pseudo random numbers. I have given a brief outline of these and related issues in this monograph. I have also discussed some tests of randomness of the sequence of pseudo random numbers. Of course, there is absolutely no guarantee that the sequence of pseudo random numbers given by the generator in your machine is adequately random for the particular Monte Carlo application at hand; there can at best be safeguards: carry out tests of randomness, like the uniformity test, correlation tests, the run test, tests based on embedding the sequence in phase space through construction of delay vectors, or any other sophisticated tests you may think of. In fact, the Monte Carlo algorithm you have developed can itself be used to test the randomness of the pseudo random number sequence.
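To see what such an embedding can reveal, consider the infamous RANDU generator (my choice here is purely for illustration). Because its multiplier is $65539 = 2^{16}+3$, every triple of successive outputs satisfies an exact linear relation, which is why the triples fall on a mere handful of Marsaglia planes in the unit cube:

```python
def randu(seed, n):
    """RANDU: x_{k+1} = 65539 * x_k mod 2**31, a historically popular LCG."""
    xs, x = [], seed
    for _ in range(n):
        x = (65539 * x) % 2**31
        xs.append(x)
    return xs

xs = randu(1, 1000)
# Since 65539**2 = 6*65539 - 9 (mod 2**31), successive outputs obey
# x_{k+2} = 6*x_{k+1} - 9*x_k (mod 2**31) for every k:
violations = sum((xs[k + 2] - 6 * xs[k + 1] + 9 * xs[k]) % 2**31 != 0
                 for k in range(len(xs) - 2))
```

Here `violations` is exactly zero: every triple obeys the relation, confining the points in three dimensions to at most fifteen planes. A generator can thus pass one-dimensional uniformity tests and still fail badly in higher dimensions.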
Random sampling from non-uniform distributions constitutes the core of Monte Carlo simulations. I have dealt with inversion, rejection, and Metropolis techniques for sampling from a given distribution. The last of these, namely the Metropolis algorithm, is usually employed in simulations of problems in statistical mechanics. Almost all Monte Carlo simulations employ, in some form or other, techniques of variance reduction. I have tried to convey the basic principles of variance reduction through a discussion of the importance sampling device. I have illustrated the use of an importance function by considering exponential biasing on a simple problem. I have also dealt briefly with the Spanier technique for optimizing the biasing parameter in importance sampling. Let me end by saying that there are several excellent books and review articles on Monte Carlo theory and practice, which you may wish to consult as you continue your Monte Carlo learning. Some of these books that have caught my attention are listed under [@REF]. Let me wish you all a merry time with Monte Carlo games. [999]{} A. Hall, [*On an experimental determination of $\pi$*]{}, Messeng. Math., [**2**]{}, 113 (1873). Francis Galton, [*Natural Inheritance*]{} (1889), cited in H. M. Cundy and A. P. Rollett, [*Mathematical Models*]{}, Oxford University Press, Oxford (1961). L. H. C. Tippett, [*Random Sampling Numbers*]{}, Cambridge University Press (1927), cited in [@crrao]. P. C. Mahalanobis, Dialectica [**8**]{}, 95 (1954), cited in [@crrao]. C. Radhakrishna Rao, [*Statistics and Truth: Putting Chance to Work*]{}, second edition, World Scientific (1997). Lord Kelvin, [*Nineteenth century clouds over the dynamical theory of heat and light*]{}, Phil. Mag. (6), [**2**]{}, 1 (1901). W. Krauth, [*Introduction to Monte Carlo Algorithms*]{}, Cond-mat/9612186 (1996); see http://lps.ens.fr/krauth. J. Bertrand, [*Calcul des Probabilités*]{}, Gauthier-Villars, Paris (1889). A.
Papoulis, [*Probability Theory, Random Variables and Stochastic Processes*]{}, McGraw Hill (1965). W. Feller, [*An Introduction to Probability Theory and its Applications*]{}, Vols. I and II, John Wiley, New York (1968). M. C. Valsakumar, Unpublished notes (1990). P. Lévy, [*Calcul des Probabilités*]{}, Gauthier-Villars, Paris (1925); P. Lévy, [*Théorie de l’addition des variables aléatoires*]{}, Gauthier-Villars, Paris (1937); for a proof in English of the Lévy stable law, see, B. V. Gnedenko and A. N. Kolmogorov, [*Limit Distributions for Sums of Random Variables*]{}, Reading MA, Addison-Wesley (1954). N. G. van Kampen, [*Stochastic Processes in Physics and Chemistry*]{}, North Holland (1981). C. W. Gardiner, [*Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences*]{}, Springer (1985). T. M. John and K. P. N. Murthy, J. Stat. Phys., [**45**]{}, 753 (1986). L. Szilard, Z. Phys., [**53**]{}, 840 (1929); L. Brillouin, J. Appl. Phys., [**22**]{}, 334 (1951); R. Landauer, IBM J. Res. Dev., [**5**]{}, 183 (1960);\ R. J. Solomonoff, Inf. Control, [**7**]{}, 1 (1964); A. N. Kolmogorov, IEEE Trans. Inf. Theor., [**1**]{}, 3 (1965); G. J. Chaitin, J. Assoc. Comput. Mach., [**13**]{}, 547 (1966). A. Wehrl, Rev. Mod. Phys., [**50**]{}, 221 (1978); C. H. Bennett, Int. J. Theor. Phys., [**12**]{}, 905 (1982); K. G. Denbigh and J. S. Denbigh, [*Entropy in relation to incomplete knowledge*]{}, Cambridge University Press, Cambridge, U.K. (1985);\ P. C. W. Davies, [*Physics of Time Asymmetry*]{}, Univ. California Press, Berkeley (1985); G. J. Chaitin, [*Information, Randomness and Incompleteness: papers on Algorithmic information theory*]{}, World Scientific, Singapore (1987) (a collection of Chaitin’s papers). C. H. Bennett, IBM J. Res. Dev., [**32**]{}, 16 (1988); R. Landauer, Nature, [**335**]{}, 779 (1988); W. H. Zurek, Phys. Rev. A[**40**]{}, 4731 (1989); I.
Prigogine, [*From Being to Becoming*]{}, Freeman, San Francisco (1990). C. M. Caves, Phys. Rev. Lett., [**64**]{}, 2111 (1990); C. M. Caves, W. G. Unruh and W. Zurek, Phys. Rev. Lett., [**65**]{}, 1387 (1990); W. H. Zurek (Ed.), [*Complexity, Entropy and the Physics of Information*]{}, Vol. VIII of Santa Fe Institute Studies in the Sciences of Complexity, Addison Wesley (1990); C. H. Bennett, Sci. Am., [**275**]{}, 106 (1997). N. A. Frigerio and N. Clark, Trans. Amer. Nucl. Soc., [**22**]{}, 283 (1975); N. A. Frigerio and N. Clark, [*Toward Truly Random Numbers*]{}, Report ANL/ES-26 Part 4, Argonne National Laboratory (1978). RAND Corporation, [*A million random digits with 100000 normal deviates*]{}, The Free Press (1955). Ivar Ekeland, [*Au hasard*]{}, Éditions du Seuil (1991); English translation: translated by Carol Volk, [*Broken dice and mathematical tales of chance*]{}, The University of Chicago Press (1993). G. J. Chaitin, Sci. Am., [**232**]{}, 47 (1975). J. H. Ahrens, U. Dieter and A. Grube, Computing, [**6**]{}, 121 (1970); D. Knuth, [*The Art of Computer Programming*]{}, Vol. 2, Addison Wesley, Reading, Mass. (1969); J. C. Butcher, Math. Comp., [**15**]{}, 198 (1961). G. Marsaglia, Proc. Nat. Acad. Sci., [**61**]{}, 25 (1968); G. Marsaglia, [*Applications of Number Theory to Numerical Analysis*]{}, Academic, New York (1972). I. Vattulainen, K. Kankaala, J. Saarinen, and T. Ala-Nissila, Comp. Phys. Commun., [**86**]{}, 209 (1995). F. Gutbrod, [*New trends in pseudo random number generation*]{}, Annual Reviews of Computational Physics VI, World Scientific (to be published); G. Marsaglia, [*A Current View of Random Number Generators,*]{} in Proc. Computer Science and Statistics, Sixteenth Symp. on Interface, Elsevier (1985); G. Marsaglia and A. Zaman, Ann. Appl. Probability, [**1**]{}, 462 (1991). A. Bonelli and S. Ruffo, [*Modular Transformations, Order-Chaos Transitions and Pseudo-Random Number Generation*]{}, Int. J. Mod. Phys., C[**9**]{}(4) (1998). D. H.
Lehmer, Ann. Comp. Lab. Harvard Univ., [**26**]{}, 141 (1951). M. Greenberger, Math. Comp., [**15**]{}, 383 (1961); M. Greenberger, Comm. ACM, [**8**]{}, 177 (1965). R. P. Chambers, IEEE Spectrum, [**4**]{}, 48 (1967). W. H. Press, S. Teukolsky, W. T. Vetterling and B. P. Flannery, [*Numerical Recipes*]{}, Cambridge University Press (1992). A. M. Ferrenberg, D. P. Landau and Y. Joanna Wong, Phys. Rev. Lett., [**69**]{}, 3382 (1992). P. Grassberger, Phys. Lett., A[**181**]{}, 43 (1993); L. N. Shchur, J. R. Heringa and H. W. J. Blöte, Physica A[**241**]{}, 579 (1997); I. Vattulainen, T. Ala-Nissila and K. Kankaala, Phys. Rev. Lett., [**73**]{}, 2513 (1994); F. James, Europhys. Lett., [**24**]{}, 24 (1993); L. N. Shchur and H. W. J. Blöte, Phys. Rev. B[**55**]{}, R4905 (1997); F. J. Resende and B. v. Costa, Phys. Rev. E [**58**]{}, 5183 (1998); R. M. D’Souza, Y. Bar-Yam, and M. Kardar, Phys. Rev. E [**57**]{}, 5044 (1998). J. Eichenauer and J. Lehn, Statist. Papers, [**27**]{}, 315 (1986). J. Eichenauer-Hermann, Math. Comp., [**60**]{}, 375 (1993). A. de Mattis, [*Order out of Chaos in Arithmetics*]{}, in G. Maino, L. Fronzoni and M. Pettini (Eds.), [*Dynamical Symmetries and Chaotic behaviour in Physical Systems*]{}, World Scientific, Singapore (1989). M. Lüscher, Comput. Phys. Commun., [**79**]{}, 100 (1994); S. C. Pathak and S. S. Rao, Cond-mat/931004 (1993). J. von Neumann, [*Various techniques used in connection with random digits*]{}, U. S. National Bur. Stand. Appl. Math. Ser., No. 12, page 36 (1951); J. H. Ahrens and U. Dieter, [*Computer methods for sampling from the exponential and normal distributions*]{}, Comm. ACM, [**15**]{}, 873 (1972); G. Marsaglia, [*Generating exponential random variables*]{}, Ann. Math. Stat., [**32**]{}, 899 (1961); J. H. Ahrens and U. Dieter, [*Extensions of Forsythe’s method for random sampling from the normal distribution*]{}, Math. Comp., [**27**]{}, 927 (1973); G. E.
Forsythe, [*von Neumann’s comparison method for random sampling from the normal and other distributions*]{}, Math. of Comp., [**26**]{}, 817 (1972). J. F. Fernández and J. Rivero, Comput. Phys., [**10**]{}, 83 (1996). C. S. Wallace, ACM Trans. Math. Software, [**22**]{}, 119 (1996). G. E. P. Box and M. E. Muller, Ann. Math. Statist., [**29**]{}, 610 (1958). M. E. Muller, J. Assoc. Comp. Mach., [**6**]{}, 376 (1959). J. F. Fernández and C. Criado, [*Algorithm for normal random numbers*]{}, cond-mat/9901202 20 Jan 1999; scheduled to appear in Phys. Rev. E [**60**]{}(3) (1999). N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller and E. Teller, J. Chem. Phys., [**21**]{}, 1087 (1953). O. Perron, Math. Ann., [**64**]{}, 1 (1907); F. R. Gantmacher, [*Applications of the theory of matrices*]{}, (English translation: translated by J. R. Brenner), Interscience (1959). G. Bhanot, Rep. Prog. Phys., [**51**]{}, 429 (1988). S. S. Shapiro and M. B. Wilk, Biometrika, [**52**]{}, 591 (1965). F. H. Clark, [*Exponential transform as an importance sampling device - a review*]{}, Report ORNL-RSIC-14, Oak Ridge, USA (1966); H. Kahn, Nucleonics, [**6**]{}, 27 (1950); H. Kahn, Nucleonics, [**6**]{}, 60 (1950); P. S. Nagarajan, P. Sethulakshmi, and C. P. Raghavendran, [*A code for neutron leakage spectra*]{}, Report BARC/I-341, Bhabha Atomic Research Centre, Bombay, India (1975); K. P. N. Murthy, [*Tracklength Biasing in Monte Carlo Radiation Transport - A Review*]{}, in [*Workshop on Monte Carlo Methods*]{}, Bombay, Jan. 10-20 (1982), see Proceedings published by BARC, Bombay, Ed. S. R. Dwivedi (1982), p. 120; K. P. N. Murthy, [*Exponential Biasing - An Update*]{}, in L. V. Krishnan, S. M. Lee and Om Pal Singh (Eds.), [*Proc. Nat. Conf. Radiation Shielding and Protection (RASP-98)*]{}, Allied (1996); L. B. Levitt, Nucl. Sci. Eng., [**31**]{}, 500 (1969); K. P. N. Murthy, Pramana, [**25**]{}, 231 (1985); K. P. N. Murthy and R. Indira, Nucl. Sci. Eng., [**92**]{}, 482 (1986); K. P.
N. Murthy, Ann. Nucl. Energ., [**7**]{}, 389 (1980); K. P. N. Murthy, Ann. Nucl. Energ., [**10**]{}, 375 (1983); R. Indira and K. P. N. Murthy, Ann. Nucl. Energ., [**12**]{}, 97 (1985); P. K. Sarkar and M. A. Prasad, Nucl. Sci. Eng., [**70**]{}, 243 (1979); P. K. Sarkar and M. A. Prasad, Nucl. Sci. Eng., [**74**]{}, 52 (1980); S. R. Dwivedi, Nucl. Sci. Eng., [**80**]{}, 172 (1982); S. R. Dwivedi, Ann. Nucl. Energ., [**9**]{}, 359 (1982); R. Indira and T. M. John, Nucl. Sci. Eng., [**94**]{}, 323 (1986); R. Indira, Ann. Nucl. Energ., [**15**]{}, 67 (1988); R. Indira, Ann. Nucl. Energ., [**15**]{}, 261 (1988). J. Spanier, SIAM J. Appl. Math., [**18**]{}, 172 (1970); J. Spanier, [*A New Multistage procedure for Systematic Variance Reduction in Monte Carlo*]{}, Report ORNL-RSIC-29, Oak Ridge, USA (1971). K. P. N. Murthy, Atomkernenergie, [**34**]{}, 125 (1979); R. Indira and K. P. N. Murthy, [*Monte Carlo Simulation of a Benchmark Experiment on Neutron Transport through Thick Sodium*]{}, Report RRC-48, Reactor Research Centre, Kalpakkam, India (1981). D. B. MacMillan, Nucl. Sci. Eng., [**48**]{}, 219 (1972); R. Indira, Ann. Nucl. Energ., [**16**]{}, 571 (1989). J. M. Hammersley and D. C. Handscomb, [*Monte Carlo Methods*]{}, Chapman and Hall, London (1964); N. Metropolis and S. M. Ulam, [*The Monte Carlo Method*]{}, J. Amer. Statist. Assoc., [**44**]{}, 333 (1949); M. H. Kalos and P. A. Whitlock, [*Monte Carlo Methods, Vol. 1: Basics*]{}, John Wiley, New York (1986); P. Bratley, B. L. Fox, and L. E. Schrage, [*A Guide to Simulation*]{}, Springer, New York (1987); B. D. Ripley, [*Stochastic Simulation*]{}, John Wiley, New York (1987); K. Binder (Ed.), [*Applications of Monte Carlo Methods in Statistical Physics*]{}, Springer (1984); Samuel S. M. Wong, [*Computational Methods in Physics and Engineering*]{}, see Chapter 7, pp. 383-485, Prentice Hall (1992). I. M. Sobol, [*The Monte Carlo Method*]{}, Mir, Moscow (1975); F. James, [*Monte Carlo Theory and Practice*]{}, Rep. Prog.
Phys., [**43**]{}, 1145 (1980); E. D. Cashwell and C. J. Everett, [*A Practical Manual of Monte Carlo Method for Random Walk Problems*]{}, Pergamon, New York (1959); J. Spanier and E. M. Gelbard, [*Monte Carlo Principles and Neutron Transport Problems*]{}, Addison Wesley (1969); E. C. McGrath and D. C. Irving, [*Techniques for Efficient Monte Carlo Simulation*]{}, Vols. I, II, and III, Report ORNL-RSIC-38, Oak Ridge National Laboratory (1975); S. R. Dwivedi (Ed.), [*Proc. Workshop on Monte Carlo Methods*]{}, BARC, Bombay (1982). [**About the author...**]{}\ K. P. N. Murthy was born in Chennai and graduated from the Vivekananda College, Mylapore. He is presently a member of the Theoretical Studies Section of the Materials Science Division, Indira Gandhi Centre for Atomic Research, Kalpakkam. K. P. N. Murthy obtained his master’s degree in Physics from the University of Bombay for his thesis on Monte Carlo Radiation Transport. He did his doctoral work at the Central University of Hyderabad and obtained his Ph. D. in theoretical physics for his thesis on fluctuation phenomena in model non-equilibrium systems. His fields of interest include random walks, Monte Carlo methods, stochastic processes, statistical physics, relaxation phenomena, radiation transport, regular and anomalous diffusion, disordered systems, time series analysis, and chaos.
---
abstract: 'The prolific rise in autonomous systems has led to questions regarding their safe instantiation in real-world scenarios. Failures in safety-critical contexts such as human-robot interactions or even autonomous driving can ultimately lead to loss of life. In this context, this paper aims to provide a method by which one can algorithmically test and evaluate an autonomous system. Given a black-box autonomous system with some operational specifications, we construct a minimax problem based on control barrier functions to generate a family of test parameters designed to optimally evaluate whether the system can satisfy the specifications. To illustrate our results, we utilize the Robotarium as a case study for an autonomous system that claims to satisfy waypoint navigation and obstacle avoidance simultaneously. We demonstrate that the proposed test synthesis framework systematically finds those sequences of events (tests) that identify points of system failure.'
author:
- 'Prithvi Akella, Mohamadreza Ahmadi, Richard M. Murray, and Aaron D. Ames$^{1}$ [^1] [^2]'
bibliography:
- 'collected\_works.bib'
nocite: '[@video]'
title: |
    **Formal Test Synthesis for Safety-Critical Autonomous Systems\
    based on Control Barrier Functions**
---

Introduction
============

Autonomous systems have become increasingly pervasive in our everyday life, whether that be through the rise in interest in autonomous vehicles [@autonomous_vehicle], intelligent defense systems [@Autonomous_Swarm_NAVY], or even human/robot interaction [@HRI_ahmadi]. This rise in prevalence has motivated a similar increase in questions regarding the efficacy of these systems in safety-critical contexts. These questions are not entirely unfounded: even when system efficacy has been carefully verified, horrific accidents still occur, *e.g.,* recent autonomous car crashes.
Nonetheless, the field is still pushing forward rapidly, and in the future, these autonomous systems will have to deal with even more complex, dynamic, and relatively unstructured environments. Coupled with the cost of failure, this increase in system complexity makes systematic test and evaluation of these systems all the more necessary. Significant work on this issue has been carried out by the test and evaluation (T&E) community. Reachability analysis has been used to shape critical test cases from existing data [@TE_model_based_unadaptive_1]. At the discrete level, rapidly-exploring random trees (RRT) have been used to efficiently search a feasible space to find critical sequences that identify failure of the underlying controller [@TE_model_based_unadaptive_2]. Tests based on a graph-search framework over clustered, critical situations have been developed via exhaustive mission simulation of the underlying system [@TE_model_based_unadaptive_3]. Each of the aforementioned methods is model-based and, being exhaustive, not easily adaptable to other systems or testing environments. To address the issue of adaptivity, one approach adaptively samples the feasible space to generate successively harder tests [@TE_specific_adaptive_1]. That being said, the aforementioned frameworks require an accurate system model to function well, and except for the latter contribution, none are easily adaptable. However, as noted in a memo by the Department of Defense [@DOD_article], a testing framework that is both adaptive/adversarial and formally guarantees safety is still highly sought after. Prior work in the T&E community references formal methods as a means by which one can formally guarantee safety, or the lack thereof (see [@TE_formal_methods_CPS]). Formal methods, specifically linear and signal temporal logic (LTL & STL), have garnered significant interest in the controls community (see [@Safety_Logic1; @Safety_Logic2; @Safety_Logic3; @Safety_Logic4; @ahmadi2020barrier]).
In each of these cases, the logical specification encodes a control objective whose satisfaction is formally guaranteed via the barrier-based controller. In this respect, control barrier functions are very useful in formally guaranteeing these logical specifications insofar as satisfying a specification and remaining within a safe set are both set-based arguments [@TAC_Paper]. However, these formal guarantees require specific knowledge of the onboard controller and system dynamics - for the test engineer, this is oftentimes not the case. ![The flowchart for the proposed test generation framework. The framework starts at (bottom left) collecting data for the system to be tested in order to (center top) estimate CBFs corresponding to the control objectives the system intends to satisfy. These estimated CBFs are used in a minimax game to (bottom right) generate tests designed to verify system efficacy in satisfying the aforementioned objective. This test generation framework is designed to systematically identify (center middle) points of system failure that may occur during general operation.[]{data-label="fig::Title_flowchart"}](CDC_Fig1.pdf){width="49.00000%"} [ **Our Contribution:**]{} In this paper, the overarching goal is to start to bridge the work done in the controls and the T&E community. Specifically, we address the issue of designing an adaptable/adversarial testing framework. Given an autonomous system with some operational specifications, we construct a minimax problem whose solution defines testing scenarios intended to optimally frustrate satisfaction of the given specifications without specific knowledge of the onboard control architecture. To this end, we begin by collecting data of the autonomous system satisfying the specifications. Then, we use the collected demonstration data to frame Linear Programs that develop approximate Control Barrier Functions corresponding to the operational specifications of the autonomous system. 
Finally, we use these approximate control barrier functions to develop a minimax game to solve for optimal testing parameters designed to frustrate satisfaction of the specifications. The proposed method is illustrated in Figure \[fig::Title\_flowchart\]. [ **Outline:**]{} In Section \[sec:probform\], we review some preliminary definitions and formally define the problem under study. In Section \[sec::Main\_Result\], we detail the main result of the paper, *i.e.,* a minimax game for test generation. In Section \[sec::corollaries\], we couple the result with a linear program to systematically generate difficult tests. Finally, in Section \[sec::simulations\_and\_experiments\], we illustrate our proposed methodology with a case study. Problem Formulation {#sec:probform} =================== In this section, we present some notions used in the sequel and formally define the problem under study. Preliminaries {#sec::problem_setup} ------------- We consider a class of systems (to-be-tested) that can be modeled as a dynamical system with affine inputs: $$\label{dyn_sys} \dot{x} = f(x) + g(x)u, \quad x \in \mathcal{X} \subset \mathbb{R}^n, \quad u \in \mathcal{U} \subset \mathbb{R}^m.$$ Furthermore, we will assume that both $f(x)$ and $g(x)$ are locally Lipschitz. For any function $h(x)$, $$\begin{aligned} L_fh(x) &\triangleq \nabla_xh(x)f(x), \\ L_gh(x) &\triangleq \nabla_xh(x)g(x),\end{aligned}$$ are its Lie derivatives. [ **Formal Methods:**]{} We will define $\mathcal{A}$ to be the set of atomic propositions from which the provided control objective, *i.e.,* a temporal logic specification, has been synthesized. We use the following notation to represent the truth/lack thereof for an atomic proposition $$\forall \phi \in \mathcal{A}, \quad \llbracket \phi \rrbracket \triangleq \{x \in \mathcal{X} | \phi(x) = \mathrm{TRUE} \},$$ where $\phi(x)$ denotes the atomic proposition evaluated at the state $x$. 
In addition, we will define the symbols $\neg, \wedge, \lor$ to correspond to negation, conjunction, and disjunction respectively. That is, $\neg \phi = $ TRUE when $\phi = $ FALSE. Likewise $\phi \wedge \omega =$ TRUE when $\phi = $ TRUE and $\omega = $ TRUE, and $\phi \lor \omega =$ TRUE when either $\phi=$ TRUE or $\omega = $ TRUE. In this paper, we consider a subset of temporal logic (TL) operators, $\operatorname*{\mathbf{F}}$uture and $\operatorname*{\mathbf{G}}$lobal, defined as follows (here $\equiv$ denotes a logical equivalency): $$\begin{aligned} \operatorname*{\mathbf{F}}\phi & \equiv \exists~t^*\geq 0~\mathrm{s.t.~}x(t^*)\in\llbracket \phi \rrbracket,\\ \operatorname*{\mathbf{G}}\phi & \equiv \forall~t\geq 0,~x(t)\in\llbracket \phi \rrbracket.\end{aligned}$$ While this seems restrictive, these two operators can be composed to consider more complex LTL specifications, such as $\square \lozenge \phi \equiv \operatorname*{\mathbf{G}}(\operatorname*{\mathbf{F}}\phi)$. [ **Control Barrier Functions (CBF):**]{} To provide a metric by which we measure satisfaction of the provided specification, we will establish a correspondence between these TL specifications and control barrier functions, $h$. To start, we first define extended class-$\mathcal{K}$ functions, $\alpha: (-b,a) \to (-\infty,\infty)$, to be those functions, $\alpha$, that are strictly increasing and satisfy $\alpha(0) = 0$. Here, $a,b>0$. Using these extended class-$\mathcal{K}$ functions, we can define Control Barrier Functions (CBF). 
\[Control Barrier Functions (CBF)\] \[def::cbf\] *For a dynamical system of the form , a differentiable function, $h: \mathbb{R}^n \to \mathbb{R}$, is considered a control barrier function if it satisfies the following criterion: $$\label{} \sup\limits_{u\in\mathcal{U}} \left[L_fh(x) + L_gh(x)u + \alpha(h(x)) \right] \geq 0,\quad \forall x \in \mathcal{X},$$ where $\alpha$ is an extended class-$\mathcal{K}$ function [@TAC_Paper].* The usefulness of a CBF is in guaranteeing the forward invariance of its 0-superlevel set: $$\begin{aligned} & \mathcal{C}_h & & \hspace{-0.9 in}= \{x \in \mathbb{R}^n~|~ h(x) \geq 0 \}, \\ & \partial\mathcal{C}_h & &\hspace{-0.9 in}= \{x \in \mathbb{R}^n~|~ h(x) = 0 \}.\end{aligned}$$ Indeed, it was shown in Proposition 1 of [@TAC_Paper] that a CBF, as in Definition \[def::cbf\], guarantees forward invariance of its 0-superlevel set, $\mathcal{C}_h$. Here, we note that what we call control barrier functions are termed *zeroing control barrier functions* in [@wang2017safety]. Finally, a finite time convergence control barrier function (FTCBF) requires $\alpha(x) =\gamma\operatorname*{\mathrm{sign}}(x)|x|^\rho$ and ensures finite time convergence to the set $\mathcal{C}_h$ within time $T = \frac{1}{\gamma(1-\rho)}|h(x_0)|^{1-\rho}$, provided $h(x_0) \leq 0$ [@Finite_CBF].

Problem Statement {#sec::problem_statement}
-----------------

As mentioned earlier, the overarching test and evaluation goal is to validate an autonomous system’s capacity to satisfy a provided TL specification. However, as we have no knowledge of the controller on-board the system to-be-tested, not only do we lack a metric quantifying success for the TL specification, we also lack a systematic method of developing difficult tests by which to identify control system failures in satisfying the specification. We will show in the sequel that there exists a correspondence between CBFs and TL specifications.
So, if we could determine these CBFs for the system at hand, we can use them to test the system against a given specification. This chain of reasoning is the basis for Figure \[fig::Title\_flowchart\]. To that effect, we collect the following experimental data of the system satisfying the control objective: \[Data-Set\] \[demonstrations\] *Define $\mathbb{D}_i = \{ (x^i_k,u^i_k) \in \mathbb{R}^n\times\mathbb{R}^m~|~k = 0,1,\dots,T_i \}$ as the data-set of state, action pairs for demonstration, $i$. Here, $k$ indexes time until $T_i$, which is the max time for the specific demonstration at hand. Then, define $\mathbb{D} = \{ \mathbb{D}_1, \dots, \mathbb{D}_r \}$ as the set of all provided demonstrations.* \[Labeling\] For the provided data-set, $\mathbb{D}$, and associated specification, the data-set for each demonstration, $\mathbb{D}_i$, terminates when the specification is satisfied. *e.g.* for a specification defined as $\operatorname*{\mathbf{F}}\phi \wedge \operatorname*{\mathbf{G}}\omega$, where $\phi,\omega \in \mathcal{A}$, then for each $\mathbb{D}_i$, - $x^i_{T_i} \in \llbracket \phi \rrbracket$ and $x^i_k \not \in \llbracket \phi \rrbracket$ for all $k = 0,1,\dots,T_i-1$, and - $x^i_k \in \llbracket \omega \rrbracket$ for all $k = 0,1,\dots,T_i$. We use the generated data-set, $\mathbb{D}$, to determine composite CBFs that mimic system behavior. We compose these CBFs from a candidate set of barrier functions defined as follows: \[Candidate Barrier Set\] *We call $$\mathcal{B} \triangleq \{h_1, h_2, \dots, h_q \},$$ a candidate barrier set for some provided, continuously differentiable functions, $\{h_i\}_{i=1}^q$.* Note that in the above definition, each component of the candidate barrier set may not be a valid CBF, *i.e.* $\mathcal{B}$ could be the set of all polynomials of degree, $n\leq q-1$. Finally, we need to formalize how we specifically identify these testing scenarios. 
\[Testing Parameters\] \[testing\_parameters\] *We define the vector, $d\in\mathbb{R}^p$, to be a collection of testing parameters used to generate tests *e.g.* the location of obstacles, the time when a phenomenon starts, *etc*.* With these definitions in place, the problem statement is as follows: \[main\_problem\] *Given an autonomous system whose controller is unknown, $\mathbb{D}$, $\mathcal{B}$, and a TL specification the system intends to satisfy, devise an adaptive/adversarial strategy to identify a set of testing parameters $d$.* We show in the next section that these test parameters $d$ characterize a test scenario designed to validate that the autonomous system reliably satisfies a given TL specification.

Main Result {#sec::Main_Result}
===========

This section will detail the main result of this paper - the minimax game formulated to generate optimal test parameters, $d^*$, designed to frustrate satisfaction of a TL specification expressed through CBFs.

Main Result {#main-result}
-----------

To preface the main result, we will make the following remark to simplify notation: \[approximate\_barrier\_labeling\]*We denote $h^F_i,~i\in \mathcal{I}$ to be a set of CBFs for a finite number of specifications of the type $\mathbf{F}\phi_i$. Likewise, $h^G_j,~j \in \mathcal{J}$ denote CBFs for specifications of the type $\mathbf{G}\omega_j$. That is, $\mathcal{C}_{h^F_i} \equiv \llbracket \phi_i \rrbracket$, $\forall~ i\in\mathcal{I}$, and $\mathcal{C}_{h^G_j} \equiv \llbracket \omega_j \rrbracket$, $\forall~j\in\mathcal{J}$.* In addition, we will make the following assumption to simplify the formulations in the sequel. \[test\_restriction\] *We will assume that the CBFs $h^G_j,~j\in \mathcal{J}$ depend on a set of test parameters $d$.
That is, $h^G_j:\mathbb{R}^n\times\mathbb{R}^p \to \mathbb{R}$ and $\dot{h}^G_j: \mathbb{R}^n\times\mathbb{R}^p\times\mathbb{R}^m \to\mathbb{R}$, whereas $h^F_i: \mathbb{R}^n \to \mathbb{R}$.* We will define the following set of feasible inputs: $$\begin{aligned} & \mathcal{U}(x,d) = \label{eqn::feasible_set} \\ & \{ u \in \mathcal{U}~|~\dot{h}^G_j(x,u,d) \geq -\alpha_j( h^G_j(x,d)),~\forall~j\in\mathcal{J} \}, \nonumber \end{aligned}$$ where each $\alpha_j$ is the corresponding extended class-$\mathcal{K}$ function with respect to which $h^G_j$ is a CBF. Likewise, we will define: $$x(t)|_{u(t)} \triangleq x(0) + \int_0^t \left(f(x(s))+g(x(s))u(s)\right) ds,$$ to be the solution to equation  provided the input signal, $u(t)$. Next, we will make the following assumption to frame the type of specifications accounted for by the testing framework to be detailed: We assume that the provided TL specification can be recast into the following form: $$\label{eqn::sys_specification} \left[\lor_{i\in\mathcal{I}} \left( \operatorname*{\mathbf{F}}\phi_i \right) \right] \wedge \left[\wedge_{j\in\mathcal{J}} \left( \operatorname*{\mathbf{G}}\omega_j \right) \right],~\phi_i,\omega_j\in\mathcal{A}~\forall~i,j,$$ with the following initial conditions: $$\begin{aligned} & \lor_{i\in\mathcal{I}}(\phi_i(x(0))) & & \hspace{-0.8 in} =\mathrm{FALSE}, \label{eqn::initially_not_F}\\ & \wedge_{j\in\mathcal{J}}(\omega_j(x(0))) & & \hspace{-0.8 in} =\mathrm{TRUE}. \label{eqn::initially_G}\end{aligned}$$ Intuitively, specifications of type  denote control objectives wherein the system must ensure continued satisfaction of multiple control objectives while accomplishing at least one of a subset of tasks, *e.g.* navigating to one of multiple waypoints while avoiding all obstacles. Equations  and indicate that the system does not start in trivial states, wherein the specification  has already been satisfied.
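As a concrete illustration of these semantics (a sketch only; the discrete-time sampled-trajectory abstraction and the function names are ours, not part of the formal development), a specification of this form can be checked on a recorded trajectory as follows:

```python
def sat_F(traj, phi):
    """F phi: phi holds at some sampled state along the trajectory."""
    return any(phi(x) for x in traj)

def sat_G(traj, omega):
    """G omega: omega holds at every sampled state along the trajectory."""
    return all(omega(x) for x in traj)

def sat_spec(traj, phis, omegas):
    """[ OR_i F phi_i ] AND [ AND_j G omega_j ], the specification form above."""
    return any(sat_F(traj, p) for p in phis) and all(sat_G(traj, w) for w in omegas)

# Toy 1-D run: eventually reach x >= 2 while always staying in x >= 0.
traj = [0.0, 0.5, 1.0, 1.5, 2.0]
ok = sat_spec(traj, phis=[lambda x: x >= 2.0], omegas=[lambda x: x >= 0.0])
```

Swapping in an unreachable waypoint (the F-clause) or a violated keep-out condition (a G-clause) flips the result to false, mirroring the mutual exclusivity argument made below for the tester's specification.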
Finally, to account for an adversarial testing framework, we specify that the test parameters are a function of the current state, *i.e.* $d(x)$, where the specific functional form is expressed in Theorem \[algorithmic\_test\_generation\]. Intuitively, the idea is that for tests to be adversarial to system action, they must, necessarily, depend on the system state. Under the notation specified in Remark \[approximate\_barrier\_labeling\], the main result is as follows: \[Algorithmic Test Generation\] \[algorithmic\_test\_generation\] *Given an autonomous system and a TL specification of the form in equation , the solution, $d^*(x)$, to the minimax game: $$\begin{aligned} d^*(x) = & \,\,\,\, \operatorname*{argmin}\limits_{d \in \mathbb{R}^p} & & \hspace{-0.3 in}\max\limits_{u \in \mathcal{U}(x,d)} \, \, \sum\limits_{i \in \mathcal{I}} \dot{h}^F_i(x,u), \label{differential_game}\quad \quad \quad \tag{Minimax}\end{aligned}$$ defines an optimal test parameter sequence, $d^*(x(t))$, predicated on a state trajectory, $x(t)|_{u(t)}$, and the control signal, $u(t)$, *i.e.,* $d^*(x(t))$ identifies a sequence of test scenarios designed to ensure system satisfaction of the following specification:* $$\label{eqn::d_specification} \left[ \wedge_{i\in\mathcal{I}} \left(\operatorname*{\mathbf{G}}\neg \phi_i \right) \right] \lor \left[ \lor_{j\in\mathcal{J}} \left( \operatorname*{\mathbf{F}}\neg \omega_j\right)\right],~\phi_i,\omega_j\in\mathcal{A}~\forall~i,j.$$ Proof of Main Result -------------------- This section contains the lemmas necessary to prove the main result, Theorem \[algorithmic\_test\_generation\]. For all maximization/minimization problems contained within, we specify that infeasibility of the associated optimization problem corresponds to a value of $-\infty,\infty$ respectively. To start, we need to show that TL specification  and TL specification  are mutually exclusive. 
To that end, we have the following Lemma regarding relations between TL operators: \[lemma::TL\_operator\_relations\] *The following relations are true: $$\begin{aligned} \neg \operatorname*{\mathbf{G}}\phi & \equiv \operatorname*{\mathbf{F}}(\neg \phi), \label{eqn::TL_relation_2}\\ \neg \operatorname*{\mathbf{F}}\phi & \equiv \operatorname*{\mathbf{G}}(\neg \phi). \label{eqn::TL_relation_3} \end{aligned}$$* For equation , $$\neg \operatorname*{\mathbf{G}}\phi \equiv \exists t^*\geq 0~|~x(t^*)\in\llbracket \neg \phi \rrbracket \equiv \operatorname*{\mathbf{F}}(\neg\phi).$$ Likewise, for equation , $$\neg\operatorname*{\mathbf{F}}\phi \equiv \forall~t\geq0~x(t)\in\llbracket\neg\phi\rrbracket \equiv \operatorname*{\mathbf{G}}(\neg \phi).$$ Using Lemma \[lemma::TL\_operator\_relations\] and De Morgan’s Law, we can prove that the two TL specifications, and , are mutually exclusive: \[lemma::mutual\_exclusivity\] *TL specifications and are mutually exclusive.* $$\begin{aligned} & \neg \left[ \left[\lor_i \left( \operatorname*{\mathbf{F}}\phi_i \right) \right] \wedge \left[\wedge_j \left( \operatorname*{\mathbf{G}}\omega_j \right) \right] \right] \\ \equiv & \neg \left[\lor_i \left( \operatorname*{\mathbf{F}}\phi_i \right) \right] \lor \neg \left[\wedge_j \left( \operatorname*{\mathbf{G}}\omega_j \right) \right] \\ \equiv & \left[\wedge_i \neg(\operatorname*{\mathbf{F}}\phi_i) \right] \lor \left[ \lor_j \neg(\operatorname*{\mathbf{G}}\omega_j)\right] \\ \equiv & \left[ \wedge_i (\operatorname*{\mathbf{G}}\neg\phi_i)\right] \lor \left[ \lor_j (\operatorname*{\mathbf{F}}\neg \omega_j)\right] \end{aligned}$$ Effectively, Lemma \[lemma::mutual\_exclusivity\] proves that if $d^*(x(t))$ ensures system satisfaction of TL specification , then the sequence of test parameters did identify a system failure insofar as the system failed to satisfy the specification . It remains, however, to show that minimax game  defines a sequence, $d^*(x(t))$, that forces the system to satisfy . 
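The operator relations and the De Morgan step above can be sanity-checked exhaustively on finite Boolean traces, a discrete stand-in for the continuous-time semantics used in the paper:

```python
# Exhaustive check of ¬G φ ≡ F ¬φ and ¬F φ ≡ G ¬φ over all length-4
# Boolean traces (G = "always" = all, F = "eventually" = any).
from itertools import product

def G(trace): return all(trace)
def F(trace): return any(trace)

checked = 0
for trace in product([False, True], repeat=4):
    neg = [not p for p in trace]
    assert (not G(trace)) == F(neg)   # ¬G φ ≡ F ¬φ
    assert (not F(trace)) == G(neg)   # ¬F φ ≡ G ¬φ
    checked += 1
print(checked)   # 16 traces checked
```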
To that end, we have the following Lemma that draws a correspondence between CBFs and TL specifications: \[equivalence\_logic\_safety\] *For an atomic proposition, $\phi \in \mathcal{A}$, if there exists a function, $h_\phi(x)$, such that $\mathcal{C}_{h_\phi} = \llbracket \phi \rrbracket$, then: $$\operatorname*{\mathbf{G}}\phi \equiv h_\phi(x(t)) \geq 0, ~\forall~ t \geq 0,$$ and: $$\operatorname*{\mathbf{F}}\phi \equiv \exists~t^* <\infty~\mathrm{s.t.}~h_\phi(x(t^*)) \geq 0.$$ Furthermore, if $h_\phi(x)$ is a CBF, then $\exists~u(t)$ such that $\operatorname*{\mathbf{G}}\phi = $TRUE. Likewise, if $h_\phi(x)$ is an FTCBF, then $\exists~u(t)$ such that $\operatorname*{\mathbf{F}}\phi = $TRUE.* For $\operatorname*{\mathbf{F}}\phi$: $$\begin{aligned} \operatorname*{\mathbf{F}}\phi & \quad \equiv \quad \exists~0\leq t^* <\infty~\mathrm{s.t.}~x(t^*) \in \llbracket \phi \rrbracket, \nonumber \\ & \quad \equiv \quad \exists~0\leq t^* <\infty~\mathrm{s.t.}~ x(t^*) \in \mathcal{C}_{h_\phi}, \nonumber \\ & \quad \equiv \quad \exists~0\leq t^* <\infty~\mathrm{s.t.}~ h_\phi(x(t^*)) \geq 0. \end{aligned}$$ Hence, if $h_\phi(x)$ is an FTCBF wherein $h(x(0)) \leq 0$, then an input sequence, $u(t)$, that satisfies: $$\begin{aligned} & L_fh(x(t)) + L_gh(x(t))u(t) + \gamma \operatorname*{\mathrm{sign}}(h(x(t)))\left| h(x(t))\right|^\rho \geq 0, \\ & \quad \quad \quad \forall~ t \leq T= \frac{1}{\gamma(1-\rho)}|h(x_0)|^{1-\rho} \end{aligned}$$ ensures $ h(x(T))\geq 0 \implies \operatorname*{\mathbf{F}}\phi = \mathrm{TRUE}. $ $\operatorname*{\mathbf{G}}\phi$ follows similarly. Lemma \[equivalence\_logic\_safety\] provides a metric by which to verify that $d^*(x(t))$ ensures system satisfaction of specification . Specifically, Lemma \[equivalence\_logic\_safety\] requires that $d^*(x(t))$ either ensure $h^F_i(x(t)) < 0$ $\forall~i \in \mathcal{I}$ and $\forall~t\geq0$, or $h^G_j(x(t)) < 0$ for at least one $j\in\mathcal{J}$ and $t\geq 0$. 
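The finite-time bound $T=\frac{1}{\gamma(1-\rho)}|h(x_0)|^{1-\rho}$ used in the proof above can be checked numerically by Euler-integrating the boundary dynamics $\dot{h}=-\gamma\,\mathrm{sign}(h)|h|^{\rho}$; the parameters below are illustrative, not from the paper.

```python
# Numerical check of the FTCBF reach time: starting from h(0) < 0 and
# driving h_dot = -gamma * sign(h) * |h|**rho, h reaches 0 by time T.
import math

gamma, rho, h0 = 1.0, 0.5, -1.0
T = abs(h0) ** (1 - rho) / (gamma * (1 - rho))   # predicted reach time: 2.0

h, t, dt = h0, 0.0, 1e-4
while h < 0.0 and t < 2 * T:
    h += dt * (-gamma) * math.copysign(1.0, h) * abs(h) ** rho
    t += dt
print(t, h)   # t lands close to T
```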
To show this, we require the following definitions for the optimal cost, $s$, optimal input, $u^*$, and optimal test parameter, $d^*$: $$\begin{aligned} & \,\,\, s(x(t),d) & & \hspace{-0.15 in} = & & \hspace{-0.1 in}\max\limits_{u\in\mathcal{U}(x(t),d)} & & \hspace{-0.1 in} \sum\limits_{i\in\mathcal{I}}\dot{h}^F_i(x(t),u), \label{eqn::max_derivative}\\ & u^*(x(t),d) & & \hspace{-0.15 in} = & & \hspace{-0.1 in}\operatorname*{argmax}\limits_{u\in\mathcal{U}(x(t),d)} & & \hspace{-0.1 in} \sum\limits_{i\in\mathcal{I}}\dot{h}^F_i(x(t),u), && \label{eqn::u_optimal}\\ & \,\,\,\,d^*(x(t)) & & \hspace{-0.15 in} = & & \hspace{-0.1 in}\,\,\,\,\operatorname*{argmin}\limits_{d\in\mathbb{R}^p} & & \hspace{-0.1 in} \sum\limits_{i\in\mathcal{I}}\dot{h}^F_i(x(t),u^*(x(t),d)). && \label{eqn::d_optimal}\end{aligned}$$ Here, we note that equation  is a re-casting of equation  accounting for the optimal input, $u^*(x(t),d)$. In addition, we will define the following set of invalidating test parameters: $$\label{eqn::infeasibility_set} \mathcal{D}(x) = \{d \in \mathbb{R}^p~|~\mathcal{U}(x,d)=\varnothing\}.$$ With the above definitions, we have the following Lemma: \[lemma::ensuring\_infeasibility\] *If, for some $x(t)$, $\mathcal{D}(x(t)) \neq \varnothing$, then the optimal solution, $d^*$, to equation  is such that, $d^*\in\mathcal{D}(x(t))$.* First, we note that, $$\label{eqn::infeasibility_yields_infinity} \forall~d \in \mathcal{D}(x(t)),~s(x(t),d) = -\infty.$$ The equation above comes from the infeasibility of maximization problem , which results in a value of $s = -\infty$. Furthermore, equation  is equivalent to: $$\begin{aligned} \label{eqn::recasting_dstar_again} d^*(x(t)) = & \,\,\,\, \operatorname*{argmin}\limits_{d \in \mathbb{R}^p} & & \hspace{-0.6 in} s(x(t),d). 
\end{aligned}$$ Based on the locally Lipschitz assumptions made for $f(x)$ and $g(x)$ in equation  and the requirement that a CBF, $h(x)$, is differentiable at least once, it is true that $$L_fh^F_i(x),L_gh^F_i(x)~\mathrm{are~bounded}~\forall~i\in\mathcal{I}.$$ In addition, $$\forall~u\in\mathcal{U}(x(t),d),~u~\mathrm{is~bounded}.$$ Therefore, $$\dot{h}^F_i(x(t),u) = L_fh(x(t)) + L_gh(x(t))u~\mathrm{is~bounded}~\forall~i\in\mathcal{I}.$$ As defined in equations  and , it is also true that $$s(x(t),d) = \sum\limits_{i\in\mathcal{I}}\dot{h}^F_i(x(t),u^*(x(t),d)).$$ As each component, $\dot{h}^F_i$, is finite and $|\mathcal{I}|<\infty$, the following is true: $$\label{eqn::feasibility_yields_finite} \exists~M < \infty ~\mathrm{s.t.}~\left|s(x(t),d)\right|<M, \quad \forall~d\not\in\mathcal{D}(x(t)).$$ By definition of $\operatorname*{argmin}$ and using equations , , and , we have that $ d^*(x(t))\in\mathcal{D}(x(t)).$ With Lemma \[lemma::ensuring\_infeasibility\], we can show that the sequence, $d^*(x(t))$, attempts to force the system to satisfy $\lor_j (\operatorname*{\mathbf{F}}\neg \omega_j)$. We will show this first for a single $\operatorname*{\mathbf{G}}\omega$: \[lemma::invalidity\_Gomega\] *If, for a given state trajectory, $x(t)$, $\omega(x(0)) = $ TRUE, and $\mathcal{D}(x(t)) \neq \varnothing$ $\forall$ $t\geq0$ with $|\mathcal{J}| = 1$, then: $$\label{eqn::finite_time_invalidation_omega} \forall~\delta>0,~\exists~t^*_\delta \in (0,\infty)~\mathrm{s.t.}~h_\omega(x(t^*_\delta),d^*(t^*_\delta)) < \delta,$$ where $h_\omega$ is the CBF corresponding to $\operatorname*{\mathbf{G}}\omega$.* Via Lemma \[lemma::ensuring\_infeasibility\], we know that $ \forall~t\geq 0,~d^*(x(t)) \in \mathcal{D}(x(t)).
$ As $|\mathcal{J}|=1$, this implies that $\forall~t\geq0$, $$\label{eqn::no_u_exists} \dot{h}_\omega(x(t),u,d^*(x(t))) < -\alpha(h_\omega(x(t),d^*(x(t)))),~\forall~u\in\mathcal{U}.$$ As $\alpha(\cdot)\in\mathcal{K}$ (abbreviating $d^*(x(t))$ to $d^*(t)$): $$h_\omega(x(t),d^*(t)) < \beta(h_\omega(x(0),d^*(0)),t),$$ where $\beta(\cdot)$ is a class-$\mathcal{KL}$ function. As a result: $$\exists~t^*\in(0,\infty)~\mathrm{s.t.}~\beta(h_\omega(x(0),d^*(0)),t^*) \leq \delta,$$ choosing $t_\delta^* = t^*$ completes the proof. Lemma \[lemma::invalidity\_Gomega\] is why we specify that the sequence, $d^*(x(t))$, attempts to force system satisfaction of $\lor_j(\operatorname*{\mathbf{F}}\neg\omega_j)$ as opposed to specifying that it guarantees that the system will satisfy the same specification. As minimax game  constrains system action, $u$, to ensure $\wedge_j(\operatorname*{\mathbf{G}}\omega_j)$, the test sequence can only get $\delta$ close to invalidation assuming optimal system action. This discrepancy will be made clear when compared with Lemma \[lemma::invalidation\_Fphi\]: \[lemma::invalidation\_Fphi\] *If $\phi(x(0)) = $ FALSE and $|\mathcal{I}|=1$, then the test parameter sequence, $d^*(x(t))$, is guaranteed to find a system trajectory, $x(t)|_{u^*(x(t),d^*(x(t)))}$, that satisfies $\operatorname*{\mathbf{G}}\neg\phi$ provided a trajectory exists wherein: $$\begin{aligned} \dot{h}_\phi(x(t),u^*(x(t),d(t))) & \leq 0,~\forall~t\geq 0, \label{eqn::h_phi_always_decreasing}\\ \mathcal{D}(x(t)) & = \varnothing,~\forall~t\geq0, \label{eqn::no_invalidity_set} \end{aligned}$$ for some $d(t)$.* First, we denote $h_\phi(x)$ to be the CBF corresponding to $\operatorname*{\mathbf{F}}\phi$. 
It follows from Lemma \[equivalence\_logic\_safety\] then, $$\label{eqn::phi_false_iff_negative} \phi(x(0)) = \mathrm{FALSE} \equiv h_\phi(x(0)) < 0.$$ From equation , to prove $\operatorname*{\mathbf{G}}\neg\phi$, it is sufficient to prove: $$\label{eqn::general_h_always_decreasing} \dot{h}_\phi(x(t),u(t)) \leq 0, \quad \forall~t\geq 0,$$ as if true: $$\begin{aligned} h_\phi(x(t)) & = h_\phi(x(0)) + \int_0^t \dot{h}_\phi(x(s),u(s)) \mathrm{ds}, \\ & < \int_0^t \dot{h}_\phi(x(s),u(s)) \mathrm{ds} \leq 0, \\ & \implies x(t)|_{u(t)} \in \llbracket \neg \phi \rrbracket,~\forall~t\geq 0~ \equiv \operatorname*{\mathbf{G}}\neg\phi. \end{aligned}$$ As a result, all that remains is to show that equation  is satisfied by $d^*(x(t))$. Here, equation  ensures that the results of Lemma \[lemma::ensuring\_infeasibility\] do not apply, as otherwise $d^*(x(t))\in\mathcal{D}(x(t))$ and we cannot make a statement regarding $s(x(t),d^*)$. Then, by definition of $\operatorname*{argmin}$ and equation  (abbreviating $d^*(x(t))$ to $d^*(t)$): $$\dot{h}_\phi(x(t),u^*(x(t),d^*(t))) \leq \dot{h}_\phi(x(t),u^*(x(t),d(t))),$$ which results in: $$\label{eqn::h_decreasing_under_optimality} \dot{h}_\phi(x(t),u^*(x(t),d^*(t))) \leq 0.$$ From equation  and the sufficiency proof predicated on equation , we have: $$x(t)|_{u^*(x(t),d^*(t))} \in \llbracket \neg \phi \rrbracket,~\forall~t\geq 0 \equiv \operatorname*{\mathbf{G}}\neg\phi.$$ With the aforementioned lemmas, we are now ready to prove Theorem \[algorithmic\_test\_generation\]. \[Theorem \[algorithmic\_test\_generation\]\] If both $|\mathcal{I}| = 1$ and $|\mathcal{J}|=1$, then the result stems directly from Lemmas \[lemma::invalidity\_Gomega\] and \[lemma::invalidation\_Fphi\]. 
First, we note that the following is true: $$\label{eqn::vacuously_true_D} \left( \mathcal{D}(x(t)) = \varnothing \right) \lor \left( \mathcal{D}(x(t)) \neq \varnothing \right) = \mathrm{TRUE},~\forall~t\geq 0.$$ As a result, it is true that $\forall~t \geq 0$, the optimal test parameter sequence, $d^*(x(t))$, attempts to ensure that the following is true: $$\begin{aligned} & \left(\dot{h}_\phi(x(t),u^*(x(t),d^*(x(t))))\leq 0 \right) \lor \label{eqn::CBF_inequality_forall_time} \\ & \left(\dot{h}_\omega(x(t),u,d^*(x(t))) < -\alpha(h_\omega(x(t),d^*(x(t))))~\forall~u\in\mathcal{U}\right) \nonumber \\ & \quad \quad \quad \forall~t\geq 0.\nonumber\end{aligned}$$ Hence, if either $\mathcal{D}(x(t))=\varnothing$ or $\mathcal{D}(x(t)) \neq \varnothing$ persists $\forall~t\geq 0$, then the results of Lemmas \[lemma::invalidity\_Gomega\] and \[lemma::invalidation\_Fphi\] ensure $d^*(x(t))$ attempts to force the system to satisfy $ \operatorname*{\mathbf{G}}\neg \phi \lor \operatorname*{\mathbf{F}}\neg \omega.$ We lose the guarantee on $\operatorname*{\mathbf{G}}\neg\phi$ that we had in Lemma \[lemma::invalidation\_Fphi\] as we can no longer ensure $\mathcal{D}(x(t)) = \varnothing$, $\forall~t\geq0.$ However, whenever $\mathcal{D}(x(t)) = \varnothing$, $d^*(x(t))$ will steer the system away from achieving $\operatorname*{\mathbf{F}}\phi$, if feasible. This same rationale extends to the case wherein $|\mathcal{I}| \neq 1$ and/or $|\mathcal{J}|\neq1$.
Since  holds, for the multi-specification case, $d^*(x(t))$ attempts to ensure: $$\begin{aligned} & \left(\sum_{i\in\mathcal{I}}\dot{h}^F_i(x(t),u^*(x(t),d^*(x(t))))\leq 0 \right) \lor \label{eqn::multi_CBF_inequality_forall_time} \\ & \left(\dot{h}^G_j(x(t),u,d^*(x(t))) < -\alpha_j(h^G_j(x(t),d^*(x(t))))~\forall~u\in\mathcal{U}\right) \nonumber \\ & \quad \quad \quad \forall~t\geq 0,~\mathrm{and~for~at~least~one~}j. \nonumber\end{aligned}$$ For the first inequality in equation , the following implication is true: $$\begin{aligned} & \sum_{i\in\mathcal{I}}\dot{h}^F_i(x(t),u^*(x(t),d^*(x(t))))\leq 0 \implies \label{eqn::decrease_in_at_least_one_phi}\\ & \lor_{i\in\mathcal{I}}\left( \dot{h}^F_i(x(t),u^*(x(t),d^*(x(t)))) \leq 0\right) = \mathrm{TRUE}. \nonumber \end{aligned}$$ Implication  can be deduced from a contradiction. If, for the same implication, we were to assume the LHS of  to be true and the RHS to be false, then: $$\begin{aligned} & \lor_{i\in\mathcal{I}}\left( \dot{h}^F_i(x(t),u^*(x(t),d^*(x(t)))) \leq 0\right) = \mathrm{FALSE} & \implies \\ & \wedge_{i\in\mathcal{I}}\left( \dot{h}^F_i(x(t),u^*(x(t),d^*(x(t)))) > 0\right) = \mathrm{TRUE} & \implies \\ & \sum_{i\in\mathcal{I}}\dot{h}^F_i(x(t),u^*(x(t),d^*(x(t)))) > 0,\end{aligned}$$ which is a contradiction. As a result, $d^*(x(t))$ attempting to ensure equation  is equivalent to saying $d^*(x(t))$ attempts to ensure: $$\begin{gathered} \lor_{i\in\mathcal{I}}\left( \dot{h}^F_i(x(t),u^*(x(t),d^*(x(t)))) \leq 0\right) \lor \label{eqn::d_specification_CBF} \\ \lor_{j\in\mathcal{J}}\bigg(\dot{h}^G_j(x(t),u,d^*(x(t))) < -\alpha_j(h^G_j(x(t),d^*(x(t)))), \\ ~\forall~u\in\mathcal{U}\bigg), \forall~t\geq 0.\end{gathered}$$ Coupled with the initial conditions  and , $d^*(x(t))$ attempting to ensure equation  is equivalent to saying $d^*(x(t))$ attempts to ensure system satisfaction of equation , which is the desired result.
Test Synthesis {#sec::corollaries} ============== This section provides some additions to the main result that make it extensible to the problem at hand. Specifically, we formulate a linear program to extend the results of Theorem \[algorithmic\_test\_generation\] to generate test cases wherein we have no prior knowledge of the controller on-board the system. Likewise, we have a corollary that permits a predictive form of equation  such as the one used to generate the tests in Section \[sec::simulations\_and\_experiments\]. To start, we want to use the results of Theorem \[algorithmic\_test\_generation\] to see if the provided autonomous system satisfies the associated TL specification. However, as we do not know the controller onboard the system, we do not have any CBFs with which to define the minimax game in Theorem \[algorithmic\_test\_generation\]. That being said, Lemma \[equivalence\_logic\_safety\] in the Appendix provides us a method by which to determine these CBFs from the system demonstration data, $\mathbb{D}$. First, we define an estimated CBF (e-CBF) to be a convex combination of component functions in $\mathcal{B}$, where $p_j$ below denote the weights for said combination: $$\label{optimal_combination} \tag{e-CBF} h^*(x) = \sum_{j=1}^{|\mathcal{B}|} p_jh_j(x), \quad \forall~h_j \in \mathcal{B}.$$ By default, Lemma \[equivalence\_logic\_safety\] indicates that specification satisfaction requires the associated CBF to be positive. As a result, we will choose a cost function that is minimized when is most positive over all demonstrations: $$\label{eqn::cost} J(\mathcal{B},x,p) \triangleq -\sum_{i=1}^{|\mathbb{D}|} \sum_{k=0}^{T_i} \sum_{j=1}^{|\mathcal{B}|} p_j h_j(x^i_k). \tag{Cost}$$ Likewise, Assumption \[Labeling\] dictates that demonstrations end upon satisfaction of the control objective. Therefore, for the estimated CBF to correspond to $\operatorname*{\mathbf{F}}$uture type constraints, should be positive at the end of each demonstration. 
Similarly, for $\operatorname*{\mathbf{G}}$lobal type constraints, should be positive over the length of all demonstrations. This results in the following Corollary: \[composite\_CBF\_Lemma\] Assumption \[Labeling\] requires that $\phi =$ TRUE at each $T_i$ for $\mathbb{D}_i$. If a solution to equation  exists with constraint , then we have: $$\begin{aligned} x^i_{T_i} & \in \llbracket \phi \rrbracket, \\ h^*(x^i_{T_i}) & = \sum_{j=1}^{|\mathcal{B}|} p_j^*h_j(x^i_{T_i}) \geq 0, \end{aligned}$$ for any solution, $p^*$. As a result, $$\begin{aligned} \mathcal{C}_{h^*} \cap \llbracket \phi \rrbracket \supseteq \{x^i_{T_i}\} \quad \forall~i=1,2,\dots,r, \end{aligned}$$ which only implies set equivalence up to the provided data. As a result, Lemma \[equivalence\_logic\_safety\] only applies over the provided data-set where the equivalence holds. To show the same for $\operatorname*{\mathbf{G}}$lobal type specifications, replace constraint with and the proof follows similarly. As the CBFs generated via Corollary \[composite\_CBF\_Lemma\] are not exact, the results of Theorem \[algorithmic\_test\_generation\] cannot be guaranteed. However, they are very useful in generating tests as will be shown in Section \[sec::simulations\_and\_experiments\]. Secondly, the minimax game in Theorem \[algorithmic\_test\_generation\] may be non-convex and/or calculation of the solution may be computationally difficult. However, as the minimax game depends only on the current state, we can calculate the optimal test parameters for some subset of points and define the actual test to be an interpolation of the parameters defined at these points. That being said, this yields sub-optimal tests. Finally, the minimax problem  need not have that specific cost function for it to determine optimal test parameters. It suffices if the chosen cost function for the inner maximization problem is maximized when the estimated CBFs $h^F_i(x) \geq 0$. 
This permits predictive games of the form used in the simulations in Section \[sec::simulations\_and\_experiments\]. Case Study {#sec::simulations_and_experiments} ========== In this section, we detail simulations of test scenarios devised by our framework applied to the Georgia Tech Robotarium [@Robotarium]. The setup is defined next. [ **System Specification**]{} $\mathbf{F}g_i \wedge_j \mathbf{G}\neg a_j$. The system will always ensure that agent $i$ ends up at goal location $i$ while avoiding all the other agents $j$, provided that $\llbracket g_i \rrbracket \not \subseteq \cup_j \llbracket a_j \rrbracket$. For this specification, we have the following information:

- $\mathbb{D}$: A set of twenty demonstrations ($r=20$) derived from simulations wherein a single agent successfully navigates to a predetermined goal while avoiding another obstacle.
- $\mathcal{B}_F$: A set of norm-based barrier functions of the form $h(x) = -\|x - c\| + r$ wherein $h(x) \geq 0 \equiv \|x-c\| \leq r$.
- $\mathcal{B}_G$: A set of norm-based barrier functions of the form $h(x) = \|x - c\| - r$ wherein $h(x) \geq 0 \equiv \|x-c\| \geq r$.

Equation  identified: $$\begin{aligned} h^*_g(x) &= - \|x-g\| + 0.02, \\ h^*_o(x,d) &= \|x-d\| - 0.175,\end{aligned}$$ as the estimated CBFs for $\operatorname*{\mathbf{F}}g_i$ and $\operatorname*{\mathbf{G}}\neg a_j$, respectively. Here, $d$ is the desired location of the obstacle agent, and is the testing parameter we control.
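As a toy illustration of how such e-CBF weights are selected — with made-up one-dimensional demonstration data and candidate barriers, and a grid search standing in for the linear program:

```python
# Toy e-CBF fit: choose convex weights p over two candidate barriers so
# that h* = p*h1 + (1-p)*h2 is most positive over the demonstrations
# (minimising the Cost), subject to h* >= 0 at each demo endpoint.
# All data and candidate functions below are invented for illustration.

def h1(x): return -abs(x) + 0.5   # candidate goal barrier, radius 0.5
def h2(x): return -abs(x) + 0.1   # candidate goal barrier, radius 0.1

demos = [[0.9, 0.5, 0.2, 0.05], [0.8, 0.4, 0.1, 0.02]]   # each ends near the goal

def cost(p):
    return -sum(p * h1(x) + (1 - p) * h2(x) for d in demos for x in d)

def endpoint_ok(p):
    return all(p * h1(d[-1]) + (1 - p) * h2(d[-1]) >= 0 for d in demos)

grid = [i / 100 for i in range(101)]
p_star = min((p for p in grid if endpoint_ok(p)), key=cost)
print(p_star)   # 1.0: all weight lands on the barrier most positive over the data
```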
We estimated that the robots in the Robotarium can be sufficiently modeled with single-integrator systems and developed the following game from the derived barrier functions: $$\begin{aligned} d^* = & \, \, \, \, \operatorname*{argmin}\limits_{d \in \mathbb{R}^2} & \max\limits_{u \in \mathcal{U}} & \, \, \sum_{i=1}^{N}h^*_g(x_i), \label{experiment_game} \tag{Simulation Game} \\ & \operatorname*{\mathrm{subject~to}}& & \, \, \dot{h}^*_o(x_{i-1},u_i,d) \geq -\beta h^*_o(x_{i-1},d), \nonumber \\ & & & \, \, x_i = x_{i-1} + u_i\Delta t, \nonumber\\ & & & \,\, \dots\forall~i=1,2,\dots,N, \nonumber \\ & & & \, \, \|d-x_0\| \geq r_o. \nonumber \end{aligned}$$ In equation  above, $N=2$, $\beta = 100$, and $r_o = 0.175$. For large values of $\beta$, there is less of an implied assumption about system behavior as it decays to the boundary of the estimated safe region. As a result, large $\beta$ values permit equation  to account for a wider range of system behavior when solving for $d^*$. In addition, $r_o$ constrains against trivial solutions wherein $d=x_0$, which makes the inner maximization problem infeasible. To quantify how “hard” a test/demonstration is, we define: - $H^i_g \triangleq \frac{1}{T_i+1} \sum_{k=0}^{T_i} \left| \hat{h}_g(x_k)\right|$ to be the average time the system spent outside the goal. Here, $\hat{h}_g$ denotes a normalized version of our estimated CBF, $h^*_g$, such that $-1 \leq \hat{h}_g(x_k) \leq 0$, $\forall$ $k = 0,1,\dots T_i$, and $T_i$ is the max time for our Demonstrations/tests as defined in Definition \[demonstrations\]. - $H^i_o \triangleq 1 - \frac{1}{T_i+1} \sum_{k=0}^{T_i} \hat{h}_o(x_k)$ to be the average time spent collision free. Here, $\hat{h}_o$ denotes a normalized version of our estimated CBF, $h^*_o$ such that $0 \leq \hat{h}_o(x_k) \leq 1$, $\forall$ $k = 0,1,\dots,T_i$. To note, tests drive $H^i_o \to 1$ in an effort to drive $H^i_g \to 1$ which denotes system safety failure and inability to reach the goal, respectively. 
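The hardness metrics can be computed directly from a logged trajectory; the normalised barrier values below are invented for illustration.

```python
# Hardness metrics for one trajectory, using normalised stand-ins for the
# estimated CBFs: hg_hat in [-1, 0] (0 = at goal), ho_hat in [0, 1]
# (0 = in collision). These sample values are made up.

hg_hat = [-1.0, -0.8, -0.5, -0.2, 0.0]   # trajectory approaches the goal
ho_hat = [1.0, 1.0, 0.6, 0.9, 1.0]       # one near-collision along the way

T = len(hg_hat) - 1
H_g = sum(abs(v) for v in hg_hat) / (T + 1)   # time-averaged distance from goal
H_o = 1.0 - sum(ho_hat) / (T + 1)             # time-averaged collision proximity
print(H_g, H_o)   # ≈ 0.5, 0.1
```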
Figures \[fig::simulation\_crashes\] and \[fig::Metrized Data\] show the results of simulations based on the obstacle locations output by minimax game . For the multi-agent case, examples of the provided demonstration data are shown in the two stacked figures to the far left. Under normal operating parameters, the agents successfully avoid the obstacles while moving to their respective goals (none of the red lines intersect the blue circles). However, when the stationary obstacle locations are updated based on solutions to , multiple crashes occur as shown in the two stacked figures just left of center. Likewise, for the single-agent simulations shown, we supplied the desired obstacle location, $d$, as the goal location for a secondary agent. This agent acted as a moving obstacle, and for $2/20$ tests simulated, the trajectories for both agents are shown in the four figures on the right of Figure \[fig::simulation\_crashes\]. In both of these cases, notice how the estimated CBF, $h^*_o$, decays to $0$ upon termination. Effectively, in both of these cases, the test framework chose a sequence of obstacle locations, $d^*(x(t))$, that forced the system to satisfy $\operatorname*{\mathbf{F}}a_j$, at least with respect to the estimated CBF, $h^*_o$. Data for all $20$ single-agent simulations are compared against the provided data-set, $\mathbb{D}$, in Figure \[fig::Metrized Data\]. Under normal operation, the demonstration data is relatively consistent, *i.e.* $H^i_g$ hovers just below $0.4$ and $H^i_o$ hovers just around $0.7$ for all demonstrations, $i=1,2,\dots,20$. However, for all test simulations, $H^i_g > 0.4$ and $H^i_o > 0.8$, further corroborating that the test parameter sequence generates difficult tests, and in the $7/20$ cases wherein $H^i_o=1$, the tests also forced the system to satisfy specification . An example of an experimental demonstration of the test framework can be seen in an accompanying video (linked here: [@video]).
The setup here mimics the same single-agent case shown in the examples in Figure \[fig::simulation\_crashes\]. Conclusion ========== In this paper, we attempt to solve the problem of test and evaluation for verification and validation of autonomous systems, wherein the specific controllers are unknown. The goal in doing so is to provide a mathematical framework designed to root out system deficiencies in an effort to ensure confidence in those systems that pass the procedure. The method detailed involves estimation of approximate control barrier functions to frame a minimax game that is guaranteed to choose test parameters to frustrate system satisfaction of a provided temporal logic specification. In the future, we aim to extend this work to richer specification classes and formalize an iterative testing procedure based on our framework. [^1]: $^*$ This work was supported by the Air Force Office of Scientific Research. [^2]: $^{1}$ The authors are with the California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125, USA. [`pakella@caltech.edu`](mailto:pakella@caltech.edu), [`mrahmadi@caltech.edu`](mailto:mrahmadi@caltech.edu), [`murray@cds.caltech.edu`](mailto:murray@cds.caltech.edu), [`ames@caltech.edu`](mailto:ames@caltech.edu)
--- abstract: 'In this paper, we propose a new method to identify biochemical reaction networks (i.e. both reactions and kinetic parameters) from heterogeneous datasets. Such datasets can contain (a) data from several replicates of an experiment performed on a biological system; (b) data measured from a biochemical network subjected to different experimental conditions, for example, changes/perturbations in biological inductions, temperature, gene knock-out, gene over-expression, etc. Simultaneous integration of various datasets to perform system identification has the potential to avoid non-identifiability issues typically arising when only single datasets are used.' author: - 'Wei Pan, Ye Yuan$^*$, Lennart Ljung, Jorge Gonçalves and Guy-Bart Stan [^1] [^2] [^3] [^4] [^5]' title: Identifying Biochemical Reaction Networks From Heterogeneous Datasets --- Introduction ============ The problem of identifying biological networks from experimental time series data is of fundamental interest in systems and synthetic biology [@claire2015cdc]. For example, such information can aid in the design of drugs or of synthetic biology genetic controllers. Tools from system identification [@ljung1999system] can be applied for such purposes. However, most system identification methods produce estimates of model structures based on data coming from a single experiment. The interest in identification methods able to handle several datasets simultaneously is twofold. Firstly, with the increasing availability of “big data” obtained from sophisticated biological instruments, e.g. large ‘omics’ datasets, attention has turned to the efficient and effective integration of these data and to the maximum extraction of information from them. 
Such datasets can contain (a) data from replicates of an experiment performed on a biological system of interest under identical experimental conditions; (b) data measured from a biochemical network subjected to different experimental conditions, for example, different biological inducers, temperature, stress, optical input, gene knock-out and over-expression, etc. The challenges for simultaneously considering heterogeneous datasets during system identification are: (a) the system itself is unknown, i.e. neither the structure nor the corresponding parameters are known; (b) it is unclear how heterogeneous datasets collected under different experimental conditions influence the “quality” of the identified system. Secondly, in control or synthetic biology applications, the systems to be controlled typically need to be modelled first. Highly detailed or complex models are typically difficult to handle using rigorous control design methods. Therefore, one typically prefers to use simple or sparse models that best capture the dynamics expressed in the collected data. The identification and use of simple or sparse models inevitably introduces model class uncertainties and parameter uncertainties [@kaltenbach2009systems; @vanlier2013parameter]. To assess these uncertainties, replicates of multiple experiments are typically necessary. Our approach is based on the concept of sparse Bayesian learning [@tipping2001sparse; @wipf2011latent] and on the definition of a unified optimisation problem allowing the consideration of different parameter values for different experimental conditions, and whose solution is a model consistent with all datasets available for identification. The ability to consider various datasets simultaneously can potentially avoid non-identifiability issues arising when a single dataset is used [@ingolia2008systems].
Furthermore, by comparing the identified parameter values associated with different conditions, we can pinpoint the influence specific experimental changes have on system parameters. The notation in this paper is standard and can be found in SI Section \[app:notation\]. Model {#sec:identification} ===== We consider dynamical systems described by nonlinear differential/difference equations with additive noise: $$\begin{aligned} \delta({x}{_{nt}}) &=\bff_n(\bx_t,\bu_t)\bv_n+\xi{_{nt}}\quad n =1, \ldots, n_{\bx} \\ &=\sum\nolimits_{s=1}^{N_{n}}v_{ns}f_{ns}(\bx_t,\bu_t)+\xi{_{nt}}, \label{eq:expansion} \end{aligned}$$ where $\delta({x}{_{nt}}) = \dot{x}{_{nt}}$ for continuous-time systems; $\delta({x}{_{nt}}) = {x}{_{nt}}\text{ or } {x}{_{nt}}-{x}{_{n, t-1}}$ or some *known* transformation of historical data for discrete-time systems; $v_{ns} \in {\mathbb{R}}$ are coefficients and $f_{ns}(\bx_t,\bu_t): \mathbb{R}^{n_\bx+n_\bu}\rightarrow \mathbb{R}$ are basis functions that govern the dynamics. To ensure existence and uniqueness of solutions, the functions $f_{ns}(\bx_t,\bu_t)$ are assumed to be Lipschitz continuous. Note that we do not assume *a priori* knowledge of the form of the nonlinear functions appearing on the right-hand side of the equations in , e.g. whether the degradation obeys first-order or enzymatic catalysed dynamics or whether the proteins are repressors or activators. When data are sampled, we assume the data matrix and first derivative/difference data matrix satisfying (\[eq:expansion\]) can be obtained as $$\begin{aligned} \begin{bmatrix} x_{11} & \ldots & x_{n_\bx 1} \\ \vdots & \ddots & \vdots \\ x_{1M} & \ldots & x_{n_\bx M} \\ \end{bmatrix}~\text{and}~\begin{bmatrix} \delta({x}_{11}) & \ldots & \delta({x}_{n_\bx 1}) \\ \vdots & \ddots & \vdots \\ \delta({x}_{1M}) & \ldots & \delta({x}_{n_\bx M}) \\ \end{bmatrix} \label{datamatrix} \end{aligned}$$ respectively.
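As a minimal sketch of the regression these data matrices set up — a made-up scalar system $\dot{x}=v_1x+v_2x^2$, a two-column dictionary matrix, and the normal equations solved by hand (with no sparsity-inducing prior, unlike the sparse Bayesian approach taken in the paper):

```python
# Made-up scalar example: dx/dt = v1*x + v2*x**2 with true v = (-1.0, 0.5).
# Build the dictionary matrix from samples and recover v by solving the
# 2x2 normal equations (A^T A) v = A^T y directly.

xs = [0.1 * k for k in range(1, 11)]              # sampled states
y = [-1.0 * x + 0.5 * x * x for x in xs]          # noise-free derivative data
A = [[x, x * x] for x in xs]                      # dictionary columns f1, f2

a11 = sum(r[0] * r[0] for r in A)
a12 = sum(r[0] * r[1] for r in A)
a22 = sum(r[1] * r[1] for r in A)
b1 = sum(r[0] * yk for r, yk in zip(A, y))
b2 = sum(r[1] * yk for r, yk in zip(A, y))
det = a11 * a22 - a12 * a12                       # Cramer's rule for 2 unknowns
v1 = (a22 * b1 - a12 * b2) / det
v2 = (a11 * b2 - a12 * b1) / det
print(v1, v2)   # ≈ -1.0, 0.5
```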
Based on these defined data matrices, the system in (\[eq:expansion\]) can be written as $ \mathbf{y}_n=\dicc_n\bv_n+\bXi_n, \ n=1,\ldots,n_{\bx}, $ where $\by_n \define\left[\delta({x}_{n1}),\ldots,\delta({x}_{n{M}})\right]^\top\in {\mathbb{R}}^{M\times 1}$, $\bv_n \define \left[v_{n1},\ldots,v_{nN_n}\right]^\top \in {\mathbb{R}}^{N_n\times 1}$, $\bXi_n \define \left[\xi_{n1},\ldots,\xi_{nM}\right]^\top\in {\mathbb{R}}^{M\times 1}$, and the dictionary matrix $\mathbf{\dicc}_n\in {\mathbb{R}}^{M\times N_n} $ with its $j$-th column being $[f_{nj}(\bx_1,\bu_1), \ldots, f_{nj}(\bx_M,\bu_M)]^{\top}$. The noise or disturbance vector $\bXi_n$ is assumed to be Gaussian distributed with zero mean and covariance $\Cov \in \R_{+}^{M \times M}$ [^6]. The identification goal is to estimate $\bv_n$ in the linear regression formulation $\mathbf{y}_n=\dicc_n\bv_n+\bXi_n, \ n=1,\ldots,n_{\bx}$. Two issues need to be raised here. The first one is the selection of the basis functions $f_{ns}(\cdot,\cdot)$, which is key to the success of identification. Some discussion on this can be found in SI Section \[app:basis\], especially for biochemical reaction networks. The second one is the estimation of the first derivative data matrix, which is not trivial. In SI Section \[app:deri\], we provide a method to estimate first derivatives from noisy time-series data. If a total number of $C$ datasets are collected from $C$ independent experiments, we put a subscript $\left[c\right]$ to index the identification problem associated with the specific dataset obtained from experiment $[c]$. In what follows, we gather in a matrix $\Anc$ similar to $\dicc_n$ the set of *all* candidate/possible basis functions that we want to consider during the identification. The identification problem is then written as: $$\begin{aligned} \ync=\Anc\wnc+\noisenc, \ \ n=1,\ldots,n_{\bx}, \ c = 1, \ldots, C.
\label{problem:0} \end{aligned}$$ The solution $\wnc$ to  is typically sparse, mainly because non-relevant and/or non-independent basis functions may be introduced in $\Anc$. Since the $n_{\bx}$ linear regression problems in (\[problem:0\]) are independent, for simplicity of notation we omit the subscript $n$ used to index the state variable and simply write: $$\yc=\Ac\wc+\noisec, c = 1, \ldots, C, \label{problem:single}$$ in which, $$\begin{aligned} \Ac &\define \left[\Ac_{:,1}, \ldots, \Ac_{:,N}\right] \\ & = \left[ \begin{array}{ccc} f_{1}(\bx^{[c]}_1) &\ldots & f_{N}(\bx^{[c]}_1) \\ \vdots & & \vdots \\ f_{1}(\bx^{[c]}_{{M^{[c]}}})&\ldots & f_{N}(\bx^{[c]}_{{M^{[c]}}}) \end{array} \right] \in \R^{{M^{[c]}} \times N}, \\ \wc &\define \left[w_1^{\left[c\right]}, \ldots, w_N^{\left[c\right]}\right]^\top \in \R^N,\\ \boldsymbol{\xi}^{\left[c\right]} &\triangleq \left[\xi^{\left[c\right]}_1, \ldots, \xi^{\left[c\right]}_{{M^{[c]}}}\right]^\top \in \R^{M^{[c]}}, \end{aligned}$$ where $\bx^{[c]}_t= \left[x^{[c]}_{1t}, \ldots, x^{[c]}_{n_{\bx} t}\right] \in \R^{n_{\bx}}$ is the state vector at time instant $t$. It should be noted that $N$, the number of basis functions, i.e. the number of columns of the dictionary matrix $\Ac \in \mathbb{R}^{{M^{[c]}} \times N}$, can be very large. Without loss of generality, we assume $M^{[1]} = \cdots=M^{[C]} = M$. The model class considered in  can be enlarged in various ways. First, measurement noise, which is ubiquitous in practice, can be accounted for using the following linear measurement equation: $$\begin{aligned} z_t = x_t+\epsilon_t, \label{eq:measurement} \end{aligned}$$ where the measurement noise $\epsilon_t$ is assumed i.i.d. Gaussian. Under this formulation, the noise-contaminated data $z_t$ represent the collected data rather than $x_t$ in . Second, the additive stochastic term $\xi_t$ in  is often used to model dynamic noise or diffusion.
In many practical applications, however, it is necessary to account for multiplicative rather than additive noise. Multiplicative noise can be accounted for by replacing  with $\dot{x}_{t} =f(\bx_t,\bu_t)\bv+h(\bx_t,\bu_t)\xi{_{t}}.$ In SI Section \[app:noise\], we show how the framework presented here can be modified to encompass these extensions. Identification from multiple datasets {#sec:multiple} ===================================== To ensure reproducibility, experimentalists repeat their experiments under the same conditions, and the collected data are then called “replicates”. Typically, only the average value over these replicates is used for modelling or identification purposes. In this case, however, only the first moment is used, and the information provided by higher-order moments is lost. Moreover, when data originate from different experimental conditions, it is usually very hard to combine the datasets into a single identification problem. This section addresses these issues by showing how several datasets can be combined to define a unified optimisation problem whose solution is an identified model consistent with the various datasets available for identification. To consider heterogeneous datasets in one single formulation, we stack the various equations in  (see Eq. ). Each stacked equation in Eq.  corresponds to a replicate or an experiment performed by changing the experimental conditions on the same system.
[$$\begin{aligned} \left[ \begin{array}{c} \by^{\left[1\right]}\\ \vdots \\ \by^{\left[C\right]} \end{array} \right] & = \myunderbrace{ \left[ \begin{array}{ccc|c|ccc} \dic ^{\left[1\right]} _{:,1} &\ldots & \dic ^{\left[1\right]}_{:,N} & & & & \\ & & & \ddots & & &\\ & & & & \dic ^{\left[C\right]}_{:,1} &\ldots & \dic ^{\left[C\right]}_{:,N} \end{array} \right] }{C~\textbf{Blocks}} \left[ \begin{array}{c} \bw^{\left[1\right]}\\\hline \vdots \\ \hline \bw^{\left[C\right]} \end{array} \right] + \left[ \begin{array}{c} \bxi^{\left[1\right]}\\\hline \vdots \\\hline \bxi^{\left[C\right]} \end{array} \right]\\ &= \myunderbrace{ \left[ \begin{array}{ccc|c|ccc} \dic^{\left[1\right]}_{:,1}& & & & \dic^{\left[1\right]}_{:,N}& &\\ & \ddots & & \ddots & & \ddots &\\ & & \dic ^{\left[C\right]}_{:,1}& & & & \dic^{\left[C\right]}_{:,N} \end{array} \right] }{N ~\textbf{Blocks}} \left[ \begin{array}{c} w_1^{\left[1\right]} \\ \vdots \\ w_1^{\left[C\right]} \\ \hline \vdots\\ \hline w_N^{\left[1\right]}\\ \vdots \\ w_N^{\left[C\right]} \end{array} \right] + \left[ \begin{array}{c} \bxi^{\left[1\right]}\\\hline \vdots \\\hline \bxi^{\left[C\right]} \end{array} \right] = \left[ \begin{array}{c|c|c} \dic_1&\cdots &\dic_N \end{array} \right] \left[ \begin{array}{c} \bw_1\\\hline \vdots \\\hline \bw_N \end{array} \right] + \left[ \begin{array}{c} \bxi^{\left[1\right]}\\\hline \vdots \\\hline \bxi^{\left[C\right]} \end{array} \right]. \label{problem:stack} \end{aligned}$$]{} In Eq. , $\dic_i = \blkdiag[\dic^{\left[1\right]}_{:,i}, \ldots, \dic^{\left[C\right]}_{:,i}]$, and $\bw_i = [w_i^{\left[1\right]}, \ldots, w_i^{\left[C\right]}]^\top$, for $i = 1, \ldots, N$. Based on the stacked formulation given in Eq.  
we further define $$\begin{aligned} \by &= \left[ \begin{array}{c} \by^{\left[1\right]}\\ \vdots \\ \by^{\left[C\right]}\\ \end{array} \right], \dic = \left[ \begin{array}{c|c|c} \dic_1&\cdots &\dic_N \end{array} \right],\\ \bw &= \left[ \begin{array}{c} \bw_1\\\hline \vdots \\\hline \bw_N \end{array} \right], \bxi = \left[ \begin{array}{c} \bxi^{\left[1\right]}\\\hline \vdots \\\hline \bxi^{\left[C\right]} \end{array} \right], \label{problem:stack:definition} \end{aligned}$$ which gives $$\begin{aligned} \by = \dic \bw +\bxi. \label{problem} \end{aligned}$$ This yields a formulation very similar to that presented previously for a single linear regression problem. However, in the multi-experiment formulation , there is now a special block structure for $\by$, $\dic$ and $\bw$. When $\bw^{\left[c\right]}$ is fixed to be $\bw $ for all the experiments, i.e. $ \bw^{\left[1\right]}= \cdots = \bw^{\left[C\right]} = \bw$, we can formulate the identification problem as a single linear regression problem by concatenation: [ $$\begin{aligned} \left[ \begin{array}{c} \by^{\left[1\right]}\\ \vdots \\ \by^{\left[C\right]} \end{array} \right] &= \left[ \begin{array}{c} \dic^{\left[1\right]}\\ \vdots \\ \dic^{\left[C\right]} \end{array} \right] \bw + \left[ \begin{array}{c} \bxi^{\left[1\right]}\\ \vdots \\ \bxi^{\left[C\right]} \end{array} \right]. \label{problem:cat} \end{aligned}$$ ]{} To incorporate prior knowledge into the identification problem, it is often important to be able to impose constraints on $\bw$. In biological systems, positivity of the parameters constituting $\bw$ is an example of such constraints. Another motivation for constrained optimisation comes from stability considerations. Typically, the underlying system is known *a priori* to be stable, especially if this system is a biological or physical system. Many stability conditions can be formulated as convex optimisation problems, e.g.
Lyapunov stability conditions expressed as Linear Matrix Inequalities [@boyd1987linear], the Gershgorin circle theorem for linear systems [@horn1990matrix], etc. Only a few contributions in the literature address the problem of how to take *a priori* information on system stability into account during system identification [@cerone2011enforcing; @zavlanos2011inferring]. To be able to integrate constraints on $\bw$ into the problem formulation, we consider the following assumption on $\bw$. \[assumption-constraints\] Constraints on the weights $\bw$ can be described by a set of convex functions: $H^{[I]}_{i}(\bw)\leq0$, $\forall i$; $H^{[E]}_{j}(\bw)=0$, $\forall j$, where the convex functions $H^{[I]}_{i}: \R^{N}\rightarrow \R$ define inequality constraints, whereas the convex functions $H^{[E]}_{j}: \R^{N}\rightarrow \R$ define equality constraints. Methods ======= To estimate $\bw$ in , we use Bayesian modelling and treat all unknowns as stochastic variables with certain probability distributions [@bishop2006pattern]. For $\by=\dic \bw+\bXi$, it is assumed that the stochastic variables in the vector $\bXi$ are Gaussian distributed with *unknown* covariance matrix $\Cov$, i.e., $\bXi\thicksim\bN(\mathbf{0}, \Cov)$. In what follows we consider the following variable substitution for the inverse of the unknown covariance matrix, or precision matrix: $\Covinv \define \Cov^{-1}.$ In this case, the likelihood of the data given $\bw$ is [$$\begin{aligned} \Prob(\by|\bw) &=\mathcal{N}(\by|{\dic} {\bw},\Cov) \propto\exp \left[ -\frac{1}{2}(\dic\bw-\by)^{\top}\Covinv (\dic\bw-\by)\right].
\label{Likelihood} \end{aligned}$$]{} Sparsity Inducing Priors ------------------------ In Bayesian models, a prior distribution $\Prob(\bw)$ can be defined as $\Prob(\bw) = \prod_{i = 1}^{N} \Prob(\bw_i)$ where $ \Prob(\bw_i)\propto\exp \left[-\frac{1}{2}\sum_{j=1}^{C}g(w_i^{\left[j\right]})\right]=\prod_{j=1}^{C}\exp \left[-\frac{1}{2}g(w_i^{\left[j\right]})\right]=\prod_{j=1}^{C}\Prob(w_i^{\left[j\right]}), $ with $g(w_i^{\left[j\right]})$ being a given function of $w_i^{\left[j\right]}$. Generally, $\bw$ in  is sparse, and therefore certain sparsity properties should be enforced on $\bw$. To this effect, the function $g(\cdot)$ is usually chosen to be a concave, non-decreasing function of $|w_i^{\left[j\right]}|$ [@wipf2011latent]. Examples of such functions $g(\cdot)$ include Generalised Gaussian priors and Student’s *t* priors (see [@palmer2006variational; @wipf2011latent] for details). Computing the posterior mean $\E(\bw|\by)$ is typically intractable because the posterior $\Prob(\bw|\by)$ is highly coupled and non-Gaussian. To alleviate this problem, one would ideally like to approximate $\Prob(\bw|\by)$ by a Gaussian distribution, for which efficient algorithms to compute the posterior exist [@bishop2006pattern]. For this, the introduction of lower-bounding *super-Gaussian* priors $\Prob(w_i^{\left[j\right]})$, i.e., $ \Prob(w_i^{\left[j\right]}) =\max_{\gamma_{i} >0}\bN(w_i^{\left[j\right]}|0,\gamma_{i})\hyperprior(\hyper_i) \label{singlepriors}, $ can be used to obtain an analytical approximation of $\Prob(\bw|\by)$ [@palmer2006variational]. Note that problem  has a block-wise structure, i.e. the solution $\bw$ is expected to be block-wise sparse. Therefore, sparsity-promoting priors should be specified for $\Prob(\bw_i)$, $\forall i$.
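To build intuition for why such variational priors promote sparsity, consider the scalar case with a flat potential, $\hyperprior(\hyper)=1$ (an illustrative choice, not the specific potential used later): maximising $\bN(w|0,\gamma)$ over $\gamma>0$ is attained at $\gamma^\star = w^2$, so the resulting bound scales like $1/|w|$, a heavy-tailed, sparsity-inducing profile. A minimal numerical check of this sketch:

```python
import numpy as np

def gauss(w, gamma):
    # scalar Gaussian density N(w | 0, gamma)
    return np.exp(-w**2 / (2.0 * gamma)) / np.sqrt(2.0 * np.pi * gamma)

# For a flat potential phi(gamma) = 1, the variational bound
#   p(w) = max_{gamma > 0} N(w | 0, gamma) * phi(gamma)
# is attained at gamma* = w^2 (set d/dgamma log N(w|0,gamma) = 0),
# giving p(w) proportional to 1/|w|.
w = 0.7
gammas = np.linspace(1e-4, 5.0, 200000)   # fine grid over gamma
vals = gauss(w, gammas)
gamma_star = gammas[np.argmax(vals)]      # should be close to w**2 = 0.49
```

The maximiser found on the grid matches the closed-form stationary point $\gamma^\star = w^2$, confirming the scale-mixture mechanism that the block-wise construction below exploits.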
To do this, for each block $\bw_i$, we define a hyper-parameter $\hyper_i$ such that $$\begin{aligned} \Prob(\bw_i) & =\max_{\gamma_{i} >0}\bN(\bw_i|\mathbf{0},\gamma_{i}\bI_C)\hyperprior(\hyper_i) \\ &=\max_{\gamma_{i} >0}\prod_{j=1}^{C}\bN(w_i^{\left[j\right]}|0,\gamma_{i})\hyperprior(\hyper_i), \label{priors} \end{aligned}$$ where $\hyperprior(\hyper_i)$ is a nonnegative function, which is treated as a hyperprior with $\hyper_i$ being its associated hyperparameter. Throughout, we call $\hyperprior(\hyper_i)$ the “*potential function*”. This Gaussian relaxation is possible if and only if $\log \Prob(\sqrt{w_i})$ is concave on $(0,\infty)$. Defining $$\begin{aligned} \Hyper_{i} &= \left[\gamma_{i}, \ldots, \gamma_{i} \right]\in \R^{C}, \ \bG_{i} =\diag\left[\Hyper_{i}\right], \\ \Hyper&= \left[\Hyper_{1}, \ldots, \Hyper_{\n}\right]\in \R^{\n C}, \ \bG = \diag\left[\Hyper\right], \label{eq:definegamma} \end{aligned}$$ we have [$$\begin{aligned} \Prob(\bw) = \prod_{i=1}^{\n} \Prob(\bw_i) =\max_{\Hyper > \mathbf{0}} \bN(\bw|\mathbf{0},\bG) \hyperprior(\Hyper). \label{Prior} \end{aligned}$$ ]{} Cost Function {#sec:costfunction} ------------- Combining the Gaussian likelihood in  with the variational prior in , the unknowns $\bw$, $\bgamma$ and $\Covinv$ can be obtained by solving the optimisation problem $\min_{\bw, \bgamma, \Covinv} \mathcal{L}(\bw, \bgamma, \Covinv)$, where [$$\begin{aligned} \mathcal{L}(\bw, \bgamma, \Covinv) = &- \log |\Covinv| +\log |\bG| +\log |\bG^{-1}+\dic^\top\Covinv \dic |\\ &+ \MSE+ \bw^\top\bG^{-1}\bw +\sum_{j=1}^{N}p(\hyper_{j}), \label{eq:cost} \end{aligned}$$ ]{} and $\bG$ is given in . The derivation, which mainly relies on marginal likelihood maximisation, can be found in SI \[app:cost\]. Algorithm --------- The cost function in  is convex in $\bw$ and $\Covinv$ but concave in $\bgamma$.
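As a numerical sanity check of the decomposition used next, the cost  can be evaluated directly and compared against the split $\mathcal{L}=u-v$ defined in the following paragraph. This is only a sketch with arbitrary values; it assumes $\MSE$ denotes $(\dic\bw-\by)^\top\Covinv(\dic\bw-\by)$, takes $p(\hyper_j)=1$ (as for the Student's *t* potential used later), and uses block size $C=1$ so that $\bG=\diag[\bgamma]$:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 6, 3                        # small instance; C = 1, so blocks are scalars
A = rng.standard_normal((M, N))    # dictionary matrix
y = rng.standard_normal(M)
w = rng.standard_normal(N)
gamma = np.array([0.5, 1.2, 0.3])  # one hyperparameter per block
G = np.diag(gamma)
Omega = np.eye(M) * 2.0            # precision matrix (inverse covariance)
p = np.ones(N)                     # potential terms p(gamma_j) = 1

def logdet(X):
    # numerically stable log-determinant of a positive-definite matrix
    _, ld = np.linalg.slogdet(X)
    return ld

mse = (A @ w - y) @ Omega @ (A @ w - y)
L = (-logdet(Omega) + logdet(G) + logdet(np.linalg.inv(G) + A.T @ Omega @ A)
     + mse + w @ np.linalg.inv(G) @ w + p.sum())

# concave-convex split L = u - v, with u and v as in the text:
u = mse + w @ np.linalg.inv(G) @ w - logdet(Omega)
v = -(logdet(G) + logdet(np.linalg.inv(G) + A.T @ Omega @ A) + p.sum())
```

Evaluating both sides on the same random instance confirms $\mathcal{L}=u-v$ term by term.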
This non-convex optimisation problem can be handled via the concave-convex procedure (CCCP). It can be shown that solving this CCCP is equivalent to solving a series of iterative convex optimisation programs, which converges to a stationary point [@sriperumbudur2009convergence]. Let $$\begin{aligned} {2} u(\bw, \bgamma, \Covinv) &\define \MSE +\bw^\top\bG^{-1}\bw-\log \det\Covinv,\notag \\ v(\bgamma,\Covinv) &\define -\left[ \log |\bG|+\log | \bG^{-1}+ \dic^\top\Covinv \dic | +\sum_{j=1}^{N}p(\hyper_j)\right]. \notag\end{aligned}$$ It is easy to check that $v(\bgamma,\Covinv)$ is a convex function with respect to $\bgamma$. Furthermore, $\log|\cdot|$ is concave on the space of positive semi-definite matrices. Since we adopt a super-Gaussian prior with potential function $\prior(\hyper_j), \forall j,$ as described in , a direct consequence is that $p(\hyper_j)=-\log\prior(\hyper_j)$ is concave, and, therefore, $-p(\hyper_j)$ is convex [@tipping2001sparse].[^7] Note that $u(\bw,\bgamma,\Covinv)$ is jointly convex in $\bw$, $\bgamma$ and $\Covinv$, while $v(\bgamma, \Covinv)$ is jointly convex in $\bgamma$ and $\Covinv$. As a consequence, the minimisation of the objective function can be written as the concave-convex procedure $$\begin{aligned} \label{cccp} \min_{\bgamma\succeq\mathbf{0},\Covinv\succeq \mathbf{0}, \bw} u(\bw, \bgamma,\Covinv)-v(\bgamma, \Covinv). \end{aligned}$$ Since $v(\bgamma, \Covinv) $ is differentiable over $\bgamma$, the problem in (\[cccp\]) can be transformed into the following iterative convex optimisation problem [$$\begin{aligned} \bw^{k+1} &=\argmin\limits_{\bw} u(\bw, \bgamma^k, \Covinv^k) \label{eq:cccp1}\\ \bgamma^{k+1} &=\argmin\limits_{\bgamma \succeq \mathbf{0}} u(\bw^k, \bgamma,\Covinv^k)-\nabla_{\bgamma} v(\bgamma^k, \Covinv^k)^\top\bgamma \label{eq:cccp2}\\ \Covinv^{k+1} &=\argmin\limits_{\Covinv\succeq \mathbf{0}}u(\bw^k, \bgamma^k,\Covinv)-\nabla_{\Covinv} v(\bgamma^k,\Covinv^k)^\top\Covinv.
\label{eq:cccp3}\end{aligned}$$ ]{} Using basic principles in convex analysis, we then obtain the following analytic form for the negative gradient of $v(\bgamma,\Covinv^k)$ with respect to $\bgamma$ at $\bgamma^k$ (using the chain rule): [ $$\begin{aligned} \balpha^{k} \triangleq&-\nabla_{\bgamma} v(\bgamma,\Covinv^k)^\top |_{\bgamma=\bgamma^k}\\ =&\nabla_{\bgamma}\left[\log |\bG^{-1}+\dic^\top \Covinv^k \dic| +\log |\bG| \right]\\ =& \diag \{\left[(\bG^{k})^{-1}+ \dic^\top \Covinv^k \dic\right]^{-1}\}\cdot \diag\{-(\bG^{k})^{-2}\} \\ & +\diag^{-1}\{\bG^k\} \\ =& \myunderbrace{ \left[ \begin{array}{c|c|c} \balpha_{1}^{k}& \cdots & \balpha_{\n}^{k} \end{array} \right] }{N ~\textbf{Blocks}} \\ \\ =& \left[ \begin{array}{c|c|c} \myunderbrace{\alpha_{1}^{k}, \ldots, \alpha_{1}^{k} }{C ~\textbf{Elements}} & \cdots & \myunderbrace{\alpha_{\n}^{k}, \ldots, \alpha_{\n}^{k} }{C ~\textbf{Elements}} \end{array} \right].\\ \label{alpha1k} \end{aligned}$$ ]{} Therefore, the iterative procedures  and  for $\bw^{k+1}$ and $\bgamma^{k+1}$, respectively, can be formulated as $$\begin{aligned} \left[\bw^{k+1},\bgamma^{k+1}\right] =\argmin\limits_{\bgamma\succeq \mathbf{0},\bw} & \MSEk\\ &+\sum_{i=1}^{\n} \left(\frac{\bw_i^\top\bw_i}{\gamma_{i}} +C\gamma_{i}\alpha_{i}^{k}\right). \label{cccp-4} \end{aligned}$$ The optimal $\bgamma$ components are obtained as: $ \gamma_{i}=\frac{\|\bw_i\|_2}{\sqrt{C \alpha_{i}^{k}}}. $ With $\bgamma$ fixed, $\bw^{k+1}$ is obtained by solving the optimisation problem $$\begin{aligned} \min\limits_{\bw} & \MSEk +2\sum_{i=1}^{\n} \|\theta_{i}^k \cdot \bw_i\|_2, \label{rwglasso} \end{aligned}$$ where $\theta_{i}^k = \sqrt{C\alpha_{i}^{k}}$. We can then inject this into the expression of $\gamma_i$, which yields $$\begin{aligned} \gamma_{i}^{k+1}&=\frac{\|\bw_i^{k+1}\|_2}{\sqrt{C\alpha_{i}^{k}}}.
\label{gammaik+1} \end{aligned}$$ After obtaining $\bw^{k+1}$ and $\bgamma^{k+1}$, we can proceed with the optimisation iteration in : $$\begin{aligned} \Lambda^k &=-\nabla_{\Covinv} v(\bgamma^k,\Covinv^k) \\ & =\nabla_{\Covinv}\left(\log \det \left((\bG^{k})^{-1}+ \dic^\top\Covinv^{k} \dic \right) \right)\\ & = \dic((\bG^{k})^{-1}+\dic^\top\Covinv^{k} \dic )^{-1}\dic^\top. \label{WeightforCov} \end{aligned}$$ Letting $\mathbf{Y}^{k+1} = (\dic\bw^{k+1}-\by)\cdot(\dic\bw^{k+1}-\by)^\top$, we can get an estimate of the inverse of the covariance matrix $\Covinv$ as: $$\begin{aligned} \Covinv^{k+1} =\argmin\limits_{\Covinv \succeq \mathbf{0}} \trace\left(\Covinv\mathbf{Y}^{k+1} \right)-\log\det \Covinv +\trace\left(\Lambda^k \Covinv\right). \label{covink+1} \end{aligned}$$ Given $\bgamma^{k+1}$ in  and $\Covinv^{k+1}$ in , we can then go back to  to update $\balpha$ for the next iteration. The iterative identification procedure described above is summarised in Algorithm \[alg:summary\] below. Collect $C$ heterogeneous groups of time series data from the system of interest (assuming the system can be described by ); Select the candidate basis functions that will be used to construct the dictionary matrix described in Section \[sec:multiple\]; Initialise $\theta_i^0=1, \ \forall i$, $\alpha_i^0 =\frac{(\theta_i^0)^2}{C}$, $\Covinv^{0}= \bI$, $\Lambda^{0}= \bI$; $\bw^{k+1}$ can be obtained by solving the following weighted minimisation problem over $\bw$, subject to the convex constraints in Assumption \[assumption-constraints\] [ $$\begin{aligned} \min\limits_{\bw} \frac{1}{2}\MSEk +\sum_{i=1}^{\n} \|\theta_{i}^k \cdot \bw_i\|_2 ; \label{alg:rwglasso} \end{aligned}$$ ]{} Update $\gamma_{i}^{k+1}$ using ; Let $\mathbf{Y}^{k+1} = (\dic\bw^{k+1}-\by)\cdot(\dic\bw^{k+1}-\by)^\top$; $\Covinv^{k+1} $ can be obtained by solving the following weighted minimisation problem over the inverse of the covariance matrix: $$\begin{aligned} \min\limits_{\Covinv \succeq \mathbf{0}}
\trace\left[\left(\mathbf{Y}^{k+1} +\Lambda^k \right)\Covinv\right]-\log\det \Covinv; \label{alg:rwcovariance} \end{aligned}$$ Update $\balpha^{k+1}$ using ; Update $\theta_{i}^{k+1} = \sqrt{C\alpha_{i}^{k+1}}$; Update $\Lambda^{k+1}$ using ; Break; Some further discussion can be found in SI Section \[s:discussion\]. ADMM Implementation {#sec:admm_sub} ------------------- Essentially, Algorithm  consists of a reweighted group-Lasso step  and a reweighted inverse-covariance estimation step . Algorithm  can be implemented using the Alternating Direction Method of Multipliers (ADMM) [@boyd2011distributed]. This ADMM parallelisation makes it possible to distribute the algorithmic complexity across different threads and to build a platform for scalable distributed optimisation. This is key to being able to deal with problems of large dimensions. More details can be found in SI Section \[s:admm\]. Connection to SDP formulations and the sparse multiple kernel method {#sec:sdp} -------------------------------------------------------------------- The iteration in  can be rewritten in the following compact form $$\begin{aligned} \left[\bw^{k+1},\bgamma^{k+1}\right] =\argmin\limits_{\bgamma \succeq \mathbf{0}, \bw} & \MSEk \\ &+ \bw^{\top} \bG^{-1} \bw -\nabla_{\bgamma} v(\bgamma^k, \Covinv^k)^\top \bgamma. \label{cccp-compact} \end{aligned}$$ This is equivalent to the following SDP by using the standard procedure in [@boyd2004convex]. $$\begin{split} \min_{\bz, \bw, \bgamma}\,\,\,\,\,\, & \, \bz -\nabla_{\bgamma} v(\bgamma^k, \Covinv^k)^\top \bgamma \notag\\ \mathrm{subject}\,\,\mathrm{to}\,\,\,\,\,\,\, & \begin{bmatrix} \bz & (\by-\dic \bw)^{\top} & \bw ^{\top} \\ \by-\dic \bw & (\Covinv^{k})^{-1} & \mathbf{0} \\ \bw & \mathbf{0} & \bG \end{bmatrix} \succeq \mathbf{0}\\ & \,\,\, \bgamma \succeq \mathbf{0} \end{split}$$ The cost of solving this SDP grows at least as $N^3$, and also grows with $M$. Therefore, solving this SDP is too costly for all but problems with a small number of variables.
This means that the number of samples, the dimension of the system, etc., cannot all be large simultaneously. In this SDP formulation, $\bG$ is closely related to the sparse multiple kernel presented in [@TianshiTAC]. Certain choices of kernel may introduce desirable properties or help reduce algorithmic complexity. In our case, we choose $\bG$ to have a diagonal or a DC kernel structure. Simulations =========== In this section, we use numerical simulations to show the effectiveness of the proposed algorithm. To compare the identification accuracy of the various algorithms considered, we use the root of normalised mean square error (RNMSE) as a performance index, i.e. $\textbf{RNMSE} = \|\bw_{\text{estimate}}-\bw_{\text{true}}\|_2/\|\bw_{\text{true}}\|_2.$ Several factors affect the RNMSE, e.g. the number of experiments $C$, the measurement noise intensity, the dynamic noise intensity, the length $M$ of a single time series, and the number of candidate basis functions $N$. For brevity of exposition, we only show results pertaining to the change of RNMSE with the number of experiments $C$ and with the length of the time series of a single experiment, all in the noiseless case. More results related to the other factors that may affect the RNMSE will be shown in a future journal publication presenting these results in more detail. As an illustrative example, we consider a model of an eight-species generalised repressilator [@strelkowa2010switchable], which is a system where each species represses another species in a ring topology. The corresponding dynamic equations are as follows: $$\label{example:equations} \begin{aligned} \dot x_{1t} &= \frac{p_{11}}{p_{12}^{p_{13}} + x_{8t}^{p_{13}}} + p_{14} - p_{15} x_{1t}, \\ \dot x_{it}&= \frac{p_{i1}}{p_{i2}^{p_{i3}} + x_{i-1,t}^{p_{i3}}} + p_{i4} - p_{i5} x_{it},~\forall i = 2,\dots 8, \end{aligned}$$ where $p_{ij}$, $i = 1, \ldots, 8$, $j = 1, \ldots, 5$, are the kinetic parameters.
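For concreteness, the generalised repressilator dynamics in  can be simulated as in the following sketch. This is illustrative code, not the authors' implementation: it uses the nominal parameter values stated below ($\bar{p}_{i1} = 40$, $\bar{p}_{i2} = 1$, $\bar{p}_{i3} = 3$, $\bar{p}_{i4} = 0.5$, $\bar{p}_{i5} = 1$) and a fixed-step classical Runge-Kutta integrator instead of an adaptive one:

```python
import numpy as np

# One parameter row per species: p1/(p2^p3 + x_prev^p3) + p4 - p5*x
P = np.tile([40.0, 1.0, 3.0, 0.5, 1.0], (8, 1))

def f(x):
    # Right-hand side of the 8-species generalised repressilator:
    # species i is repressed by species i-1 (species 1 by species 8).
    prev = np.roll(x, 1)       # prev[i] = x[i-1], prev[0] = x[7]
    p1, p2, p3, p4, p5 = P.T
    return p1 / (p2**p3 + prev**p3) + p4 - p5 * x

def rk4_step(x, dt):
    # one classical 4th-order Runge-Kutta step (fixed step for simplicity)
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(1)
x0 = rng.uniform(0.0, 1.0, size=8)   # random initial condition in (0, 1)
dt, steps = 0.01, 2000
traj = [x0]
for _ in range(steps):
    traj.append(rk4_step(traj[-1], dt))
traj = np.array(traj)                # synthetic time series for one "experiment"
```

Each run of this kind, with randomly perturbed parameters and initial conditions, yields one of the $C$ heterogeneous datasets used by the identification algorithm.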
We assume the mean values of these parameters across different species and experiments are $\bar{p}_{i1} = 40$, $\bar{p}_{i2} = 1$, $\bar{p}_{i3} = 3$, $\bar{p}_{i4} = 0.5$, $\bar{p}_{i5} = 1$, $\forall i$. We simulate the ODEs in  to generate the time series data. In each “experiment” or simulation of , the initial conditions are randomly drawn from a standard uniform distribution on the open interval $(0,1)$. The parameters in each experiment deviate by no more than $20\%$ from their mean values. In MATLAB, one can use `pbar_ij*(0.8 + 0.4*rand(1))`, where `pbar_ij` denotes $\bar{p}_{ij}$, to generate the corresponding parameters for each experiment. The numerical simulation procedure can be summarised as follows: 1. The deterministic system of ODEs  is solved numerically with an adaptive fourth-order Runge-Kutta method; 2. As explained in , Gaussian measurement noise with variance $\sigma^2$ is added to the corresponding time-series data obtained in the previous step[^8]; 3. The data are re-sampled at uniform intervals[^9]; 4. The local polynomial regression framework in [@de2013derivative] is applied to estimate the first derivative; 5. A dictionary matrix is constructed as explained in Section \[sec:multiple\]; 6. Algorithm \[alg:summary\] is used to identify the model. Following the procedure described in Section II, the candidate dictionary matrix $\dic$ in step 5) above is constructed by selecting as candidates nonlinear basis functions typically used to represent terms appearing in ODE models of gene regulatory networks. As a proof of concept, we only consider Hill functions as potential nonlinear candidate functions. The Hill functions with Hill coefficient $h$, in both activating and repressing form, for the $i$-th state variable at time instant $t$ are: $$\begin{aligned} \text{hill}(x_{it}, K, h_{\text{num}}, h_{\text{den}}) &\define \frac{x_{it}^{h_{\text{num}}}}{K^{h_{\text{den}}}+x_{it}^{h_{\text{den}}}} \end{aligned}$$ where $h_{\text{num}}$ and $h_{\text{den}}$ represent the Hill coefficients.
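The dictionary construction based on these basis functions can be sketched as follows. This is illustrative code with hypothetical function names, taking $K=1$ and $h=3$ as known, and laying out the $8 + 2\times 8 + 1 = 25$ candidate columns (linear, repressing Hill, activating Hill, constant):

```python
import numpy as np

def hill(x, K, h_num, h_den):
    # hill(x, K, h_num, h_den) = x^h_num / (K^h_den + x^h_den)
    # h_num = 0         -> repression form
    # h_num = h_den > 0 -> activation form
    return x**h_num / (K**h_den + x**h_den)

def dictionary(X, K=1.0, h=3):
    """Candidate dictionary for the repressilator example:
    8 linear columns + 8 repressing Hill + 8 activating Hill + constant = 25.
    X : (M, 8) array of state samples, one row per time instant."""
    M = X.shape[0]
    lin = X                          # columns 0..7: linear terms
    rep = hill(X, K, 0, h)           # columns 8..15: repression form
    act = hill(X, K, h, h)           # columns 16..23: activation form
    const = np.ones((M, 1))          # column 24: constant unit vector
    return np.hstack([lin, rep, act, const])

# Two sample time points with all states equal to 1 and 2, respectively
X = np.array([[1.0] * 8, [2.0] * 8])
D = dictionary(X)
```

For a state value of $1$ with $K=1$, both Hill forms evaluate to $1/2$, and for a state value of $2$ the repressing form gives $1/(1+2^3)=1/9$, which can be checked directly on `D`.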
When $h_{\text{num}} = 0$, the Hill function has a repression form, whereas an activation form is obtained for $h_{\text{num}}=h_{\text{den}}\neq 0$. In our identification experiment, we assume $h_{\text{num}}$, $h_{\text{den}}$ and $K$ to be known. We are interested in identifying the regulation type (linear or Hill type, repression or activation) and the corresponding parameters $p_{i1}$, the basal expression rate $p_{i4}$ and the degradation rate constant $p_{i5}$, $\forall i$. Since there are $8$ state variables, we can construct the dictionary matrix $\dic$ with $8$ (basis functions for linear terms) $+ (2\times 8)$ (basis functions for Hill functions, in both repression and activation form) $+1$ (constant unit vector) $=25$ columns. The corresponding matrix $\dic$ is given in Eq.  in Supporting Information Section \[app:simulation\]. For a fixed number of experiments $C$ and length of single time series $M$, we compute the RNMSE over $50$ simulations by varying initial conditions and parameters $p_{ij}$. The RNMSE over $C$ and $M$ is shown in Fig. \[fig:rnmse\_w\_iter\_1\] and Fig. \[fig:rnmse\_w\_iter\_end\], using both group Lasso and Algorithm \[alg:summary\] with the maximal iteration number $k_{\max} = 5$ (see line 4 in Algorithm \[alg:summary\]). Inspection of the results presented in Fig. \[fig:rnmse\_w\_iter\_1\] and Fig. \[fig:rnmse\_w\_iter\_end\] clearly shows that Algorithm \[alg:summary\] significantly outperforms group Lasso in terms of RNMSE. Discussions =========== There are several issues that we plan to explore further in the future. First, we are working on establishing the minimal sampling rate necessary to yield adequate numerical estimates of the first derivative matrix (see Eq. ). Second, further results not shown in this paper indicate that the RNMSE is high when the dynamic noise and measurement noise are high. We are currently working on a finer characterisation of the “quality” of the identification in terms of the signal-to-noise ratio.
<http://arxiv.org/abs/1509.05153> L. Ljung, *System Identification: Theory for the User*. Prentice Hall, 1999. H.-M. Kaltenbach, S. Dimopoulos, and J. Stelling, “Systems analysis of cellular networks under uncertainty,” *FEBS Letters*, vol. 583, no. 24, pp. 3923–3930, 2009. J. Vanlier, C. Tiemann, P. Hilbers, and N. van Riel, “Parameter uncertainty in biochemical models described by ordinary differential equations,” *Mathematical Biosciences*, vol. 246, no. 2, pp. 305–314, 2013. M. Tipping, “Sparse Bayesian learning and the relevance vector machine,” *The Journal of Machine Learning Research*, vol. 1, pp. 211–244, 2001. D. Wipf, B. Rao, and S. Nagarajan, “Latent variable Bayesian models for promoting sparsity,” *IEEE Transactions on Information Theory*, vol. 57, no. 9, pp. 6236–6255, 2011. N. T. Ingolia and J. S. Weissman, “Systems biology: reverse engineering the cell,” *Nature*, vol. 454, no. 7208, pp. 1059–1062, 2008. S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, *Linear Matrix Inequalities in System and Control Theory*. Society for Industrial Mathematics, 1987, vol. 15. R. Horn and C. Johnson, *Matrix Analysis*. Cambridge University Press, 1990. V. Cerone, D. Piga, and D. Regruto, “Enforcing stability constraints in set-membership identification of linear dynamic systems,” *Automatica*, vol. 47, no. 11, pp. 2488–2494, 2011. M. Zavlanos, A. Julius, S. Boyd, and G. Pappas, “Inferring stable genetic networks from steady-state data,” *Automatica*, vol. 47, no. 6, pp. 1113–1122, 2011. S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, *Linear Matrix Inequalities in System and Control Theory*. Society for Industrial Mathematics, 1994. C. Bishop, *Pattern Recognition and Machine Learning*. Springer New York, 2006, vol. 4. J. Palmer, D. Wipf, K. Kreutz-Delgado, and B.
Rao, “Variational [EM]{} algorithms for non-[Gaussian]{} latent variable models,” *Advances in Neural Information Processing Systems*, vol. 18, p. 1059, 2006. B. K. Sriperumbudur and G. R. Lanckriet, “On the convergence of the concave-convex procedure,” in *NIPS*, vol. 9, 2009, pp. 1759–1767. A. Aravkin, J. V. Burke, A. Chiuso, and G. Pillonetto, “Convex vs non-convex estimators for regression and sparse estimation: the mean squared error properties of ARD and GLasso,” *The Journal of Machine Learning Research*, vol. 15, no. 1, pp. 217–252, 2014. S. Boyd and L. Vandenberghe, *Convex Optimization*. Cambridge University Press, 2004. T. Chen, M. Andersen, L. Ljung, A. Chiuso, and G. Pillonetto, “System identification via sparse multiple kernel-based regularization using sequential convex optimization techniques,” *IEEE Transactions on Automatic Control*, vol. 59, no. 11, pp. 2933–2945, 2014. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” *Foundations and Trends in Machine Learning*, vol. 3, no. 1, pp. 1–122, 2011. N. Strelkowa and M. Barahona, “Switchable genetic oscillator operating in quasi-stable mode,” *Journal of The Royal Society Interface*, p. rsif20090487, 2010. K. De Brabanter, J. De Brabanter, B. De Moor, and I. Gijbels, “Derivative estimation with local polynomial fitting,” *The Journal of Machine Learning Research*, vol. 14, no. 1, pp. 281–301, 2013. Y. Chang, R. Dobbe, P. Bhushan, J. W. Gray, and C. J. Tomlin, “Retrieving common dynamics of gene regulatory networks under various perturbations,” in *Proceedings of the Conference on Decision and Control*, 2015. [^1]: W. Pan and G.-B. Stan are with the Centre for Synthetic Biology and Innovation and the Department of Bioengineering, Imperial College London, United Kingdom. Email: [{w.pan11, g.stan}@imperial.ac.uk]{}.
[^2]: Ye Yuan was with the Control Group, Department of Engineering, University of Cambridge, United Kingdom. He is now with the Department of Electrical Engineering and Computer Sciences, UC Berkeley. J. Gonçalves is with the Control Group, Department of Engineering, University of Cambridge, United Kingdom and with the Luxembourg Centre for Systems Biomedicine, Luxembourg. Email: [jmg77@cam.ac.uk]{}. $^*$ For correspondence: [yy311@berkeley.edu]{}. [^3]: L. Ljung is with the Division of Automatic Control, Department of Electrical Engineering, Linköping University, Sweden. Email: [ljung@isy.liu.se]{}. [^4]: The authors gratefully acknowledge the support of Microsoft Research through the PhD Scholarship Program of Mr Wei Pan. Dr Ye Yuan acknowledges the support from EPSRC (project EP/I03210X/1). Dr Guy-Bart Stan gratefully acknowledges the support of the EPSRC Centre for Synthetic Biology and Innovation at Imperial College London (project EP/G036004/1) and of the EPSRC Fellowship for Growth (project EP/M002187/1). The authors would like to thank Dr Aivar Sootla and Dr Tianshi Chen for helpful discussions. [^5]: Supporting Information (SI) can be found online [@supp]. [^6]: Note that the covariance matrix is not necessarily diagonal. [^7]: In this paper, the prior is chosen as a Student’s *t* prior, thus $p(\hyper_j) = 1$. [^8]: In the example presented here, we consider the noiseless case corresponding to $\sigma = 0$. [^9]: In this example, the interval length is set to $1$.
--- abstract: 'Graph neural networks, which generalize deep neural network models to graph-structured data, have attracted increasing attention in recent years. They usually learn node representations by transforming, propagating and aggregating node features and have been proven to improve the performance of many graph-related tasks such as node classification and link prediction. To apply graph neural networks to the graph classification task, approaches to generate the *graph representation* from node representations are required. A common way is to globally combine the node representations. However, rich structural information is overlooked. Thus a hierarchical pooling procedure is desired to preserve the graph structure during graph representation learning. There are some recent works on hierarchically learning graph representations, analogous to the pooling step in conventional convolutional neural networks (CNNs). However, the local structural information is still largely neglected during the pooling process. In this paper, we introduce a pooling operator ${{\sf {EigenPooling}}}$ based on the graph Fourier transform, which can utilize the node features and local structures during the pooling process. We then design pooling layers based on the pooling operator, which are further combined with traditional GCN convolutional layers to form a graph neural network framework ${{\sf {EigenGCN}}}$ for graph classification. Theoretical analysis is provided to understand ${{\sf {EigenPooling}}}$ from both local and global perspectives. Experimental results of the graph classification task on $6$ commonly used benchmarks demonstrate the effectiveness of the proposed framework.' author: - Yao Ma - Suhang Wang - 'Charu C. Aggarwal' - Jiliang Tang bibliography: - 'sample-base-abbre.bib' title: Graph Convolutional Networks with EigenPooling ---
--- abstract: 'The cosmological jerk parameter $j$ is reconstructed in a non-parametric way from observational data independent of a fiducial cosmological model. From this kinematical quantity, the equation of state parameter for the composite matter distribution is also found. The result shows that there is a deviation from the $\Lambda$CDM model close to $z=1.5$, at the $3\sigma$ confidence level.' --- [**Non-parametric reconstruction of the cosmological *jerk* parameter**]{}\ Purba Mukherjee[^1], Narayan Banerjee[^2]\ [*$^{1,2}$Department of Physical Sciences,  \ Indian Institute of Science Education and Research Kolkata,\ Mohanpur, West Bengal 741246, India.*]{}\ 1.0cm 1.0cm **PACS:** 98.80.Cq; 98.70.Vc 0.4mm **Keywords:** cosmology, dark energy, reconstruction, deceleration parameter, jerk parameter. 1.0cm Introduction ============ Even after more than a couple of decades of its discovery[@perl; @riess], the accelerated expansion of the universe is yet to be attributed to a well-defined matter sector, called the [*dark energy*]{}, responsible for the alleged acceleration. Therefore, the quest for dark energy has been pursued along every possible avenue. A “reverse engineering”, where one makes an attempt to find the characteristics of the matter distribution from a given evolution history, has been amongst the prominent approaches for quite a long time. Normally this “reconstruction” aims to figure out a physical characteristic of the matter distribution, such as the equation of state parameter of the dark energy $w_{DE}$, or even the potential $V(\phi)$ if the dark energy is taken as a scalar field.\ Another direction of reconstruction is through the kinematical parameters, such as the deceleration parameter $q = - \frac{1}{aH^2} \frac{d^2 a}{dt^2}$ where $a$ is the scale factor, and $H=\frac{1}{a}\frac{da}{dt}$, the fractional rate of increase in the linear size of the universe called the Hubble parameter. 
For a long time, $H$ had been the only cosmological parameter which could be estimated from observational data. As $H$ was found to be evolving, the next higher order derivative of $a$, namely $q$, was the quantity of interest. Now that $q$ can be measured and is found to be evolving, the third order derivative of $a$ finds a natural importance. Expressed in a dimensionless way, this quantity, called the “jerk”, is defined as $$\label{jerkdef} j = - \frac{1}{aH^3} \frac{d^3 a}{dt^3}.$$ There has been some work on the reconstruction of a cosmological model through these kinematical parameters. Reconstruction of the deceleration parameter $q$ can be found in the work of Gong and Wang[@gong1; @gong2]. Reconstruction through the jerk parameter has been carried out by Luongo [@luongo43], Rapetti [*et al*]{}[@rapetti44], Zhai [*et al*]{}[@zhai45], Mukherjee and Banerjee [@ankan1; @ankan2]. Although the possible importance of the jerk parameter in the game of reconstruction was pointed out long back[@varun], not much work has been done to utilize its full potential. Also, the work done so far is an estimation of parameters, with a functional form of $j$ used as an ansatz. This is necessarily restrictive, as the functional form for $j$ is already chosen.\ A more unbiased way is to attempt a non-parametric reconstruction, where the evolution of the relevant quantity is determined directly from observational data without any ansatz a priori. Such attempts normally involve the reconstruction of $w_{DE}$[@sahl1; @sahl2; @holsclaw1; @holsclaw2; @holsclaw3; @critt; @sanjay; @zzhang]. However, there is hardly any attempt to model the dark energy through a reconstruction of the jerk parameter in a non-parametric way. 
Although there is no convincing reason that a reconstruction of kinematic parameters like $q$ or $j$ is more useful than that of a physical quantity like the dark energy equation of state parameter, this indeed provides an alternative route towards the understanding of dark energy in the absence of a convincing physical theory.\ In the present work, the jerk parameter $j$ is reconstructed for the first time from the observational data in a non-parametric way. We have utilized various combinations of the Supernova distance modulus data, the Cosmic Chronometer (CC) measurements of the Hubble parameter, the Baryon Acoustic Oscillation (BAO) data and also the Cosmic Microwave Background (CMB) Shift parameter data to examine their effect on the reconstruction.\ The reconstruction yields the result that for most of the combinations, the $\Lambda$CDM model is well allowed within a $2\sigma$ confidence level. For a few combinations, however, the $\Lambda$CDM model is not allowed within this level.\ Indeed there are apprehensions that the CMB Shift parameter data depends crucially on a fiducial cosmological model[@elgaroy] and so does the BAO data[@carter]. However, we do not ignore them. Our reconstruction is based on combinations both including and excluding the CMB Shift and the BAO datasets. The final result, when we extract the physical information, that of $w_{eff}$, looks qualitatively very similar across the various combinations of the datasets.\ In section 2, the methodology is discussed in brief, and section 3 contains the actual reconstruction. The last section includes a discussion of the results obtained. 
The methodology =============== At the outset, we do not assume any fiducial model for the universe except that it is given by a spatially flat, isotropic and homogeneous metric given by $$\label{metric} ds^2 = -c^2 dt^2 + a^2(t) \left(dr^2 + r^2 d\theta^2 + r^2 \sin^2\theta d\phi^2 \right).$$ We pretend that we do not even know the Einstein equations and pick up only the kinematical quantities. We define the reduced Hubble parameter as $h(z)=\frac{H(z)}{H_0}$. A subscript $0$ indicates the value of the quantity at the present epoch and $z$ is the redshift given as $1+z = \frac{a_0}{a}$. The luminosity distance of any object (such as a supernova) can be obtained as $$\label{dL} d_L(z)= \frac{c(1+z)}{H_0} \int_{0}^{z} \frac{dz'}{h(z')}.$$ For convenience, we define a dimensionless co-moving luminosity distance, $$\label{D} D(z) \equiv (1+z)^{-1} \frac{H_0}{c} d_L(z).$$ Combining Eq. (\[dL\]) and (\[D\]) and taking the derivative with respect to $z$, we obtain the relation between the Hubble parameter and the co-moving luminosity distance as, $$\begin{aligned} \label{H_from_D} H(z) &=& \frac{H_0}{D'},\\ h(z) &=& \frac{1}{D'}.\end{aligned}$$ where a prime denotes the derivative with respect to $z$. In terms of the dimensionless quantities $h$, $D$ and their derivatives, the jerk parameter can be written as $$\begin{aligned} \label{jerk} j(z) &=& -1 + \frac {2(1+z)h h' - (1+z)^2 \left(h'^2 + hh''\right)}{h^2} ,\\ &=& -1 + \frac{ (1+z)^2 \left(D' D''' -3 D''^2 \right) - 2 (1+z) D' D''}{D'^2}. \nonumber\end{aligned}$$ The uncertainty in $j(z)$, $\sigma_j$, obtained by error propagating Eq. (\[jerk\]), 
is given below - $$\begin{aligned} \left(\frac{\sigma_j}{j+1}\right)^2 &=& \left\lbrace \frac{2(1+z)\left[ h'\sigma_{h}+ h \sigma_{h'}\right] -(1+z)^2 \left[ 2 h' \sigma_{h'} + h'' \sigma_{h} + h \sigma_{h''} \right]}{2(1+z)hh' - (1+z)^2 (h'^2 + hh'')}\right\rbrace^2 + \left(\frac{2\sigma_{h}}{h}\right)^2 + \nonumber \\ &-& \frac{4(1+z)\left[ h'\sigma_{h}^2+ h \sigma_{h h'}\right] - 2 (1+z)^2 \left[ 2 h' \sigma_{h h'} + h'' \sigma_{h}^2 + h \sigma_{h''h} \right]}{2(1+z)h^2 h' - (1+z)^2 h (h'^2 + hh'')} , \nonumber \\ &=& \left\lbrace \frac{(1+z)^2\left[ \sigma_{D'}D''' + D' \sigma_{D'''} -6 D'' \sigma_{D''}\right] -2 (1+z) \left[ D' \sigma_{D''} + D'' \sigma_{D'} \right]}{ (1+z)^2 \left(D' D''' -3 D''^2 \right) - 2 (1+z) D' D''}\right\rbrace^2 + \left(\frac{2\sigma_{D'}}{D'}\right)^2 + \nonumber \\ &-& \frac{2(1+z)^2\left[ D'''\sigma_{D'}^2 + D' \sigma_{D''' D'} - 6 D'' \sigma_{D''D'}\right] - 4 (1+z) \left[ D''\sigma_{D'}^2 + D' \sigma_{D''D'} \right]}{ (1+z)^2 \left(D'^2 D''' -3 D'D''^2 \right) - 2 (1+z) D'^2 D''}. \end{aligned}$$ In order to implement the reconstruction, the widely used Gaussian processes (GP) [@rw; @mackay; @william; @gp], which is a powerful model-independent technique, is adopted. This is a distribution over functions which generalize the idea of a Gaussian distribution for a finite number of quantities to the continuum. Given a set of data points one can use Gaussian processes to reconstruct the most probable underlying continuous function describing the data, and also obtain the associated confidence levels, without assuming a concrete parametrization of the aforesaid function. It requires only a probability on the target function $f(z)$.\ In cosmology, GP has attracted a wide application in reconstructing or testing models without an apriori fiducial model [@1606.04398[24]; @1606.04398[25]; @1606.04398[26]; @1606.04398[27]; @1606.04398[28]; @1606.04398[29]; @1606.04398[30]; @wang-meng; @wang-meng2; @zhou-peng; @cai-saridakis]. 
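As a quick consistency check (ours, not part of the paper's analysis), eq. (\[jerk\]) should return $j=-1$ at every redshift for a spatially flat $\Lambda$CDM model with $h^2(z) = \Omega_{m0}(1+z)^3 + 1 - \Omega_{m0}$, given the sign convention of eq. (\[jerkdef\]). A minimal Python sketch, with the derivatives of $h$ taken analytically:

```python
import math

def j_from_h(z, h, hp, hpp):
    """Eq. (jerk): j = -1 + [2(1+z) h h' - (1+z)^2 (h'^2 + h h'')] / h^2."""
    return -1.0 + (2.0*(1.0 + z)*h*hp
                   - (1.0 + z)**2 * (hp**2 + h*hpp)) / h**2

def lcdm_h_derivs(z, om=0.3):
    """Analytic h, h', h'' for flat LCDM: h^2 = om (1+z)^3 + (1 - om)."""
    h = math.sqrt(om*(1.0 + z)**3 + (1.0 - om))
    hp = 1.5*om*(1.0 + z)**2 / h          # h' = (h^2)' / (2h)
    hpp = 3.0*om*(1.0 + z)/h - hp**2/h    # differentiating h' = f/h
    return h, hp, hpp
```

In this convention the $\Lambda$CDM value is $j=-1$ identically, so any statistically significant departure of the reconstructed $j(z)$ from $-1$ signals a deviation from $\Lambda$CDM.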
For a pedagogical introduction to GP, we refer to Seikel [*et al*]{}[@1606.04398[25]]. The code developed is publicly available.\ Assuming the observational data, such as the distance data $D$, or Hubble data $H$, obeys a Gaussian distribution with a mean and variance, the posterior distribution of the reconstructed function $f(z)$ can be expressed as a joint Gaussian distribution of different data-sets involving $D$ or $H$. In this process, the key ingredient is the covariance function $k(z, \tilde{z})$, which correlates the values of different $D(z)$ and $H(z)$ at redshift points $z$ and $\tilde{z}$. The covariance function $k(z, \tilde{z})$ depends on a set of hyperparameters (e.g. the characteristic length scale $l$ and the signal variance $\sigma_f$). This approach also provides a robust way to estimate derivatives of the function in a stable manner. The hyperparameter $l$ corresponds roughly to the distance one needs to move in the input space before the function value changes significantly, while $\sigma_f$ describes the typical change in the function value.\ The choice of covariance function, given in (\[cov1\]), affects the reconstruction to some extent. Here we have used the Matérn ($\nu = \frac{9}{2}$, $p=4$) covariance [@rw] between two redshift points separated by $\vert z-\tilde{z} \vert$ distance units, as in equation (\[cov2\]). This leads to the most reliable and stable results amongst the significant choices [@seikel2013]. 
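As a sanity check (ours, not the authors'), the closed form of equation (\[cov2\]) can be verified against the general half-integer Matérn family of equation (\[cov1\]) with $p=4$ (i.e. $\nu = 9/2$). A short Python sketch comparing the two:

```python
import math

def matern_general(r, sig_f=1.0, l=1.0, p=4):
    """Eq. (cov1): half-integer Matern kernel with nu = p + 1/2."""
    s = math.sqrt(2*p + 1)
    pref = sig_f**2 * math.exp(-s*r/l) * math.factorial(p) / math.factorial(2*p)
    return pref * sum(
        math.factorial(p + i) / (math.factorial(i) * math.factorial(p - i))
        * (2.0*s*r/l)**(p - i)
        for i in range(p + 1))

def matern_9_2(r, sig_f=1.0, l=1.0):
    """Eq. (cov2): closed form for nu = 9/2 (p = 4)."""
    x = r / l
    return sig_f**2 * math.exp(-3.0*x) * (1.0 + 3.0*x + 27.0*x**2/7.0
                                          + 18.0*x**3/7.0 + 27.0*x**4/35.0)
```

Both return $\sigma_f^2$ at zero separation and decay over the scale set by $l$, as expected of a stationary kernel.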
$$\begin{aligned} \label{cov1} k_{\nu=p+\frac{1}{2}}(z,\tilde{z}) = \sigma_f^2 \exp \left( \frac{-\sqrt{2p+1}}{l} \vert z - \tilde{z} \vert \right) \frac{p!}{(2p)!} \sum_{i=0}^{p} \frac{(p+i)!}{i!(p-i)!} \left( \frac{2\sqrt{2p+1}}{l} \vert z - \tilde{z} \vert \right)^{p-i} ,\end{aligned}$$ $$\begin{aligned} \label{cov2} k_{\frac{9}{2}}(z,\tilde{z}) = \sigma_f^2 \exp \left( \frac{-3 \vert z - \tilde{z} \vert}{l} \right) \left[ 1 + \frac{3 \vert z - \tilde{z} \vert}{l} + \frac{27 ( z - \tilde{z})^2}{7l^2} + \frac{18 \vert z - \tilde{z} \vert ^3}{7l^3} + \frac{27 \left( z - \tilde{z} \right)^4}{35l^4}\right] .\end{aligned}$$ ![[Plots for the reconstructed dimensionless co-moving luminosity distance $D(z)$, its derivatives $D'(z)$, $D''(z)$ and $D'''(z)$ using combined Pantheon + CC data with Planck 2018 best fit prior value $H_0 = 67.27 \pm 0.6 $ km s$^{-1}$ Mpc$^{-1}$ (TT+TE+EE+lowE)[@planck]. The black solid line is the best fit curve and the associated 1$\sigma$, 2$\sigma$ confidence regions are shown in grey. The specific points (in the top two figures) with error bars represent the observational data. The black dashed line is for the $\Lambda$CDM model.]{}[]{data-label="all_planck"}](D_p_cc_pan.pdf){width="60.00000%"} ![[Plots for the reconstructed dimensionless co-moving luminosity distance $D(z)$, its derivatives $D'(z)$, $D''(z)$ and $D'''(z)$ using combined Pantheon + CC + BAO + CMB data, with Planck 2018 best fit prior value $H_0 = 67.66 \pm 0.42 $ km s$^{-1}$ Mpc$^{-1}$ (TT+TE+EE+lowE+lensing+BAO) [@planck]. The black solid line is the best fit curve and the associated 1$\sigma$, 2$\sigma$ confidence regions are shown in grey. The specific points (in the top two figures) with error bars represent the observational data. 
The black dashed line is for the $\Lambda$CDM model.]{}[]{data-label="all_planck2"}](D_p_cc_bao_cmb_pan.pdf){width="60.00000%"} ![[Plots for the reconstructed dimensionless co-moving luminosity distance $D(z)$, its derivatives $D'(z)$, $D''(z)$ and $D'''(z)$ using combined Pantheon + CC data, with Riess 2019 best fit prior value $H_0 = 74.03 \pm 1.42 $ km s$^{-1}$ Mpc$^{-1}$ from HST [@riess1]. The black solid line is the best fit curve and the associated 1$\sigma$, 2$\sigma$ confidence regions are shown in grey. The specific points (in the top two figures) with error bars represent the observational data. The black dashed line is for the $\Lambda$CDM model.]{}[]{data-label="all_riess"}](D_r_cc_pan.pdf){width="60.00000%"} ![[Plots for the reconstructed dimensionless co-moving luminosity distance $D(z)$, its derivatives $D'(z)$, $D''(z)$ and $D'''(z)$ using combined Pantheon + CC + BAO + CMB data, with Riess 2019 best fit prior value $H_0 = 74.03 \pm 1.42 $ km s$^{-1}$ Mpc$^{-1}$ from HST [@riess1]. The black solid line is the best fit curve and the associated 1$\sigma$, 2$\sigma$ confidence regions are shown in grey. The specific points (in the top two figures) with error bars represent the observational data. The black dashed line is for the $\Lambda$CDM model.]{}[]{data-label="all_riess2"}](D_r_cc_bao_cmb_pan.pdf){width="60.00000%"} The Supernova distance modulus data, observational measurements of the Hubble parameter, Baryon Acoustic Oscillation data and the CMB Shift Parameter data have been utilized in reconstructing the jerk parameter.\ We use the $30$ latest $H(z)$ Cosmic Chronometer (CC) data points measured by calculating the differential ages of galaxies [@Zhang[61]], [@Zhang[62]], [@Zhang[63]] and the 23 $H(z)$ data points obtained from the radial BAO peaks in the galaxy power spectrum [@Zhang[64]], [@Zhang[65]] or the BAO peak using the Ly-$\alpha$ forest of QSOs [@Zhang[66]] based on the clustering of galaxies or quasars. 
One may find that some of the $H(z)$ data points from clustering measurements are correlated since they either belong to the same analysis or there is an overlap between the galaxy samples. Here in this paper, we mainly take the central value and standard deviation of the OHD data into consideration. Therefore, just as in Ref. [@geng], we assume that they are independent measurements. After the preparation of the $H(z)$ data, we normalize them to obtain the dimensionless or reduced Hubble parameter $h(z) = H(z)/H_0$. Considering the error in the Hubble constant, we calculate the uncertainty in $h(z)$ as, $$\label{sig_h} {\sigma_{h}}^2 = \frac{{\sigma_H}^2}{ {H_0}^2} + \frac{H^2}{{H_0}^4}{\sigma_{H_0}}^2,$$ where $\sigma_{H_0}$ is the error associated with $H_0$.\ For the supernova data, we use the Pantheon compilation [@pan1]-[@pan2] consisting of 1048 SNIa, which is the largest spectroscopically confirmed SNIa sample to date. It consists of different supernovae surveys, including SDSS, SNLS, various low-z samples and some high-z samples from HST. We include the covariance matrix along with systematic errors in our calculation. The distance modulus of each supernova can be estimated as $$\mu(z) = 5 \log_{10} \frac{d_L(z)}{\mbox{Mpc}} + 25$$ where $d_L$ is the luminosity distance in Eq. (\[dL\]). The distance modulus of SN-Ia can be derived from the observation of light curves through the empirical relation $\mu_{SN} = m^{*}_B + \alpha X_1 - \beta C - M_B$, where $X_1$ and $C$ are the stretch and colour parameters, and $M_B$ is the absolute magnitude. $\alpha$ and $\beta$ are two nuisance parameters. In the Pantheon sample, the corrected apparent magnitude $m_B = m^{*}_B + \alpha X_1 - \beta C$ is reported. Therefore, the colour and stretch corrections are already taken care of in the given dataset. The absolute magnitude of SN-Ia is degenerate with the Hubble constant, and we fix it to $M_B = -19.35$, the best-fitting value of $\Lambda$CDM. 
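The normalization step and the error propagation of eq. (\[sig\_h\]) above can be sketched as follows (a trivial helper of ours, with illustrative inputs):

```python
def reduced_hubble(H, sigma_H, H0, sigma_H0):
    """h = H/H0 with eq. (sig_h):
    sigma_h^2 = sigma_H^2 / H0^2 + H^2 sigma_H0^2 / H0^4."""
    h = H / H0
    sigma_h = ((sigma_H / H0)**2 + (H * sigma_H0 / H0**2)**2) ** 0.5
    return h, sigma_h
```

Note that even a perfectly measured $H(z)$ ($\sigma_H = 0$) inherits a fractional uncertainty equal to that of $H_0$, which is why the choice of $H_0$ prior matters for the reconstruction.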
We convert the distance modulus of SN-Ia to the normalized comoving distance through the relation $$\label{sne_D} D(z) \equiv \frac{1}{1+z} \frac{H_0}{c} 10^{\frac{\mu - 25}{5}},$$ where $\mu$ is given by the difference between the corrected apparent magnitude $m_B$ and the absolute magnitude $M_B$ in the B-band for SN-Ia. The total uncertainties $\mathbf{\Sigma}_\mu$ and $\mathbf{\Sigma}_D$ in $\mu$ and $D$, respectively, are estimated following standard practice. The total uncertainty matrix of the distance modulus is given by, $$\mathbf{\Sigma}_\mu = \mathbf{C}_{stat} + \mathbf{C}_{sys}$$ where $\mathbf{C}_{stat}$ and $\mathbf{C}_{sys}$ are the statistical and systematic uncertainties respectively.\ The uncertainty of $D(z)$ is propagated from that of $\mu$ and $H_0$ using the standard error propagation formula, $$\label{sne_sigD} \mathbf{\Sigma}_D = \mathbf{D}_1 \mathbf{\Sigma}_\mu {\mathbf{D}_1}^T + \sigma_{H_{0}}^2 \mathbf{D}_2 \mathbf{D}_2^{T}$$ where $\sigma_{H_0}$ is the uncertainty of the Hubble constant, the superscript ‘$T$’ denotes the transpose of a matrix, $\mathbf{D}_1$ and $\mathbf{D}_2$ are the Jacobian matrices, $$\begin{aligned} \mathbf{D}_1 &=& \mbox{diag}\left(\frac{\ln 10}{5} \mathbf{D}\right) \\ \mathbf{D}_2 &=& \mbox{diag}\left(\frac{1}{H_0} \mathbf{D}\right)\end{aligned}$$ where $\mathbf{D}$ is a vector whose components are the normalized comoving distances of all the SN-Ia.\ The so-called shift parameter is related to the position of the first acoustic peak in the power spectrum anisotropies of the cosmic microwave background (CMB). However, the shift parameter $R$ is not directly measurable from the cosmic microwave background, and its value is usually derived from data assuming a spatially flat cosmology with dark matter and a cosmological constant. $$R = \sqrt{\Omega_{m0}}\int_{0}^{z_c}\frac{dz'}{h(z')}$$ where $z_c$ = $1089$ is the redshift of recombination. 
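A minimal numerical sketch (ours; illustrative values only) of the conversion in eq. (\[sne\_D\]) and the propagation in eq. (\[sne\_sigD\]), following the diagonal Jacobians $\mathbf{D}_1$ and $\mathbf{D}_2$ defined above:

```python
import numpy as np

def sn_D_and_cov(z, mu, cov_mu, H0, sigma_H0, c=299792.458):
    """Eq. (sne_D): D = (1+z)^{-1} (H0/c) 10^{(mu-25)/5}, with d_L in Mpc,
    and eq. (sne_sigD): Sigma_D = D1 Sigma_mu D1^T + sigma_H0^2 D2 D2^T."""
    D = (H0 / c) * 10.0**((mu - 25.0) / 5.0) / (1.0 + z)
    D1 = np.diag(np.log(10.0) / 5.0 * D)   # Jacobian dD/dmu
    D2 = np.diag(D / H0)                   # Jacobian dD/dH0
    cov_D = D1 @ cov_mu @ D1.T + sigma_H0**2 * D2 @ D2.T
    return D, cov_D
```

The supernova covariance $\mathbf{C}_{sys}$ makes $\mathbf{\Sigma}_\mu$ non-diagonal, so the full matrix product is kept rather than propagating the diagonal alone.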
We use the CMB shift parameter $R = 1.7488 \pm 0.0074$ and matter density parameter $\Omega_{m0} = 0.308 \pm 0.012$ from the Planck release [@planck_cmb] as important supplements to the SN-Ia data.\ In view of the known tussle between the value of $H_0$ as given by the Planck data[@planck] and that used prior to the advent of the Planck mission, we reconstruct $j$ twice, using both of them separately. The recent global and local measurements of $H_0 = 67.27 \pm 0.60 $ km s$^{-1}$ Mpc$^{-1}$ (TT+TE+EE+lowE), $67.66 \pm 0.42 $ km s$^{-1}$ Mpc$^{-1}$ (TT+TE+EE+lowE+lensing+BAO) with $1\%$ uncertainty (P18)[@planck] and $H_0 = 74.03 \pm 1.42 $ km s$^{-1}$ Mpc$^{-1}$ with $2.4\%$ uncertainty (R19)[@riess1], are considered for the purpose. The reconstructed functions $D(z)$, $D'(z)$, $D''(z)$ and $D'''(z)$ are plotted against $z$ for all four sets of the combined datasets, and shown in Fig. \[all\_planck\], \[all\_planck2\], \[all\_riess\] and \[all\_riess2\] for the two choices of the prior value of $H_0$. The black solid line is the best fit curve. The shaded regions correspond to the $68\%$ and $95\%$ confidence levels (CL). The true model is expected to lie within the $68\%$ CL. The specific points (in the top two figures in all the four sets) with error bars represent the observational data used in reconstruction. For the Pantheon data, eq. (\[sne\_D\]) and (\[sne\_sigD\]) are used to estimate the $D$ data points and the uncertainty $\mathbf{\Sigma}_D$ from the observed $\mu$ and $\mathbf{\Sigma}_{\mu}$ respectively. For the CC and BAO data, we consider eq. (\[sig\_h\]) and convert the $H$-$\sigma_H$ data to an $h$-$\sigma_{h}$ data set. From eq. (\[H\_from\_D\]) we can clearly see that $D'(z)$ is related to $h(z)$. So, we can take into account the $h$ data points, the associated uncertainty $\sigma_{h}$, and represent them graphically as $$\begin{aligned} D' &=& \frac{1}{h}, \nonumber \\ \vert \sigma_{D'} \vert &=& \frac{1}{h^2} \vert \sigma_{h} \vert . \end{aligned}$$ The black dashed line is for the $\Lambda$CDM model. 
Thus, given a set of observational data points we have used Gaussian processes to construct the most probable underlying continuous function $D(z)$ describing the data, along with its derivatives $D'(z)$, $D''(z)$ and $D'''(z)$, and have also obtained the associated confidence levels. ![image](j_p_cc_pan.pdf){width="\textwidth"}\ ![image](j_p_cc_bao_cmb_pan.pdf){width="\textwidth"}\ ![image](j_r_cc_pan.pdf){width="\textwidth"}\ ![image](j_r_cc_bao_cmb_pan.pdf){width="\textwidth"}\ The reconstruction ================== We now reconstruct the cosmological jerk parameter $j(z)$ from the reconstructed function $D(z)$ and its higher order derivatives ($D'(z)$, $D''(z)$ and $D'''(z)$) using eq. (\[jerk\]). Results for the reconstructed jerk are given in Fig. \[jerkplot\_p\] and \[jerkplot\_r\] respectively. The shaded regions correspond to the $68\%$, $95\%$ and $99.7\%$ confidence levels (CL). The black solid line shows the “best fit” values of the reconstructed function. The plots show that the $\Lambda$CDM model, in most of the combinations, is allowed within a $2\sigma$ error bar.\ However, for the Planck 2018 $H_0$ prior (Fig. \[jerkplot\_p\]), the CC + BAO combination (bottom left) and the CC + BAO + CMB + Pantheon combination (bottom right), the $\Lambda$CDM model is allowed only in $3\sigma$ and not $2\sigma$ for a brief period.\ For the Riess 2019 prior, the CC + BAO combination (bottom left in Fig. \[jerkplot\_r\]), the $\Lambda$CDM model is included only in $3\sigma$ and not in $2\sigma$ for most of the evolution between $z=0$ and $z=2.5$. The bottom right plot of this figure shows that for the CC + BAO + CMB + Pantheon combination, $\Lambda$CDM is not included even in $3\sigma$ close to $z=1.5$.\ The plots for the “best fit value” (black solid lines) of the jerk parameter indicate that $j$ has an evolution, and also, this evolution may well be non-monotonic.\ The approximate fitting functions for the reconstructed jerk parameter are given in Eqs. 
(\[jerk\_result\])-(\[jerk\_result2\]) and (\[jerk\_result3\])-(\[jerk\_result4\]) for two sets of combinations, namely CC + Pantheon and the combination of all the data sets.\ For the CC + Pantheon dataset combination: $$\begin{aligned} \label{jerk_result} &j_{\mbox{\tiny P18}}(z) = -0.99995 - 1.61516 ~z + 10.0773 ~z^2 - 86.4326 ~z^3 + 310.932 ~z^4 - 601.198 ~z^5 + \nonumber \\ &\hspace{6.5cm} + 680.92 ~z^6 - 420.081 ~z^7 + 129.508 ~z^8 - 15.6674 ~z^9,\\ \nonumber \\ &j_{\mbox{\tiny R19}}(z) = -1.08262 - 0.0678615 ~z - 14.3527 ~z^2 + 66.0917 ~z^3 - 154.661 ~z^4 + 183.143 ~z^5 + \nonumber \\ &\hspace{11cm}- 91.3607 ~z^6 + 15.7326 ~z^7, \label{jerk_result2}\end{aligned}$$ and for the CC + Pantheon + BAO + CMB combination: $$\begin{aligned} \label{jerk_result3} &j_{\mbox{\tiny P18}}(z) = -0.99996 - 1.60148 ~z + 20.5976 ~z^2 - 179.3920~ z^3 + 683.416 ~z^4 - 1434.995~ z^5 + \nonumber \\ &\hspace{5cm} + 1769.65 ~z^6 - 1264.88 ~z^7 + 513.19 ~z^8 - 109.89 ~z^9 + 9.666 z^{10},\\ \nonumber \\ &j_{\mbox{\tiny R19}}(z) = -1.04967 - 2.68 ~z + 28.3061 ~ z^2 - 216.726 ~z^3 + 754.522 ~z^4 - 1479.94 ~z^5 + \nonumber \\ &\hspace{4cm} + 1736.11 ~z^6 - 1197.97 ~z^7 + 473.438 ~z^8 - 99.2284 ~z^9 + 8.56616 ~z^{10}. \label{jerk_result4}\end{aligned}$$ ![image](w_p_cc_pan.pdf){width="40.00000%"} ![image](w_p_cc_bao_cmb_pan.pdf){width="40.00000%"}\ ![image](w_r_cc_pan.pdf){width="40.00000%"} ![image](w_r_cc_bao_cmb_pan.pdf){width="40.00000%"} We now relax our pretension of not knowing the Einstein equations. We use the definition of the deceleration parameter $$\label{q} \frac{\dot{H}}{H^2} = -(1+q),$$ in the Einstein equations, $$\begin{aligned} \label{friedmann} 3H^2 &=& 8\pi G \rho ,\\ 2\dot{H} + 3H^2 &=& -8\pi G p ,\end{aligned}$$ where $\rho$ and $p$ are the total energy density and pressure contributions from all the components constituting the Universe. 
Therefore, the effective equation of state parameter is $$\label{w_eff} w_{eff} = \frac{p}{\rho} = -\frac{2\dot{H} + 3H^2 }{3H^2 } = \frac{-1 + 2q}{3}.$$ One can write $j$ in terms of $q$ as $$\label{q_from_j} j(z)= -\left[q(2q+1)+(1+z)\frac{dq}{dz}\right].$$ Using equations (\[jerk\_result\])-(\[jerk\_result4\]) for the two dataset combinations, equation (\[q\_from\_j\]) can be numerically integrated for $q(z)$. For this, one has to assume the initial value of the deceleration parameter at the present epoch ($z=0$), i.e., $q_0$. We have chosen $q_0 \simeq -0.54^{+0.07}_{-0.08}$ from reference [@q0choice] at $z=0$, and using the solutions for $q=q(z)$ in eq. (\[w\_eff\]) we arrive at the effective EoS parameter from the reconstructed jerk. We plot the evolution of the effective equation of state parameter in Fig. \[weffplot\]. The black solid line represents the effective EoS obtained from the reconstructed jerk. The shaded regions show the uncertainty associated with $w_{eff}$ corresponding to the 1$\sigma$ confidence level for the reconstructed jerk parameter (say $j \pm \sigma_j$). The uncertainty in $w_{eff}$ is ascertained by numerically integrating both $j \pm \sigma_j(z)$ alongside $j(z)$ in eq. (\[q\_from\_j\]) starting from the initial value $q_0$.\ The approximate functional forms obtained for the effective equation of state parameter are given in Eqs. 
(\[weff\_result\])-(\[weff\_result2\]) and (\[weff\_result3\])-(\[weff\_result4\]) for two sets of combinations, namely CC + Pantheon and the combination of all the data sets.\ For the CC + Pantheon dataset combination: $$\begin{aligned} \label{weff_result} w_{\mbox{\tiny P18}}(z)&= -0.693358 + 0.718592~ z - 15.8327 ~z^{2.67414} + 38.0581 ~z^3 - 86.5953 ~z^4 + 208.338 ~z^5 + \nonumber \\ &+ 518.941 ~z^{7.00109}- 513.523 ~z^8 + 354.204 ~z^9 - 160.4 ~z^{10} + 42.6695 ~z^{11} - 5.05034 ~z^{12} + \nonumber \\ &- 381.065 ~\left| z \right| ^6,\\ \nonumber \\ w_{\mbox{\tiny R19}}(z)&= -0.692601 + 0.625951 ~z + 3.33012 ~z^{1.81994} - 44.8849 ~z^{3} + 262.946 ~z^{4} - 844.011 ~z^{5} + \nonumber \\ & + 1661.12 ~z^{6} - 2098.3 ~z^7 - 872.5 ~z^9 + 254.023 ~z^{10} - 32.3061 ~z^{11} + 1710.42 ~\left| z \right|^8 , \label{weff_result2}\end{aligned}$$ and, for the CC + Pantheon + BAO + CMB combination: $$\begin{aligned} \label{weff_result3} w_{\mbox{\tiny P18}}(z)&= -0.69368 + 0.669596 ~z - 0.119192 ~z^2 + 1.4935 ~z^3 - 2.32544 ~z^4 - 0.119214 ~z^5 + \nonumber \\ &+1.57755 ~z^6 + 0.581673 ~z^7 - 1.14239 ~z^8 - 1.44675 ~z^9 - 0.07217 ~z^{10} + 1.42208 ~z^{11} + \nonumber \\ &+ 1.17954 ~z^{12} - 0.855902 ~z^{13} - 1.74517 ~z^{14} + 1.6586 ~z^{15} - 0.398383 ~z^{16} ,\\ \nonumber \\ w_{\mbox{\tiny R19}}(z)&= -0.693191 + 0.72215 ~z + 1.38516 ~z^{2.00893} - 11.4426 ~z^3 + 62.3137 ~z^4 - 195.948 ~z^5 + \nonumber \\ &+ 379.509 ~z^6 - 473.887 ~z^7 + 382.535 ~z^8 - 193.629 ~z^9 + 56.1131 ~z^{10} - 7.13564 ~z^{11} \label{weff_result4}\end{aligned}$$ The value of $w_{eff}$ at $z=0$ is $-0.693^{+0.07}_{-0.08}$ (this depends on the chosen value of $q_0$). Considering the value of $\Omega_{m0} = 0.308 \pm 0.012$ from the Planck data release [@planck_cmb], we can calculate the value of $w_{eff,\Lambda0}$ to be $-0.692$ with $\pm 0.027$ uncertainty at $z=0$ using the standard error propagation method. 
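The numerical integration of eq. (\[q\_from\_j\]) described above can be sketched as follows (our illustration, using a simple RK4 stepper rather than the authors' own integrator). Since $j=-1$ is the $\Lambda$CDM value in this convention, the integrated $q(z)$ for constant $j=-1$ must reproduce the analytic flat-$\Lambda$CDM result $q(z) = -1 + \frac{3}{2}\Omega_{m0}(1+z)^3/h^2(z)$:

```python
def q_of_z(j_func, q0, z_max=2.5, n=2500):
    """Integrate eq. (q_from_j), dq/dz = -(j + q(2q+1))/(1+z), from q(0)=q0
    with a fixed-step fourth-order Runge-Kutta scheme."""
    dqdz = lambda z, q: -(j_func(z) + q*(2.0*q + 1.0)) / (1.0 + z)
    dz = z_max / n
    zs, qs = [0.0], [q0]
    z, q = 0.0, q0
    for _ in range(n):
        k1 = dqdz(z, q)
        k2 = dqdz(z + dz/2.0, q + dz*k1/2.0)
        k3 = dqdz(z + dz/2.0, q + dz*k2/2.0)
        k4 = dqdz(z + dz, q + dz*k3)
        q += dz*(k1 + 2.0*k2 + 2.0*k3 + k4)/6.0
        z += dz
        zs.append(z); qs.append(q)
    return zs, qs

def w_eff_from_q(q):
    """Eq. (w_eff): w_eff = (-1 + 2q)/3."""
    return (-1.0 + 2.0*q) / 3.0
```

Repeating the integration with $j \pm \sigma_j$ in place of $j$ gives the upper and lower bounds on $w_{eff}$, as done for the shaded bands of Fig. \[weffplot\].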
For higher redshift $z>1.2$, the reconstructed $w_{eff}$ in the present work clearly shows a sizeable departure from the corresponding $w_{eff,\Lambda}$ values of the $\Lambda$CDM model, which can be obtained as $$w_{eff,\Lambda} = -\frac{1}{1 + \frac{\Omega_{m0}}{1-\Omega_{m0}}(1+z)^3}.$$ It should also be mentioned that the nature of $w_{eff}$ as shown in Fig. \[weffplot\] does not depend critically on small changes in the chosen value of $w_{eff}$ at $z=0$. So we did not include other choices in the figure. Discussion ========== A reconstruction of the cosmological jerk parameter $j$ is attempted in this work. The reconstruction is non-parametric, so $j$ is not biased towards any particular functional form to start with. Also, it does not depend on the theory of gravity, except for the assumption that the universe is described by a four-dimensional spacetime geometry that is spatially flat, homogeneous and isotropic. It deserves mention that although non-parametric reconstruction has been in the literature for quite some time for physical quantities like the equation of state parameter or the quintessence potential, it has hardly been used to reconstruct the jerk parameter.\ Kinematical quantities, which can be defined with the metric alone (namely the scale factor $a$), form the starting quantities of interest in the present case. As the deceleration parameter $q$ is now an observed quantity and is found to evolve, the next higher order derivative, the jerk parameter, is the focus of attention. Surely the parameters made out of even higher derivatives like snap (4th order derivative of $a$), crack (5th order derivative) etc. could well be evolving[@capoz]. But we focus on $j$, which is the evolution of $q$, the highest order derivative that is an observationally measured quantity. 
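As a numerical footnote to the comparison above (our sketch), the $\Lambda$CDM reference curve evaluates to $w_{eff,\Lambda}(0) = -(1-\Omega_{m0}) \simeq -0.692$ for $\Omega_{m0} = 0.308$, and approaches $0$ at high redshift as matter dominates:

```python
def w_eff_lcdm(z, om0=0.308):
    """w_{eff,Lambda} = -1 / (1 + (om0/(1-om0)) (1+z)^3)."""
    return -1.0 / (1.0 + om0 / (1.0 - om0) * (1.0 + z)**3)
```

This is the dashed reference against which the reconstructed $w_{eff}$ is judged; the reported departure at $z>1.2$ is relative to this curve.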
For a parametric reconstruction of $j$, one can still start from the higher order derivatives[@cald; @stacho] and integrate back to $j$, and estimate the parameters, coming in as constants of integration, with the help of data. But this does not form the content of the present work, as already mentioned.\ It is found that for various combinations of datasets, the $\Lambda$CDM model is normally included in the $2\sigma$ confidence level. For some combinations, the model is included in the $3\sigma$ confidence level but not in $2\sigma$. The most significant departure is for the CC + BAO + CMB + Pantheon combination with the Riess 2019 prior for $H_0$, where the $\Lambda$CDM model is not even included in $3\sigma$ for a brief period close to $z=1.5$. The plots also show that the nature of $j$ does not substantially change with the choice of $H_0$ prior.\ The polynomials for the best fit curve for $j$ have been worked out. This is done for two combinations, namely CC + Pantheon, where BAO and CMB Shift data are avoided for reasons discussed in the introduction, and also for the combination of all four data sets, CC + Pantheon + BAO + CMB Shift.\ From the best fit curves for $j$, one can find the deceleration parameter $q(z)$ by numerical integration. The effective equation of state parameter $w_{eff}$ is linear in $q$, so the plots for both of them will look similar. We plot $w_{eff}$ against the redshift $z$ in Fig. \[weffplot\]. For some quoted value $q_0$ with its error bar, the upper and lower bounds of $w_{eff}$ can also be found. The plots reveal that $w_{eff}$ has an evolution distinct from the $\Lambda$CDM model and is not at all monotonically decreasing. 
The plots also indicate that the universe might have had another stint of accelerated expansion in the recent past before entering into a decelerated phase and finally giving way to the present accelerated expansion.\ We started with a reconstruction of a kinematical quantity, namely the jerk parameter $j$, as this offers a way of arriving at the evolution history without any bias towards a particular theory. As a by-product, this reconstruction leads to an evolution history of a physical quantity, the effective equation of state parameter $w_{eff}$. [150]{} S. Perlmutter [*et al*]{}., Astrophys. J. [**517**]{}, 565 (1999). A. Riess [*et al*]{}., Astron. J. [**116**]{}, 1009 (1998). Y.G. Gong and A. Wang, Phys. Rev. D [**73**]{}, 083506 (2006). Y.G. Gong and A. Wang, Phys. Rev. D [**75**]{}, 043520 (2007). O. Luongo, Mod. Phys. Lett. A **19**, 1350080 (2015). D. Rapetti, S. W. Allen, M. A. Amin and R. D. Blandford, Mon. Not. R. Astron. Soc. **375**, 1510 (2007). Z.-X. Zhai, M.-J. Zhang, Z.-S. Zhang, X.-M. Liu and T.-J. Zhang, Phys. Lett. B **727**, 8 (2013). A. Mukherjee and N. Banerjee, Phys. Rev. D **93**, 043002 (2016). A. Mukherjee and N. Banerjee, Class. Quantum Grav. [**34**]{}, 03501 (2017). U. Alam, V. Sahni and A. A. Starobinsky, Mon. Not. R. Astron. Soc. [**344**]{}, 1057 (2003). M. Sahlén, A. R. Liddle, and D. Parkinson, Phys. Rev. D [**72**]{}, 083511 (2005). M. Sahlén, A. R. Liddle, and D. Parkinson, Phys. Rev. D [**75**]{}, 023502 (2007). T. Holsclaw [*et al*]{}., Phys. Rev. D [**82**]{}, 103502 (2010). T. Holsclaw [*et al*]{}., Phys. Rev. D [**84**]{}, 083501 (2011). T. Holsclaw [*et al*]{}., Phys. Rev. Lett. [**105**]{}, 241302 (2010). R. G. Crittenden, G. B. Zhao, L. Pogosian, L. Samushia and X. Zhang, J. Cosmol. Astropart. Phys. [**02**]{}, 048 (2012). R. Nair, S. Jhingan and D. Jain, J. Cosmol. Astropart. Phys. [**01**]{}, 005 (2014). Z. Zhang [*et al*]{}., arXiv: 1902.09794. O. Elgaroy and T. Multamaki, Astron. Astrophys. [**471**]{}, 65 (2007). 
P. Carter, F. Beutler, W. J. Percival, J. DeRose, R. H. Wechsler and C. Zhao, Mon. Not. R. Astron. Soc. [**494**]{}, 2076 (2020). C. Rasmussen and C. Williams, Gaussian Processes for Machine Learning, The MIT Press (2006). D. MacKay, Information Theory, Inference and Learning Algorithms, Cambridge University Press (2003), chapter 45. C. Williams, Prediction with Gaussian processes: From linear regression to linear prediction and beyond, in Learning in Graphical Models, ed. M. I. Jordan, 599-621. The MIT Press (1999). Gaussian Process webpage\ `http://www.gaussianprocess.org/`. T. Holsclaw, U. Alam, B. Sansó, H. Lee, K. Heitmann, S. Habib, and D. Higdon, Phys. Rev. Lett. **105**, 241302 (2010). M. Seikel, C. Clarkson, and M. Smith, J. Cosmol. Astropart. Phys. **06**, 036 (2012). A. Shafieloo, A. G. Kim, and E. V. Linder, Phys. Rev. D **85**, 123530 (2012). S. Yahya, M. Seikel, C. Clarkson, R. Maartens, and M. Smith, Phys. Rev. D **89**, 023503 (2014). S. Santos-da Costa, V. C. Busti, and R. F. Holanda, J. Cosmol. Astropart. Phys. **10**, 061 (2015). T. Yang, Z.-K. Guo, and R.-G. Cai, Phys. Rev. D **91**, 123533 (2015). R.-G. Cai, Z.-K. Guo, and T. Yang, Phys. Rev. D **93**, 043517 (2016). D. Wang, X.-H. Meng, Phys. Rev. D **95**, 023508 (2017). D. Wang, W. Zhang, and X.-H. Meng, Eur. Phys. J. C **79**, 211 (2019). L. Zhou, X. Fu, Z. Peng, J. Chen, Phys. Rev. D **100**, 123539 (2019). Y.-F. Cai, M. Khurshudyan, E. N. Saridakis, Astrophys. J. **888**, 62 (2020). M. Seikel and C. Clarkson, arXiv:1311.6678. R. Jimenez and A. Loeb, Astrophys. J. **573**, 37 (2002). J. Simon, L. Verde, R. Jimenez, Phys. Rev. D **71**, 123001 (2005). D. Stern, R. Jimenez, L. Verde, M. Kamionkowski, S.A. Stanford, J. Cosmol. Astropart. Phys. **2**, 008 (2010). E. Gaztanaga, A. Cabre, L. Hui, Mon. Not. R. Astron. Soc. **399**, 1663 (2009). M. Moresco, A. Cimatti, R. Jimenez, L. Pozzetti *et al.*, J. Cosmol. Astropart. Phys. **08**, 006 (2012). T. Delubac, J. Rich, S. Bailey *et al.*, Astron.
Astrophys. **552**, A96 (2013). J.-J. Geng, R.-Y. Guo, A. Wang, J.-F. Zhang, and X. Zhang, (2018), arXiv:1806.10735. D. M. Scolnic *et al.*, Astrophys. J. **859**, 101 (2018). The numerical data of the full Pantheon SNIa sample are available at:\ `http://dx.doi.org/10.17909/T95Q4X`.\ `https://archive.stsci.edu/prepds/ps1cosmo/index.html`. P. A. R. Ade *et al.*, arXiv: 1502.01590. \[Planck Collaboration\] N. Aghanim *et al.*, arXiv: 1807.06209. \[Planck Collaboration\] A. G. Riess *et al.*, Astrophys. J. **876**, 85 (2019). S. Capozziello, O. Farooq, O. Luongo and B. Ratra, Phys. Rev. D **90**, 044016 (2014). S. Capozziello, R. D’Agostino and O. Luongo, Mon. Not. R. Astron. Soc. [**494**]{}, 2576 (2020). R. R. Caldwell and M. Kamionkowski, J. Cosmol. Astropart. Phys. [**0409**]{}, 009 (2004). M. P. Dabrowski and T. Stachowiak, Annals Phys. [**321**]{}, 771 (2006). [^1]: E-mail: pm14ip011@iiserkol.ac.in [^2]: E-mail: narayan@iiserkol.ac.in
[*Astronomy Letters, 2013, Vol. 39, No. 11, pp. 753–758.*]{} **Orientation Parameters of the Cepheid System in the Galaxy** **V.V. Bobylev** *Pulkovo Astronomical Observatory, St. Petersburg, Russia* *Sobolev Astronomical Institute, St. Petersburg State University, Russia* Based on the distribution of long-period Cepheids, we have redetermined the orientation parameters of their principal plane in the Galaxy. Based on 299 Cepheids with heliocentric distances $r<20$ kpc and pulsation periods $P\geq5^d$, we have found the directions of the three principal axes of the position ellipsoid: $L_1=281.0\pm0.1^\circ,$ $B_1=-1.9\pm0.1^\circ,$ $L_2= 11.0\pm0.7^\circ,$ $B_2=0.2\pm0.1^\circ$ and $L_3=275.9\pm0.7^\circ,$ $B_3=88.1\pm0.1^\circ$. Thus, the line of nodes $l_\Omega=L_3+90^\circ=5.9^\circ$ is very close to the direction to the Galactic center; the Cepheid symmetry plane is inclined to the Galactic plane by approximately $-2^\circ$ in the direction of the first axis ($L_1$). The direction of the line of nodes found from old Cepheids ($P<5^d$) differs significantly and is $l_\Omega=298^\circ$. The elevation of the Sun above the Galactic plane has been estimated from 365 closer stars ($r<4$ kpc), without any constraint on the pulsation period, to be $h_\odot=23\pm5$ pc. INTRODUCTION {#introduction .unnumbered} ============ Cepheids play a very important role in studying the Galactic structure. Their number is growing, and the calibration of the period–luminosity relation needed to determine their distances is being improved. This has become possible because trigonometric parallaxes have been measured for several Cepheids (Fouqué et al. 2007).
The use of infrared photometry has allowed the interstellar extinction to be taken into account much more accurately (Berdnikov et al. 2000). The layer of neutral hydrogen in the Galaxy is known to be warped at large Galactocentric distances (Westerhout 1957; Burton 1988). Hydrogen rises above the Galactic plane in the second quadrant and goes below it in the third and fourth quadrants. The results of a study of this structure based on currently available data on the HI and HII distributions are presented in Kalberla and Dedes (2008) and Cersosimo et al. (2009), respectively. This structure is revealed from the spatial distribution of stars and dust (Drimmel and Spergel 2001), from the distribution of pulsars (Yusifov 2004), from OB stars (Miyamoto and Zhu 1998), and from the 2MASS red giant clump (Momany et al. 2006). The Cepheid system exhibits a similar feature (Fernie 1968; Berdnikov 1987). Several models have been proposed to explain the nature of the Galactic warp: (1) the interaction between the disk and a nonspherical dark matter halo (Sparke and Casertano 1988); (2) the gravitational influence from the closest satellites of the Galaxy (Bailin 2003); (3) the interaction of the disk with the circumgalactic flow of high-velocity hydrogen clouds produced by mass transfer between the Galaxy and the Magellanic Clouds (Olano 2004); (4) the intergalactic flow (López-Corredoira et al. 2002); and (5) the interaction with the intergalactic magnetic field (Battaner et al. 1990). The goal of this paper is to redetermine the orientation of the Cepheid system in our Galaxy. For this purpose, we apply a method that allows this problem to be solved in a general form. Studying the dependence of the derived parameters on the stellar age and distance is also of great importance. DATA {#data .unnumbered} ==== Here, we use Cepheids of the Galaxy’s flat component classified as DCEP, DCEPS, CEP(B), CEP in the GCVS (Kazarovets et al. 2009) as well as CEPS used by other authors.
To determine the distances from the period–luminosity relation, we used the calibration from Fouqué et al. (2007): $\langle M_V\rangle=-1.275-2.678 \log P,$ where the period $P$ is in days. Given $\langle M_V\rangle,$ taking the period-averaged apparent magnitudes $\langle V\rangle$ and extinction $A_V=3.23 E(\langle B\rangle-\langle V\rangle)$ mainly from Acharova et al. (2012) and, for several stars, from Feast and Whitelock (1997), we determine the distance $r$ from the relation $$\displaystyle r=10^{\displaystyle -0.2(\langle M_V\rangle-\langle V\rangle-5+A_V)}. \label{Ceph-02}$$ For a number of Cepheids (without extinction data), we used the distances from the catalog by Berdnikov et al. (2000), which were determined from infrared photometry. With the goals of our study in mind, we concluded that it was better not to use several stars lying more than 2 kpc above the Galactic plane or those located deep in the Galaxy’s inner region. Thus, we imposed two constraints, $$|Z|<2~\hbox {kpc}, \quad X<6~\hbox {kpc}, \label{criterii-xz}$$ satisfied by 465 Cepheids. Their distributions in projections onto the Galactic $XY,$ $XZ,$ and $YZ$ planes are shown in Figs. 1–3. Several calibrations have been proposed for estimating Cepheid ages. Here, we use the calibration by Efremov (2003), $$\log t=8.50-0.65 \log P, \label{AGE-EFREM}$$ derived from Cepheids belonging to open clusters of the Large Magellanic Cloud. THE METHOD {#the-method .unnumbered} ========== We apply the well-known method of determining the symmetry plane of a stellar system with respect to the principal (in our case, Galactic) coordinate system. The basics of this approach were described by Polak (1935), and the technique for estimating the errors in the angles can be found in Parenago (1951) and Pavlovskaya (1971).
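In code, the calibrations above read as follows (a sketch of ours; function names are not from the paper, magnitudes are period-averaged $V$-band values, and distances come out in pc):

```python
import math

def abs_mag(P):
    """P-L calibration of Fouque et al. (2007): <M_V> = -1.275 - 2.678 log P, P in days."""
    return -1.275 - 2.678 * math.log10(P)

def distance_pc(P, V, ebv):
    """Heliocentric distance r = 10^{-0.2(<M_V> - <V> - 5 + A_V)}, A_V = 3.23 E(<B>-<V>)."""
    A_V = 3.23 * ebv
    return 10.0 ** (-0.2 * (abs_mag(P) - V - 5.0 + A_V))

def age_yr(P):
    """Efremov (2003) age calibration: log t = 8.50 - 0.65 log P (t in years)."""
    return 10.0 ** (8.50 - 0.65 * math.log10(P))
```

Note the sense of the calibrations: longer-period Cepheids are intrinsically brighter and younger, so for fixed apparent magnitude a longer period implies a larger distance and a smaller age.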
In the rectangular coordinate system centered on the Sun, the $x$ axis is directed toward the Galactic center, the $y$ axis is in the direction of Galactic rotation $(l=90^\circ, b=0^\circ),$ and the $z$ axis is directed toward the North Galactic Pole. Then, $$\begin{array}{rll} x&=&r\cos l\cos b,\\ y&=&r\sin l\cos b,\\ z&=&r\sin b. \label{ff-1} \end{array}$$ Let $m, n,$ and $k$ be the direction cosines of the pole of the sought-for great circle from the $x, y,$ and $z$ axes. The sought-for symmetry plane of the stellar system is then determined as the plane for which the sum of the squares of the heights, $h=mx+ny+kz,$ is at a minimum: $$\sum h^2=\hbox {min}. \label{ff-2}$$ The sum of the squares $$h^2=x^2m^2+y^2n^2+z^2k^2+2yznk+2xzkm+2xymn \label{ff-3}$$ can be designated as $2P=\sum h^2.$ As a result, the problem is reduced to searching for the minimum of the function $P:$ $$2P=am^2+bn^2+ck^2+2fnk+2ekm+2dmn, \label{ff-4}$$ where the second-order moments of the coordinates $a=[xx],$ $b=[yy],$ $c=[zz],$ $f=[yz],$ $e=[xz],$ $d=[xy],$ written via the Gauss brackets, are the components of a symmetric tensor: $$\left(\matrix { a& d & e\cr d& b & f\cr e& f & c\cr }\right), \label{ff-5}$$ whose eigenvalues $\lambda_{1,2,3}$ are found from the solution of the secular equation $$\left|\matrix { a-\lambda& d& e\cr d & b-\lambda & f\cr e & f&c-\lambda\cr } \right|=0, \label{ff-7}$$ and the directions of the principal axes $L_{1,2,3}$ and $B_{1,2,3}$ are found from the relations $$\renewcommand{\arraystretch}{2.2} \begin{array}{lll} \displaystyle \tan L_{1,2,3}={{ef-(c-\lambda)d}\over {(b-\lambda)(c-\lambda)-f^2}},\\ \displaystyle \tan B_{1,2,3}={{(b-\lambda)e-df}\over{f^2-(b-\lambda)(c-\lambda)}}\cos L_{1,2,3}. 
\label{ff-42} \end{array}$$ The errors in $L_{1,2,3}$ and $B_{1,2,3}$ are estimated according to the following scheme: $$\renewcommand{\arraystretch}{2.2} \begin{array}{lll} \displaystyle \varepsilon (L_2)= \varepsilon (L_3)= {{\varepsilon (\overline {xy})}\over{a-b}},\\ \displaystyle \varepsilon (B_2)= \varepsilon (\varphi)={{\varepsilon (\overline {xz})}\over{a-c}},\\ \displaystyle \varepsilon (B_3)= \varepsilon (\psi)= {{\varepsilon (\overline {yz})}\over{b-c}},\\ \displaystyle \varepsilon^2 (L_1)={\varphi^2 \varepsilon^2 (\psi)+\psi^2 \varepsilon^2 (\varphi)\over{(\varphi^2+\psi^2)^2}},\\ \displaystyle \varepsilon^2 (B_1)= {\sin^2 L_1 \varepsilon^2 (\psi)+\cos^2 L_1 \varepsilon^2 (L_1)\over{(\sin^2 L_1+\psi^2)^2}}, \label{ff-65} \end{array}$$ where $$\varphi=\cot B_1 \cos L_1, \quad \psi=\cot B_1 \sin L_1.$$ The three quantities $\overline {x^2y^2}$, $\overline {x^2z^2}$ and $\overline {y^2z^2}$ should be calculated in advance. Then, $$\renewcommand{\arraystretch}{1.6} \begin{array}{lll} \displaystyle \varepsilon^2 (\overline {xy})= (\overline{x^2y^2}-d^2)/n, \\ \displaystyle \varepsilon^2 (\overline {xz})= (\overline {x^2z^2}-e^2)/n, \\ \displaystyle \varepsilon^2 (\overline {yz})= (\overline {y^2z^2}-f^2)/n, \label{ff-73} \end{array}$$ where $n$ is the number of stars. Thus, the algorithm for solving the problem consists of setting up the function $2P$ (7), finding the roots of the secular equation (9) (whose specific values are of no interest to us), and estimating the directions of the principal axes of the position ellipsoid from Eqs. (10)–(12). In the classical case, the problem was solved for a unit sphere ($r=1$), but here we propose to use the distances (which act as the weights). ![ Distribution of Cepheids in projection onto the Galactic $XY$ plane. The Sun is at the intersection of the dotted lines; the dashed line indicates the circle of radius $R_0=8$ kpc around the Galactic center.
The filled circles are long-period Cepheids $(P\geq5^d);$ the small gray circles are short-period Cepheids $(P<5^d);$ the circles mark the three Cepheids with large $Z$ discussed in the text. []{data-label="f1"}](f1.eps){width="90.00000%"} ![ Distribution of Cepheids in projection onto the Galactic $XZ$ plane. The Galactic center is on the left (at $X=8$ kpc). The notation is the same as that in Fig. 1.[]{data-label="f2"}](f2.eps){width="70.00000%"} ![ Distribution of Cepheids in projection onto the Galactic $YZ$ plane. The slope of the solid line is $2^\circ$. The notation is the same as that in Fig. 1.[]{data-label="f3"}](f3.eps){width="70.00000%"} RESULTS {#results .unnumbered} ======= Based on the entire sample of Cepheids (465 stars), whose distribution in the Galaxy is shown in Figs. 1–3, we found the following directions of the principal axes of the position ellipsoid: $$\renewcommand{\arraystretch}{1.2} \matrix { L_1=278.96\pm0.05^\circ, & B_1=-1.33\pm0.00^\circ, \cr L_2=~~8.93\pm0.43^\circ, & B_2=~1.41\pm0.04^\circ, \cr L_3=232.37\pm0.43^\circ, & B_3=88.06\pm0.07^\circ. \cr } \label{rezult-1}$$ The mean age for this sample of Cepheids is ${\overline t}=98$ Myr. According to solution (13), the slope of the solid line in Fig. 3 corresponds to $(90^\circ-B_3).$ For comparison, we present the results of our calculations for the distribution on a unit sphere $(r=1)$ obtained using the same stars: $$\matrix { L_1=283.32\pm0.02^\circ, & B_1=-0.66\pm0.00^\circ, \cr L_2=~13.32\pm0.67^\circ, & B_2=-0.37\pm0.03^\circ, \cr L_3=312.95\pm0.67^\circ, & B_3=89.24\pm0.03^\circ, \cr } \label{rezult-2}$$ which differ significantly from the parameters of solution (13). Although the errors in the unknowns $B_{1,2,3}$ in solution (14) are smaller, the angles $B_{1,2,3}$ are also considerably smaller than those in solution (13). However, it can be clearly seen from Fig. 
3 that the slope of $2^\circ$ fits the data better than the $\approx$0.5$^\circ$ that follows from solution (14). This discrepancy decreases if the stars are considered in narrow distance ranges. Below, we consider only the results of the solutions with distances. Distant stars make a major contribution to solution (13) (they have the largest weights). The solutions obtained from distant stars ($3<r<20$ kpc) with different pulsation periods (ages) are of interest. For the youngest stars, $$\matrix { L_1=279.6\pm0.3^\circ, & B_1=-2.1\pm0.1^\circ, \cr L_2=~~9.9\pm1.3^\circ, & B_2=~0.6\pm0.1^\circ, \cr L_3=262.8\pm1.3^\circ, & B_3=87.8\pm0.3^\circ, \cr &P\geq9^d, \cr &n_\star=63, \cr &{\overline t}=54~\hbox {Myr}; \cr } \label{rezult-3}$$ for middle-age stars, $$\matrix { L_1=284.5\pm0.2^\circ, & B_1=-1.4\pm0.1^\circ, \cr L_2=~14.5\pm2.2^\circ, & B_2=~0.3\pm0.1^\circ, \cr L_3=272.3\pm2.2^\circ, & B_3=88.5\pm0.2^\circ, \cr &5^d\leq P<9^d, \cr &n_\star=51, \cr &{\overline t}=96~\hbox {Myr}; \cr } \label{rezult-4}$$ for the oldest stars: $$\matrix { L_1=261.6\pm0.4^\circ, & B_1=-0.9\pm0.0^\circ, \cr L_2=351.6\pm1.0^\circ, & B_2=~3.2\pm0.4^\circ, \cr L_3=188.0\pm1.0^\circ, & B_3=86.7\pm0.1^\circ, \cr &P<5^d, \cr &n_\star=63, \cr &{\overline t}=133~\hbox {Myr}. \cr } \label{rezult-5}$$ In contrast to the results of solutions (15)–(16), which, on the whole, agree with each other, the oldest Cepheids give a significantly different orientation of the line of nodes, $l_\Omega=L_3+90^\circ=278^\circ.$ This is no surprise: old Cepheids have had time to recede from their birthplaces, having completed more than half a revolution around the Galactic center; i.e., they were formed in a different part of the Galaxy (for example, with respect to the Magellanic Clouds). The surprising thing is that a slope of $\approx3^\circ$ is present in their distribution (the angles $B_2$ and $B_3$).
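The algorithm (moment tensor, secular equation, principal-axis directions) is easy to verify numerically. The sketch below is our own illustration with synthetic coordinates; it uses a standard eigen-decomposition in place of Eqs. (9) and (10), and recovers the inclination of a plane tilted by $2^\circ$:

```python
import numpy as np

def principal_axes(x, y, z):
    """Galactic longitudes/latitudes (degrees) of the principal axes of the
    position ellipsoid, obtained by diagonalizing the second-moment tensor
    (the matrix of Eq. (8)); axes are returned from largest to smallest moment.
    The sign of each eigenvector (hence L vs. L+180 deg) is arbitrary."""
    M = np.array([[np.sum(x * x), np.sum(x * y), np.sum(x * z)],
                  [np.sum(x * y), np.sum(y * y), np.sum(y * z)],
                  [np.sum(x * z), np.sum(y * z), np.sum(z * z)]])
    lam, vec = np.linalg.eigh(M)          # eigenvalues in ascending order
    axes = []
    for i in (2, 1, 0):                   # largest moment first
        m, n, k = vec[:, i]               # unit direction cosines
        L = np.degrees(np.arctan2(n, m)) % 360.0
        B = np.degrees(np.arcsin(np.clip(k, -1.0, 1.0)))
        axes.append((L, B))
    return axes

# Toy check: points lying in a plane inclined by 2 degrees to the x-y plane.
xg, yg = np.meshgrid(np.linspace(-10, 10, 21), np.linspace(-5, 5, 11))
x, y = xg.ravel(), yg.ravel()
z = x * np.tan(np.radians(2.0))
(L1, B1), (L2, B2), (L3, B3) = principal_axes(x, y, z)
```

For such a plane, the third (shortest) axis is its normal, so $|B_3| = 88^\circ$, and the first axis lies in the plane with $|B_1| = 2^\circ$, mirroring the structure of solutions (13)–(17).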
This finding may imply that the disk warp can be a long-lived structure, at least older than $\approx150$ Myr. The roots of the secular equation (9) describe the shape of the ellipsoid but provide no information about the coordinates of its center. The shift along the $z$ coordinate, which characterizes the elevation of the Sun above the Galactic plane, $h_\odot=-{\overline z},$ is most interesting. Three stars with very large $Z$ are marked in Figs. 1–3. These are two long-period Cepheids, DR Cep ($P =19.8^d, Z = 1.9$ kpc) and FQ Lac ($P =11.3^d, Z =-1.3$ kpc), and one short-period Cepheid, IT Lac ($P = 2.6^d, Z =-1.0$ kpc). Unfortunately, no radial-velocity measurements are as yet available for them. Their proper motions from the UCAC4 catalog (Zacharias et al. 2013) are very unreliable. This is because at such large distances ($r\approx15$ kpc), a typical error $e_\mu\approx2$ mas yr$^{-1}$ will contribute $4.74 r e_\mu\approx140$ km s$^{-1}$ to the space velocity, which is very large. Therefore, it is not yet possible to judge the character of their velocities. These stars most likely have peculiar velocities. For this reason, we decided not to use these stars to determine the orientation parameters. Since the results of solutions (15)–(16) do not differ greatly, the period intervals can be combined. Based on 299 stars $(r<20$ kpc) with pulsation periods $P\geq5^d$ (${\overline t}=77$ Myr), we have now found $$\matrix { L_1=281.0\pm0.1^\circ, & B_1=-1.9\pm0.1^\circ, \cr L_2=~11.0\pm0.7^\circ, & B_2=~0.2\pm0.1^\circ, \cr L_3=275.9\pm0.7^\circ, & B_3=88.1\pm0.2^\circ, \cr } \label{rezult-66}$$ so that the line of nodes $l_\Omega=L_3+90^\circ=5.9^\circ$ is close to the direction to the Galactic center. The parameters (18) agree satisfactorily with the results of analyzing the layer of neutral hydrogen (Kalberla and Dedes 2008).
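The two small numerical relations used above, the proper-motion contribution to the space velocity and the Sun's elevation from the mean height, can be written as one-line helpers (our own illustrative code, with hypothetical function names):

```python
def vt_error_kms(r_kpc, e_mu_mas_yr):
    """Tangential-velocity error from a proper-motion error:
    v_t [km/s] = 4.74 * r [kpc] * e_mu [mas/yr]."""
    return 4.74 * r_kpc * e_mu_mas_yr

def sun_elevation(z_values):
    """Elevation of the Sun above the Galactic plane, h_sun = -mean(z),
    in the same units as the input heights z."""
    return -sum(z_values) / len(z_values)
```

For $r \approx 15$ kpc and $e_\mu \approx 2$ mas yr$^{-1}$ this gives $\approx 142$ km s$^{-1}$, the $\approx 140$ km s$^{-1}$ quoted in the text.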
It should be noted that the distances to hydrogen clouds are estimated from the radial velocities (kinematic distances) with a low accuracy; in our case, however, the accuracy is higher, because the distance error is, on average, 10–15%. Therefore, our results are of indubitable interest. Based on 163 stars ($r<20$ kpc) with pulsation periods $P<5^d$ (${\overline t}=138$ Myr), we found $$\matrix { L_1=249.5\pm0.4^\circ, & B_1=-2.1\pm0.1^\circ, \cr L_2=339.4\pm1.9^\circ, & B_2=~1.9\pm0.2^\circ, \cr L_3=208.1\pm1.9^\circ, & B_3=87.2\pm0.1^\circ, \cr } \label{rezult-77}$$ the direction of the line of nodes is $l_\Omega=298^\circ$. The elevation of the Sun above the Galactic plane $h_\odot$ depends on the heliocentric distance, which is clearly seen from Figs. 2, 3. Our calculations show that a sample with a radius of 4–5 kpc is optimal (the error in $h_\odot$ is smallest). For example, based on a sample of the closest (71 stars) Cepheids from the range $r\leq1$ kpc (with the rejection according to the $3\sigma$ criterion), we found $$h_\odot=30\pm9~\hbox {pc}, \label{rezult-7}$$ but the influence of nonuniformities in the distribution of stars is great here. Based on 365 stars from the range $r\leq4$ kpc, we found $$h_\odot=23\pm5~\hbox {pc}, \label{rezult-8}$$ while based on the remaining 100 stars from the range 4 kpc$<r<20$ kpc, we found $$h_\odot=45\pm39~\hbox {pc}. \label{rezult-9}$$ DISCUSSION {#discussion .unnumbered} ========== Fernie (1968) estimated the direction of the line of nodes for the Cepheid system, $7^\circ\pm4^\circ,$ the inclination $-0.8^\circ\pm0.2^\circ$ in the direction $l=277^\circ,$ and the elevation of the Sun above the Galactic plane, $h_\odot=45\pm15$ pc from 328 stars. Since the sample of stars in Fernie (1968) probably contained quite a few old Cepheids, the inclination turned out to be small, and the determination of $h_\odot$ was affected by distant Cepheids. 
Berdnikov (1987) found $h_\odot=26\pm6$ pc from 363 stars, in good agreement with our result (21). Our value of $h_\odot$ (21) is also in good agreement with $h_\odot=17\pm3$ pc obtained by Joshi (2007) from young open star clusters and OB stars. A discussion of the results of analyzing the warp of the hydrogen layer obtained from neutral, ionized, and molecular hydrogen can be found, for example, in Cersosimo et al. (2009). Hydrogen reaches its maximum elevations above the Galactic plane, $z\approx+300$ to $+400$ pc (at $R\approx12$ kpc), in the first and second quadrants, and $z\approx-150$ to $-200$ pc in the third and fourth quadrants (i.e., the warp is nonlinear). It can be seen from Fig. 3 that even after the elimination of the above three stars, the dispersion of the positions is larger at positive $y$ (on the left in Fig. 3); on average, the heights of the stars are larger than those of hydrogen. At positive $y,$ there are also Cepheids with negative $z.$ This can be related to their ages; having been formed $\approx$50 Myr ago, they could have been displaced below the plane in that time. To confirm this assumption, it is necessary to analyze the space velocities of Cepheids, which we plan to do in another paper. On the whole, we can conclude that the connection of Cepheids with the Galactic warp is beyond doubt. CONCLUSIONS {#conclusions .unnumbered} =========== Based on the distribution of Cepheids, we redetermined the orientation parameters of their principal plane in the Galaxy.
Based on 299 stars at heliocentric distances $r<20$ kpc with pulsation periods $P\geq5^d$, we found the directions of the three principal axes of the position ellipsoid (solution (18)): $L_1=281.0\pm0.1^\circ,$ $B_1=-1.9\pm0.1^\circ,$ $L_2= 11.0\pm0.7^\circ,$ $B_2=0.2\pm0.1^\circ$ and $L_3=275.9\pm0.7^\circ,$ $B_3=88.1\pm0.1^\circ.$ The line of nodes $l_\Omega=L_3+90^\circ=5.9^\circ$ is very close to the direction to the Galactic center; the Cepheid symmetry plane is inclined to the Galactic plane by approximately $-2^\circ$ in the direction of the first axis $(L_1).$ The oldest Cepheids (163 stars at $r<20$ kpc with pulsation periods $P<5^d)$ give a significantly different orientation of the line of nodes (solution (19)): $L_1=249.5\pm0.4^\circ, B_1=-2.1\pm0.1^\circ,$ $L_2=339.4\pm1.9^\circ, B_2=1.9\pm0.2^\circ$ and $L_3=208.1\pm1.9^\circ, B_3=87.2\pm0.1^\circ.$ The direction of the line of nodes, $l_\Omega=298^\circ$, differs by approximately $65^\circ$ from that obtained from a sample of younger Cepheids. The Sun’s elevation above the Galactic plane was estimated from 365 stars at $r<4$ kpc without any constraint on the pulsation period $P$ to be $h_\odot=23\pm5$ pc. In the future, the method considered here can be useful for analyzing large amounts of data, for example, those from the GAIA space experiment or masers with their trigonometric parallaxes measured by means of VLBI. ACKNOWLEDGMENTS {#acknowledgments .unnumbered} --------------- We are grateful to the referee for helpful remarks that contributed to an improvement of the paper. This work was supported by the “Nonstationary Phenomena in Objects of the Universe” Program of the Presidium of the Russian Academy of Sciences and the “Multiwavelength Astrophysical Research” grant no. NSh–16245.2012.2 from the President of the Russian Federation. REFERENCES {#references .unnumbered} ========== 1. A.A. Acharova, Yu.N. Mishurov, and V.V. Kovtyukh, Mon. Not. R. Astron. Soc. 420, 1590 (2012). 2. J. Bailin, Astrophys.
J. 583, L79 (2003). 3. E. Battaner, E. Florido, and M.L. Sanchez-Saavedra, Astron. Astrophys. 236, 1 (1990). 4. L.N. Berdnikov, Astron. Lett. 13, 45 (1987). 5. L.N. Berdnikov, A.K. Dambis, and O.V. Vozyakova, Astron. Astrophys. Suppl. Ser. 143, 211 (2000). 6. W.B. Burton, in Galactic and Extragalactic Radio Astronomy, Ed. by G. Verschuur and K. Kellerman (Springer, New York, 1988), p. 295. 7. J.C. Cersosimo, S. Mader, N.S. Figueroa, et al., Astrophys. J. 699, 469 (2009). 8. R. Drimmel and D.N. Spergel, Astrophys. J. 556, 181 (2001). 9. Yu.N. Efremov, Astron. Rep. 47, 1000 (2003). 10. M. Feast and P. Whitelock, Mon. Not. R. Astron. Soc. 291, 683 (1997). 11. J.D. Fernie, Astron. J. 73, 995 (1968). 12. P. Fouqué, P. Arriagada, J. Storm, et al., Astron. Astrophys. 476, 73 (2007). 13. Y.C. Joshi, Mon. Not. R. Astron. Soc. 378, 768 (2007). 14. P.M.W. Kalberla and L. Dedes, Astron. Astrophys. 487, 951 (2008). 15. E.V. Kazarovets, N.N. Samus, O.V. Durlevich, et al., Astron. Rep. 53, 1013 (2009). 16. M. López-Corredoira, J. Betancort-Rijo, and J. Beckman, Astron. Astrophys. 386, 169 (2002). 17. M. Miyamoto and Z. Zhu, Astron. J. 115, 1483 (1998). 18. Y. Momany, S. Zaggia, G. Gilmore, et al., Astron. Astrophys. 451, 515 (2006). 19. C.A. Olano, Astron. Astrophys. 423, 895 (2004). 20. P.P. Parenago, Trudy Gos. Astron. Inst. Shternberga 20, 26 (1951). 21. E.D. Pavlovskaya, in Practical Works in Stellar Astronomy, Ed. by P.G. Kulikovskii (Nauka, Moscow, 1971), p. 162 \[in Russian\]. 22. I.F. Polak, Introduction to Stellar Astronomy (ONTI, Moscow, Leningrad, 1935) \[in Russian\]. 23. L. Sparke and S. Casertano, Mon. Not. R. Astron. Soc. 234, 873 (1988). 24. G. Westerhout, Bull. Astron. Inst. Netherlands 13, 201 (1957). 25. I. Yusifov, astro-ph/0405517 (2004). 26. N. Zacharias, C. Finch, T. Girard, et al., Astron. J. 145, 44 (2013).
--- author: - 'S. N. Kempkes' - 'M. R. Slot' - 'J. J. van den Broeke' - 'P. Capiod' - 'W. A. Benalcazar' - 'D. Vanmaekelbergh' - 'D. Bercioux' - 'I. Swart' - 'C. Morais Smith' title: | Robust zero-energy modes\ in an electronic higher-order topological insulator:\ the dimerized Kagome lattice --- [^1] [^2] **Quantum simulators are an essential tool for understanding complex quantum materials. Platforms based on ultracold atoms in optical lattices and photonic devices have led the field so far, but electronic quantum simulators are proving to be equally relevant. Simulating topological states of matter is one of the holy grails in the field. Here, we experimentally realize a higher-order electronic topological insulator (HOTI). Specifically, we create a dimerized Kagome lattice by manipulating carbon-monoxide (CO) molecules on a Cu(111) surface using a scanning tunneling microscope (STM). We engineer alternating weak and strong bonds to show that a topological state emerges at the corner of the non-trivial configuration, while it is absent in the trivial one. In contrast to conventional topological insulators (TIs), the topological state has two dimensions fewer than the bulk, denoting a HOTI. The corner mode is protected by a generalized chiral symmetry, which leads to a particular robustness against perturbations. Our versatile approach to quantum simulation with artificial lattices holds promise for revealing unexpected quantum phases of matter.** In a visionary colloquium nearly sixty years ago, Feynman proposed to construct so-called quantum simulators, systems that can be engineered and manipulated at will, with the aim of verifying model Hamiltonians and understanding more complex or elusive quantum systems [@Feynman1960; @georgescu2014quantum].
It took forty years for the field to properly take off, with the simulation of the Bose-Hubbard model and the superfluid/Mott-insulator transition in a two-dimensional (2D) optical lattice loaded with $^{87}$Rb atoms [@greiner2002quantum]. Since then, triangular, honeycomb, Kagome and other types of optical lattices have been loaded with bosons and/or fermions, and many interesting quantum states of matter have been simulated [@bloch2012quantum]. Quantum simulators were later also realized in, among other platforms, trapped-ion [@blatt2012quantum] and photonic [@aspuru2012photonic] systems. On the other hand, progress on electronic quantum simulators was achieved only very recently. A few years ago, the first artificial electronic lattice was built by positioning CO molecules on a Cu(111) surface, confining the surface-state electrons to a honeycomb lattice [@Manoharan]. The technique was inspired by the pioneering construction of quantum corrals using STM-based manipulation of adatoms [@Crommie]. This was followed by other electronic and spin lattices constructed by atomic manipulation in the STM, such as atomic spin chains [@Hirjibehedin; @Khajetoorians2011], the Lieb lattice with $s$-orbitals [@Slot2017; @Drost2017] and *p*-orbitals [@Slot2018], the quasi-crystalline Penrose tiling [@Collins2017], and the Sierpiński gasket with a fractional dimension [@Kempkes]. Besides manipulating the geometry and the dimensionality, it would be desirable to engineer and control *topological properties* [@Moore_2010] in electronic quantum simulators. Topological insulators, superconductors, and semimetals have attracted enormous attention during the last decades, and their potential use in quantum computers has generated intense interest in these systems [@NobelHaldane]. In their best-known form, TIs are materials that are insulating in the bulk and host topologically protected states in one dimension lower than the bulk [@HasanKane10].
A first example of engineered electronic TIs established by controlled fabrication on the nanoscale is the one-dimensional (1D) Su-Schrieffer-Heeger (SSH) chain [@Drost2017]. However, recently it was proposed that another class of topological systems exists, the so-called HOTIs, in which the topological states emerge in at least two dimensions lower than the bulk[@BenalcazarScience]. In this way, 0D corner (1D hinge) states were predicted and subsequently observed in a 2D [@noh2018] (3D [@SchindlerBismuth]) TI. At the moment, HOTIs have been experimentally realized in photonic [@noh2018], phononic [@serragarcia2018], topolectrical-circuit [@Imhof], microwave-circuit [@peterson2018], and acoustic [@xue2018; @Ni2018] systems. ![ **Design of the dimerized Kagome lattice.** (a) Schematic finite-size tight-binding representation of the Kagome lattice. The unit cell is indicated by a grey hexagon. The model takes both NN hopping ($t_a$ and $t_b$) and NNN hopping ($t_{nnn}$) into account. (b) Band structure for the bulk of the lattice shown in (a), calculated using a tight-binding model with $t_b=75\,$meV, $t_a=0.38 t_b$, $t_{nnn}=0.25 t_b$ and onsite energy $\epsilon=0.075\,$eV. (c-d) Configuration of CO molecules (black) on a Cu(111) surface (grey background) to establish artificial-lattice sites (blue/yellow/green) in a non-trivial ($t_a < t_b$) and trivial ($t_a > t_b$) dimerized Kagome geometry, respectively. Smaller (larger) hopping is indicated by dashed (solid) lines. (e-f) Constant-current STM images of the realized non-trivial and trivial Kagome lattice. Imaging parameters: $I = 0.3\,$nA and $I = 0.1\,$nA, respectively, and $V = 100\,$mV. Scale bars: 3 nm. 
(g-h) Normalized differential-conductance spectra (solid lines) and the LDOS calculated using the tight-binding model (dashed lines) for the bulk (green), edge (yellow) and corner (blue) sites of the non-trivial and trivial dimerized Kagome lattice, respectively.[]{data-label="fig1"}](Fig1.png){width="60.00000%"} Here, we present the artificial realization of an *electronic* HOTI. Specifically, we create and characterize a dimerized Kagome lattice [@EzawaPRL]. This lattice, shown in Fig. \[fig1\]a, is described by three sites in a unit cell (grey hexagon) with a nearest-neighbor (NN) intracell hopping $t_a$ and intercell hopping $t_b$ (red and blue lines, respectively). The next-nearest-neighbor (NNN) hopping $t_{nnn}$ is indicated in purple only at the top of the lattice (for clarity). In our finite triangular lattice, the corner sites are represented by a blue color, whereas the edge sites are indicated in yellow and the bulk sites in green. The Bloch Hamiltonian (without NNN-hopping) of this model reads $$\begin{aligned} h_K({\bf k})&= - \left( \begin{array}{ccc} 0 & t_a +t_b e^{i {\bf k \cdot a_2}} & t_a +t_b e^{-i {\bf k \cdot a_3}} \\ t_a+t_b e^{-i {\bf k \cdot a_2}} & 0 & t_a+t_b e^{-i {\bf k \cdot a_1}} \\ t_a+t_b e^{i {\bf k \cdot a_3}} & t_a+t_b e^{i {\bf k \cdot a_1}} & 0 \\ \end{array} \right),\end{aligned}$$ where $\bf k$ is the crystal momentum, and ${\bf a_1}=(1,0)$ and ${\bf a_{2,3}}=(\frac{1}{2},\pm \frac{\sqrt{3}}{2})$ are the lattice vectors. The full tight-binding Hamiltonian that describes the experimentally realized lattice is given in the SI. The bulk band structure is shown in Fig. \[fig1\]b. The regular Kagome lattice exhibits a spectrum with a Dirac cone and a flat band. The alternating hopping strengths in the dimerized Kagome lattice $t_a \neq t_b$ open a band gap between the bottom and middle band at the K-point, as displayed for realistic values $t_a = 28.5\,$meV and $t_b = 75\,$meV. 
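The gap opening can be checked directly from the Bloch Hamiltonian above. The following sketch is our own illustration (NNN hopping and the on-site energy are omitted, the lattice constant is set to 1, and the K point is taken at $(4\pi/3, 0)$); it diagonalizes $h_K$ for the regular and dimerized cases using the hopping values quoted in the text:

```python
import numpy as np

# Lattice vectors as defined in the text (lattice constant = 1).
a1 = np.array([1.0, 0.0])
a2 = np.array([0.5, np.sqrt(3.0) / 2.0])
a3 = np.array([0.5, -np.sqrt(3.0) / 2.0])

def h_K(k, ta, tb):
    """NN Bloch Hamiltonian h_K(k) of the dimerized Kagome lattice (meV)."""
    f12 = ta + tb * np.exp(1j * k @ a2)
    f13 = ta + tb * np.exp(-1j * k @ a3)
    f23 = ta + tb * np.exp(-1j * k @ a1)
    return -np.array([[0.0, f12, f13],
                      [np.conj(f12), 0.0, f23],
                      [np.conj(f13), np.conj(f23), 0.0]])

K = np.array([4.0 * np.pi / 3.0, 0.0])             # corner of the Brillouin zone
e_undim = np.linalg.eigvalsh(h_K(K, 75.0, 75.0))   # regular Kagome, t_a = t_b
e_dim = np.linalg.eigvalsh(h_K(K, 28.5, 75.0))     # dimerized, t_a = 0.38 t_b
gap = e_dim[1] - e_dim[0]   # gap between bottom and middle band at K
```

With $t_a = t_b$ the two lowest bands are degenerate at K (the Dirac point); switching to $t_a = 0.38\,t_b$ opens a gap of order $100\,$meV between the bottom and middle bands, as in Fig. 1b.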
Note that the - otherwise flat - top band is dispersive due to a non-negligible NNN hopping $t_{nnn} = 18.8\,$meV (see Methods and SI). For the finite-size lattice, we distinguish two cases. If $t_a > t_b$, the lattice configuration is topologically trivial and if the values of the hopping amplitudes are switched, *i.e.* $t_a < t_b$, the lattice is topologically non-trivial. In the topological phase, the weakly coupled edge and corner sites are predicted to accommodate edge states and zero-energy corner modes [@EzawaPRL; @benalcazar2018charge], respectively. The edge of the lattice is similar to a one-dimensional SSH model and exhibits gapped bands. In the gap of both the bulk and the edge, three symmetry-protected zero-energy modes arise, which are localized at each of the corners of the lattice. Usually, the protection of zero-energy topological states is possible in insulators or superconductors that exhibit a symmetric spectrum. In topological superconductors, the particle-hole symmetry enforces this spectral symmetry, and pins the energies of Majorana bound states exactly at zero energy in its Bogoliubov-de Gennes spectrum. In insulators, bipartite lattices provide such spectral symmetry. The bipartite character of a crystal, often known as chiral symmetry, protects an integer number of zero-energy states in 1D systems such as the SSH model (see SI). Recently, it was shown that when additional crystalline symmetries are present, bipartite lattices can also protect zero-energy corner states in 2D HOTIs [@noh2018]. The Kagome lattice, however, is *not* a bipartite lattice, but consists of three sublattices A, B, and C (see unit cell in Fig. 1a). This poses a conundrum because this lattice does exhibit higher-order zero-energy states at $60^\circ$ corners in the topological configuration, despite the absence of the chiral symmetry associated with bipartite lattices. 
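The role of the bipartite (chiral) symmetry invoked above can be illustrated with the 1D SSH model itself. The chain below uses illustrative hopping values, not the experimental ones, and shows that two near-zero end modes appear only when the chain terminates in weak bonds:

```python
import numpy as np

def ssh_chain(n_sites, t_first, t_second):
    """Open SSH chain; bonds alternate t_first, t_second, t_first, ...
    so for even n_sites both ends terminate in a t_first bond."""
    H = np.zeros((n_sites, n_sites))
    for i in range(n_sites - 1):
        t = t_first if i % 2 == 0 else t_second
        H[i, i + 1] = H[i + 1, i] = -t
    return H

# Weak bonds at the ends (t_first < t_second): topological, two ~zero-energy end modes
E_topo = np.linalg.eigvalsh(ssh_chain(40, t_first=0.3, t_second=1.0))
# Strong bonds at the ends: trivial, gapped spectrum without zero modes
E_triv = np.linalg.eigvalsh(ssh_chain(40, t_first=1.0, t_second=0.3))

n_zero_topo = int(np.sum(np.abs(E_topo) < 1e-6))
n_zero_triv = int(np.sum(np.abs(E_triv) < 1e-6))
print(n_zero_topo, n_zero_triv)  # 2 versus 0
```

The two end modes are split only by an exponentially small hybridization, here far below the numerical threshold.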
The protection of these zero-energy corner states can be explained by a generalized chiral symmetry, which relies on the fact that the Kagome lattice is tripartite [@Ni2018] (see SI). *Lattice realization.* Now we turn to the experimental realization of the electronic dimerized Kagome lattice. Figs. 1c-d present the configuration of CO molecules (black) on Cu(111) (grey background) used to constrain the surface-state electrons to the non-trivial and trivial lattice geometry, respectively. Since the CO molecules act as a repulsive barrier to the 2D electron gas at the Cu(111) surface, they are positioned to form the anti-lattice of the Kagome. The distance between the artificial-lattice sites of the Kagome lattice is chosen to be $3\sqrt{3}a \approx 13.3\,$Å, where $a \approx 2.56\,$Å denotes the Cu(111) NN distance. Strong hopping (solid lines) is established by a wide connection between the sites, while the hopping is weaker (dashed lines) for a narrow connection, implemented by an increased number of CO adsorbates. The experimental realization of the non-trivial and trivial dimerized Kagome lattice is shown in the constant-current STM images in Figs. \[fig1\]e-f. As a guide to the eye, the artificial-lattice sites and the NN hopping are indicated. Differential-conductance spectra were acquired above the bulk (green), edge (yellow) and corner (blue) artificial-lattice sites and normalized by the average spectrum taken on clean Cu(111) [@Manoharan]. We first discuss the spectra acquired above the non-trivial lattice (see Fig. 1g, solid lines). The bulk spectrum (green) shows a peak around a bias voltage of $V = -150\,$mV, which corresponds to the lowest bulk band, and a more pronounced peak around $V = +200\,$mV, which can be assigned to the middle and top bulk band. The edge spectrum (yellow) exhibits two peaks, located around $V = -20\,$mV and $V = +200\,$mV, indicative of two edge modes.
This resembles an SSH chain at the edge with two bands, of which the top band minimum and bottom band maximum are separated by $2(t_b-t_a)$ (without orbital overlap). Around $V = 0\,$mV, where the bulk and edge spectra exhibit a minimum, the corner spectrum (blue) exhibits a maximum. We attribute this peak to a zero-energy mode localized at the corners. In contrast to the non-trivial lattice, the spectra of bulk, edge and corner sites of the trivial lattice are similar (*cf.* Fig. 1h, solid lines). These results indicate the presence of an electronic zero mode in the non-trivial dimerized Kagome lattice. The differential-conductance spectra are reproduced by tight-binding calculations of the local density of states (LDOS) at the designated artificial-lattice sites, displayed underneath the experimental spectra in Figs. 1g and 1h (dashed lines), with the same hopping parameters as used in Fig. 1b. The results are further corroborated by muffin-tin calculations (see SI). ![**Wave-function mapping**. (a-d) Differential-conductance maps acquired above the non-trivial dimerized Kagome lattice at bias voltages $V = -110\,$mV, $V = +5\,$mV, $V = +50\,$mV, and $V = +145\,$mV. (e-h) LDOS maps of the non-trivial lattice at similar energies, simulated using the tight-binding model. (i-l) Differential-conductance maps acquired above the trivial lattice at similar bias voltages.[]{data-label="fig2"}](Fig2a.png){width="80.00000%"} *Wave-function maps.*\ In Fig. \[fig2\], we investigate the spatial localization of the density of states at bias voltages corresponding to the peak positions in the differential-conductance spectra. Differential-conductance maps acquired above the non-trivial lattice (Figs. 2a-d) are compared with tight-binding calculations (Figs. 2e-h) and differential-conductance maps of the trivial lattice (Figs. 2i-l). At $V = -110\,$mV, the electrons are localized in the bulk of the non-trivial Kagome lattice.
Next, at $V = +5\,$mV, the contribution of the bottom edge band is visible. At $V = +50\,$mV, we observe the highest intensity at the weakly-connected corner sites, revealing the corner-localized zero modes. Finally, at $V = +145\,$mV, all sites exhibit a similar LDOS, as expected from the spectra. These results are in agreement with the tight-binding simulations on the non-trivial lattice (Figs. 2e-h). In contrast, the differential-conductance maps obtained above the trivial lattice show a homogeneous LDOS at all bias voltages. In particular, the corner sites do not exhibit a higher intensity than the other sites at $V = +50\,$mV. The shift of the electron probability from the center to the edges and then finally to the corner sites is only seen for the non-trivial lattice and fully corroborated by tight-binding and muffin-tin calculations (see SI). ![**Robustness of the zero mode** (a) Localization of the corner mode in the top of a Kagome lattice containing 630 sites. The radius of the circles in the left panel indicates $|\psi|^{0.2}$ to represent the decay of the wave function in a visible way, and the unit cells are indicated with gray hexagons. The corner modes exponentially localize on the corresponding sublattice C (see SI). The spectrum is shown in the right panel, where the zero modes are indicated with a red line. (b) Locally adding a NNN hopping term $t_2=0.05 t_b$ between the A sublattice sites and a similar hopping term between the B sublattice sites breaks the generalized chiral symmetry for the top sites, but this does not affect the zero mode localized at sublattice site C. (c) Breaking the chiral symmetry for the top sublattice site C does shift the zero mode to finite energy and the wavefunction no longer exponentially localizes only on the sublattice sites C. (d) Breaking the chiral symmetry in the bulk also destroys the zero mode and the exponential localization, but the effect of this perturbation is less than in (c).
See SI for further analysis on the breaking of these symmetries.[]{data-label="fig3b"}](Fig-4Kagome.png){width="100.00000%"} *Zero modes in the Kagome lattice.*\ The zero modes in the Kagome lattice are protected by the generalized chiral symmetry [@Ni2018] (see Methods and SI). To investigate their robustness, we now focus on the top of the lattice, where the corner mode is localized on the sublattice C. Fig. 3a shows that this zero mode has support only on the C sublattice, and decays exponentially in the neighbouring C sites in the bulk and at the edge (the size of the dots represents $|\psi|^{0.2}$ to allow for a visualisation of the decay of the wave function, see SI for the exponential decay of the wave function). If we now locally break the chiral symmetry by introducing a small hopping $t_2=0.05 t_b$, connecting the A-A and the B-B sites in the neighbourhood of the top corner, the zero mode in C remains unperturbed (see Fig. 3b). On the other hand, if we connect C-C neighbours by a hopping $t_2$, thus locally breaking the chiral symmetry of the C sublattice, the zero mode in C loses protection, moves away from zero energy, and decays also in the A and B sublattices (see Fig. 3c). The other zero modes, at the A and B corners of the lattice, nevertheless, remain unaffected. However, if the local perturbation in C is applied farther away from the corner mode, the disturbance is small (see Fig. 3d). Note that the generalized chiral symmetry is not broken by the NNN hopping $t_{nnn}$ and the orbital overlap, which are present in the experiment (see SI). These results indicate that the generalized chiral symmetry connected to a tripartite system offers more protection to the zero modes than usual bipartite systems do. ![**Boundary defects in the Kagome lattice.** (a,e,i) Schematics of lattices (a) from which a corner site was removed, (e) with an appendix with two 120$^\circ$ angles added to one edge, and (i) with one 120$^\circ$ and one 60$^\circ$ angle appended.
The purple color of the top site in the schematics indicates a slightly lower on-site energy, which is due to an involuntary upward shift of the top CO molecule by $0.256\,$nm. (b-d),(f-h),(j-l) Differential-conductance maps at $V= -100\,$mV (bulk bands), $V= +5\,$mV (edge bands) and $V= +50\,$mV (corner states) for the three lattices. []{data-label="fig4"}](Fig3.png){width="100.00000%"} *Creating and removing zero modes.*\ Finally, we show several examples of how these zero modes can also be created and destroyed experimentally by introducing defects into the lattice (see Fig. \[fig4\]). In the first defect realization, we remove the corner site at sublattice B (bottom-right corner) from the lattice by blocking the site with CO molecules (see Figs. \[fig4\]a-d). Hence, one of the zero modes is no longer present (see Fig. 4a). The corner sites A and C are not affected by the defect, as shown in the differential-conductance map at $V = +50\,$mV in Fig. 4d. The generalized chiral symmetry is preserved for these modes, as their sublattices remain unperturbed. In this way, by introducing a corner defect we are left with two zero modes. Second, we append a protrusion at one edge, hosting 120$^\circ$ obtuse angles, breaking the $C_{3}$ symmetry of the lattice but preserving one of its mirror symmetries (see Figs. 4e-h). We observe that the edge mode is disrupted around the positions where the edge no longer consists of only A and C sites (Fig. 4g). On the other hand, the corner modes remain unaltered under this perturbation (see Fig. 4h). This corroborates that, being a local symmetry, the generalized chiral symmetry offers a protection mechanism that is stronger than the one provided by crystalline symmetries: the zero-energy states persist even in the absence of crystalline symmetries. Finally, a weakly-connected site is added, breaking the mirror symmetry of the entire lattice, but preserving the generalized chiral symmetry.
Again, the three zero-energy modes at the corners are resilient. In addition, the added weakly-connected site at $60{^\circ}$ (blue) exhibits a fourth zero-energy mode at sublattice A, protected by the generalized chiral symmetry. Hence, we show that it is possible to create and/or destroy zero modes at will. In fact, under the generalized chiral symmetry, zero modes exist whenever a site is only weakly connected to its neighbors (i.e. connected to other sites by hopping terms of amplitude $t_a$, for $t_a<t_b$), as happens in all the cases where zero modes exist in Fig. \[fig4\]. Only if two zero modes belonging to different sublattices are in close proximity can they hybridize and open a gap. If, on the other hand, zero modes belonging to the same sublattice are brought together, they will remain at zero energy. In this sense, the generalized chiral symmetry provides a protection mechanism analogous to the conventional chiral symmetry in bipartite lattices, although in this case the existence of three species of zero modes offers more versatility. *Conclusion and outlook*\ The Kagome lattice is known to be a fascinating system, mostly because it realizes geometric frustration and is conjectured to host the elusive spin-liquid phase. Here, we show that a dimerized electronic Kagome lattice brings even further surprises. Protected zero modes arise at the corners of the lattice, thus realizing a HOTI with extreme robustness due to the tripartite character of the generalized chiral symmetry. By introducing different types of defects into the lattice, zero modes can be manipulated at will, and one can tune the system to have an even or odd number of corner modes. The large degree of control over artificial lattices provides unique opportunities to study electronic topological phases. Using the LDOS as a clear experimental observable, it is possible to detect symmetry-protected edge and corner modes not only at the Fermi energy, but at all relevant energies of the model.
In addition, the protection mechanisms and robustness of topological phases can be probed by selectively breaking certain symmetries. This can be done either locally, for example via introduction of atomically well-defined defects breaking the crystalline symmetry, or globally, for example by applying a magnetic field. Furthermore, it will be possible to study the influence of disclinations. For topological crystalline insulators, the interplay of topologically protected edge modes and edge geometry can be probed. Electronic quantum simulators are thus complementary to photonic systems, which are designed on a much larger scale, and to the cold-atom setups, which offer great control but require nK temperatures for their operation. The progress in the realization of artificial electronic structures takes a step forward with the inclusion of topology among the parameters to be manipulated. **METHODS**\ Methods are available in the online version of the article. **Data availability**\ All data are available from the corresponding authors on reasonable request. The experimental data can be accessed using open-source tools. **Supplementary Information** is available in the online version of the article. **Acknowledgements** We would like to acknowledge Hans Hansson, Duncan Haldane, and Marcel Franz for fruitful discussions.
WAB thanks the support of the Eberly Postdoctoral Fellowship at the Pennsylvania State University. The work of DB is supported by Spanish Ministerio de Ciencia, Innovation y Universidades (MICINN) under the project FIS2017-82804-P, and by the Transnational Common Laboratory $Quantum$—$ChemPhys$. DV, IS and CMS acknowledge funding from NWO via grants 16PR3245 and DDC13, and DV acknowledges the ERC Advanced Grant ’FIRSTSTEP’ 692691 as well. **Author contributions** SNK and JvdB performed the theoretical calculations under the supervision of CMS, WAB and DB. MRS, SNK and IS planned the experiment. MRS performed the experiment and data analysis with contributions from PC under the supervision of IS and DV. CMS, SNK and MRS wrote the manuscript with input from all authors. **METHODS**\ **Experiments**\ The scanning tunneling microscopy and spectroscopy experiments were performed in a Scienta Omicron LT-STM system operating in sample-bias mode at a temperature of $4.5\,$K and a base pressure in the $10^{-11}\,$mbar range. An atomically-flat Cu(111) surface was prepared by several cycles of Ar$^{+}$ sputtering and annealing and was cooled down in the STM head. Carbon monoxide molecules were leaked into the chamber for 20 minutes at a pressure of $1 \cdot 10^{-8}\,$mbar and adsorbed onto the cold surface. The Kagome lattices were assembled and characterized using a Cu-coated platinum-iridium tip, prepared by gentle contact with the Cu(111) surface. CO manipulations were carried out in feedback at $V = 20\,$mV and $I = 40\,$nA, following previously reported procedures[@BartelsMethods2; @MeyerMethods2; @CelottaMethods2]. STM images were obtained in constant-current mode. Differential-conductance spectra and maps were acquired in constant-height mode using a standard lock-in technique with a modulation amplitude of $10\,$mV r.m.s. at a frequency of $273\,$Hz. 
**Muffin-tin calculations**\ The muffin-tin model describes the surface-state electrons of the Cu(111) as a 2D electron gas confined between circular potential barriers (CO molecules) with a height of $V=0.9\,$eV and a radius $R=0.3\,$nm. The energy and wave functions for this model can be found by numerically solving the Schrödinger equation with this potential landscape. A broadening of $\Gamma =0.08\,$eV is included to account for bulk scattering. **Tight-binding calculations**\ The free electrons in the lattice act as if they are confined to certain artificial atom positions due to the placing of the CO-molecules. We can describe this behavior within a tight-binding model of connected $s$-orbitals. By making a fit to the experimental and muffin-tin spectra, we are able to determine the hopping parameters, the on-site energy and the orbital overlap. We find the values (in the topological phase) for the strong hopping $t_b=0.075\,$eV and the weak hopping $t_a=0.38 t_b$. Furthermore, we obtain the NNN hopping $t_{nnn}=0.25 t_b$, the on-site energy $\epsilon=0.075\,$eV and the orbital overlap between nearest-neighbours $s_b=0.22$ and $s_a=0.9 s_b$. With these parameters, we solve the generalized eigenvalue equation $H | \psi \rangle = E \mathcal{S} | \psi \rangle$, where $\mathcal{S}$ is the overlap-integral matrix. Next, the LDOS is calculated at each atomic site, in which the broadening $\Gamma=0.08\,$eV is included to account for bulk scattering. Finally, the LDOS maps are calculated by multiplying the LDOS at each site with a Gaussian wave function of width $\sigma = 0.45 d$, where $d = 1.33\,$nm is the distance between two neighboring sites. **Protection mechanism**\ The protection of the zero modes is due to an extension of the chiral symmetry. 
The conventional chiral symmetry is expressed as $$\begin{aligned} \Gamma^{-1} h({\bf k}) \Gamma = -h({\bf k}),\end{aligned}$$ where, without loss of generality, one can choose a basis in which the chiral operator $\Gamma$ is a diagonal matrix with entries $+1$ for one sublattice and $-1$ for the other. In the dimerized Kagome lattice, we have an odd number of lattice sites in the unit cell, and therefore the chiral symmetry does not hold. The concept, however, can be extended to a generalized version of the conventional chiral symmetry because the Kagome lattice is tripartite. The generalized chiral operator, $\Gamma_3$, can now be chosen (by an appropriate choice of ordering in the Hamiltonian matrix) to be a diagonal $3 \times 3$ matrix with entries $\Gamma_3=\text{Diag}(1, e^{2\pi i /3}, e^{-2\pi i /3})$ that differentiates three sublattices [@Ni2018]. The generalized chiral symmetry is then written as $$\begin{aligned} \Gamma_3^{-1} h_1({\bf k}) \Gamma_3& = h_2({\bf k}), \nonumber\\ \Gamma_3^{-1} h_2({\bf k}) \Gamma_3 &=h_3({\bf k}), \nonumber\\ h_1({\bf k})+ h_2({\bf k})+ h_3({\bf k})&=0.\end{aligned}$$ In the topological phase, three zero modes exist simultaneously, each of which localizes at one of the three sublattices (see SI). This generalized chiral symmetry does not result in spectral symmetry of the bulk bands. Consequently, bulk bands can also have zero energy, but when the bulk bands are degenerate with the zero modes (for $1/2 < t_a/t_b <1$) they do not mix with the localized corner zero modes. More details on the protecting symmetry and symmetry-breaking perturbations are given in the SI. [^1]: Both authors contributed equally. [^2]: Both authors contributed equally.
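The tripartite structure behind the generalized chiral symmetry can be verified numerically. The sketch below assumes the NN-only Bloch Hamiltonian quoted earlier and takes $h_1({\bf k})$ to be that full matrix, with $h_2$ and $h_3$ generated by successive $\Gamma_3$ conjugations (one consistent reading of the relations above); the identity $h_1 + h_2 + h_3 = 0$ then holds precisely because no matrix element couples a sublattice to itself, and fails once a same-sublattice coupling is added:

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
G3 = np.diag([1.0, w, w.conjugate()])   # generalized chiral operator Gamma_3

def g3_conj(h):
    return np.linalg.inv(G3) @ h @ G3

# NN Bloch Hamiltonian at an arbitrary crystal momentum (same matrix as in the text)
ta, tb = 0.0285, 0.075
k = np.array([0.7, -1.3])
a1, a2, a3 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3)/2]), np.array([0.5, -np.sqrt(3)/2])
f = lambda a, s: ta + tb * np.exp(1j * s * k @ a)
h1 = -np.array([[0.0,       f(a2, +1), f(a3, -1)],
                [f(a2, -1), 0.0,       f(a1, -1)],
                [f(a3, +1), f(a1, +1), 0.0]])

h2 = g3_conj(h1)
h3 = g3_conj(h2)
residual = np.abs(h1 + h2 + h3).max()
print(residual)  # ~0: the tripartite identity holds (no same-sublattice coupling)

# A same-sublattice term (e.g. a C-C coupling on the diagonal) breaks the symmetry
hb = h1 + np.diag([0.0, 0.0, 0.05 * tb])
res_broken = np.abs(hb + g3_conj(hb) + g3_conj(g3_conj(hb))).max()
print(res_broken)  # nonzero
```

Each off-diagonal element picks up phases $1$, $\omega$, $\omega^2$ under the three conjugations and cancels, while a diagonal (same-sublattice) element is invariant and survives in the sum, mirroring the symmetry-breaking perturbations of Fig. 3.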
--- abstract: | The vector boson fusion (VBF) topology at the Large Hadron Collider at 14 TeV provides an opportunity to search for new physics. A feasibility study for the search of sleptons in a compressed mass spectra scenario is presented in the final state of two jets, one or two low $p_{T}$ non-resonant leptons, and missing energy. The presence of the VBF tagged jets and missing energy are effective in reducing Standard Model backgrounds. Using smuon production with a mass difference between ${\ensuremath{\tilde{l}_{L}}}$ and ${\ensuremath{\tilde{\chi}_{1}^0}}$ of 5-15 GeV, the significance of observing the signal events is found to be\ $\sim$ 3-6$\sigma$ for $m_{\tilde{l}}$=115-135 GeV, considering an integrated luminosity of 3000 fb$^{-1}$. author: - 'Bhaskar Dutta$^{1}$' - 'Tathagata Ghosh$^{1}$' - 'Alfredo Gurrola$^{2}$' - 'Will Johns$^{2}$' - 'Teruki Kamon$^{1,3}$' - 'Paul Sheldon$^{2}$' - 'Kuver Sinha$^{4}$' - 'Kechen Wang$^{1,5}$' - 'Sean Wu$^{1}$' title: Probing Compressed Sleptons at the LHC using Vector Boson Fusion Processes --- MIFPA-14-33 Introduction ============ Searches for supersymmetry (SUSY) at the Large Hadron Collider (LHC) have produced impressive constraints on colored superpartners. For comparable masses, the exclusion limits on squarks ($\tilde{q}$) and gluinos ($\tilde{g}$) are approximately $1.5$ TeV at $95\%$ confidence level with $20$ fb$^{-1}$ of integrated luminosity [@:2012rz; @Aad:2012hm; @cmssusy; @:2012mfa]. On the other hand, searches for charginos (${\ensuremath{\tilde{\chi}_{1}^{\pm}}}$), neutralinos (${\ensuremath{\tilde{\chi}_{2}^0}}$), and sleptons (${\ensuremath{\tilde{l}_{L}}}\equiv \tilde{e}_L, \tilde{\mu}_L$) through direct electroweak production face the difficulty that their production cross-sections are much lower, resulting in smaller exclusion bounds. 
Directly produced sleptons have been probed at both ATLAS [@ATLASSlep] and CMS [@CMSSlep1], in final states containing opposite-sign same-flavor non-resonant dileptons and missing transverse energy (${{E\!\!\!\!/_{T}}}$). The decay chain is $pp \rightarrow {\ensuremath{\tilde{l}_{L}}}{\ensuremath{\tilde{l}_{L}}}^* \rightarrow l^+ l^- {\ensuremath{\tilde{\chi}_{1}^0}} {\ensuremath{\tilde{\chi}_{1}^0}}$, with $Br({\ensuremath{\tilde{l}_{L}}}\rightarrow l^- {\ensuremath{\tilde{\chi}_{1}^0}}) = 1$. Constraints for right slepton masses, have also been set by these studies. The mass separation $\Delta M = m_{{\ensuremath{\tilde{l}_{L}}}}-m_{{\ensuremath{\tilde{\chi}_{1}^0}}}$ is an important factor in the resulting exclusion plots from both experiments. The exclusion limits are given on the $m_{{\ensuremath{\tilde{l}_{L}}}}$-$m_{{\ensuremath{\tilde{\chi}_{1}^0}}}$ plane, and depend on $\Delta M$. The mass reach with $m_{\tilde{\chi}_{1}^{0}} = 0$ GeV is $m_{\tilde{l}_{L}} \sim 280$ GeV and 330 GeV for CMS and ATLAS respectively. In the CMS studies, for $m_{{\ensuremath{\tilde{l}_{L}}}} \sim$ 110-200 GeV, the excluded region has $\Delta M \sim 110$ GeV. In the ATLAS study, the exclusion limits reach $m_{\tilde{l}} \sim 250$ GeV with $\Delta M \, \sim \, 100$ GeV, after which $\Delta M$ increases. For the right-sleptons, the mass reach with $m_{\tilde{\chi}_{1}^{0}} = 0$ GeV is $m_{\tilde{l}_{R}} \sim 180$ GeV and 250 GeV for CMS and ATLAS respectively. Compressed spectra with smaller $\Delta M$ may have eluded these probes, and are important for a variety of theoretical reasons. For example, for a Bino-like ${\ensuremath{\tilde{\chi}_{1}^0}}$ dark matter (DM) candidate, the annihilation cross-section is usually too low, and needs to be considerably enhanced with coannihilation processes (typically requiring $\Delta M \, \sim \,$ 5-15 GeV for slepton coannihilation [@Griest:1990kh; @Coann]) to obtain the relic density observed by WMAP [@WMAP]. 
Also, the annihilation diagrams, even when coannihilation is absent, may involve sleptons in the t-channel. On the other hand, the main supersymmetric contributions to the muon $g-2$ are dominated by chargino-sneutrino and neutralino-smuon loop diagrams with not necessarily very large mass gaps. Furthermore, the excess measured at BNL [@BNL] for the anomalous magnetic moment of the muon is about 3.6$\sigma$ (2.4$\sigma$) using $e^{+}e^{-}$ ($\tau$) data [@Davier:2013wwg] and can be explained with a SUSY mass spectrum containing $\mathcal{O}(100)$ GeV neutralinos and sleptons (we refer to [@Endo:2013bba] for a recent summary). In this paper, we propose search strategies for ${\ensuremath{\tilde{l}_{L}}}$ pairs using the vector boson fusion (VBF) topology. The VBF topology has been used by the authors recently to probe the supersymmetric electroweak sector. In [@Dutta:2012xe], the ${\ensuremath{\tilde{\chi}_{1}^{\pm}}}$-${\ensuremath{\tilde{\chi}_{2}^0}}$ system (where it is mostly the charged and neutral Wino) has been studied, while direct production of ${\ensuremath{\tilde{\chi}_{1}^0}}$ dark matter by VBF processes has been proposed in [@Delannoy:2013ata]. Previously, sleptons were studied in the context of the LHC [@sleptonold]. As shown in these references, the requirement of two energetic jets in the forward region, in opposite hemispheres, and with large dijet invariant mass is very effective in reducing standard model (SM) backgrounds. Further, the VBF topologies naturally give rise to larger ${{E\!\!\!\!/_{T}}}$ since the momentum of the particles produced in the slepton system must balance the high $p_{T}$ of the scattered partons, which is of significant experimental importance as it allows for an additional handle to trigger on compressed spectra that are typically characterized by low ${{E\!\!\!\!/_{T}}}$ in non-VBF searches.
Thus, in the compressed scenario, the ${\ensuremath{\tilde{\chi}_{1}^0}}$ resulting from the ${\ensuremath{\tilde{l}_{L}}}$ decay carries significant ${{E\!\!\!\!/_{T}}}$, providing better control of the SM background. We develop our VBF search strategy to be particularly suited to probes in the region of small $\Delta M$, in particular focusing on the use of soft leptons to discriminate against the SM background. The structure of the paper is as follows. In Section \[searchstrategy\] an outline of the search strategy is given, followed by results in Section \[results\]. Conclusions are given in Section \[conclusion\]. Search Strategy {#searchstrategy} =============== To probe ${\ensuremath{\tilde{l}_{L}}}$ production, the following processes are investigated: $pp \rightarrow {\ensuremath{\tilde{l}_{L}}}\, {\ensuremath{\tilde{l}_{L}}}^{*} \, jj, \, {\ensuremath{\tilde{l}_{L}}}\, {\ensuremath{\tilde{\nu}}}^* \, ({\ensuremath{\tilde{l}_{L}}}^* {\ensuremath{\tilde{\nu}}})jj$. We note that the mass splitting between ${\ensuremath{\tilde{l}_{L}}}$ and ${\ensuremath{\tilde{\nu}}}$ is $m^2_{{\ensuremath{\tilde{l}_{L}}}} - m^2_{{\ensuremath{\tilde{\nu}}}} = - \frac{1}{2} m^2_{Z} \cos{2\beta}$ [@Barger:1993gh] and thus $m_{{\ensuremath{\tilde{l}_{L}}}}> m_{{\ensuremath{\tilde{\nu}}}}$. Therefore, along with ${\ensuremath{\tilde{l}_{L}}}$ pair production we also consider ${\ensuremath{\tilde{l}_{L}}}{\ensuremath{\tilde{\nu}}}^*$ (${\ensuremath{\tilde{l}_{L}}}^* {\ensuremath{\tilde{\nu}}}$) production. Two separate studies were performed in the final states of $2j \, + \, 2l \, + \, {{E\!\!\!\!/_{T}}}$ (which targets ${\ensuremath{\tilde{l}_{L}}}\, {\ensuremath{\tilde{l}_{L}}}^*$ production), and $2j \, + \, 1l \, + \, {{E\!\!\!\!/_{T}}}$ (which targets both ${\ensuremath{\tilde{l}_{L}}}\, {\ensuremath{\tilde{l}_{L}}}^*$ and ${\ensuremath{\tilde{l}_{L}}}\, {\ensuremath{\tilde{\nu}}}^*$ (${\ensuremath{\tilde{l}_{L}}}^* {\ensuremath{\tilde{\nu}}}$) production). 
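The mass relation above fixes the sneutrino mass once $m_{{\ensuremath{\tilde{l}_{L}}}}$ and $\tan\beta$ are chosen. A minimal numerical sketch, where the value $\tan\beta = 1.5$ is an illustrative assumption and not taken from the text:

```python
import math

M_Z = 91.19  # GeV

def m_sneutrino(m_slepton, tan_beta):
    """Tree-level sneutrino mass from
    m^2_slepton - m^2_sneutrino = -(1/2) m_Z^2 cos(2*beta)."""
    cos2b = (1.0 - tan_beta**2) / (1.0 + tan_beta**2)
    return math.sqrt(m_slepton**2 + 0.5 * M_Z**2 * cos2b)

m_sl = 135.0                              # GeV, one of the benchmark slepton masses
m_sn = m_sneutrino(m_sl, tan_beta=1.5)    # tan(beta) = 1.5 is purely illustrative
print(m_sn)  # a few GeV below m_sl, since cos(2*beta) < 0 for tan(beta) > 1
```

For $\tan\beta = 1$ the splitting vanishes, and it grows only slowly with $\tan\beta$, so the charged slepton and sneutrino stay close in mass for the small $\tan\beta$ used here.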
In order to avoid having ${\ensuremath{\tilde{\nu}}}$ as the lightest SUSY particle and consequently as the DM candidate, a scenario ruled out by a combination of relic density and direct detection constraints [@SneutrinoDM], $\tan{\beta}$ is kept small \[$\sim \mathcal{O}(1)$\] so that $m_{{\ensuremath{\tilde{\nu}}}} > m_{{\ensuremath{\tilde{\chi}_{1}^0}}}$. Furthermore, to satisfy the muon $g-2$ constraint, $m_{{\ensuremath{\tilde{\chi}_{i}^0}}},m_{{\ensuremath{\tilde{\chi}_{j}^{\pm}}}}$ (where, $i=2,3,4$ and $j=1,2$) are set to $\lesssim 1$ TeV, but it does not influence our analysis. Several ${\ensuremath{\tilde{l}_{L}}}$ masses in the range $115-135$ GeV are chosen for the study at $\sqrt{s} = 14$ TeV. For each $m_{{\ensuremath{\tilde{l}_{L}}}}$, $\Delta M$ is varied between 5 and 15 GeV (except for the $m_{{\ensuremath{\tilde{l}_{L}}}}=135$ GeV points, where $\Delta M$ between 5 and 25 GeV is considered). The ${\ensuremath{\tilde{\chi}_{1}^0}}$ is purely Bino and the slepton is mostly ${\ensuremath{\tilde{l}_{L}}}$. The decay mode of ${\ensuremath{\tilde{l}_{L}}}$ is ${\ensuremath{\tilde{l}_{L}}}\, \rightarrow l {\ensuremath{\tilde{\chi}_{1}^0}}$ with $100\%$ branching ratio. The rest of the spectrum is assumed to be much heavier. The signal samples are generated at $\mathcal{O}(\alpha^{4}_{EW} \alpha^{0}_s)$ and include 2-partons (exclusive) processes. [^1] The search strategy is based on three steps. First, we use the unique features of VBF processes to reduce non-VBF backgrounds by requiring large ${{E\!\!\!\!/_{T}}}$, $H_{T}$ and exactly two forward jets in opposite hemispheres and with large dijet invariant mass. Second, events containing additional leptons and/or b-tagged jets are rejected in order to reduce the contribution from WZ and $t\bar{t}$. Finally, the $p_{T}$ distributions of the muons are used to assess the presence of a signal above the SM background. 
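The first step of this strategy can be sketched as a simple selection function. The cut values below ($p_T > 30$ GeV, $|\eta| < 5$, opposite hemispheres, $M_{j_1j_2} > 600$ GeV, $|\eta_j| > 1.7$, $\Delta\phi_{j_1j_2} < 1.0$) are those listed in the selections of this study; the `Jet` container and function name are illustrative:

```python
import math
from dataclasses import dataclass

@dataclass
class Jet:
    pt: float   # GeV
    eta: float
    phi: float

def passes_vbf_topology(jets):
    """Sketch of the VBF tagged-jet requirement: exactly two jets
    (pT > 30 GeV, |eta| < 5) in opposite hemispheres, both forward
    (|eta| > 1.7), with large dijet invariant mass and small
    azimuthal separation."""
    sel = [j for j in jets if j.pt > 30 and abs(j.eta) < 5]
    if len(sel) != 2:
        return False
    j1, j2 = sel
    if j1.eta * j2.eta >= 0 or abs(j1.eta) < 1.7 or abs(j2.eta) < 1.7:
        return False
    # massless-jet dijet invariant mass
    mjj2 = 2 * j1.pt * j2.pt * (math.cosh(j1.eta - j2.eta) - math.cos(j1.phi - j2.phi))
    dphi = abs(j1.phi - j2.phi)
    dphi = min(dphi, 2 * math.pi - dphi)
    return math.sqrt(mjj2) > 600 and dphi < 1.0

# Two forward jets in opposite hemispheres easily exceed the M_jj threshold
print(passes_vbf_topology([Jet(80, 3.2, 0.1), Jet(60, -3.0, 0.6)]))  # True
```

The large rapidity gap makes $M_{j_1j_2} \propto \sqrt{\cosh\Delta\eta}$ grow quickly, which is why the dijet-mass cut is so effective against non-VBF backgrounds.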
We study the signal significance using two approaches: (1) a simple cut-and-count approach optimized for our benchmark scenario by requiring soft leptons; (2) a more general shape-based approach using a fit of the entire muon $p_{T}$ spectrum to search for an enhancement in the soft part of the spectrum. Signal and background samples are generated with MadGraph5 [@Alwall:2011uj], followed by parton showering and hadronization with PYTHIA [@Sjostrand:2006za] and detector simulation using PGS [@pgs]. We use the CTEQ6L1 [@Pumplin:2002vw] parton distribution function. Results ======= The SM backgrounds considered for this study are ${t \bar{t}}\, + \,$ jets, $VV \, + \,$ jets, and $V \, + \,$ jets, where $V$ denotes $W, Z$. The $V \, + \,$ jets background is generated including up to 4-partons (inclusive), while the ${t \bar{t}}\, + \,$ jets calculation includes up to 3-partons (inclusive). Double-counting is avoided by using the MLM-scheme [@Mangano:2006rw] for jet matching. The $VV \, + \,$ jets background is calculated at $\mathcal{O}(\alpha^{4}_{EW} \alpha^{0}_s)$ only and includes up to 2-partons (exclusive). Single top ($tW$, $tq$) and $\gamma^* \, + \,$ jets backgrounds are found to be insignificant for the following analyses. The Higgs background is negligible after applying the VBF cuts. For an inclusive Higgs sample of events equivalent to $\sim 3000$ fb$^{-1}$, we find no events passing all cuts, so the effective cross-section is 0 with an uncertainty of $3\times10^{-4}$ fb. As a benchmark scenario we consider smuon production with $l=\mu$, but this study can also be extended to the $l=e$ case. We use a JETCLU-like [@Abe:1991ui] cone algorithm, as used by PGS, for jet reconstruction with a cone radius of 0.5. We further use $\Delta R = 0.3$ for jet isolation. The next-to-leading order QCD corrections to the VBF electroweak production of signal and background cross sections have not been considered.
The change in the signal [@kfactor] and background [@kfactorb] cross-sections due to the inclusion of the K factor is very modest (at the level of a few percent) for VBF production.

----------------------------------  ------------  -----------------------------------  -----------------------------------  -----------------------------------  -----------------------------------
Selection                           (135, 120)    $VV$ + jets                          $t\bar{t}$ + jets                    $W$ + jets                           $Z$ + jets
                                    \[GeV\]       \[fb\]                               \[fb\]                               \[fb\]                               \[fb\]
Initial                             0.491         1.34${\ensuremath{\times 10^{3}}}$   7.03${\ensuremath{\times 10^{5}}}$   1.88${\ensuremath{\times 10^{8}}}$   5.56${\ensuremath{\times 10^{7}}}$
exactly 2 j                         0.265         2.08${\ensuremath{\times 10^{2}}}$   3.32${\ensuremath{\times 10^{4}}}$   2.38${\ensuremath{\times 10^{7}}}$   9.61${\ensuremath{\times 10^{6}}}$
VBF topology selection              0.0327        4.29                                 35.4                                 4.68${\ensuremath{\times 10^{3}}}$   1.58${\ensuremath{\times 10^{3}}}$
exactly 2 muons                     0.0085        0.125                                0.500                                –                                    –
$p_{T_{\mu_1}}+p_{T_{\mu_2}}<70$    0.0062        0.0126                               0.0700                               –                                    –
${{E\!\!\!\!/_{T}}}> 200$           0.0021        0.0021                               –                                    –                                    –
$H_T > 200$                         0.0021        0.0020                               –                                    –                                    –
----------------------------------  ------------  -----------------------------------  -----------------------------------  -----------------------------------  -----------------------------------

: \[$2 j + 2 \mu + {{E\!\!\!\!/_{T}}}$ study\] Summary of the effective cross-section (fb) for the signal and main sources of background at LHC14 for the benchmark point $(m_{{\tilde{\mu}_L}}, m_{{\ensuremath{\tilde{\chi}_{1}^0}}}) = (135,120)$ GeV. “—” indicates the background size is negligible. []{data-label="2muBenchAndBgTable"}

$\mathbf{2j \, + \, 2l \, + \, {{E\!\!\!\!/_{T}}}} $ **Study -** The selections for the $2j \, + \, 2l \, + \, {{E\!\!\!\!/_{T}}}$ study are as follows:

- exactly 2 j:\
(i) $b$-veto, where we assume a $b$-tagging efficiency ($\epsilon_b$) of $70 \%$ and a fake rate ($f$) of $1 \%$ coming from $u,d,s,c,g$ jets.
Both $\epsilon_b$ and $f$ are taken to be flat over $p_{T} > 30$ GeV for $|\eta| < 2.4$;\
(ii) select exactly 2 jets with $p_{T_j} > 30$ GeV and $|\eta_j| < 5$.

- VBF topology selection:\
(i) $\eta_{j_1}\eta_{j_2} < 0$ and $M_{j_1j_2} > 600$ GeV;\
(ii) $|\eta_{j_{1,2}}| > 1.7$;\
(iii) $\Delta\phi_{j_1j_2}<1.0$.

- exactly 2 muons:\
(i) select exactly 2 oppositely charged muons with $p_{T_{\mu}} > 10$ GeV and $|\eta_{\mu}| < 2.5$;\
(ii) veto events with a loosely identified $e$ or $\tau$;\
(iii) $Z$-veto (i.e. reject events with $81$ GeV $< M_{\mu^{\pm}_{1}\mu^{\mp}_{2}} <$ $101$ GeV);\
(iv) central $\mu$ selection with $\eta_j({\rm min}) < \eta_\mu < \eta_j({\rm max})$, where $\eta_j({\rm min})$ and $\eta_j({\rm max})$ are the minimal and maximal $\eta$ values of the two jets.\

As shown in Fig. \[muonpt\], the muons from the smuon decays are expected to be soft in the compressed-spectra benchmark scenarios considered, with $p_{T} \sim \Delta M$. For the cut-and-count (CC) approach, we take advantage of this characteristic by imposing an upper cut on the scalar sum of the $p_{T}$ of both muons ($p_{T_{\mu_{1}}}+p_{T_{\mu_{2}}} < 70$ GeV). Finally, we require ${{E\!\!\!\!/_{T}}}> 200$ GeV and $H_T > 200$ GeV, where $H_T$ is the scalar sum of the transverse momenta of all jets with $p_T>30$ GeV (including $b$ and $\tau$ jets). The cross-sections after each set of cuts are shown in Table \[2muBenchAndBgTable\]. Since the single top and $\gamma^* \, + \,$ jets backgrounds are negligible for this analysis, they are not presented in Table \[2muBenchAndBgTable\]. The contribution from ${t \bar{t}}\, + \,$ jets vanishes after all the selections, but $VV \, + \,$ jets survives with a background rate comparable to the expected signal rate.
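For concreteness, the VBF topology requirement can be sketched in code. The following is a minimal illustration only, assuming massless jets parametrized by $(p_T, \eta, \phi)$; the function names are ours, with default thresholds matching the $2j \, + \, 2l \, + \, {{E\!\!\!\!/_{T}}}$ selection above:

```python
import math

def dijet_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass of two massless jets from (pT, eta, phi):
    M^2 = 2 pT1 pT2 (cosh(d_eta) - cos(d_phi))."""
    return math.sqrt(2.0 * pt1 * pt2 *
                     (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2)))

def passes_vbf(j1, j2, mjj_min=600.0, eta_min=1.7, dphi_max=1.0):
    """VBF topology cuts: opposite hemispheres, large dijet invariant mass,
    both jets forward, and small azimuthal separation."""
    (pt1, eta1, phi1), (pt2, eta2, phi2) = j1, j2
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:                      # wrap into [0, pi]
        dphi = 2.0 * math.pi - dphi
    return (eta1 * eta2 < 0.0
            and dijet_mass(pt1, eta1, phi1, pt2, eta2, phi2) > mjj_min
            and abs(eta1) > eta_min and abs(eta2) > eta_min
            and dphi < dphi_max)
```

For example, two jets at $\eta = \pm 2.5$ with $p_T = 100$ GeV and $\Delta\phi = 0.5$ pass the selection, while two jets in the same hemisphere fail it.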
---------------------------  ------------  -----------------------------------  -----------------------------------  -----------------------------------  -----------------------------------
Selection                    (135, 120)    $VV$ + jets                          $t\bar{t}$ + jets                    $W$ + jets                           $Z$ + jets
                             \[GeV\]       \[fb\]                               \[fb\]                               \[fb\]                               \[fb\]
Initial                      0.953         1.34${\ensuremath{\times 10^{3}}}$   7.03${\ensuremath{\times 10^{5}}}$   1.88${\ensuremath{\times 10^{8}}}$   5.56${\ensuremath{\times 10^{7}}}$
exactly 2 j                  0.514         2.08${\ensuremath{\times 10^{2}}}$   3.32${\ensuremath{\times 10^{4}}}$   2.38${\ensuremath{\times 10^{7}}}$   9.61${\ensuremath{\times 10^{6}}}$
VBF topology selection       0.0240        2.88                                 7.27                                 1.08${\ensuremath{\times 10^{3}}}$   2.85${\ensuremath{\times 10^{2}}}$
exactly 1 muon               0.0130        0.483                                1.31                                 1.54${\ensuremath{\times 10^{2}}}$   18.4
$p_{T_{\mu_1}}<30$           0.0075        0.0690                               0.232                                51.4                                 9.18
${{E\!\!\!\!/_{T}}}> 200$    0.0040        0.0259                               0.0770                               –                                    –
$H_T > 250$                  0.0029        0.0189                               –                                    –                                    –
---------------------------  ------------  -----------------------------------  -----------------------------------  -----------------------------------  -----------------------------------

: \[$2 j + 1 \mu + {{E\!\!\!\!/_{T}}}$ study\] Summary of the effective cross-section (fb) for the signal and main sources of background at LHC14 for the benchmark point $(m_{\tilde{\mu}_L}, m_{{\ensuremath{\tilde{\chi}_{1}^0}}}) = (135,120)$ GeV. “—” indicates the background size is negligible.[]{data-label="1muBenchAndBgTable"}

$\mathbf{2j \, + \, 1 \, \, l \, + \, {{E\!\!\!\!/_{T}}}}$ **Study -** All other selections being equal, the channel containing exactly one muon suffers from larger background rates than the two-muon channel. However, the backgrounds are reduced to manageable levels with more stringent VBF topological selections.
The selections for the $2j \, + \, 1\, \, \mu \, + \, {{E\!\!\!\!/_{T}}}$ study are as follows:

- exactly 2 j:\
(i) $b$-veto (same as in the $2j \, + \, 2l \, + \, {{E\!\!\!\!/_{T}}}$ study);\
(ii) select exactly 2 jets with $p_{T_j} > 30$ GeV and $|\eta_j| < 5$.

- VBF topology selection:\
(i) $\eta_{j_1}\eta_{j_2} < 0$ and $M_{j_1j_2} > 1100$ GeV;\
(ii) $|\eta_{j_{1,2}}| > 1.8$;\
(iii) $\Delta\phi_{j_1j_2}<1.0$.

- 1 muon:\
(i) select events containing exactly 1 muon with $p_{T_{\mu}} > 10$ GeV and $|\eta_{\mu}| < 2.5$;\
(ii) veto events with a loosely identified $e$ or $\tau$;\
(iii) central $\mu$ selection with $\eta_j({\rm min}) < \eta_\mu < \eta_j({\rm max})$, where $\eta_j({\rm min})$ and $\eta_j({\rm max})$ are the minimal and maximal $\eta$ values of the two jets.\

![Distribution of $p_{T_{\mu}}$ in the ${\ensuremath{\tilde{l}_{L}}}$ pair, ${\ensuremath{\tilde{l}_{L}}}{\ensuremath{\tilde{\nu}}}^*$ (${\ensuremath{\tilde{l}_{L}}}^* {\ensuremath{\tilde{\nu}}}$) and $VV \, + \,$jets events for the benchmark point $(m_{\tilde{l}_{L}},m_{\tilde{\chi}_{1}^{0}})=(115$ GeV$, 110$ GeV$)$. All distributions shown in the figure are after the VBF cuts and the ${{E\!\!\!\!/_{T}}}$ and $H_T$ requirements but without the $p_{T_{\mu}}$ upper-bound cuts. This figure is representative of the distributions used for our shape-based analysis.[]{data-label="muonpt"}](smu_115_n1_110.pdf){width="4.0in"}

As discussed above, the muons from the smuon decays are expected to be soft. Thus for the CC approach, we impose an upper threshold on the transverse momentum of the muon ($p_{T_{\mu_{1}}} <30$ GeV). Similarly, ${{E\!\!\!\!/_{T}}}> 200$ GeV and $H_T > 250$ GeV are found to be useful discriminants in this channel. The cross-sections after each set of cuts are shown in Table \[1muBenchAndBgTable\].

[*[**Significance -**]{}*]{} The significances $S / \sqrt{S+B}$ for the CC approach outlined in Tables I and II, where $S$ and $B$ are the signal and background yields respectively, are shown in Table III.
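The CC figure of merit can be sketched in a few lines, with yields obtained by scaling effective cross-sections by the integrated luminosity (the function name is ours; because the tabulated cross-sections are rounded, values computed this way need not reproduce the quoted significances exactly):

```python
import math

def cc_significance(sig_xs_fb, bkg_xs_fb, lumi_fb=3000.0):
    """S / sqrt(S + B) for effective cross-sections in fb at a given
    integrated luminosity in fb^-1."""
    s = sig_xs_fb * lumi_fb   # expected signal yield
    b = bkg_xs_fb * lumi_fb   # expected background yield
    return s / math.sqrt(s + b)
```

For instance, with effective cross-sections of a few times $10^{-3}$ fb for both signal and background, 3000 fb$^{-1}$ yields per-channel significances of order 1-3$\sigma$, as in the tables.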
As mentioned, we also perform a shape-based analysis [^2] of the $p_{T}(\mu)$ and $p_{T}(\mu_{1}) + p_{T}(\mu_{2})$ distributions in the 1-muon and 2-muon channels, respectively, using a binned likelihood with a test statistic based on the profile likelihood ratio. A local p-value is calculated as the probability under the background-only hypothesis to obtain a value of the test statistic as large as that obtained with the signal-plus-background hypothesis. The significance $z$ is then determined as the value at which the integral of a Gaussian between $z$ and $\infty$ equals the local p-value. In Table \[sigsAllSAndBgTable\], we show the significances for the exactly-1-muon and exactly-2-muon final states and the combined significances, using the joint likelihood, for these two channels. We find that the combined significance is $\gtrsim$ 4$\sigma$ for 3000 fb$^{-1}$ of integrated luminosity for $\Delta M$=5-15 GeV and $m_{\tilde{l}}$=115-125 GeV. For $m_{\tilde{l}}$=135 GeV, we find that the significance is $\gtrsim$ 3$\sigma$ for $\Delta M$=10-15 GeV. The significance becomes smaller for larger $\Delta M$, since the muon emitted in the ${\ensuremath{\tilde{l}_{L}}}$ decay then has larger $p_T$, which makes it more difficult to discriminate the signal from the $VV$ + jets background. From Tables \[2muBenchAndBgTable\] and \[1muBenchAndBgTable\], it might appear that the 1-muon channel is insignificant compared to the 2-muon channel, owing to its smaller $S/B$ ratio. However, a closer look at Table \[sigsAllSAndBgTable\] reveals that with decreasing $\Delta M$ the 1-muon channel becomes increasingly important, and it is the dominant channel for $\Delta M \sim 5$ GeV. This is because for smaller $\Delta M$ the muons coming from the slepton decays become even softer, and it becomes increasingly difficult to detect both muons in the 2-muon channel.
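The p-value-to-significance conversion described here can be written compactly; a sketch using the Python standard library (the helper name is ours):

```python
from statistics import NormalDist

def p_to_z(p):
    """Significance z such that the integral of a standard Gaussian
    between z and infinity equals the local p-value p."""
    return NormalDist().inv_cdf(1.0 - p)
```

For instance, a local p-value of about $1.35\times 10^{-3}$ corresponds to a $3\sigma$ excess.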
Monojet searches (one boosted high-$p_{T}$ jet plus missing energy) have been conducted by the LHC experiments [@Khachatryan:2014rra; @ATLAS:2012zim; @ATLAS:2012ky] as an effective probe for compressed spectra. It is interesting to compare the sensitivity of the proposed VBF searches for compressed slepton production with that of monojet searches. We find that the monojet analysis does not provide sensitivity in these compressed slepton scenarios, owing to the small cross-sections. For this comparison we consider the benchmark point $(m_{\tilde{l}_{L}},m_{\tilde{\chi}_{1}^{0}})=(135$ GeV$, 120$ GeV$)$ and apply the selections outlined in the 14 TeV projection analysis [@Schwaller:2013baa]. The combined signal production cross-section of $\tilde{\mu}_{L}\tilde{\mu}_{L}^{*}$ and $\tilde{\mu}_{L}\tilde{\nu}^{*}$ + jets (up to 3 partons) is 486.5 fb, while the dominant backgrounds are $V$ + jets [@Baer:2014cua; @Han:2013usa]. Following the 14 TeV projection analysis of Ref. [@Schwaller:2013baa], we obtain less than $1\sigma$ significance at 3000 fb$^{-1}$ of integrated luminosity for this benchmark point.
---------------- --------------------- ---------------------------- --------------- -------------- -------------- --------------- -------------- -------------- -------------- --------------
                                                                    exactly 2 muons                               exactly 1 muon                               Combined
[$\Delta M$]{}   [$m_{\tilde{l}}$]{}   [$m_{\tilde{\chi}_1^0}$]{}   Cross-section   Significance   Significance   Cross-section   Significance   Significance   Significance   Significance
                                                                    \[fb\]          CC             Shape          \[fb\]          CC             Shape          CC             Shape
25               135                   110                          0.0014          1.3            1.8            0.0021          0.8            1.3            1.6            2.3
15               135                   120                          0.0021          2.1            2.6            0.0029          1.0            1.5            2.5            3.2
10               135                   125                          0.0019          2.1            2.9            0.0044          1.8            2.9            2.9            4.5
5                135                   130                          0.0004          0.3            0.5            0.0036          1.5            2.2            1.5            2.1
15               125                   110                          0.0024          2.4            3.1            0.0035          1.3            1.8            3.0            3.8
10               125                   115                          0.0018          2.0            2.8            0.0043          1.8            2.8            2.9            4.8
5                125                   120                          0.0006          0.6            1.0            0.0046          1.9            4.1            2.1            3.9
15               115                   100                          0.0027          2.8            4.1            0.0043          1.6            1.8            3.5            4.6
10               115                   105                          0.0021          2.3            3.4            0.0050          2.0            3.2            3.3            5.1
5                115                   110                          0.0007          0.6            1.1            0.0058          2.4            4.1            2.5            4.0
---------------- --------------------- ---------------------------- --------------- -------------- -------------- --------------- -------------- -------------- -------------- --------------

: Summary of the effective cross-section (fb) and significances, with 3000 fb$^{-1}$ after all cuts, for different SUSY points at LHC14. The effective cross-section of the total standard model background after all cuts is 0.0020 fb for the exactly-2-muon final state analysis and 0.0189 fb for the exactly-1-muon final state analysis. The significances are calculated by means of both the “cut and count (CC)” and “shape analysis” methods. []{data-label="sigsAllSAndBgTable"}

Conclusion
==========

The main result of this paper is that the VBF topology provides a feasible strategy to search for sleptons in the case where the mass separation from the lightest neutralino is $\sim$ 5-25 GeV. This mass range is of great phenomenological interest, and it is a region where exclusion bounds are difficult to obtain in non-VBF studies. No current or projected constraint for the 14 TeV LHC run is available for this region.
Two separate studies were performed in the final states of $2j \, + \, 2l \, + \, {{E\!\!\!\!/_{T}}}$ (which targets ${\ensuremath{\tilde{l}_{L}}}\, {\ensuremath{\tilde{l}_{L}}}^*$ production) and $2j \, + \, 1l \, + \, {{E\!\!\!\!/_{T}}}$ (which targets both ${\ensuremath{\tilde{l}_{L}}}\, {\ensuremath{\tilde{l}_{L}}}^*$ and ${\ensuremath{\tilde{l}_{L}}}\, {\ensuremath{\tilde{\nu}}}^*$ (${\ensuremath{\tilde{l}_{L}}}^* {\ensuremath{\tilde{\nu}}}$) production). As a benchmark scenario we consider smuon production with $\Delta M$=5-15 GeV and arrive at a combined significance of $\sim$ 4-6$\sigma$ with 3000 fb$^{-1}$ of luminosity for $m_{\tilde{l}}$=115-125 GeV. For $m_{\tilde{l}}$=135 GeV, the significance is $\gtrsim$ 3$\sigma$ for $\Delta M$=10-15 GeV. If the analysis were repeated with $p_T > 5$ GeV for muons, the signal acceptance for the $\Delta M = 5$ GeV scenario would increase, but estimating the background from soft muons under the high pile-up conditions expected at the high-luminosity LHC will be a challenging task.

[*[**Acknowledgements -**]{}*]{} We would like to thank Xerxes Tata for helpful discussions. This work is supported in part by DOE Grant No. DE-FG02-13ER42020, NSF Award PHY-1206044, and by the World Class University (WCU) project through the National Research Foundation (NRF) of Korea funded by the Ministry of Education, Science, and Technology (Grant No. R32-2008-000-20001-0). T.K. is also supported in part by Qatar National Research Fund under project NPRP 5-464-1-080. K.S. is supported by NASA Astrophysics Theory Grant NNH12ZDA001N.

[99]{} ATLAS Collaboration, CERN-PH-EP-2014-093. ATLAS Collaboration, J. High Energy Phys. 10 (2013) 130. CMS Collaboration, Eur. Phys. J. C [**73**]{} (2013) 2568. CMS Collaboration, J. High Energy Phys. 06 (2014) 055. ATLAS Collaboration, J. High Energy Phys. 05 (2014) 071 \[arXiv:1403.5294 \[hep-ex\]\]. CMS Collaboration, Eur. Phys. J. C [**74**]{} (2014) 9, 3036 \[arXiv:1405.7570 \[hep-ex\]\]. K. Griest and D.
Seckel, Phys. Rev. D [**43**]{}, 3191 (1991). J. R. Ellis, T. Falk, G. Ganis, K. A. Olive and M. Srednicki, Phys. Lett. B [**510**]{}, 236 (2001) \[hep-ph/0102098\]; R. L. Arnowitt, B. Dutta and Y. Santoso, Nucl. Phys. B [**606**]{}, 59 (2001) \[hep-ph/0102181\]; J. R. Ellis, D. V. Nanopoulos and K. A. Olive, Phys. Lett. B [**508**]{}, 65 (2001) \[hep-ph/0102331\]; V. A. Bednyakov, H. V. Klapdor-Kleingrothaus and E. Zaiti, Phys. Rev. D [**66**]{}, 015010 (2002) \[hep-ph/0203108\]; V. A. Bednyakov, H. V. Klapdor-Kleingrothaus and V. Gronewold, Phys. Rev. D [**66**]{}, 115005 (2002) \[hep-ph/0208178\]; WMAP Collaboration, Astrophys. J. Suppl.  [**208**]{}, 19 (2013) \[arXiv:1212.5226 \[astro-ph.CO\]\]; WMAP Collaboration, Astrophys. J. Suppl.  [**208**]{}, 20 (2013) \[arXiv:1212.5225 \[astro-ph.CO\]\]. Muon g-2 Collaboration, Phys. Rev. D [**73**]{}, 072003 (2006) \[hep-ex/0602035\]. M. Endo, K. Hamaguchi, S. Iwamoto and T. Yoshinaga, arXiv:1303.4256 \[hep-ph\]. M. Davier, proceedings of 12th International Workshop on Tau Lepton Physics (TAU 2012), 17-21 Sep 2012. Nagoya, Japan, CNUM: C12-09-17.5 \[arXiv:1302.1907 \[hep-ex\]\]. B. Dutta, A. Gurrola, W. Johns, T. Kamon, P. Sheldon and K. Sinha, arXiv:1210.0964 \[hep-ph\]. A. G. Delannoy, B. Dutta, A. Gurrola, W. Johns, T. Kamon, E. Luiggi, A. Melo and P. Sheldon [*et al.*]{}, Phys. Rev.  Lett.  [**111**]{}, 061801 (2013) \[arXiv:1304.7779 \[hep-ph\]\]. A. Datta and K. Huitu, Phys. Rev. D [**67**]{}, 115006 (2003) \[hep-ph/0211319\]; P. Konar and D. Zeppenfeld, Phys. Lett. B [**647**]{}, 460 (2007) \[hep-ph/0612119\]. V. D. Barger, M. S. Berger and P. Ohmann, Phys. Rev. D [**49**]{}, 4908 (1994) \[hep-ph/9311269\]. J. Alwall, M. Herquet, F. Maltoni, O. Mattelaer and T. Stelzer, J. High Energy Phys. 06 (2011) 128 \[arXiv:1106.0522 \[hep-ph\]\]; J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer, H.-S. Shao and T. Stelzer [*et al.*]{}, J. High Energy Phys. 
07 (2014) 079 \[arXiv:1405.0301 \[hep-ph\]\]. T. Sjostrand, S. Mrenna and P. Z. Skands, J. High Energy Phys. 05 (2006) 026 \[hep-ph/0603175\]. PGS is a parameterized detector simulator. We use version 4 (<http://www.physics.ucdavis.edu/~conway/research/software/pgs/pgs4-general.htm>) in the LHC detector configuration. Default PGS muon reconstruction has been used for this paper. J. Pumplin, D. R. Stump, J. Huston, H. L. Lai, P. M. Nadolsky and W. K. Tung, J. High Energy Phys. 07 (2002) 012 \[hep-ph/0201195\]. M. L. Mangano, M. Moretti, F. Piccinini and M. Treccani, J. High Energy Phys. 01 (2007) 013 \[hep-ph/0611129\]. T. Figy, C. Oleari and D. Zeppenfeld, “Next-to-leading order jet distributions for Higgs boson production via weak boson fusion,” Phys. Rev. D [**68**]{}, 073005 (2003) \[hep-ph/0306109\]; P. Konar and D. Zeppenfeld, “Next-to-leading order QCD corrections to slepton pair production via vector-boson fusion,” Phys. Lett. B [**647**]{}, 460 (2007) \[hep-ph/0612119\]; C. Oleari and D. Zeppenfeld, Phys. Rev. D [**69**]{}, 093004 (2004) \[hep-ph/0310156\]. B. Jager, C. Oleari and D. Zeppenfeld, JHEP [**0607**]{}, 015 (2006) \[hep-ph/0603177\]; B. Jager, C. Oleari and D. Zeppenfeld, Phys. Rev. D [**73**]{}, 113006 (2006) \[hep-ph/0604200\]; G. Bozzi, B. Jager, C. Oleari and D. Zeppenfeld, Phys. Rev. D [**75**]{}, 073004 (2007) \[hep-ph/0701105\]. CMS Collaboration, arXiv:1408.3583 \[hep-ex\]. ATLAS Collaboration, ATLAS-CONF-2012-147, ATLAS-COM-CONF-2012-190. ATLAS Collaboration, J. High Energy Phys. 04 (2013) 075 \[arXiv:1210.4491 \[hep-ex\]\]. H. Baer, A. Mustafayev and X. Tata, Phys. Rev. D [**89**]{}, 055007 (2014) \[arXiv:1401.1162 \[hep-ph\]\]. C. Han, A. Kobakhidze, N. Liu, A. Saavedra, L. Wu and J. M. Yang, J. High Energy Phys. 02 (2014) 049 \[arXiv:1310.4274 \[hep-ph\]\]. P. Schwaller and J. Zurita, J. High Energy Phys. 03 (2014) 060 \[arXiv:1312.7350 \[hep-ph\]\]. L. E. Ibanez, Phys. Lett. B [**137**]{}, 160 (1984); J. S. Hagelin, G. L. Kane and S.
Raby, Nucl. Phys. B [**241**]{}, 638 (1984); T. Falk, K. A. Olive and M. Srednicki, Phys. Lett. B [**339**]{}, 248 (1994) \[hep-ph/9409270\]; C. Arina and N. Fornengo, J. High Energy Phys. 11 (2007) 029 \[arXiv:0709.4477 \[hep-ph\]\]. F. Abe [*et al.*]{} \[CDF Collaboration\], Phys. Rev. D [**45**]{}, 1448 (1992). [^1]: For the benchmark point $(m_{\tilde{l}_{L}},m_{\tilde{\chi}_{1}^{0}})=(135$ GeV$, 120$ GeV$)$, we have also generated and analysed a sample up to $\mathcal{O}(\alpha^{4}_{EW} \alpha^{4}_s)$, which includes up to 3-parton (inclusive) processes, but did not find any significant increase in the signal rate and significance. [^2]: For the shape-based analysis, we used distributions of $p_{T_{\mu}}$ generated after imposing all the cuts except the $p_{T_{\mu}}$ upper-bound cuts, namely $p_{T_{\mu_1}}+p_{T_{\mu_2}}<70$ GeV for the 2-muon channel and $p_{T_{\mu_1}}<30$ GeV for the 1-muon channel. One such distribution is presented in Fig. \[muonpt\] for the benchmark point $(m_{\tilde{l}_{L}},m_{\tilde{\chi}_{1}^{0}})=(115$ GeV$, 110$ GeV$)$.
---
abstract: |
  Let $C$ be a convex body and let $S$ be a nondegenerate simplex in ${\mathbb R}^n$. Denote by $\xi(C;S)$ the minimal $\tau>0$ such that $C$ is a subset of the simplex $\tau S$. By $\alpha(C;S)$ we mean the minimal $\tau>0$ such that $C$ is contained in a translate of $\tau S$. Earlier the author proved the equalities $\xi(C;S)=(n+1)\max\limits_{1\leq j\leq n+1} \max\limits_{x\in C}(-\lambda_j(x))+1$  (if $C\not\subset S$),   $\alpha(C;S)= \sum\limits_{j=1}^{n+1} \max\limits_{x\in C} (-\lambda_j(x))+1.$ Here $\lambda_j$ are linear functions called the basic Lagrange polynomials corresponding to $S$. In his previous papers, the author investigated these formulae for $C=[0,1]^n$. The present paper is related to the case when $C$ coincides with the unit Euclidean ball $B_n=\{x: \|x\|\leq 1\},$ where $\|x\|=\left(\sum\limits_{i=1}^n x_i^2 \right)^{1/2}.$ We establish various relations for $\xi(B_n;S)$ and $\alpha(B_n;S)$ and give their geometric interpretation.

  Keywords: $n$-dimensional simplex, $n$-dimensional ball, homothety, absorption index
author:
- 'Mikhail Nevskii[^1]'
date: 'May 5, 2019'
title: |
  On Some Problems\
  Related to a Simplex and a Ball
---

Preliminaries {#nev_s1}
=============

Throughout, $n\in{\mathbb N}.$ An element $x\in{\mathbb R}^n$ is written in the form $x=(x_1,\ldots,x_n).$ By definition, $$\|x\|=\sqrt{(x,x)}=\left(\sum\limits_{i=1}^n x_i^2\right)^{1/2},$$ $$B\left(x^{(0)};\varrho\right):=\{x\in{\mathbb R}^n: \|x-x^{(0)}\|\leq \varrho \} \quad \left(x^{(0)}\in {\mathbb R}^n, \varrho>0\right),$$   $$B_n:=B(0;1), \quad Q_n:=[0,1]^n, \quad Q_n^\prime:=[-1,1]^n.$$ Let $C$ be a convex body in ${\mathbb R}^n$.
Denote by $\tau C$ the image of $C$ under the homothety with center at the center of gravity of $C$ and ratio $\tau.$ For an $n$-dimensional nondegenerate simplex $S$, consider the value $\xi(C;S):=\min \{\sigma\geq 1: C\subset \sigma S\}.$ We call this number the [*absorption index of $S$ with respect to $C$.*]{} Define $\alpha(C;S)$ as the minimal $\tau>0$ such that the convex body $C$ is a subset of a translate of the simplex $\tau S$. By ${{\rm ver}}(G)$ we mean the set of vertices of a convex polytope $G$. Let $x^{(j)}=\left(x_1^{(j)},\ldots,x_n^{(j)}\right),$ $1\leq j\leq n+1,$ be the vertices of the simplex $S$. The matrix $${\bf A} := \left( \begin{array}{cccc} x_1^{(1)}&\ldots&x_n^{(1)}&1\\ x_1^{(2)}&\ldots&x_n^{(2)}&1\\ \vdots&\vdots&\vdots&\vdots\\ x_1^{(n+1)}&\ldots&x_n^{(n+1)}&1\\ \end{array} \right)$$ is nondegenerate. By definition, put ${\bf A}^{-1}$ $=(l_{ij})$. The linear polynomials $\lambda_j(x)= l_{1j}x_1+\ldots+ l_{nj}x_n+l_{n+1,j}$ whose coefficients make up the columns of ${\bf A}^{-1}$ have the property $\lambda_j\left(x^{(k)}\right)$ $=$ $\delta_j^k$, where $\delta_j^k$ is the Kronecker $\delta$-symbol. We call $\lambda_j$ the [*basic Lagrange polynomials corresponding to $S$.*]{} The numbers $\lambda_j(x)$ are the barycentric coordinates of a point $x\in{\mathbb R}^n$ with respect to $S$. The simplex $S$ is given by the system of linear inequalities $\lambda_j(x)\geq 0$. For more details about $\lambda_j$, see \[3; Chapter 1\]. The equality $\xi(C;S)=1$ is equivalent to the inclusion $C\subset S.$ If $C\not\subset S$, then $$\label{ksi_cs_equality} \xi(C;S)=(n+1)\max_{1\leq j\leq n+1} \max_{x\in C}(-\lambda_j(x))+1$$ (the proof was given in \[2\]; see also \[3; §1.3\]).
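The construction of the basic Lagrange polynomials is easy to verify numerically. Below is a small sketch in pure Python (the helper names are ours) that builds the matrix ${\bf A}$ from the vertices, inverts it, reads off the coefficients $l_{ij}$ from the columns of ${\bf A}^{-1}$, and allows one to check the property $\lambda_j\left(x^{(k)}\right)=\delta_j^k$:

```python
def mat_inv(a):
    """Invert a square matrix by Gauss-Jordan elimination with partial pivoting."""
    n = len(a)
    m = [list(map(float, row)) + [float(i == j) for j in range(n)]
         for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        p = m[col][col]
        m[col] = [v / p for v in m[col]]
        for r in range(n):
            if r != col and m[r][col] != 0.0:
                f = m[r][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [row[n:] for row in m]

def lagrange_polynomials(vertices):
    """Coefficient vectors (l_{1j}, ..., l_{nj}, l_{n+1,j}) of the basic
    Lagrange polynomials: the columns of A^{-1}."""
    n = len(vertices) - 1
    A = [list(v) + [1.0] for v in vertices]
    Ainv = mat_inv(A)
    return [[Ainv[i][j] for i in range(n + 1)] for j in range(n + 1)]

def lam(coeffs, x):
    """Evaluate lambda_j(x) = l_{1j} x_1 + ... + l_{nj} x_n + l_{n+1,j}."""
    return sum(c * xi for c, xi in zip(coeffs[:-1], x)) + coeffs[-1]
```

For the triangle with vertices $(0,0)$, $(1,0)$, $(0,1)$ this recovers $\lambda_1=1-x_1-x_2$, $\lambda_2=x_1$, $\lambda_3=x_2$.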
The relation $$\label{relation_cs} \max\limits_{x\in C} \left(-\lambda_1(x)\right)= \ldots= \max\limits_{x\in C} \left(-\lambda_{n+1}(x)\right)$$ holds true if and only if the simplex $\xi(C;S)S$ is circumscribed around the convex body $C.$ In the case $C=Q_n$, equality (\[ksi\_cs\_equality\]) can be reduced to the form $$\xi(Q_n;S)=(n+1)\max_{1\leq j\leq n+1} \max_{x\in {{\rm ver}}(Q_n)}(-\lambda_j(x))+1$$ and (\[relation\_cs\]) is equivalent to the relation $$\label{relation_qs} \max\limits_{x\in {{\rm ver}}(Q_n)} \left(-\lambda_1(x)\right)= \ldots= \max\limits_{x\in {{\rm ver}}(Q_n)} \left(-\lambda_{n+1}(x)\right).$$ For any $C$ and $S$, we have $\xi(C;S)\geq\alpha(C;S)$. The equality $\xi(C;S)=\alpha(C;S)$ holds only in the case when the simplex $\xi(C;S)S$ is circumscribed around the convex body $C.$ This is equivalent to (\[relation\_cs\]), and also to (\[relation\_qs\]) when $C=Q_n$. It was proved in \[4\] (see also \[3; §1.4\]) that $$\label{alpha_cs_equality} \alpha(C;S)= \sum_{j=1}^{n+1} \max_{x\in C} (-\lambda_j(x))+1.$$ If $C=Q_n$, then this formula can be written in a rather more geometric way: $$\label{alpha_d_i_formula} \alpha(Q_n;S) =\sum_{i=1}^n\frac{1}{d_i(S)}.$$ Here $d_i(S)$ is [*the $i$th axial diameter of the simplex $S$,*]{} i.e., the length of a longest segment in $S$ parallel to the $i$th coordinate axis. Equality (\[alpha\_d\_i\_formula\]) was obtained in \[11\]. When $S\subset Q_n,$ we have $d_i(S)\leq 1.$ Therefore, for these simplices, (\[alpha\_d\_i\_formula\]) gives $$\label{ksi_alpha_n_ineq} \xi(Q_n;S)\geq\alpha(Q_n;S) =\sum_{i=1}^n\frac{1}{d_i(S)}\geq n.$$ Earlier the author established the equality $$\label{d_i_l_ij_formula} \frac{1}{d_i(S)}=\frac{1}{2}\sum_{j=1}^{n+1} |l_{ij}|$$ (see \[2\]).
Being combined, (\[alpha\_d\_i\_formula\]) and (\[d\_i\_l\_ij\_formula\]) yield $$\label{alpha_qs_formula} \alpha(Q_n;S)=\frac{1}{2}\sum_{i=1}^n\sum_{j=1}^{n+1} |l_{ij}|.$$ Note that $\alpha(C;S)$ is invariant under parallel translations of the sets, and for $\tau>0$ we have $\alpha(\tau C;S)=\tau\alpha(C;S).$ Since $Q_n^\prime=[-1,1]^n$ is a translate of the cube $2Q_n$, after replacing $Q_n$ with $Q_n^\prime$ we obtain from (\[alpha\_qs\_formula\]) an even simpler formula: $$\label{alpha_q_prime_s_formula} \alpha(Q_n^\prime;S)=\sum_{i=1}^n\sum_{j=1}^{n+1} |l_{ij}|.$$ Let us define the value $$\xi_n:=\min \{ \xi(Q_n;S): \, S \mbox{ is an $n$-dimensional simplex,} \, S\subset Q_n, \, {{\rm vol}}(S)\ne 0\}.$$ Various estimates of $\xi_n$ were obtained first by the author and then by the author and A. Yu. Ukhalov (e.g., see papers \[1\], \[2\], \[5\], \[6\], \[7\], \[8\], \[12\] and book \[3\]). Always $n\leq \xi_n<n+1$. Nowadays the precise values of $\xi_n$ are known for $n=2,5,9$ and also for the infinite set of odd $n$’s for each of which there exists a Hadamard matrix of order $n+1$. If $n\ne 2$, then every known value of $\xi_n$ is equal to $n$, whereas $\xi_2=1+\frac{3\sqrt{5}}{5}=2.34\ldots$ It still remains unknown whether there exists an even $n$ with the property $\xi_n=n$. There are some other open problems concerning the numbers $\xi_n$. In this article, we discuss the analogues of the above characteristics for a simplex and a Euclidean ball. Replacing a cube with a ball makes many questions much simpler. However, the geometric interpretation of the general results has a certain interest also in this particular case. Besides, we note some new applications of the basic Lagrange polynomials. Numerical characteristics connecting simplices and subsets of ${\mathbb R}^n$ have applications for obtaining various estimates in polynomial interpolation of functions defined on multidimensional domains.
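Formulae (\[alpha\_d\_i\_formula\]) and (\[d\_i\_l\_ij\_formula\]) are easy to illustrate numerically. A minimal Python sketch (the simplex and variable names are ours), using the corner triangle of $Q_2$, whose basic Lagrange polynomials are known in closed form (in general the $l_{ij}$ are read off from ${\bf A}^{-1}$):

```python
# Corner triangle S with vertices (0,0), (1,0), (0,1) in Q_2 = [0,1]^2.
# Its basic Lagrange polynomials are lambda_1 = 1 - x1 - x2, lambda_2 = x1,
# lambda_3 = x2, so the coefficient matrix (l_ij), i = coordinate, j = polynomial:
l = [[-1.0, 1.0, 0.0],   # coefficients of x1 in lambda_1, lambda_2, lambda_3
     [-1.0, 0.0, 1.0]]   # coefficients of x2 in lambda_1, lambda_2, lambda_3

inv_d = [0.5 * sum(abs(v) for v in row) for row in l]  # 1/d_i = (1/2) sum_j |l_ij|
d = [1.0 / v for v in inv_d]                           # axial diameters d_i(S)
alpha = sum(inv_d)                                     # alpha(Q_2; S) = sum_i 1/d_i
```

Here both axial diameters equal $1$, so $\alpha(Q_2;S)=2$, consistent with the lower bound $n$ in (\[ksi\_alpha\_n\_ineq\]).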
This approach and the corresponding analytic methods were described in detail in \[3\]. Lately these questions have also been studied by computer methods (see, e.g., \[5\], \[6\], \[8\], \[12\]).

The value $\alpha(B_n;S)$ {#nev_s2}
=========================

The [*inradius of an $n$-dimensional simplex $S$*]{} is the maximum of the radii of balls contained in $S$. The center of this unique maximal ball is called the [*incenter of $S$.*]{} The boundary of the maximal ball is a sphere that has a single common point with each $(n-1)$-dimensional face of $S$. By the [*circumradius of $S$*]{} we mean the minimum of the radii of balls containing $S$. The boundary of this unique minimal ball does not necessarily contain all the vertices of $S$; it does so only when the center of the minimal ball lies inside the simplex. The inradius $r$ and the circumradius $R$ of a simplex $S$ satisfy the so-called [*Euler inequality*]{} $$\label{euler_ineq} R\geq nr.$$ Equality in (\[euler\_ineq\]) takes place if and only if $S$ is a regular simplex. Concerning the proofs of the Euler inequality, its history and generalizations, see, e.g., \[10\], \[13\], \[14\]. In connection with (\[euler\_ineq\]), let us note an analogue of the following property of parallelotopes (see \[11\], \[3; §1.8\]). [*Let $S$ be a nondegenerate simplex and let $D,$ $D^*$ be parallelotopes in ${\mathbb R}^n.$ Suppose $D^*$ is a homothetic copy of $D$ with ratio $\tau>1.$ If $D\subset S \subset D^*,$ then $\tau\geq n.$*]{} This proposition holds true also for balls. In fact, the Euler inequality is equivalent to the following statement. [*Suppose $B$ is a ball with radius $r_1$ and $B^*$ is a ball with radius $r_2$.
If $B\subset S\subset B^*$, then $r_1\leq nr_2.$ Equality takes place if and only if $S$ is a regular simplex inscribed into $B^*$ and $B$ is the ball inscribed into $S$.*]{} Another equivalent form of these propositions is given by Theorem 2 (see the note after the proof of this theorem). Let $x^{(1)},$ $\ldots,$ $x^{(n+1)}$ be the vertices and let $\lambda_1,$ $\ldots,$ $\lambda_{n+1}$ be the basic Lagrange polynomials of a nondegenerate simplex $S\subset {\mathbb R}^n$ (see Section 1). In what follows, $\Gamma_j$ is the $(n-1)$-dimensional hyperplane given by the equation $\lambda_j(x)=0$; by $\Sigma_j$ we mean the $(n-1)$-dimensional face of $S$ contained in $\Gamma_j$; the symbol $h_j$ denotes the height of $S$ drawn from the vertex $x^{(j)}$ onto $\Gamma_j$; and $r$ denotes the inradius of $S$. Define $\sigma_j$ as the $(n-1)$-measure of $\Sigma_j$ and put $\sigma:=\sum\limits_{j=1}^{n+1} \sigma_j$. Consider the vector $a_j:=\{l_{1j},\ldots,l_{nj}\}$. This vector is orthogonal to $\Gamma_j$ and directed into the half-space containing $x^{(j)}$. Obviously, $$\lambda_j(x)= l_{1j}x_1+\ldots+ l_{nj}x_n+l_{n+1,j}=(a_j,x)+l_{n+1,j}=(a_j,x)+\lambda_j(0).$$ Let us derive these pairwise-equivalent equalities one after another, from the top to the bottom. First we note that $$\label{alpha_bs_equality} \alpha(B_n;S)= \sum_{j=1}^{n+1} \max_{x\in B_n} (-\lambda_j(x))+1.$$ Formula (\[alpha\_bs\_equality\]) is the particular case of (\[alpha\_cs\_equality\]) in the situation $C=B_n$. By the Cauchy inequality, $$\label{cauchy_ineq} -\|a_j\|\|x\|\leq (a_j,x)\leq \|a_j\|\|x\|,$$ $$-\|a_j\|\|x\|-\lambda_j(0)\leq -\lambda_j(x)\leq \|a_j\|\|x\|-\lambda_j(0).$$ Both the upper and the lower bounds in (\[cauchy\_ineq\]) are attainable.
This gives $$\max_{x\in B_n} (-\lambda_j(x))= \max_{\|x\|\leq 1} (-\lambda_j(x))= \|a_j\|-\lambda_j(0).$$ Therefore, $$\alpha(B_n;S)= \sum_{j=1}^{n+1} \max_{x\in B_n} (-\lambda_j(x))+1= \sum_{j=1}^{n+1}\|a_j\|-\sum_{j=1}^{n+1}\lambda_j(0)+1= \sum_{j=1}^{n+1}\left(\sum_{i=1}^n l_{ij}^2\right)^{1/2}.$$ Here we made use of the equality $\sum\limits_{j=1}^{n+1}\lambda_j(0)=1.$ Since $\lambda_j\left(x^{(j)}\right)=1$, we have $$h_j={\rm dist}\left(x^{(j)};\Gamma_j\right)= \frac{\left|\lambda_j\left(x^{(j)}\right)\right|}{\|a_j\|}= \frac{1}{\|a_j\|}=\frac{1}{\left(\sum\limits_{i=1}^n l_{ij}^2\right)^{1/2}}.$$ Consequently, $$\alpha(B_n;S)= \sum_{j=1}^{n+1}\left(\sum_{i=1}^n l_{ij}^2\right)^{1/2} =\sum_{j=1}^{n+1}\frac{1}{h_j}.$$ We have obtained both (\[alpha\_bs\_sum\_l\_ij\_equality\]) and (\[alpha\_bs\_h\_j\_equality\]). Let us prove (\[alpha\_bs\_1\_r\_equality\]). The ball $B_n$ is a subset of a translate of the simplex $\alpha(B_n;S)S$. This means that a translate of the ball $\frac{1}{\alpha(B_n;S)}B_n$ is contained in $S$. Since the maximum of the radii of balls contained in $S$ is equal to $r$, it holds that $\frac{1}{\alpha(B_n;S)}\leq r,$ i.e., $\alpha(B_n;S)\geq \frac{1}{r}$. To obtain the reverse inequality, denote by $B^\prime$ a ball of radius $r$ inscribed into $S$. Then $\frac{1}{r}B^\prime$ is a unit ball, so $B_n$ is a subset of some translate of $\frac{1}{r}S$. Using the definition of $\alpha(B_n;S)$, we can write $\alpha(B_n;S)\leq \frac{1}{r}$. So, we have $\alpha(B_n;S)=\frac{1}{r}$. Finally, in order to establish (\[alpha\_bs\_sigma\_nV\]), it is sufficient to utilize (\[alpha\_bs\_1\_r\_equality\]) and the formula ${{\rm vol}}(S)=\frac{1}{n}\sigma r$. The latter equality can be obtained from the usual formula for the volume of a simplex after subdividing $S$ into $n+1$ simplices in such a way that the $j$th of these simplices has a vertex at the center of the inscribed ball and is supported on $\Sigma_j$.
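The chain of equalities just derived, $\alpha(B_n;S)=\sum_j\|a_j\|=\sum_j 1/h_j=1/r$, can be checked on a concrete example, say the regular triangle inscribed in the unit circle, for which $r=1/2$. A Python sketch (the closed-form coefficients below are particular to this example; in general the $a_j$ are read off from the columns of ${\bf A}^{-1}$):

```python
import math

# Regular triangle inscribed in the unit circle; its inradius is r = 1/2.
# Here the gradient vectors of the basic Lagrange polynomials are known in
# closed form: a_j = (2/3) x^(j), since lambda_j(x) = (2/3)(x, x^(j)) + 1/3.
verts = [(math.cos(2.0 * math.pi * j / 3.0),
          math.sin(2.0 * math.pi * j / 3.0)) for j in range(3)]
a = [(2.0 * vx / 3.0, 2.0 * vy / 3.0) for vx, vy in verts]

h = [1.0 / math.hypot(ax, ay) for ax, ay in a]    # heights h_j = 1 / ||a_j||
alpha = sum(math.hypot(ax, ay) for ax, ay in a)   # alpha(B_2; S) = sum_j ||a_j||
```

Each height equals $3/2$, so $\alpha(B_2;S)=3\cdot\frac{2}{3}=2=1/r$, as the theorem predicts.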
$\Box$ To prove this, it suffices to apply (\[alpha\_bs\_h\_j\_equality\]) and (\[alpha\_bs\_1\_r\_equality\]). It seems interesting that this geometric relation (which evidently can also be obtained in a direct way) turns out to be equivalent to the general formula for $\alpha(C;S)$ in the particular case when the convex body $C$ coincides with the Euclidean unit ball. Equality (\[r\_formula\]) follows immediately from (\[alpha\_bs\_sum\_l\_ij\_equality\]) and (\[alpha\_bs\_1\_r\_equality\]). To obtain (\[z\_formula\]), let us remark that $$r= {\rm dist}(z;\Gamma_j)= \frac{|\lambda_j(z)|}{\|a_j\|}.$$ Since $z$ lies inside $S$, each barycentric coordinate $\lambda_j(z)$ of this point is positive, i.e., $\lambda_j(z)=r\|a_j\|.$ Consequently, $$z=\sum_{j=1}^{n+1}\lambda_j(z)x^{(j)}= r\sum_{j=1}^{n+1} \|a_j\| x^{(j)}.$$ This coincides with (\[z\_formula\]). Finally, since the vector $a_k=\{l_{1k},\ldots,l_{nk}\}$ is orthogonal to $\Sigma_k$ and is directed from this facet into the simplex, the unique common point of $B(z;r)$ and $\Sigma_k$ has the form $$y^{(k)}=z-\frac{r}{\|a_k\|}a_k=r\left( \sum_{j=1}^{n+1} \|a_j\| x^{(j)}-\frac{1}{\|a_k\|} a_k\right).$$ The latter is equivalent to (\[y\_k\_formula\]). $\Box$ It is interesting to compare (\[alpha\_bs\_sum\_l\_ij\_equality\]) with formula (\[alpha\_q\_prime\_s\_formula\]) for $\alpha(Q_n^\prime;S)$. Since $B_n$ is a subset of the cube $Q_n^\prime=[-1,1]^n$, we have $\alpha(B_n;S)\leq \alpha(Q_n^\prime;S)$. Analytically, this also follows from the estimate $$\left(\sum_{i=1}^n l_{ij}^2\right)^{1/2}\leq \sum_{i=1}^n |l_{ij}|.$$ For arbitrary $x^{(0)}$ and $\varrho>0$, the number $\alpha\left(B(x^{(0)};\varrho);S\right)$ can be calculated with the use of Theorem 1 and the equality $\alpha(B(x^{(0)};\varrho);S)$ $=$ $\varrho\alpha(B_n;S)$. If $S\subset Q_n$, then all the axial diameters $d_i(S)$ do not exceed $1$, and (\[alpha\_d\_i\_formula\]) immediately gives $\alpha(Q_n;S)\geq n$.
Moreover, the equality $\alpha(Q_n;S)=n$ holds when and only when each $d_i(S)=1$. The following proposition expresses the analogues of these properties for simplices contained in a ball. By the definition of $\alpha(B_n;S)$, the ball $B_n$ is contained in a translate of the simplex $\alpha(B_n;S)S$. Hence, some translate $B^\prime$ of the ball $\frac{1}{\alpha(B_n;S)}B_n$ is a subset of $S$. So, we have the inclusions $B^\prime\subset S\subset B_n$. Since the radius of $B^\prime$ is equal to $\frac{1}{\alpha(B_n;S)}$, the inradius $r$ and the circumradius $R$ of $S$ satisfy the inequalities $\frac{1}{\alpha(B_n;S)}\leq r,$ $R\leq 1$. Making use of the Euler inequality $R\geq nr$, we can write $$\label{theor2_ineqs} \frac{1}{\alpha(B_n;S)}\leq r\leq \frac{R}{n}\leq \frac{1}{n}.$$ Therefore, $\alpha(B_n;S)\geq n.$ The equality $\alpha(B_n;S)=n$ means that the left-hand value in (\[theor2\_ineqs\]) coincides with the right-hand one. Thus, all the inequalities in this chain turn into equalities. We obtain $R=1,$ $r=\frac{1}{n}$. Since in this case the Euler inequality (\[euler\_ineq\]) also becomes an equality, $S$ is a regular simplex inscribed into $B_n$. Conversely, if $S$ is a regular simplex inscribed into $B_n$, then $r=\frac{1}{n}$, i.e., $\alpha(B_n;S)=\frac{1}{r}=n$. $\Box$ We see that Theorem 2 follows from the Euler inequality (\[euler\_ineq\]). In fact, these statements are equivalent. Indeed, suppose $S$ is an arbitrary $n$-dimensional simplex, $r$ is the inradius and $R$ is the circumradius of $S$. Let us denote by $B$ the ball containing $S$ and having radius $R$. Then some translate $S^\prime$ of the simplex $\frac{1}{R}S$ is contained in $B_n$. By Theorem 1, $\alpha(B_n;S^\prime)$ is the inverse of the inradius of $S^\prime$, i.e., is equal to $\frac{R}{r}.$ Now assume that Theorem 2 is true. Let us apply this theorem to the simplex $S^\prime\subset B_n$. This gives $\alpha(B_n;S^\prime)=\frac{R}{r}\geq n$ and we have (\[euler\_ineq\]).
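The Euler inequality $R\geq nr$, together with its equality case for the regular simplex, can be checked numerically for $n=2$ with elementary formulas. The sketch below is an illustration of ours, not taken from the paper; the circumcenter is found from $2(v_j-v_0)\cdot c=\|v_j\|^2-\|v_0\|^2$ by Cramer's rule.

```python
import math

def circumradius(v):
    """Circumradius of a triangle via Cramer's rule for the circumcenter."""
    (x0, y0), (x1, y1), (x2, y2) = v
    a11, a12, b1 = 2 * (x1 - x0), 2 * (y1 - y0), x1 * x1 + y1 * y1 - x0 * x0 - y0 * y0
    a21, a22, b2 = 2 * (x2 - x0), 2 * (y2 - y0), x2 * x2 + y2 * y2 - x0 * x0 - y0 * y0
    det = a11 * a22 - a12 * a21
    cx, cy = (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det
    return math.hypot(cx - x0, cy - y0)

def inradius(v):
    """Inradius of a triangle: r = 2*Area / perimeter."""
    (x0, y0), (x1, y1), (x2, y2) = v
    area = abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) / 2
    per = math.dist(v[0], v[1]) + math.dist(v[1], v[2]) + math.dist(v[2], v[0])
    return 2 * area / per

right = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]                      # R > 2r (strict)
reg = [(1.0, 0.0), (-0.5, math.sqrt(3) / 2), (-0.5, -math.sqrt(3) / 2)]  # R = 2r
```

For the right triangle $R=\sqrt{2}/2$ and $2r=2-\sqrt{2}$, so the inequality is strict; for the regular triangle inscribed in the unit circle, $R=1=2r$.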
Finally, if $R=nr,$ then $\alpha(B_n;S^\prime)=n$. From Theorem 2 we obtain that both $S^\prime$ and $S$ are regular simplices. It follows from (\[ksi\_alpha\_n\_ineq\]) that the minimum value of $\alpha(Q_n;S)$ for $S\subset Q_n$ is also equal to $n$. This minimal value corresponds to those and only those $S\subset Q_n$ for which every axial diameter $d_i(S)$ is equal to $1$. The noted property is fulfilled for the maximum volume simplices in $Q_n$ (see \[3\]), but not only for these simplices, if $n>2$. The value $\xi(B_n;S)$ {#nev_s3} ====================== In this section, we obtain a computational formula for the absorption index of a simplex $S$ with respect to a Euclidean ball. We use the previous notation. Let us apply the general formula (\[ksi\_cs\_equality\]) in the case $C=B\left(x^{(0)};\varrho\right)$. The Cauchy inequality yields $$\label{cauchy_for_ksi_ineq} -\|a_j\|\|x-x^{(0)}\|\leq (a_j,x-x^{(0)})\leq \|a_j\|\|x-x^{(0)}\|.$$ If $\|x-x^{(0)}\|\leq \varrho$, we see that $$-\varrho\|a_j\|\leq (a_j,x)-(a_j,x^{(0)}) \leq \varrho\|a_j\|,$$ $$-\lambda_j(x)=-(a_j,x)-l_{n+1,j}\leq \varrho\|a_j\|-(a_j,x^{(0)})-l_{n+1,j}.$$ Since both the upper and the lower bounds in (\[cauchy\_for\_ksi\_ineq\]) are attainable, $$\max_{\|x-x^{(0)}\|\leq \varrho} (-\lambda_j(x))= \varrho\left(\sum_{i=1}^n l_{ij}^2\right)^{1/2}- \sum_{i=1}^n l_{ij}x_i^{(0)}-l_{n+1,j}.$$ It follows that $$\xi\left(B\left(x^{(0)};\varrho\right);S\right)= (n+1)\max_{1\leq j\leq n+1, \|x-x^{(0)}\|\leq \varrho} (-\lambda_j(x))+1=$$ $$=(n+1)\max_{1\leq j\leq n+1} \left[\varrho\left(\sum_{i=1}^n l_{ij}^2\right)^{1/2}- \sum_{i=1}^n l_{ij}x_i^{(0)}-l_{n+1,j}\right]+1,$$ and we obtain (\[ksi\_b\_x0\_ro\_s\_l\_ij\_equality\]). Equality (\[ksi\_bs\_l\_ij\_equality\]) follows from (\[ksi\_b\_x0\_ro\_s\_l\_ij\_equality\]) for $x^{(0)}=0, \varrho=1$. $\Box$ The equality $\beta_n=n$.
Commentaries {#nev_s4} ====================================== The statement immediately follows from Theorem 2 and the inequality $\xi(B_n;S)\geq \alpha(B_n;S)$. We also give here a direct proof that does not apply the Euler inequality, which was used to obtain the estimate $\alpha(B_n;S)\geq n$. First let $S$ be a regular simplex inscribed into $ B_n $. Then $\alpha(B_n;S)=n$ and the inradius of $S$ is equal to $\frac{1}{n}$. Since the simplex $\xi(B_n;S)S$ is circumscribed around $B_n$, we have the equalities $\xi(B_n;S)=\alpha(B_n;S)=n$ and also relation (\[relation\_cs\]) with $C=B_n$. It follows from (\[ksi\_cs\_equality\]) that for any $j=1,\ldots,n+1$ $$\max_{x\in B_n} (-\lambda_j(x))=\frac{n-1}{n+1},$$ where $\lambda_j$ are the basic Lagrange polynomials related to $S$. Now suppose the simplex $S$ is contained in $B_n$ but is not regular or is not inscribed into the ball. Denote the Lagrange polynomials of this simplex by $\mu_j$. There exist a regular simplex $S^*$ inscribed into $B_n$ and an integer $k$ such that $S$ is contained in the strip $0\leq\lambda_k(x)\leq 1$, the $k$th $(n-1)$-dimensional faces of $S$ and $S^*$ are parallel, and $S$ has no common points with at least one of the boundary hyperplanes of this strip. Here $\lambda_j$ are the basic Lagrange polynomials of $S^*$. The vertex $x^{(k)}$ of the simplex $S^*$ does not lie in its $k$th facet. Assume $u$ is the point of the boundary of $B_n$ most distant from $x^{(k)}$. Then $u$ is the maximum point of the polynomial $-\lambda_k(x)$, i.e., $-\lambda_k(u)=\frac{n-1}{n+1}$. Consider the straight line connecting $x^{(k)}$ and $u$. Denote by $y,z$ and $t$ the intersection points of this line with the pairwise parallel hyperplanes $\mu_k(x)=1,$ $\mu_k(x)=0$ and $\lambda_k(x)=0$, respectively. We have $$\label{ineqs_one_strong} \|x^{(k)}-t\|\geq \|y-z\|, \quad \|t-u\|\leq \|z-u\|.$$ At least one of these inequalities is fulfilled in the strict form.
The linearity of the basic Lagrange polynomials means that $$\frac{\mu_k(z)-\mu_k(u)}{\mu_k(y)-\mu_k(z)}= \frac{\|z-u\|}{\|y-z\|}, \quad \frac{\lambda_k(t)-\lambda_k(u)}{\lambda_k\left(x^{(k)}\right)-\lambda_k(t)}= \frac{\|t-u\|}{\left\|x^{(k)}-t\right\|}.$$ Since $\mu_k(y)=1,$ $\mu_k(z)=0,$ $\lambda_k\left(x^{(k)}\right)=1,$ and $\lambda_k(t)=0$, we get $$-\mu_k(u)= \frac{\|z-u\|}{\|y-z\|} > \frac{\|t-u\|}{\left\|x^{(k)}-t\right\|}=-\lambda_k(u)=\frac{n-1}{n+1}.$$ Here we made use of (\[ineqs\_one\_strong\]) and took into account that at least one of the inequalities is strict. The application of (\[ksi\_cs\_equality\]) yields $$\xi(B_n;S)=(n+1)\max_{1\leq j\leq n+1} \max_{x\in B_n}(-\mu_j(x))+1\geq (n+1)(-\mu_k(u))+1>n.$$ Thus, if $S$ is not a regular simplex inscribed into $B_n$, then $\xi(B_n;S)>n$. We see that each simplex $S\subset B_n$ satisfies the estimate $\xi(B_n;S)\geq n$. The equality takes place if and only if $S$ is a regular simplex inscribed into $B_n$. $\Box$ By analogy with the value $\xi_n=\min\{\xi(Q_n;S): S\subset Q_n\}$ defined through the unit cube, let us introduce the similar numerical characteristic given by the unit ball: $$\beta_n:=\min \{ \xi(B_n;S): \, S \mbox{ is an $n$-dimensional simplex,} \, S\subset B_n, \, {{\rm vol}}(S)\ne 0\}.$$ Many problems concerning $\xi_n$ have not yet been solved. For example, $\xi_2 = 1 + \frac{3\sqrt{5}}{5}$ still remains the only exact value of $\xi_n$ for even $n$; moreover, this value was discovered in a rather difficult way (see \[3; Chapter 2\]). Compared to $\xi_n$, the problem of the numbers $\beta_n$ turns out to be trivial. It is sufficient to apply Theorem 4. $\Box$ The technique developed for a ball makes it possible to illustrate some results obtained earlier for a cube. Here we note a proof of the following known statement which differs from the proofs given in \[3; §3.2\] and \[12\].
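Formula (\[ksi\_b\_x0\_ro\_s\_l\_ij\_equality\]) is straightforward to evaluate once the coefficients $l_{ij}$ are known. In the sketch below (a check of ours; the Lagrange polynomials of this triangle are worked out by hand) it is evaluated for the regular triangle inscribed in the unit disk, for which Theorem 4 predicts $\xi(B_2;S)=2$, and hence $\beta_2=2$.

```python
import math

# Regular triangle inscribed in the unit disk, with vertices
# (1,0), (-1/2, sqrt(3)/2), (-1/2, -sqrt(3)/2); its basic Lagrange polynomials are
#   lambda_1 = (1+2x)/3, lambda_2 = (1-x+sqrt(3)y)/3, lambda_3 = (1-x-sqrt(3)y)/3.
s3 = math.sqrt(3.0)
a = [(2 / 3, 0.0), (-1 / 3, s3 / 3), (-1 / 3, -s3 / 3)]  # gradients a_j
c = [1 / 3, 1 / 3, 1 / 3]                                # constants l_{n+1,j}

def xi_ball(x0, rho, n=2):
    # xi(B(x0; rho); S) = (n+1) * max_j [ rho*||a_j|| - (a_j, x0) - l_{n+1,j} ] + 1
    return (n + 1) * max(
        rho * math.hypot(*a[j]) - (a[j][0] * x0[0] + a[j][1] * x0[1]) - c[j]
        for j in range(n + 1)
    ) + 1

xi = xi_ball((0.0, 0.0), 1.0)   # xi(B_2; S) = 2 for this simplex
```

Here $\|a_j\|=2/3$ and $l_{3,j}=\lambda_j(0)=1/3$ for every $j$, so the formula gives $3\cdot(2/3-1/3)+1=2$.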
It is known (see, e.g., \[9\]) that for these and only these $n$ we can inscribe into $Q_n$ a regular simplex $S$ so that all the vertices of $S$ coincide with vertices of the cube. Let us denote by $B$ the ball with radius $\frac{\sqrt{n}}{2}$ centered at the center of the cube. Clearly, $Q_n$ is inscribed into $B$; therefore, the simplex is inscribed into the ball as well. Since $S$ is regular, by Theorem 4 and by similarity reasons, we have $\xi(B;S)=n.$ The inclusion $Q_n\subset B$ means that $\xi(Q_n;S)\leq \xi(B;S),$ i.e., $\xi(Q_n;S)\leq n$. From (\[ksi\_alpha\_n\_ineq\]) it follows that the inverse inequality $\xi(Q_n;S)\geq n$ is also true. Hence, $\xi(Q_n;S)=n$. Simultaneously, (\[ksi\_alpha\_n\_ineq\]) gives $\xi_n=\xi(Q_n;S)=n$. $\Box$ This argument is based on the following fact: if $S$ is a regular simplex with the vertices in vertices of $Q_n$, then the simplex $nS$ absorbs not only the cube $Q_n$ but also the ball $B$ circumscribed around the cube. The corresponding absorption index $n$ is the minimum possible both for the cube and the ball. In addition, we mention the following property. The inclusion $B\subset nS$ would imply that $\xi(B;S)=n$, and hence that $S$ is a regular simplex inscribed into the ball $B$. Since this is not the case, $B$ is not a subset of $nS$. $\Box$ Simplices satisfying the condition of Corollary 5 exist at least for $n=3, 5,$ and $9$ (see \[12\]). The relations (\[ksi\_alpha\_n\_ineq\]) mean that always $\xi_n\geq n$. Since $\xi_2=1+\frac{3\sqrt{5}}{5}>2$, there exist $n$’s such that $\xi_n>n$. Besides the cases when $n+1$ is a Hadamard number, the equality $\xi_n=n$ is established for $n=5$ and $n=9$ (the extremal simplices in ${\mathbb R}^5$ and ${\mathbb R}^9$ are given in \[12\]). For all such dimensions, $\xi_n=\beta_n$ holds true, i.e., with respect to the minimum absorption index of an internal simplex, both convex bodies, an $n$-dimensional cube and an $n$-dimensional ball, exhibit the same behavior.
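The equality $\xi(Q_n;S)=n$ for a regular simplex with vertices at vertices of the cube can be checked directly from (\[ksi\_cs\_equality\]) for $n=3$ ($n+1=4$ is a Hadamard number). The sketch below is our own check: the Lagrange polynomials of this simplex are written out explicitly, and the maximum of an affine function over $Q_n$ is attained at a cube vertex, so it suffices to enumerate the $2^n$ vertices.

```python
import itertools

# Regular simplex with vertices (0,0,0), (1,1,0), (1,0,1), (0,1,1) of Q_3 = [0,1]^3:
#   lambda_1 = 1-(x+y+z)/2, lambda_2 = (x+y-z)/2,
#   lambda_3 = (x-y+z)/2,   lambda_4 = (-x+y+z)/2.
lams = [
    lambda x, y, z: 1 - (x + y + z) / 2,
    lambda x, y, z: (x + y - z) / 2,
    lambda x, y, z: (x - y + z) / 2,
    lambda x, y, z: (-x + y + z) / 2,
]
n = 3
# max over Q_n of each affine -lambda_j, taken over the cube vertices
m = max(max(-l(*v) for v in itertools.product((0, 1), repeat=n)) for l in lams)
xi_cube = (n + 1) * m + 1   # = 4 * (1/2) + 1 = 3 = n
```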
The equality $\xi_n=n$ is equivalent to the existence of simplices satisfying the inclusions $S\subset Q_n\subset nS$. Some properties of such simplices (e.g., the fact that the center of gravity of $S$ coincides with the center of the cube; see \[7\]) are similar to the properties of regular simplices inscribed into the ball. However, the problem of describing the set of all dimensions in which such simplices exist seems to be very difficult and is currently far from a solution. **References** - Nevskij, M.V., On a certain relation for the minimal norm of an interpolational projection, [*Model. Anal. Inform. Sist.*]{}, 2009, vol. 16, no. 1, pp. 24–43 (in Russian). - Nevskii, M.V., On a property of $n$-dimensional simplices, [*Math. Notes*]{}, 2010, vol. 87, no. 4, pp. 543–555. - Nevskii, M.V., [*Geometricheskie ocenki v polinomialnoi interpolyacii*]{} (Geometric Estimates in Polynomial Interpolation), Yaroslavl’: Yarosl. Gos. Univ., 2012 (in Russian). - Nevskii, M.V., On the minimal positive homothetic image of a simplex containing a convex body, [*Math. Notes*]{}, 2013, vol. 93, no. 3–4, pp. 470–478. - Nevskii, M.V., and Ukhalov, A.Yu., On numerical characteristics of a simplex and their estimates, [*Model. Anal. Inform. Sist.*]{}, 2016, vol. 23, no. 5, pp. 603–619 (in Russian). English transl.: [*Aut. Control Comp. Sci.*]{}, 2017, vol. 51, no. 7, pp. 757–769. - Nevskii, M.V., and Ukhalov, A.Yu., New estimates of numerical values related to a simplex, [*Model. Anal. Inform. Sist.*]{}, 2017, vol. 24, no. 1, pp. 94–110 (in Russian). English transl.: [*Aut. Control Comp. Sci.*]{}, 2017, vol. 51, no. 7, pp. 770–782. - Nevskii, M.V., and Ukhalov, A.Yu., On $n$-dimensional simplices satisfying inclusions $S\subset [0,1]^n\subset nS$, [*Model. Anal. Inform. Sist.*]{}, 2017, vol. 24, no. 5, pp. 578–595 (in Russian). English transl.: [*Aut. Control Comp. Sci.*]{}, 2018, vol. 52, no. 7, pp. 667–679.
- Nevskii, M.V., and Ukhalov, A.Yu., On minimal absorption index for an $n$-dimensional simplex, [*Model. Anal. Inform. Sist.*]{}, 2018, vol. 25, no. 1, pp. 140–150 (in Russian). English transl.: [*Aut. Control Comp. Sci.*]{}, 2018, vol. 52, no. 7, pp. 680–687. - Hudelson, M., Klee, V., and Larman, D., Largest $j$-simplices in $d$-cubes: some relatives of the Hadamard maximum determinant problem, [*Linear Algebra Appl.*]{}, 1996, vol. 241–243, pp. 519–598. - Klamkin, M.S., and Tsifinis, G.A., Circumradius–inradius inequality for a simplex, [*Mathematics Magazine*]{}, 1979, vol. 52, no. 1, pp. 20–22. - Nevskii, M., Properties of axial diameters of a simplex, [*Discr. Comput. Geom.*]{}, 2011, vol. 46, no. 2, pp. 301–312. - Nevskii, M., and Ukhalov, A., Perfect simplices in ${\mathbb R}^5$, [*Beitr. Algebra Geom.*]{}, 2018, vol. 59, no. 3, pp. 501–521. - Yang, S., and Wang, J., Improvements of $n$-dimensional Euler inequality, [*Journal of Geometry*]{}, 1994, vol. 51, pp. 190–195. - Vince, A., A simplex contained in a sphere, [*Journal of Geometry*]{}, 2008, vol. 89, no. 1–2, pp. 169–178. [^1]: Department of Mathematics, P.G. Demidov Yaroslavl State University, Sovetskaya str., 14, Yaroslavl, 150003, Russia orcid.org/0000-0002-6392-7618 mnevsk55@yandex.ru
--- abstract: 'One type of gravitational-wave signal in the LIGO-Virgo sensitivity band is expected to be emitted by spinning asymmetric neutron stars, whose rotational frequencies could plausibly place continuous gravitational radiation in the most sensitive band of the LIGO-Virgo detectors. The most important feature of this kind of signal is its phase evolution, which is stable over a long observation run. When using analyses based on matched filtering, the phase evolution of long-coherent signals is needed to define how to build a proper template grid in order to obtain the best possible signal-to-noise ratio. This information is encoded in a matrix called the *phase metric*, which characterizes the geometry of the likelihood given by the matched filtering. Most of the time, the metric for long-coherent signals cannot be computed analytically, and even its numerical computation may fail due to limited numerical precision. In this paper we show a general phase decomposition technique that makes the template metric analytically computable. We will also show how these variables can be employed to distinguish robustly between astrophysical signals and non-stationary noise artifacts that may affect analysis pipelines.' author: - 'S. Mastrogiovanni' - 'P. Astone' - 'S. D’Antonio' - 'S. Frasca' - 'G. Intini' - 'I. La Rosa' - 'P. Leaci' - 'A. Miller' - 'F. Muciaccia' - 'C. Palomba' - 'O.J. Piccinni' - 'A. Singhal' bibliography: - 'cw8.bib' title: 'Phase decomposition of the template metric for continuous gravitational-wave searches' --- Introduction ============ During the first and second Advanced LIGO-Virgo [@Aasi20152; @Acernese2015] observing runs, several gravitational-wave (GW) signals were detected. The signals detected so far are short-lived, also called [*transient*]{}, because their duration is much smaller than the usual observing time of the detectors.
In particular, five detections of binary black hole mergers [@Abbott20162; @2016PhRvL.116x1103A; @2017PhRvL.119n1101A; @2017PhRvL.118v1101A; @TheLIGOScientificCollaboration2017] and one detection of a binary neutron star merger [@2017PhRvL.119p1101A] were made. The latter detection has provided much astrophysical information on the physics and astrophysics of neutron stars, such as constraints on the equation of state [@2017PhRvL.119p1101A]. Spinning neutron stars (NSs) are also expected to emit GWs if an asymmetry is present with respect to the rotation axis. These signals are expected to be continuous and long-lived with respect to the usual observation time of GW detectors, but on the other hand their expected amplitude is very weak. The GW amplitude can carry useful information on the star’s ellipticity and hence on its equation of state. These signals are referred to as Continuous Waves (CWs). Generally speaking, algorithms for the detection of CWs match a set of [*waveform templates*]{} to the detector data in order to highlight the presence of an astrophysical signal. For instance, in [*targeted searches*]{} a wave template covering the entire data set is used, while in [*semi-coherent searches*]{} a large set of templates is applied covering smaller portions of the data, which are later combined incoherently. A common problem is then how to build the waveform template and how to decide the spacing of the template lattice. In principle, one should build a grid in the search parameter space in such a way that the discretization does not prevent the detection of a signal and that the computational cost of the entire search is affordable, while exploring a reasonable physical parameter space.
The problem of template spacing was originally studied in depth for compact binary coalescences [@PhysRevD.49.2658; @PhysRevD.46.5236; @PhysRevD.53.6749] and later investigated for CWs [@PhysRevD.58.063001; @Cutler2005; @Wette2013; @Wette2015; @DavidPhD; @Brady1997]. It has been shown that the information on how to build the template grid is encoded in the so-called template metric [@PhysRevD.53.6749; @Cutler2005]. As we will see later in Sec. \[sec:2\], the metric is represented by a matrix defined so as to compute the signal-to-noise ratio loss given a template mismatched with respect to the signal present in the data. This matrix can be used to compute the fraction of the signal-to-noise ratio lost when a discretization error in the template grid is present. On the other hand, the metric also carries information on the geometry of the likelihood function with respect to the waveform templates. An accurate evaluation of the template metric is then needed in order to probe the presence of a CW signal. Unfortunately, for a CW signal the metric is often an ill-conditioned and non-diagonal matrix. The condition number is defined as the ratio of the highest eigenvalue to the smallest eigenvalue of a given matrix. Usually, if it is larger than the numerical precision of a compiler ($\sim 10^{16}$), the matrix cannot be inverted properly by algorithms, which makes the template metric difficult to handle from a numerical point of view. In this paper we will show a variable decomposition which makes the metric analytically calculable. We will show how to efficiently build a template grid using the new variables and how to use it to improve CW searches by distinguishing a signal from non-Gaussian noise through the geometry of the likelihood function. The paper is organized as follows: in Sec. \[sec:2\] the data analysis background will be provided, in Sec. \[sec:3\] the new variable redefinition will be introduced, and in Sec.
\[sec:4\] tests verifying that the phase decomposition works properly will be shown. Finally, in Sec. \[sec:5\] and Sec. \[sec:6\] possible applications to hypothesis testing will be presented, focusing at the end on the implementation in follow-up algorithms. Data analysis background \[sec:2\] ================================== In this section we introduce the data analysis background. We use the $\mathcal{F}$-statistic defined in [@Jaranowski1998]. The choice of the $\mathcal{F}$-statistic is due to the fact that we wish to have an estimator that is directly related to the likelihood function. However, our approach is quite general and holds for all searches that use the matched filtering technique. The signal model ---------------- The GW signal emitted by an asymmetric spinning neutron star can be written, following the formalism first introduced in [@2010CQGra..27s4016A], in the detector reference frame as the real part of $$h(t)= h_0 f(\eta) \big[ H^+ A_+ (t) + H^\times A_\times (t) \big]e^{2 \pi i f_{\mathrm{gw}} (t) t+i \phi_0}, \label{eq:Hgrande}$$ where $f_\mathrm{gw} (t)$ is the GW frequency in the detector reference frame, $\phi_0$ is the phase at the reference time, and $f(\eta)$ is a function of the parameter $\eta$. For the quadrupole GW emission of a rotating tri-axial rigid body (the NS) we expect $f_\mathrm{gw} (t)$ to be twice the rotational frequency of the spinning neutron star. The polarization amplitudes $H^+, H^\times$ are given by: $$\begin{aligned} H^+ =\frac{\cos(2 \psi) - i \eta \sin (2 \psi)}{\sqrt{1+\eta^2}}, \, \, H^\times =\frac{\sin(2 \psi) + i \eta \cos (2 \psi)}{\sqrt{1+\eta^2}}, \label{eq:HpHc}\end{aligned}$$ with $\eta$ being the ratio of the polarization ellipse semi-minor to semi-major axis and $\psi$ the polarization angle [^1] [@2010CQGra..27s4016A]. The detector *sidereal responses* to the GW polarizations are encoded in the functions $A_+ (t), A_\times (t)$. It can be shown that the waveform defined by Eq.
is equivalent to the GW signal expressed in the more standard formalism of [@Jaranowski1998], given the following relations: $$\eta=-\frac{2\cos \iota}{1+\cos^2 \iota}, \label{eq:etaiota}$$ where $\iota$ is the angle between the line of sight and the star rotation axis, and $$H_0=h_0 \sqrt{ \frac{1+6 \cos ^2 \iota + \cos^4 \iota}{4}},$$ with the usual GW amplitude $$h_0=\frac{1}{d} \frac{4 \pi^2 G }{c^4} I_\mathrm{zz} f_\mathrm{gw}^2 \epsilon, \label{eq:GW_amplitude}$$ where $d,I_\mathrm{zz}$ and $\epsilon$ are respectively the star distance, its moment of inertia with respect to the rotation axis, and the [*ellipticity*]{}, which measures the star’s degree of asymmetry. In the detector reference frame the signal is not monochromatic, i.e., the frequency $f_\mathrm{gw} (t)$ in Eq. is a function of the different modulations that act on the signal. In fact, the signal is modulated by several effects, namely the *Römer delay* due to the detector motion in the Solar System Barycenter and the source intrinsic spin-down due to the rotational energy loss. The phase modulation of a CW signal can be expressed as a composition of the listed effects [^2]: $$\begin{aligned} \frac{\phi_{\rm gw}(t)-\phi_0}{2 \pi}= &f_{ 0} (t-t_0) + \frac{1}{2}\dot{f}_{0}(t-t_0)^2+ \nonumber \\ & +f_{ 0} \, \vec{\mathcal{P}}(t) \cdot \widehat{n} + \dot{f}_{0}(t-t_0)\, \vec{\mathcal{P}}(t)\cdot \widehat{n} \label{eq:phevol}\end{aligned}$$ where $t_0$ is a reference time, $\vec{\mathcal{P}}(t)$ is the position of the Earth in the Solar System Barycenter (normalized to the speed of light), $\phi_0$ is an initial phase, and $\widehat{n}$ is the versor (unit vector) pointing to the source location in the sky. The variables that determine the phase evolution in Eq. are the GW frequency $f_0$ and its derivative $\dot{f_0}$ (at a given reference time), together with the two angular variables giving the position in the sky, $\alpha,\delta$. For the sky position we will use equatorial coordinates.
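To make the structure of Eq. concrete, the following toy sketch evaluates the phase model under the strong simplifying assumptions of a circular Earth orbit in a fixed plane and rounded constants; it is our own illustration, not the ephemeris-based barycentering used in real searches, and all names and values are illustrative.

```python
import math

AU_LIGHT_SEC = 499.0            # ~1 AU expressed in light-seconds (rounded)
YEAR = 365.25 * 86400.0         # seconds in a year

def cw_phase(t, f0, f1, n_hat, t0=0.0, phi0=0.0):
    """phi_gw(t): frequency and spin-down terms plus the Roemer (Doppler) terms,
    with P(t) a circular-orbit sketch of the Earth barycentric position."""
    w = 2 * math.pi / YEAR
    P = (AU_LIGHT_SEC * math.cos(w * t), AU_LIGHT_SEC * math.sin(w * t), 0.0)
    Pn = sum(p * n for p, n in zip(P, n_hat))   # P(t) . n_hat, in seconds
    dt = t - t0
    return phi0 + 2 * math.pi * (f0 * dt + 0.5 * f1 * dt**2 + f0 * Pn + f1 * dt * Pn)
```

For a source direction orthogonal to the orbital plane the Doppler terms vanish and the phase reduces to the pure frequency plus spin-down evolution.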
With the signal description presented in Eq. , the signal naturally factorizes as the product of GW amplitudes $H_p(\vec{\beta_s})$, which are complex scalar numbers and depend on the so-called *extrinsic parameters* $\vec{\beta_s}=(\eta, \psi, \phi_0)$, and phase templates $\ket{\mathcal{A}_p(\vec{\lambda}_s)}$, which are vectors and depend on the so-called [*intrinsic parameters*]{} (or phase parameters) $\vec{\lambda}_s=(\alpha,\delta, f, \dot{f})$. Without loss of generality we can use the bra-ket notation to indicate that the templates (or the data itself) can be expressed in different bases, such as the frequency basis (Fourier domain) or the time basis (time series). A signal composed of $p$ polarizations can generally be written as [@Jaranowski1998; @2010CQGra..27s4016A] $$\ket{h}=H_0 (h_0,\eta) \sum_p H_p(\vec{\beta}_s) \ket{\mathcal{A}_p(\vec{\lambda}_s)}. \label{eq:bra}$$ For instance, the phase templates $\ket{\mathcal{A}_p(\vec{\lambda}_s)}$ can be expressed in the time domain (using the time basis $\widehat{t}$) as harmonic functions. In fact, they will be the product of the detector sidereal responses $A_{+/\times} (t, \vec{\lambda}_s)$ and all the possible phase modulations of the signal.
$$\braket{\widehat{t}|\mathcal{A}_p}=A_{p} (t ,\vec{\lambda}_s) e^{i \phi_{\rm gw}(t, \vec{\lambda}_s)} \label{eq:tb}$$ Definition of the statistic --------------------------- Following the same approach as in [@Jaranowski1998], we can model our data as the superposition of Gaussian noise and a possible signal $$\ket{x}=\ket{n}+\ket{h} \label{eq:super}$$ The likelihood for the data $x$ containing a signal $h$ can be expressed as: $$\mathcal{L}(x|h(\vec{\lambda})) \propto e^{-\frac{1}{2} \braket{x-h|x-h}}.$$ The inner scalar product can be performed either in the time domain or in the frequency domain: $$\braket{a|b}= \frac{2}{S_{f}} \int_0^{f_{max}} a(f) \cdot b^*(f) df =\frac{2}{S_{f}} \int_0^{T_{\rm coh}} a(t) \cdot b^*(t) dt$$ with $S_f$ the unilateral detector noise spectrum, which we assume to be constant since we are looking at a very narrow frequency region (of order $10^{-3}$ Hz), $T_{\rm coh}$ the coherent integration time of the analysis, and “$^*$” denoting complex conjugation. One can assume $S_f$ to be almost constant over a small frequency band in the case of nearly Gaussian noise. We then define the maximum likelihood estimator as the ratio of the likelihoods associated with the presence and the absence of a signal, respectively: $$\mathcal{L}_{\rm ML}=\frac{\mathcal{L}(x|h)}{\mathcal{L}(x|h=0) } \label{eq:ML}$$ The signal detection problem consists in maximizing Eq. over many different signal templates, which are functions of the extrinsic and intrinsic GW parameters $\vec{\lambda}$ and $\vec{\beta}$. It can be shown that $\mathcal{L}_{\rm ML}$ can be analytically maximized with respect to the extrinsic parameters of the wave, thus removing 4 dimensions from our maximization problem.
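As a toy illustration of the inner product and of the analytic maximization over an amplitude, the sketch below discretizes $\braket{a|b}=\frac{2}{S_f}\int a(t)\, b^*(t)\, dt$ and checks that the least-squares amplitude $\braket{A|x}/\braket{A|A}$ recovers a (real) amplitude injected into noise-free data. The template, sampling, and injected amplitude are illustrative choices of ours; no detector response is modeled.

```python
import cmath, math

def inner(a, b, dt, Sf=1.0):
    # discretized <a|b> = (2/S_f) * sum_t a(t) b*(t) dt
    return (2.0 / Sf) * sum(x * y.conjugate() for x, y in zip(a, b)) * dt

dt, N = 0.1, 4096
ts = [i * dt for i in range(N)]
A = [cmath.exp(2j * math.pi * 1.3 * t) for t in ts]   # toy phase template
H_true = 0.7                                          # injected (real) amplitude
x = [H_true * a for a in A]                           # noise-free "data"
H_hat = inner(A, x, dt) / inner(A, A, dt)             # least-squares amplitude
```

In the presence of noise the estimator fluctuates around the injected value; the same projection, applied per polarization, is what removes the amplitude dimensions from the maximization.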
Following the same procedure as in [@Jaranowski1998], implemented for the signal description in [@2010CQGra..27s4016A], it is possible to show that: $$\mathcal{L}_{\rm ML} = e^{\overset{_*}{\mathcal{F}}}, \label{eq:RML}$$ where $\overset{_*}{\mathcal{F}}$ is the $\mathcal{F}$-statistic computed from complex data instead of real data. $$\overset{_*}{\mathcal{F}}=\frac{1}{2} \sum_p \frac{\braket{\mathcal{A}_p|x}}{\braket{\mathcal{A}_p|\mathcal{A}_p}}\braket{x|\mathcal{A}_p}. \label{eq:R}$$ Theoretically, $\overset{_*}{\mathcal{F}}$ has the same statistical meaning as the usual $\mathcal{F}$-statistic, i.e., it is the logarithm of the maximum likelihood estimator in Eq. . Practically, in order to compute $\overset{_*}{\mathcal{F}}$ the application of two templates is required, while for the usual $\mathcal{F}$-statistic the application of 4 templates is required. This is because $\overset{_*}{\mathcal{F}}$ is computed starting from a complex representation of the data. The extrinsic parameters can be estimated from the complex polarization amplitude estimators, which result from the application of two matched filters: $$\begin{aligned} & \widehat{H}_+=\frac{\braket{A_+|x}}{\braket{A_+|A_+}} \, & \widehat{H}_\times=\frac{\braket{A_\times|x}}{\braket{A_\times|A_\times}}. \label{eq:estimators}\end{aligned}$$ Using the relations given by Eq. , one can use the estimators $\widehat{H}_{+/\times}$ of the GW polarization amplitudes to recover the extrinsic parameters of the wave a posteriori as a combination of the two estimators in Eq. . The relations can be found in [@2017CQGra..34m5007M]. Phase metric ------------ Let us assume that we are computing the $\overset{_*}{\mathcal{F}}$-statistic using phase templates calculated with parameters having a small mismatch $\Delta \vec{\lambda}=|\vec{\lambda}-\vec{\lambda_s}|$ with respect to the signal parameter values.
We expect that, depending on $\Delta \vec{\lambda}$, only a fraction of the signal will be recovered, ideally all of it if $|\Delta \vec{\lambda}|=0$. The function that quantifies the loss for a mismatch in the phase parameters is called the [*mismatch function*]{}: $$m_f=\frac{E[\overset{_*}{\mathcal{F}}_{\rm s}]-E[\overset{_*}{\mathcal{F}}_{\rm m}( \vec{\lambda_s}+\Delta \vec{\lambda})]}{E[\overset{_*}{\mathcal{F}}_{\rm s}]} \label{eq:mism1}$$ where $E[\overset{_*}{\mathcal{F}}_{\rm s}]$ and $E[\overset{_*}{\mathcal{F}}_{\rm m}(\Delta \vec{\lambda})]$ are respectively the expected values of the $\overset{_*}{\mathcal{F}}$-statistic for a perfectly matched template and for a template computed with a mismatch $\Delta \vec{\lambda}$. One can Taylor expand the term $E[\overset{_*}{\mathcal{F}}_{\rm m}]$ around the true signal parameters, obtaining the form [@PhysRevD.53.6749]: [^3] $$\begin{aligned} m_f=\sum_{i,j} g_{ij} (\vec{\lambda}_s) \Delta \lambda^i \Delta \lambda^j + \mathcal{O}(|\Delta \vec{\lambda}|^3) \label{eq:mism}\end{aligned}$$ The $4\times4$ tensor $g_{ij}(\vec{\lambda}_s)$ represents the metric in the 4-dimensional parameter space, and $\vec{\lambda}_s$ are the signal phase parameters. It is possible to show that, if one assumes that the phase displacement due to the sidereal Earth motion is smaller than the phase displacement due to other effects (such as the Doppler modulation), the metric will assume the form [@PhysRevD.53.6749; @Brady1997; @DavidPhD]; more details on how to recover Eq. using the formalism introduced in Sec.
\[sec:2\] are also given in Appendix \[ap:A\]: $$\begin{aligned} g_{ij} = & \frac{1}{T_{\rm coh}} \int_0^{T_{\rm coh}} \frac{\partial \phi_{\rm gw}}{\partial \lambda_i} \frac{\partial \phi_{\rm gw}}{\partial \lambda_j} \bigg|_{\lambda=\lambda_s} dt \nonumber \\ &- \frac{1}{T^2_{\rm coh}} \int_0^{T_{\rm coh}} \frac{\partial \phi_{\rm gw}}{\partial \lambda_i} \bigg|_{\lambda=\lambda_s} dt \int_0^{T_{\rm coh}} \frac{\partial \phi_{\rm gw}}{\partial \lambda_j} \bigg|_{\lambda=\lambda_s} dt. \label{eq:phase_metric}\end{aligned}$$ The concept of the metric can also be extended to semi-coherent searches (see Appendix \[ap:B\] for more details) or to the case of pulsars in binary systems, as done in [@PhysRevD.91.102003]. The metric indicates the fraction of the signal-to-noise ratio that we are able to recover while searching with a mismatched template. Solving Eq. for a constant mismatch $m_f$ with respect to the variables $\Delta \lambda_i$ is equivalent to determining the set of templates which will result in the same value of the $\mathcal{F}$-statistic. Hence, it is equivalent to studying the hyper-surfaces of constant likelihood with respect to the templates. A study of the metric is therefore very important to understand how to build a proper template grid and what the shape of the likelihood function is for a GW signal present, ideally, in Gaussian noise only. However, as can be understood by looking at the phase evolution in Eq. and at the metric in Eq. , the computation of the matrix is not an easy task. The first problem is that the metric is not flat, i.e., every component depends on the true signal parameters, which we do not know. Mathematically, this effect arises from the fact that different phase modulations in Eq. couple to each other (e.g., the frequency phase evolution and the Doppler modulation). Another problem is that usually the metric is ill-conditioned, i.e.,
has a condition number higher than the double-precision floating-point accuracy, and the computation of the eigendirections (which give information on the geometry of the likelihood function) can present numerical problems [@Wette2013; @Prix2007]. ![image](mismatch_old_var.eps){width="100.00000%"} As an example of how to compute the metric spacing, let us consider a simple case. In narrow-band or directed searches one assumes the sky position to be perfectly known, while the frequency $f_0$ and spin-down $\dot{f}_0$ are known with some uncertainties. This is the same case as the one considered in a directed search for CWs from the central compact object in Cassiopeia A [@0264-9381-25-23-235011], where the template spacing was decided using the template metric [@PhysRevD.53.6749] for the CW case [@DavidPhD]. If one assumes that the phase modulations related to the sky position are corrected in a way that does not depend on $f_0,\dot{f}_0$, as with the non-uniform resampling technique [@2017CQGra..34m5007M], the remaining phase evolution of the signal in the corrected data will be: $$\frac{\phi(t)-\phi_0}{2 \pi}=f_0(t-t_0)+\frac{1}{2}\dot{f}_0(t-t_0)^2$$ At this level the metric will be a $2 \times 2 $ tensor; if we compute the metric using Eq. with $f_0$ and $\dot{f}_0$ as variables, one can check that, by placing the reference time in the middle of the run, $t_0=T_{\rm coh}/2$: $$g_{ij}= \begin{bmatrix} g_{ff} & g_{f\dot{f}} \\ g_{\dot{f}f} & g_{\dot{f} \dot{f}} \end{bmatrix} \approx \begin{bmatrix} T_{\rm coh}^2 & 0 \\ 0 & T_{\rm coh}^4 \end{bmatrix}$$ The metric obtained in this way is already diagonal, and the mismatch function can be written as: $$m_f \approx T^2_{\rm coh} \Delta f^2+ T^4_{\rm coh} \Delta \dot{f}^2$$ It is then natural to define the frequency and spin-down resolutions as $\Delta f=1/T_{\rm coh}$ and $\Delta \dot{f}=1/T_{\rm coh}^2$, respectively. These are the usual frequency and spin-down [*“bins”*]{} used in targeted and narrow-band searches.
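The $2\times2$ narrow-band metric can be verified numerically from Eq. . The sketch below is our own check, with an arbitrary illustrative $T_{\rm coh}$: a midpoint-rule evaluation of the time averages recovers $g_{ff}=\pi^2T_{\rm coh}^2/3$, $g_{\dot f\dot f}=\pi^2T_{\rm coh}^4/180$ and a vanishing cross term, i.e., the $\mathrm{diag}(T_{\rm coh}^2,\,T_{\rm coh}^4)$ scaling quoted above with its numerical prefactors.

```python
import math

# phi(t) = 2*pi*[f0*(t-t0) + (1/2)*fdot0*(t-t0)^2], with t0 = T/2 (illustrative T)
T, N = 10.0, 20000
dt = T / N
t0 = T / 2
ts = [(i + 0.5) * dt for i in range(N)]              # midpoint samples
dphi_f = [2 * math.pi * (t - t0) for t in ts]        # d(phi)/d(f0)
dphi_fd = [math.pi * (t - t0) ** 2 for t in ts]      # d(phi)/d(fdot0)

def g(u, v):
    # g_ij = <u v> - <u><v>, with <.> the time average over [0, T_coh]
    mu, mv = sum(u) * dt / T, sum(v) * dt / T
    return sum(a * b for a, b in zip(u, v)) * dt / T - mu * mv

g_ff = g(dphi_f, dphi_f)       # ~ pi^2 T^2 / 3
g_fdfd = g(dphi_fd, dphi_fd)   # ~ pi^2 T^4 / 180
g_ffd = g(dphi_f, dphi_fd)     # ~ 0 (odd integrand with t0 = T/2)
```

The ratio $g_{\dot f\dot f}/g_{ff}\propto T_{\rm coh}^2$ is exactly the condition-number scaling discussed in the text.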
From the components of the narrow-band metric it is also possible to see that the condition number, given by the ratio of the highest and lowest eigenvalues, scales as $\propto T_{\rm coh}^2$. In the narrow-band case the computation of the metric can be done analytically, and the matrix is already diagonal, so the numerical inversion of the metric is avoided. The general case, however, is not so trivial: the condition number will scale at least as $T_{\rm coh}^4$, and the surfaces of constant mismatch will not have a trivial shape. Fig. \[fig:mmovar\] shows the contour plots of the mismatch function $m_f$ around a hardware injection (a fake signal injected in the experiment for testing purposes) in the first Advanced LIGO observing run (O1). The injection has a signal-to-noise ratio (SNR) [^4] of about 70 for one month of data integrated coherently; it is located at $108.85$ Hz and has an almost zero spin-down. The plots in Fig. \[fig:mmovar\] have been generated by computing the mismatch while fixing two of the four CW intrinsic variables to their injected values. We see that even with perfect knowledge of two of the phase variables, the problem of template placement is not trivial, due to the shape of the likelihood surface. The templates that lie on the patterns shown in Fig. \[fig:mmovar\] are called “non-orthogonal”, since they recover a fraction of the same signal. The metric is used to compute the distance between two templates in this parameter space: templates that lie on the same pattern are close to each other compared with templates outside a given pattern. The general case, in which all four phase variables are unknown, is more complicated and the likelihood hyper-surfaces are difficult to study. For these reasons the authors in [@Wette2013] introduced the so-called [*Super-sky metric*]{}.
The idea of such a metric is to linearize the phase evolution of the CW signal by relaxing the constraint on the sky unit vector, i.e. its components are left free to span the volume of a [*3-d*]{} sphere. After that, a new set of sky coordinates $n_a,n_b,n_c$ and frequency parameters $\nu, \dot{\nu}$ is defined which nearly diagonalizes the metric (the mixed components of the rotational parameters are non-null). However, as we will see later, this choice adds one extra dimension to the templates, which means that non-physical templates can be explored. This problem was solved in [@Wette2013] by realizing that, once the sky and rotational parts of the metric have been decorrelated from each other, the metric dependence on one of the new sky coordinates is much smaller than on the other two, so the problem collapses again onto a 2-dimensional surface instead of a volume. Our approach, which will be presented in the next section, shares with the super-sky metric the idea of adding extra dimensions. However, while the super-sky metric was aimed at building a physical template grid for GW searches, our approach is aimed at probing the presence of the signal inside the data with respect to general phase modulations applied to the data during the analysis. Maximum Phase decomposition \[sec:3\] ===================================== As pointed out in the previous section, our task is to find a method to make the metric flat, with a reasonable condition number and possibly analytically computable (no dependence on the signal parameters). Our approach extends the concept of adding extra dimensions to linearize the phase of a CW signal. In this section we show the logical steps of our approach.
Definition and metric computation \[sec:sa\] -------------------------------------------- Theoretically one can obtain a flat and analytically computable metric by linearizing the CW phase with respect to each variable. Our first step is therefore to decompose the CW phase evolution in Eq. in the following way: $$\begin{aligned} \frac{\phi_{\rm gw}(t)- \phi_0}{2 \pi}= &f_{ 0} (t-t_0) + \frac{1}{2}\dot{f}_{0}(t-t_0)^2+ \nonumber \\ & +f_{ 0} \, \mathcal{P}_x(t) \cdot n_x +f_{ 0} \, \mathcal{P}_y(t) \cdot n_y+ \nonumber\\ &+ f_{ 0} \, \mathcal{P}_z(t) \cdot n_z + \dot{f}_{0}(t-t_0)\, \mathcal{P}_x (t)\cdot n_x + \nonumber \\ &+ \dot{f}_{0}(t-t_0)\, \mathcal{P}_y(t) \cdot n_y + \dot{f}_{0}(t-t_0)\, \mathcal{P}_z(t) \cdot n_z \label{eq:phevol1}\end{aligned}$$ where we have made explicit the components of each scalar product related to the Doppler modulation. The next step is to write Eq. in the form: $$\phi (t)= \sum_{i=1}^{8} \varphi_i v_i(\tau) \label{eq:mpd}$$ where $v_i(\tau)$ are functions of a dimensionless time $\tau=(t-t_0)/T_{\rm obs}$ ($T_{\rm obs}$ is the observation time of the detector) and the variables $\varphi_i$ are a new set of coordinates defined from the usual CW phase parameters $f,\dot{f},\alpha,\delta$. By looking at Eq. and expanding the products in Eq.
, one can write the new scalar variables $\varphi_i$ as: \[eq:2\] $$\begin{aligned} & \varphi_1=2 \pi f_{0} T_{\rm obs} \\ & \varphi_2=\pi \dot{f}_{0} T^2_{\rm obs} \\ & \varphi_3= 2 \pi f_{0} {\rm max}_t[|\mathcal{P}_x(t)|] \cos \alpha \cos \delta \\ &\varphi_4= 2 \pi f_{0} {\rm max}_t[|\mathcal{P}_y(t)|] \sin \alpha \cos \delta \\ & \varphi_5= 2 \pi f_{0} {\rm max}_t[|\mathcal{P}_z(t)|] \sin \delta\\ & \varphi_6= 2 \pi \dot{f}_{0} T_{\rm obs} {\rm max}_t[|\mathcal{P}_x(t)|] \cos \alpha \cos \delta \\ & \varphi_7= 2 \pi \dot{f}_{0} T_{\rm obs} {\rm max}_t[|\mathcal{P}_y(t)|] \sin \alpha \cos \delta \\ & \varphi_8= 2 \pi \dot{f}_{0} T_{\rm obs} {\rm max}_t[|\mathcal{P}_z(t)|] \sin \delta \\\end{aligned}$$ and the dimensionless functions $v_i(\tau)$ as: \[eq:3\] $$\begin{aligned} & v_1=\tau \nonumber \\ & v_2= \tau^2 \nonumber \\ &v_3=\mathcal{P}_x(\tau)/{\rm max}_\tau[|\mathcal{P}_x(\tau)|] \\ &v_4=\mathcal{P}_y(\tau)/{\rm max}_\tau[|\mathcal{P}_y(\tau)|] \\ & v_5=\mathcal{P}_z(\tau)/{\rm max}_\tau[|\mathcal{P}_z(\tau)|] \\ &v_6= \tau \mathcal{P}_x(\tau)/{\rm max}_\tau[|\mathcal{P}_x(\tau)|] \\ & v_7= \tau \mathcal{P}_y(\tau)/{\rm max}_\tau[|\mathcal{P}_y(\tau)|] \\ & v_8= \tau \mathcal{P}_z(\tau)/{\rm max}_\tau[|\mathcal{P}_z(\tau)|] \\ \end{aligned}$$ The new variables defined in Eqs. represent the maximum phase displacement that a signal may experience during the observing time $T_{\rm obs}$ due to the modulation from different physical effects. We call this new decomposition the [*maximum phase decomposition*]{}. The dimensionless time functions $v_i (\tau)$ in Eqs. represent the time evolution of the different phase modulations. For instance, the intrinsic frequency and spin-down phase evolutions are represented by $\varphi_1$ and $\varphi_2$. The Doppler coupling with the frequency is represented by $\varphi_{3-5}$, while the Doppler coupling with the spin-down is represented by $\varphi_{6-8}$.
The quantities related to the Doppler modulation have 3 components because the Doppler term can be decomposed along the usual Cartesian coordinates $x,y,z$. Using this new set of variables the metric in Eq. can be computed analytically and assumes a very simple form: $$g^{\varphi}_{ij}=\int_0^1 v_i(\tau) v_j(\tau) d \tau - \int_0^1 v_i(\tau) d \tau \int_0^1 v_j(\tau) d \tau \label{eq:phm}$$ where $(i,j=1,\ldots,8)$. Since the time is now dimensionless, the integration in Eq. goes from “$0$”, which corresponds to the start of the run, to “$1$”, which corresponds to the end of the run. Another advantage of using the maximum phase decomposition is that the condition number of the metric is naturally constrained and depends only weakly on the amount of data that we are using. This is because we normalize each phase component by the maximum phase displacement that can occur during the analysis, so the integrals in Eq. are bounded. A drawback of using 8 variables instead of 4 is the increased cost of the analysis, since we are now handling an eight-dimensional parameter space. Moreover, since we are extending the dimensionality of the parameter space, not all the templates in the eight-dimensional parameter space will correspond to a template in the four-dimensional CW space (refer to Appendix \[ap:C\] for more details). However, this is a trade-off that we can afford if the variables are used for hypothesis testing, as we will see later, or with Markov chain Monte Carlo techniques, which scale well with the dimensionality of the problem. Diagonalization of the metric \[sec:sb\] ---------------------------------------- Even though the condition number is naturally constrained by the maximum phase decomposition, it can still be as high as $10^{10}$, as can be seen in Fig. \[fig:cn2\].
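The construction of $g^\varphi_{ij}$ from the $v_i(\tau)$, and its large but finite condition number, can be sketched numerically. In this toy sketch the Doppler components $\mathcal{P}_x, \mathcal{P}_y, \mathcal{P}_z$ are modeled as simple sinusoids at the orbital and sidereal frequencies, an illustrative stand-in for the ephemeris-based functions used in a real search; the specific values and variable names are ours:

```python
import numpy as np

# Metric of Eq. (g^phi_ij) for the 8 functions v_i(tau), with the Doppler
# components P_x, P_y, P_z modeled as sinusoids -- an assumption for
# illustration, not the real detector ephemerides.
T_obs = 30 * 86400.0                              # one month of data [s]
tau = np.linspace(0.0, 1.0, 100001)               # dimensionless time
t = tau * T_obs
f_orb, f_sid = 1.0 / (365.25 * 86400.0), 1.0 / 86164.1
P = [np.sin(2 * np.pi * f_orb * t),
     np.cos(2 * np.pi * f_orb * t),
     np.sin(2 * np.pi * f_sid * t)]
P = [p / np.max(np.abs(p)) for p in P]            # normalized, as in v_3 .. v_5

v = [tau, tau**2] + P + [tau * p for p in P]      # the 8 functions v_i(tau)

g = np.empty((8, 8))
means = [vi.mean() for vi in v]
for i in range(8):
    for j in range(8):
        # <v_i v_j> - <v_i><v_j>, the discretized integrals of Eq. (phm)
        g[i, j] = np.mean(v[i] * v[j]) - means[i] * means[j]

print(np.linalg.cond(g))                          # large, but finite
```

Over one month the orbital terms are nearly linear in $\tau$, so some $v_i$ are almost collinear with $v_1$; this is what drives the condition number up even though every entry of the metric is bounded.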
Even though such a condition number can be handled in double-precision floating-point arithmetic, we would like to perform some [*ad-hoc*]{} transformations on the metric that further decrease its value. The first step is to express the phase metric (which is now an $8 \times 8$ symmetric matrix) in Eq. as the product of two square-root matrices: $$g^\varphi_{ij}=\sqrt{g^\varphi_{ij}} \sqrt{g^\varphi_{ij}}. \label{eq:ormetric}$$ This factorization is performed using the Block-Schur algorithm [@10.1007/978-3-642-36803-5_12], which does not involve inversions of any kind. The square-root matrix has a condition number that is roughly the square root of the original one, say $10^5$. After that we factorize the square-root matrix using the QR decomposition [@Matrix1]: [^5] $$\sqrt{g^\varphi_{ij}}=Q R,$$ where $Q$ is an orthogonal matrix and $R$ is an upper triangular matrix. Rewriting the original metric in Eq. and doing some linear algebra, one can write: $$g^\varphi_{ij}=R^T Q^T Q R= R^T \mathds{1} R, \label{eq:R}$$ where we have used the fact that $Q$ is orthonormal. From Eq. one can see that $R$ is the coordinate transformation that brings the physical variables $\varphi$ to phase variables $\Phi$ in which the metric $g_{ij}^{\Phi}$ is the identity matrix, i.e. all the eigenvalues of $g^{\Phi}_{ij}$ are equal to one. Using this new set of variables, we can easily compute the mismatch in Eq. as a sum of quadratic phase displacements: $$m_f=\sum_{i=1}^8 \Delta \Phi_i^2$$ The new variables are measured in radians. From the above equation we see that a displacement of $|\vec{\Delta \Phi}|=1$ rad corresponds to a mismatch of $m_f=1$, so it is natural to use the $\Phi$ variables to directly measure the distance between two templates. It is also worth noting that, using the $\Phi$ variables, the mismatch, and hence the likelihood function, has spherical symmetry with respect to the true parameters of the signal.
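The whitening step can be sketched in a few lines. The paper computes the matrix square root with the Block-Schur algorithm; here, for a symmetric positive-definite matrix, an eigendecomposition square root is used instead for simplicity, and the metric itself is a random stand-in rather than a real CW metric:

```python
import numpy as np

# Whitening sketch: factor g = sqrt(g) sqrt(g), then QR-decompose the square
# root so that g = R^T R.  The matrix R maps displacements in the varphi
# variables to the Phi variables, in which the metric is the identity.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
g = A @ A.T + 8.0 * np.eye(8)        # symmetric positive-definite stand-in

w, V = np.linalg.eigh(g)
S = V @ np.diag(np.sqrt(w)) @ V.T    # matrix square root: g = S @ S
Q, R = np.linalg.qr(S)               # S = Q R, Q orthogonal, R upper triangular

# g = R^T Q^T Q R = R^T R, so R^T R reproduces the metric
assert np.allclose(R.T @ R, g)

dphi = 1e-2 * rng.standard_normal(8) # small template offset in varphi space
dPhi = R @ dphi                      # the same offset in the Phi variables
# the mismatch is now a plain sum of squares, m_f = sum_i (Delta Phi_i)^2
assert np.isclose(dPhi @ dPhi, dphi @ g @ dphi)
print(dPhi @ dPhi)
```

The final assertion is the point of the construction: the metric distance $\Delta\varphi^T g \Delta\varphi$ becomes the Euclidean length $|\vec{\Delta\Phi}|^2$ in the new variables.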
Summarizing, in order to obtain the $\Phi$ basis: 1. We use the CW variables $\vec{\lambda}=(f_0,\dot{f_0},\alpha,\delta)$ to define the maximum phase displacements during the analysis, given by Eqs. -. 2. We compute the metric $g_{ij}^\varphi$ using Eq. . 3. We diagonalize $g_{ij}^\varphi$ using the numerical procedure described in Sec. \[sec:sb\]. Testing \[sec:4\] ================= In this section we present the results of several tests aimed at showing that the maximum phase decomposition presented in Sec. \[sec:sa\] and the diagonalization process presented in Sec. \[sec:sb\] work properly while addressing the problems related to diagonalization. In the next paragraphs we show tests that check the condition number (which quantifies whether we can use numerical algorithms to switch from the $\varphi$ variables to the $\Phi$ variables) and the mismatch length in the template parameter space estimated by the metric $g_{ij}^\Phi$. Finally, we also probe whether the mismatch predicted by the new metric $g_{ij}^{\Phi}$ in the presence of a signal is consistent with Eq. . Condition Number ---------------- The very first check is on the condition number of the metric $g_{ij}^\varphi$. As pointed out in [@Wette2013], the condition number increases with the length of data that we analyze coherently. This happens because the eigenvalues, and hence the determinant of the template metric, become smaller and smaller, i.e. a finer template grid is needed. The maximum phase decomposition is supposed to constrain the condition number to the value corresponding to a fully coherent analysis of the data set. The plot in Fig. \[fig:cn\] reports the value of the condition number as a function of the fraction of data that we integrate coherently. The points with $T_{\rm coh}/T_{\rm obs}=1$ represent a fully coherent search, while the points with $T_{\rm coh}/T_{\rm obs}<1$ represent semi-coherent searches.
The condition number is constrained below $10^{11}$, which is lower than the double-precision floating-point limit of $10^{16}$. Figure \[fig:cn\] also shows that the condition number increases with the observation time. However, if we plot our results with respect to the observation time, Fig. \[fig:cn2\], we can see that the condition number scales only weakly with the observation time of the analysis. In the case of a fully coherent search (maximum condition number) the value is still constrained below $10^{11}$. From this test we can see that the maximum phase decomposition properly regularizes the condition number by constraining its value to $10^{11}$ (which can be handled in double-precision floating-point arithmetic), making the algorithm able to handle $g_{ij}^\varphi$. Another test is to check whether the matrix decomposition in Eq. approximates well the metric in Eq. . We have hence computed the maximum relative error on the estimated phase metric as $$M={\rm max}_{ij} \big[ \frac{g^\varphi_{ij}-R^T Q^T Q R}{g^\varphi_{ij}}\big]. \label{eq:mm}$$ Figure \[fig:cn3\] shows the maximum relative error computed as a function of the coherence time used in the analysis. Here too the endpoint of each curve represents a fully coherent search while the others are semi-coherent searches. It can be noted that the endpoint of every simulation has a much higher relative error. This is because in the original matrix the component $g^\varphi_{23}$[^6] is almost zero, so the relative error in Eq. is amplified. In conclusion, the metric is well decomposed by the QR decomposition. ![Condition number of the metric $g_{ij}^\varphi$ computed with the maximum phase decomposition on the vertical axis. The fraction of data that we are integrating coherently is on the horizontal axis.
The lines indicate the condition number computed for different observation times.[]{data-label="fig:cn"}](cn_Tobs.eps){width="45.00000%"} ![Condition number of the metric $g_{ij}^\varphi$ computed with the maximum phase decomposition on the vertical axis, with respect to the observation time of the analysis on the horizontal axis. The lines represent the fraction of data we are coherently analyzing.[]{data-label="fig:cn2"}](cn_TT.eps){width="45.00000%"} ![Vertical axis: maximum relative error computed using Eq. . Horizontal axis: fraction of data used coherently with respect to the observation time. The different types of lines indicate different observation times.[]{data-label="fig:cn3"}](maximum_error_plots.eps){width="45.00000%"} Mismatch length --------------- After the phase metric $g_{ij}^\varphi$ has been transformed into the new phase variables $\Phi$, we need to check that the new metric $g_{ij}^\Phi$ correctly estimates the distance beyond which two templates no longer respond to the same signal; we will call this distance the “mismatch length”. Using the $\Phi$ variables we have seen that $m_f \ge 1$ when $|\vec{\Delta \Phi}| \ge 1$ rad, so in the $\Phi$ space the mismatch length separates two templates spaced by more than $|\vec{\Delta \Phi}|=1$ rad. In practice we are asking that the templates be nearly orthogonal to each other (see Appendix \[ap:D\] for more details). A possible way to test this is to run the analysis for points in the parameter space farther from the injected signal than the mismatch length. If the mismatch length is estimated correctly and a signal is present in Gaussian noise, we expect that the outcome of an analysis performed with a template grid with spacing much larger than the mismatch length will be equivalent to computing the detection statistic for different noise realizations.
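The expectation above — that widely separated templates behave as independent noise realizations, yielding a $\chi^2$ detection statistic with 4 dof per coherent chunk — can be illustrated with a toy simulation. The statistic here is built directly from Gaussian quadratures rather than from a full matched filter:

```python
import numpy as np

# Toy illustration: a coherent detection statistic built from 4 Gaussian
# quadratures is chi^2-distributed with 4 dof in noise; a semi-coherent sum
# over 30 chunks gives a chi^2 with 30 * 4 = 120 dof.
rng = np.random.default_rng(1)
n_templates = 50000

coh = np.sum(rng.standard_normal((n_templates, 4))**2, axis=1)
semi = np.sum(rng.standard_normal((n_templates, 30, 4))**2, axis=(1, 2))

# a chi^2 with k dof has mean k and variance 2k
print(coh.mean(), coh.var())     # close to 4 and 8
print(semi.mean(), semi.var())   # close to 120 and 240
```

These are exactly the 4-dof and 120-dof distributions fitted in Figs. \[fig:S4\] and \[fig:S120\].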
We have created simulated Gaussian noise with a software-injected signal of SNR $\sim 10$ (referred to 1 month of coherent integration) and we have computed and histogrammed the detection statistic for a template grid spaced by more than the mismatch length. The spacing of the grid was about $\Delta \Phi_i=10$ rad for each phase variable. Figure \[fig:S4\] shows the histogram of the detection statistic obtained. As expected for a fully coherent search, the detection statistic follows a 4-dof $\chi^2$ distribution when only Gaussian noise enters the analysis through the matched filter. We have also performed the same check using a semi-coherent search with 30 chunks of data. In this case we expect a $\chi^2$ with 120 dof, as Fig. \[fig:S120\] shows. ![Histogram of the detection statistic obtained for a fully coherent search using an 8-dimensional template grid equally spaced by $10$ \[rad\] around the injected signal parameters. The figure also shows the fit of a 4-dof $\chi^2$ distribution. []{data-label="fig:S4"}](S_4dof.eps){width="45.00000%"} ![Histogram of the detection statistic obtained for a semi-coherent search using an 8-dimensional template grid equally spaced by $10$ \[rad\] around the injected signal parameters. The figure also shows the fit of a 120-dof $\chi^2$ distribution that matches the experimental histogram.[]{data-label="fig:S120"}](S_120dof.eps){width="45.00000%"} Fraction of signal-to-noise ratio loss: --------------------------------------- The last test is to check in which regime the new variables $\Phi$ and the new phase metric $g^{\Phi}_{ij}$ efficiently approximate the mismatch of Eq. . Hence we should check whether the metric correctly tells us which fraction of the signal we recover in our analysis given a template displacement $\Delta \Phi_i$. Usually Eq.
is well approximated by the metric for mismatches $<0.5 \%$ [@Wette2016], because far away from the signal’s true parameters the second-order expansion is no longer sufficient. Figure \[fig:mism8\] shows the mismatch function in Eq. computed for software injections with different signal-to-noise ratios, as a function of the mismatched variables $\Delta \Phi_i$. The red dotted curve represents the fraction of SNR loss predicted by the metric; as we can see from the simulation, the signal is completely lost for mismatches $|\Delta \Phi_i| > 1$, as we expect. The secondary modes in each plot are due to noise contributions or to secondary peaks from the sidereal response, which is not taken into account in our maximum phase decomposition. Figure \[fig:mism8\] also points to another drawback of using more variables than needed. In principle, it is possible to have a template in the eight-dimensional parameter space which fits better than the injected one in the four-dimensional parameter space. However, by working directly in the eight-dimensional parameter space (without mapping back), this is not a problem, since all the templates that are within a distance $|\Delta \Phi|<1$ of each other count as the same template from the point of view of a matched-filter grid. ![image](mismatch_SNR.eps){width="110.00000%"} Applications \[sec:5\] ====================== In the previous sections we have shown how to define a metric in which the mismatch function, and hence the likelihood surfaces, can be studied. Indeed, the topology of the statistic is a feature introduced in the data by the presence of a CW signal, which during the analysis will be matched using certain template grids. It follows that a signal that does not have the phase evolution in Eq. will not show the expected topology of a CW signal.
We have also seen that, using the maximum phase decomposition $\Phi$, the metric $g_{ij}^\Phi$ (and hence the geometry of the statistic) can be approximated as an identity matrix. The characteristic geometry introduced in the statistic by $g_{ij}^\Phi$ can be used to distinguish between the presence of a CW signal and the presence of non-stationary noise artifacts. Different types of applications can be found, but in this paper we present 3 test cases in which the $\Phi$ variables can help in the identification and detection of a CW signal. Frequentist p-values -------------------- Let us assume that we have obtained some interesting outliers [^7] from a given search (semi-coherent or fully coherent). In order to better estimate the significance of an outlier one usually wants to use the noise-only distribution of the detection statistic. This distribution is analytically known only in the case where Gaussian noise alone is present together with the signal. More importantly, one would also like to capture non-Gaussianities in the data and take them into account when computing the p-values. Non-Gaussian noise cannot be modeled directly, since we do not know the noise of the experiment perfectly. Instead, we can try to build the noise-only distribution empirically, by performing the analysis for templates in which we assume that no signal is present. For example, in the case where our only parameters are $f_0$ and $\dot{f}_0$, one can run several analyses spaced by more than the frequency and spin-down bins, thus obtaining different noise realizations (if we assume ergodicity), and later build the noise-only distribution from the obtained samples.
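The empirical-background procedure just described can be sketched as follows. The off-source statistics are simulated here as $\chi^2_4$ draws, standing in for re-running the search on displaced templates; the function name is ours:

```python
import numpy as np

# Toy sketch: rank an outlier's detection statistic against statistics from
# templates displaced by more than the mismatch length, which by construction
# see only noise.  Here the off-source draws are simulated as chi^2 with 4 dof.
rng = np.random.default_rng(2)
background = np.sum(rng.standard_normal((10000, 4))**2, axis=1)

def empirical_p_value(stat, background):
    # fraction of off-source templates at least as loud as the outlier,
    # with a +1 correction so the p-value is never exactly zero
    return (1 + np.sum(background >= stat)) / (1 + background.size)

print(empirical_p_value(12.0, background))
```

For a $\chi^2_4$ background the analytic tail probability above $12.0$ is $e^{-6}(1+6) \approx 0.017$, so the empirical estimate should land nearby; with real data the same ranking automatically absorbs any non-Gaussian tail of the background.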
Two requirements must be satisfied when following this procedure: [*(i)*]{} templates should be far enough from the signal to blind our analysis to its presence; [*(ii)*]{} in order to preserve the noise properties, the templates should not be too far from a given interesting CW candidate. The $\Phi$ variables give us a clear framework in which the previous constraints are satisfied. In fact, if a template is distant from the signal’s $\Phi$ parameters by $|\Delta \Phi|>1$, then we expect no longer to see the contribution of the signal. Using this kind of technique to generate the noise background can be seen as answering the question: “[*What is the probability that, by modulating the noise with a phase modulation very similar to, but independent of, that of a possible signal, the noise will mimic the GW antenna pattern I am looking for?*]{}” In Fig. \[fig:pf\] we show an example of significance assignment for an outlier due to a known noise line in the detector data, found in the last narrow-band search for CW from the pulsar J1952+3252 using O1 data [@PhysRevD.96.122006]. The outlier displayed a very high significance (p-value$\,=10^{-6}$) in the narrow-band search, while by generating the noise background with the $\Phi$ variables and a template spacing of $\Delta \Phi=10$ we have drawn samples from the noise-only distribution, obtaining a new sub-threshold p-value for the outlier of $0.04$. The fact that the p-value increased from $10^{-6}$ to $0.04$ is an indication that non-Gaussian noise is entering the analysis and that the outlier is likely due to this noise contribution. ![Histogram of the noise-only distribution obtained by drawing samples with a template spacing $\Delta \Phi=10$ rad. The red vertical dashed line shows the value of the detection statistic obtained from an outlier due to a known noise line in O1 data.
Its original p-value was about $10^{-6}$; it is now $0.04$.[]{data-label="fig:pf"}](prova_freq.eps){width="50.00000%"} ![image](signal_8hist_SNR8.eps){width="100.00000%"} Bayesian Confidence intervals ----------------------------- The geometry of the $\overset{_*}{\mathcal{F}}$-statistic with respect to the intrinsic parameters $\Delta \vec{\lambda}$ has an impact on the credible intervals of an analysis based on Bayesian inference. In fact, by using Eq. with Eq. and Eq. we have: $$\mathcal{L}_{\rm ML}(x|h(\Delta \vec{\lambda}))=e^{\overset{_*}{\mathcal{F}}_s[1-g_{ij} \Delta \lambda_i \Delta \lambda_j]}$$ with $\overset{_*}{\mathcal{F}}_s$ the statistic associated to the matched parameters of the signal. If we use the phase variables it is easy to see that: $$\mathcal{L}_{\rm ML}(x|h(\Delta \vec{\lambda}))=e^{\overset{_*}{\mathcal{F}}_s}e^{-\overset{_*}{\mathcal{F}}_s \sum_{k=1}^8 \Delta \Phi_k^2} \label{eq:30}$$ For very strong signals, we expect the maximum likelihood estimator to be a $\delta$-like function around the signal parameters, while for low signal-to-noise ratios we expect the posteriors to be closer to Gaussians, still centered around the signal parameters. We can also study the confidence intervals with respect to the $\Phi$ variables: $$\int_{\Omega(\Phi_s)} \mathcal{L}_{\rm ML}(x|h(\Delta \vec{\lambda})) d \vec{\Phi}=0.95,$$ where $\Omega(\Phi_s)$ is a given volume in the parameter space centered around a value $\Phi_s$, which can be the mean of the maximum likelihood estimator. From Eq. it is clear that the maximum likelihood estimator has spherical symmetry with respect to the templates computed in the $\Phi$ space. For example, Fig. \[fig:8hs\] shows the contour plots obtained by running a Markov chain Monte Carlo algorithm looking for a software-injected signal with signal-to-noise ratio 8 in one month of O1 data. It is clear from the figure that the posteriors have the spherical symmetry we expect. On the other hand, Fig.
\[fig:8hn\] shows the contour plots obtained by running the same algorithm on a very loud (signal-to-noise ratio about 300) monochromatic noise line injected, at the frequency searched in the analysis, into software-simulated Gaussian data. It is clear that in this case the posterior distribution does not have spherical symmetry. We can qualitatively use this to distinguish between CW signals and non-Gaussian noise lines. For example, one can compute the marginalized probability $p(r,r_c)$ of being in a spherical volume $\mathcal{S}(r,r_c)$ around a central point $r_c$: $$p(r,r_c)=\int_{\mathcal{S}(r,r_c)} \mathcal{L}_{\rm ML}\big(x|h(\vec{\Phi})\big) d\vec{\Phi}$$ In the case of a CW signal, we expect $p(r,r_c)=1$ if the radius of the sphere is within one template spacing ($\Delta \Phi<1$). For noise lines, instead, since the spherical symmetry is not preserved and the posterior is spread all over the template grid, we expect $p(r,r_c)$ not to increase so rapidly from the central point $r_c$ and to reach the value of $1$ only for $\Delta \Phi>1$. Table \[tab:1\] reports this test performed for the examples in Fig. \[fig:8hs\] and Fig. \[fig:8hn\]. [c|c|c]{} **Case** & radius \[deg\] & $p(r,r_c)$\ \ Signal & 0.05 & 0.2272\ Signal & 0.1 & 0.97\ Signal & 0.15 & 1.0\ Noise line & 1.0 & 0.0042\ Noise line & 1.5 & 0.2676\ Noise line & 2.5 & 0.7366\ ![image](signal_8hist_noiseline.eps){width="100.00000%"} A more quantitative way to check the spherical symmetry hypothesis is to use the evidence $Z$ (or marginalized likelihood) of having $N_s$ samples from an 8-dimensional multivariate normal distribution. In fact, according to Eq. , if we run a Markov chain Monte Carlo (MCMC) for the maximum likelihood estimator $\mathcal{L}_{\rm ML}(x|h(\Delta \vec{\lambda}))$, what we expect to see are roughly samples from an eight-dimensional multivariate normal distribution with mean the parameters of the signal and variance $\sigma^2 \approx 1/ \overset{_*}{\mathcal{F}}_s$.
For an 8-dimensional multivariate normal distribution with mean $\vec{\mu}$ and per-dimension variance $\sigma^2$, the logarithm of the evidence can be computed as: $$\ln Z= -4 N_s \ln(2 \pi)-4 N_s \ln (\sigma^2) - \frac{1}{2} \sum_{i=1}^{N_s} \frac{(\vec{x}_i-\vec{\mu}) \cdot (\vec{x}_i-\vec{\mu})}{\sigma^2}. \label{eq:evidence}$$ The idea is to run an MCMC algorithm for the maximum likelihood estimator, evaluate the mean $\vec{\mu}$ from its output, and then compute the evidence for many values of $\sigma^2$. We expect the evidence to have a peak at the value $\sigma^2_s=1/\overset{_*}{\mathcal{F}}_s$ (according to Eq. ) followed by a linear decrease. In practice this means that the samples obtained from the MCMC should be representative of an 8-dimensional Gaussian process. We then perform the following procedure to probe the nature of the data: [*(i)*]{} We run an MCMC algorithm on the data in order to sample the maximum likelihood estimator in Eq. . [*(ii)*]{} After obtaining $N_s$ independent samples we compute the mean $\vec{\mu}$ and the variance $\sigma^2_s=1/\overset{_*}{\mathcal{F}}_{\rm max}$ of the distribution, where we use the maximum of the $\overset{_*}{\mathcal{F}}$-statistic found by the MCMC. [*(iii)*]{} For several values of $\sigma^2$ we compute the logarithm of the evidence $\ln Z_{\rm data}$ in Eq. as a function of the variance; we expect to see a peak around $\sigma^2_s$ followed by a linear decrease. [*(iv)*]{} As another proxy for the evidence $\ln Z_{\rm data}$ being representative of a Gaussian process, we generate in software $N_s$ samples of an 8-dimensional Gaussian process with the given mean $\vec{\mu}$ and variance $\sigma^2_s$, and we compute the evidence $\ln Z_{\rm proxy}$ as a function of the variance. [*(v)*]{} The two evidences $\ln Z_{\rm data}$ and $\ln Z_{\rm proxy}$ are compared with each other.
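A minimal sketch of the evidence scan in steps (i)–(iii), written for a general dimension $d$ and an isotropic Gaussian; the MCMC samples are replaced here by direct Gaussian draws, and the function name is ours:

```python
import numpy as np

def log_evidence(samples, mu, sigma2):
    """Log-likelihood of N_s samples under an isotropic d-dimensional
    Gaussian with mean mu and per-dimension variance sigma2
    (Eq. evidence, with d = 8 for the Phi variables)."""
    n_s, d = samples.shape
    r2 = np.sum((samples - mu)**2, axis=1)
    return (-0.5 * d * n_s * np.log(2.0 * np.pi)
            - 0.5 * d * n_s * np.log(sigma2)
            - 0.5 * np.sum(r2) / sigma2)

# stand-in for MCMC output: direct draws from the posterior expected for a
# signal, an 8-d Gaussian centered on the signal parameters
rng = np.random.default_rng(3)
true_sigma2 = 0.5
samples = rng.normal(0.0, np.sqrt(true_sigma2), size=(2000, 8))

grid = np.linspace(0.1, 2.0, 191)
lnZ = [log_evidence(samples, np.zeros(8), s2) for s2 in grid]
best = grid[int(np.argmax(lnZ))]
print(best)                         # peaks near the true variance 0.5
```

Maximizing the expression over $\sigma^2$ gives $\sigma^2_{\rm max} = \sum_i r_i^2 / (d N_s)$, which for Gaussian samples converges to the true variance; this is why the scan is expected to peak at $\sigma^2_s = 1/\overset{_*}{\mathcal{F}}_s$ when the candidate is a real signal.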
If the evidence curve for the data is above, or within the degree of uncertainty (given by the statistical standard deviation of $Z$ for a pure Gaussian process) of, the evidence curve generated from true Gaussian samples, then we have a strong reason to believe that what we are observing is likely due to a signal in Gaussian noise. Figs. \[fig:p8z\], \[fig:8z\], \[fig:nz\] show the evidence computed from a Markov chain Monte Carlo run to sample the maximum likelihood estimators of the hardware injection Pulsar 3 in one month of O1 data (SNR 70), of a software injection with SNR 8 (with the same parameters as Pulsar 3 but in simulated Gaussian data), and of a monochromatic noise line injected in simulated Gaussian data with a high SNR that contaminates the analysis. In all the figures, the evidence computed from the data is compared with the proxy evidence computed from software-generated samples of a multivariate normal distribution. The figures show that, when a signal is present in the data, the evidence curve of the data lies above, or within the uncertainty of, the evidence of the Gaussian process with the same variance and mean. In the case of Pulsar 3 (high SNR) we observe that the evidence of the data is above the evidence of the software-generated Gaussian process, meaning that the recovered likelihood is more “peaked” than expected. This is reasonable, since the signal is very strong and we are neglecting the effect of the sidereal modulations, which can further reshape the likelihood surfaces into many different local peaks. Figure \[fig:nz\] instead shows the evidence computed in the case where a very strong monochromatic noise line is present in the data.
It is clear that the evidence of the data is far below the evidence computed for a Gaussian process with the same variance and mean, meaning that the likelihood we are observing has no spherical symmetry at all and hence is very unlikely to be generated by a signal in Gaussian noise. ![Evidence (vertical axis) computed with respect to a chosen variance (horizontal axis), under the hypothesis of a multivariate Gaussian distribution for the $\Phi$ variables. Red dashed line: evidence trend for O1 data around the hardware injection Pulsar 3. Blue solid line: evidence trend for Gaussian samples generated with a variance equal to the inverse of the maximum statistic found in the search; the lines cover the $1 \sigma$ confidence interval (blue dotted lines). []{data-label="fig:p8z"}](P3_evidence_radius.eps){width="45.00000%"} ![Evidence (vertical axis) computed with respect to a chosen variance (horizontal axis), under the hypothesis of a multivariate Gaussian distribution for the $\Phi$ variables. Red dashed line: evidence trend for a software-injected signal with SNR 8. Blue solid line: evidence trend for Gaussian samples generated with a variance equal to the inverse of the maximum statistic found in the search; the lines cover the $1 \sigma$ confidence interval (blue dotted lines).[]{data-label="fig:8z"}](SNR_8_evidence_raius.eps){width="45.00000%"} ![Evidence (vertical axis) computed with respect to a chosen variance (horizontal axis), under the hypothesis of a multivariate Gaussian distribution for the $\Phi$ variables. Red dashed line: evidence trend for a monochromatic noise line injected in Gaussian generated noise. Blue solid line: evidence trend for Gaussian samples generated with a variance equal to the inverse of the maximum statistic found in the search; the lines cover the $1 \sigma$ confidence interval (blue dotted lines).
[]{data-label="fig:nz"}](noise_evidence_radius.eps){width="45.00000%"} Application to Markov Chain Monte Carlo Follow-up: -------------------------------------------------- Another possible application of the new variables is in the so-called follow-up algorithms. Follow-ups are procedures aimed at understanding the nature of a given candidate. Depending on the needs of the problem, we usually want these algorithms to track a candidate in the parameter space while performing longer and longer searches, in order to increase the significance of a possible CW detection. Recently, the possibility of performing this task with Markov chain Monte Carlo techniques has been shown [@Ashton2018]. It is well known that a Markov chain Monte Carlo should be tailored to the type of posterior that we would like to sample. Using the $\Phi$ variables, the geometry of the posterior $p\big(h(\Phi)|x\big)$ is well known, and this may help the algorithm converge faster, thus saving computation time. Running the same algorithm used in [@Ashton2018] on band sub-sampled data [@PiccinniPhD], but implemented for the $\Phi$ variables, we have found an integrated autocorrelation time[^8] of about $25$ iterations, whereas in the original work it is about $90$ iterations. This means that less than a third of the iterations are needed in order to obtain the posterior distribution, even though we are using 4 additional variables. The decreased computational cost grants us the possibility of increasing the sensitivity of the search by increasing the number of outliers to follow up. Conclusion \[sec:6\] ==================== In this paper we have presented a new set of variables for CW searches, called the maximum phase decomposition, which is able to regularize the template metric for semi-coherent and fully coherent searches. As shown in the paper, the template metric plays a very important role when deciding how to build the template lattice.
The fact that, in standard CW searches, the metric for long coherent searches is handled numerically makes the template grid non-trivial to build. At the current state, the maximum phase decomposition cannot be used to build a template grid for all-sky or semi-coherent searches, due to the fact that it would lead to the placement of many templates in the eight-dimensional template space which have no corresponding template in the four-dimensional parameter space. However, the new variables $\Phi$ may find application in more practical tasks for CW searches, such as studies on the significance and nature of candidates using either frequentist or Bayesian frameworks. In particular, the $\Phi$ variables make it possible to study and explore the response of the noise to templates which have a phase modulation very similar (even if slightly non-physical) to that of a given CW candidate. A key application of the new $\Phi$ variables is in Markov chain Monte Carlo follow-ups: the use of these new variables can significantly improve the convergence of the algorithm, thus reducing computation time. This means that we will be able to follow up more candidates, thus improving the sensitivity of our searches. Even though physical parameters cannot be recovered at the end of the follow-up, the maximum phase decomposition still offers a good tool to study the significance of a CW candidate when increasing the coherent integration. The implementation of this framework for this kind of algorithm will be presented in a future work. Summarizing, the maximum phase decomposition offers a valuable alternative tool to exploit the phase properties of CW signals and to study the response of the detector noise to such phase modulations. The authors would like to thank Gregory Ashton, Karl Wette and Matthew Pitkin for the useful comments during the development of this work.
Simone Mastrogiovanni would also like to thank the University of Rome Sapienza, which has supported this work with the action “Avvio alla Ricerca 2017”. This paper carries LIGO Document Number [LIGO-P1800185]{}. The Phase metric computation \[ap:A\] ===================================== Let us assume that the data is a superposition of Gaussian noise and a CW signal: $$\ket{h}=H_0 \sum_s H_s \ket{\mathcal{A}_s},$$ where we have dropped the dependence on $\vec{\lambda_s}$ for notational simplicity, and that we are performing our analysis using a set of templates computed for a small mismatch $\Delta \vec{\lambda}$. If we assume the data to be a superposition of Gaussian noise and signal, as in Eq. , and take the definition of the $\overset{_*}{\mathcal{F}}$-statistic in Eq. , we can write: $$\overset{_*}{\mathcal{F}}_{mismatch}=\frac{1}{2} \sum_p \frac{(n^*_p+ \sum_s H^*_s \braket{\mathcal{A}_s|\mathcal{A}_p} )( n_p+ \sum_s H_s \braket{\mathcal{A}_p|\mathcal{A}_s} )}{\braket{\mathcal{A}_p|\mathcal{A}_p}}.$$ Here $n_p$ is the projection of the noise onto the template and “$*$” denotes the complex conjugation operator. We can now take the expected value of the above equation and expand the products, obtaining: $$E[\overset{_*}{\mathcal{F}}_{mismatch}]=\frac{1}{2} \sum_p \frac{E[n^*_p \cdot n_p]}{\braket{\mathcal{A}_p|\mathcal{A}_p}}+ \frac{E[n^*_p \sum_s H_s \braket{\mathcal{A}_p|\mathcal{A}_s}]}{\braket{\mathcal{A}_p|\mathcal{A}_p}}+ \frac{E[n_p \sum_s H^*_s \braket{\mathcal{A}_s|\mathcal{A}_p}]}{\braket{\mathcal{A}_p|\mathcal{A}_p}}+ \frac{E[ \sum_s H^*_s \braket{\mathcal{A}_s|\mathcal{A}_p} \sum_s H_s \braket{\mathcal{A}_p|\mathcal{A}_s}]}{\braket{\mathcal{A}_p|\mathcal{A}_p}} \label{eq:long}$$ The first term in Eq. represents the contribution from the noise[^9]; the second and third terms vanish since we are assuming Gaussian noise with zero mean. Finally, the last term represents the contribution from a possible overlap of the signal and the mismatched template.
Taking into account the previous considerations, we can rewrite Eq. expanding the summation over the $s$ index in the last term: $$E[\overset{_*}{\mathcal{F}}_{m}]= \sum_p \frac{E[n^*_p \cdot n_p]}{\braket{\mathcal{A}_p|\mathcal{A}_p}}+ \frac{1}{\braket{\mathcal{A}_p|\mathcal{A}_p}} E \bigg[ \sum_{s_1=s_2} |H_s|^2 |\braket{\mathcal{A}_s|\mathcal{A}_p}|^2 + \sum_{s_1 \neq s_2} H_{s_1} H^*_{s_2} \braket{A_{s_2}|\mathcal{A}_p} \braket{\mathcal{A}_p|\mathcal{A}_{s_1}}\bigg] \\ \label{eq:caos}$$ Let us focus on the last term in Eq. , which represents the contribution to the detection statistic from possible interference between the two different polarizations of a CW signal. In the limit of long integration time (greater than 1 day), the antenna response functions to the two polarizations $+$ and $\times$ are expected to become almost independent of each other [@Jaranowski1998]; this happens because the antenna response is averaged over a long integration time. We can then write: $$\frac{E [ \sum_{s_1 \neq s_2} H_{s_1} H^*_{s_2} \braket{\mathcal{A}_{s_2}|\mathcal{A}_p} \braket{\mathcal{A}_p|\mathcal{A}_{s_1}}]}{\braket{\mathcal{A}_p|\mathcal{A}_p}}=E [ \sum_{s_1 \neq s_2} H_{s_1} H^*_{s_2} \braket{\mathcal{A}_{s_2}|\mathcal{A}_{s_1}}]\approx 0 \label{eq:est}$$ Equation tells us that the contribution from the interference of the two different polarizations vanishes in the limit that the signal is formed by orthogonal polarizations. Finally, Eq.
can be rewritten in the more compact form: $$E[\overset{_*}{\mathcal{F}}_{m}]=\sum_p \frac{E[n^*_p \cdot n_p]}{\braket{\mathcal{A}_p|\mathcal{A}_p}}+ \sum_{s} |H_s|^2 |\mathcal{A}_s|^2 \frac{ |\braket{\mathcal{A}_s|\mathcal{A}_p}|^2}{\braket{\mathcal{A}_p|\mathcal{A}_p} \braket{\mathcal{A}_s|\mathcal{A}_s}} = \frac{E[n^*_p \cdot n_p]}{\braket{\mathcal{A}_p|\mathcal{A}_p}}+ \sum_{s} \overset{_*}{\mathcal{F}}(\beta_s,\lambda_s) M_{sp}(\lambda_s,\lambda) \\ \label{eq:R_m}$$ where $\overset{_*}{\mathcal{F}}(\beta_s,\lambda_s)$ is the detection statistic for a perfectly matched template, while the loss due to the template mismatch is encoded in a mismatch matrix that depends on the coupling between the true signal polarization $s$ and the template polarization $p$: $$M_{sp}(\lambda_s,\lambda)=\frac{ |\braket{\mathcal{A}_s|\mathcal{A}_p}|^2}{\braket{\mathcal{A}_p|\mathcal{A}_p} \braket{\mathcal{A}_s|\mathcal{A}_s}}. \label{eq:brak}$$ The mismatch matrix can easily be computed in the time basis, remembering that in general a template can be written in the time basis as in Eq. : $$M_{sp}=\frac{1}{T_{\rm coh}^{2}} \bigg| \int_0^{T_{\rm coh}} e^{i \Delta \phi_{sp}(t;\lambda_s,\lambda)} dt \bigg|^2 \label{eq:msp}$$ Here $\Delta \phi_{sp}(t;\lambda_s,\lambda)$ is the phase mismatch between the polarizations $s$ and $p$ of the signal and template. Finally, let us assume that the signal we are looking for is composed of the usual $+$ and $\times$ polarizations. If we take the mismatch function from Eq. and for $ E[\overset{_*}{\mathcal{F}}_{m}]$ we use Eq. : $$m_f(\lambda, \lambda_s)=1- \frac{\overset{_*}{\mathcal{F}}_+ (\beta_s,\lambda_s) [M_{++} (\lambda,\lambda_s)+M_{+ \times} (\lambda,\lambda_s)]+\overset{_*}{\mathcal{F}}_\times (\beta_s,\lambda_s) [M_{\times \times} (\lambda,\lambda_s)+M_{\times +} (\lambda,\lambda_s)]}{\overset{_*}{\mathcal{F}}_\times (\beta_s,\lambda_s)+\overset{_*}{\mathcal{F}}_+ (\beta_s,\lambda_s)}.
\label{eq:aaa}$$ We can now Taylor expand the matrix $M_{\rm sp}$ up to second order around the parameters of the signal (a summation over the polarization indices $s$ and $p$ is intended). If we assume $\overset{_*}{\mathcal{F}}_+ (\beta_s,\lambda_s) \approx \overset{_*}{\mathcal{F}}_\times (\beta_s,\lambda_s)$ [^10] $$m_f(\lambda,\lambda_s)= 1- \frac{1}{2} \bigg[M_{sp}\bigg|_{\lambda=\lambda_s}+\sum_j \frac{\partial M_{sp}}{ \partial \lambda_j} \bigg|_{\lambda=\lambda_s} \Delta \lambda_j +\sum_{j,i} \frac{1}{2}\frac{\partial^2 M_{sp}}{ \partial \lambda_j \partial \lambda_i} \bigg|_{\lambda=\lambda_s} \Delta \lambda_i \Delta \lambda_j+\mathcal{O}(\Delta \lambda^3)\bigg] = g_{ij} (\lambda_s) \Delta \lambda_i \Delta \lambda_j + \mathcal{O}(\Delta \lambda^3) \label{eq:long}$$ The terms given by the first derivatives in Eq. are zero, since we are expanding around a local maximum of the mismatch function. From the definition in Eq. it is possible to see that the zeroth-order terms are $M_{++}(\lambda_s)=M_{\times \times}(\lambda_s)=1$ and $M_{+\times}(\lambda_s)=M_{\times +}(\lambda_s)=0$. If one computes the remaining second-order derivatives, in the case that the phase mismatch due to the sidereal templates is negligible with respect to the phase mismatch introduced by all the other modulations, and that we are integrating over many cycles of the signal[^11], Eq. finally reduces to the form: $$m_f(\lambda,\lambda_s)= \sum_{i,j}\bigg[\frac{1}{T_{\rm coh}} \int_0^{T_{\rm coh}} \frac{\partial \phi}{\partial \lambda_i} \frac{\partial \phi}{\partial \lambda_j} \bigg|_{\lambda=\lambda_s} dt- \frac{1}{T^2_{\rm coh}} \int_0^{T_{\rm coh}} \frac{\partial \phi}{\partial \lambda_i} \bigg|_{\lambda=\lambda_s} dt \int_0^{T_{\rm coh}} \frac{\partial \phi}{\partial \lambda_j} \bigg|_{\lambda=\lambda_s} dt \bigg]\Delta \lambda_i \Delta \lambda_j+\mathcal{O}(\Delta \lambda^3).
\label{eq:metric_final}$$ where the polarization indices $s,p$ no longer appear, since we are neglecting the phase modulations of the sidereal motion with respect to the other modulations. In Eq. one can recognize the form of the phase metric presented in Eq. . Semi-coherent metric \[ap:B\] ============================= In semi-coherent searches such as [@2015ApJ...813...39A] the data is split into several chunks of nearly the same duration. The matched filter is then applied to each chunk, obtaining a value of the statistic $\overset{_*}{\mathcal{F}}$ for each data chunk $l$; these values are later combined incoherently. In practice, the final value of the statistic will be the summation of all the obtained values. For our search, we define the mismatch as: $$m_f=\frac{\sum_l^{\rm chunks} \overset{_*}{\mathcal{F}^l_s} - \overset{_*}{\mathcal{F}^l_m}}{\sum_l^{\rm chunks} \overset{_*}{\mathcal{F}^l_s}},$$ where $s,m$ refer to the expected values of the statistic for the signal parameters and for a set of mismatched parameters, respectively. Following the same procedure as in Appendix \[ap:A\], we can Taylor expand up to second order in order to obtain the metric: $$m_f=\frac{\sum_l^{\rm chunks} \overset{_*}{\mathcal{F}}^l_s g_{ij}^{l} \Delta \lambda_i \Delta \lambda_j }{\sum_l^{\rm chunks} \overset{_*}{\mathcal{F}}^l_s}$$ where $g_{ij}^{l}$ is the phase metric in Eq. computed for the chunk $l$. If the data is split into chunks of the same length, we expect (in the case of a CW) $\overset{_*}{\mathcal{F}}^l_s$ to be almost the same over each data chunk $l$. We can then simplify the above equation as $$m_f \approx \frac{\sum_l^{\rm chunks} g_{ij}^{l} \Delta \lambda_i \Delta \lambda_j }{N},$$ where $N$ is the number of data chunks.
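The time-average form of the phase metric, and the chunk averaging used here, are easy to check numerically. The sketch below is a toy consistency check (not the search code): it evaluates $g_{ij}=\langle \partial_i\phi\,\partial_j\phi\rangle-\langle\partial_i\phi\rangle\langle\partial_j\phi\rangle$ for a pure frequency parameter, for which the standard coherent value is $g_{ff}=\pi^2 T_{\rm coh}^2/3$, and verifies that averaging over $N$ equal chunks rescales the metric by $1/N^2$.

```python
import numpy as np

def phase_metric(derivs, t0, t1, n=4001):
    # g_ij = <f_i f_j> - <f_i><f_j>, with f_i = d(phase)/d(lambda_i)
    # sampled uniformly on [t0, t1]
    t = np.linspace(t0, t1, n)
    f = np.vstack([d(t) for d in derivs])
    m = f.mean(axis=1)
    return (f @ f.T) / n - np.outer(m, m)

T = 86400.0                                  # one day of coherent time
dphi_df = lambda t: 2.0 * np.pi * t          # phase = 2*pi*f*t

g_coh = phase_metric([dphi_df], 0.0, T)[0, 0]
# analytic coherent value: (2*pi)^2 * T^2/12 = pi^2 * T^2 / 3

N = 4                                        # number of equal chunks
g_semi = sum(phase_metric([dphi_df], l * T / N, (l + 1) * T / N)[0, 0]
             for l in range(N)) / N
# the semi-coherent average is smaller by a factor N^2
```

The same routine returns the full matrix when several derivative functions are passed at once, e.g. adding `lambda t: np.pi * t**2` for a spin-down parameter.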
It follows that the semi-coherent metric $\tilde{g}_{ij}$ can be defined as $$\tilde{g}_{ij}= \frac{\sum_l^{\rm chunks} g_{ij}^{l}}{N}.$$ This expression is equivalent to the one already found in [@Wette2015], in which the author has justified all the approximations made. Effect of extra dimensions \[ap:C\] =================================== The fact that we are using $8$ dimensions instead of $4$ means that in the template lattice there may exist points which have no corresponding template in the $4$-dimensional parameter space but which may fit the data better than the original astrophysical template. For instance, one can obtain a combination of $\Phi_{1-8}$ which corresponds to a combination of the physical phases $\phi_{1-8}$ in which the four parameters $f_0, \dot{f}_0, \alpha, \delta$ do not take a single common value, but slightly different values for each of the $\phi_{1-8}$. From the point of view of the signal frequency components, this corresponds to having more possible combinations of the amplitudes of the 5 frequency components of the signal. As an example, Figure \[fig:spec\] shows two different power spectra obtained looking for the hardware injection Pulsar 3 in one month of O1 data. The injected template (represented by the blue solid line) clearly shows the 5 frequency peaks due to the sidereal modulation of the signal. The red dashed line, on the other hand, shows the spectrum obtained for the template built from the $\Phi$ that has a statistic larger than the original one. The template which maximizes the statistic in the 8-dimensional space does not have a corresponding template in the 4-dimensional space. As we can see, the usage of the $8$ variables $\Phi$ leaves more degrees of freedom to adjust the polarization components. ![Red line: power spectrum of the O1 data (1 month of integration) corrected for the $\Phi$ parameters associated with the hardware injection Pulsar 3.
The 5 frequency peaks due to the sidereal modulation are clearly visible; the associated $\mathcal{F}$-statistic was 1150. Blue line: power spectrum recovered from the $\Phi$ parameters found in an MCMC search; the $\mathcal{F}$-statistic reached was 1180.[]{data-label="fig:spec"}](spec_fisico.eps){width="40.00000%"}

  **Data-set**   $\overset{_*}{\mathcal{F}}$-Statistic   $\epsilon_{h_0}$   $\epsilon_{\psi}$   $\epsilon_{\eta}$
  -------------- --------------------------------------- ------------------ ------------------- -------------------
  Physical       1150                                    6%                 0.40%               0.9%
  non-physical   1180                                    4%                 0.02%               2.0%

  : \[tab:2\] Second column: value of the detection statistic obtained. Third column: relative error on the amplitude estimation, computed as the percentage of amplitude loss. Fourth column: relative error on the $\psi$ parameter, computed as $\Delta \Psi/90\,{\rm deg}$. Last column: relative error on the $\eta$ parameter, computed as $\Delta \eta/2$.

Correlations of the phase templates \[ap:D\] ============================================ As pointed out, we define the correlation among phase templates as their normalized scalar product: $$C=\frac{\braket{A_a | A_b}}{|A_a| |A_b|},$$ where the subscripts $a,b$ indicate two templates computed from the variables $\vec{\Phi}_a$ and $\vec{\Phi}_b$. Following Eq. , if the metric correctly estimates the fraction of signal that we recover with a template, it follows that two phase templates computed from two parameter sets $\vec{\Phi}_a$ and $\vec{\Phi}_b$ at a distance $\Delta \Phi=|\vec{\Phi}_a-\vec{\Phi}_b|>1$ will give a correlation very close to zero. Fig. \[fig:phase\_correlation\] shows the correlation of two phase templates computed with a distance in the parameter space that is a multiple of 10. Even though the correlation does not drop immediately to zero (since we are neglecting the sidereal modulations and since the metric is a quadratic approximation), its value becomes small very fast.
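The qualitative behaviour of this correlation is easy to reproduce in a toy setting. The sketch below assumes unit-amplitude templates $e^{i\phi(t)}$ and neglects sidereal modulation; for a pure frequency mismatch $\Delta f$ over an observation time $T$ the analytic overlap is $|\mathrm{sinc}(\Delta f\,T)|$, which lets the sketch be validated directly.

```python
import numpy as np

def correlation(dphase, T, n=8192):
    # normalised overlap of two unit-amplitude phase templates whose
    # phase difference is dphase(t), sampled uniformly on [0, T]
    t = np.linspace(0.0, T, n)
    return abs(np.exp(1j * dphase(t)).mean())

T = 1000.0
c0 = correlation(lambda t: 0.0 * t, T)                 # identical templates
c1 = correlation(lambda t: 2 * np.pi * (2.5 / T) * t, T)
# a frequency mismatch of 2.5 Fourier bins gives |sinc(2.5)| ~ 0.13,
# i.e. the overlap decays quickly with distance, as in the figures
```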
One can also plot the histogram of the correlations obtained in this way, shown in Fig. \[fig:phase\_correlation\_histogram\], where we see that the majority of phase templates have a very small correlation with the central point. ![Vertical axis: correlation of two phase templates (one is fixed) with a given distance in the $\Phi$ space (x-axis).[]{data-label="fig:phase_correlation"}](phase_correlation.eps){width="50.00000%"} ![Histogram of the correlation values obtained in Fig. \[fig:phase\_correlation\].[]{data-label="fig:phase_correlation_histogram"}](phase_correlation_histogram.eps){width="50.00000%"} [^1]: It is defined as the direction of the major axis with respect to the celestial parallel of the source, measured counter-clockwise. [^2]: We are neglecting other phase modulations such as the Einstein and Shapiro delays [@2017CQGra..34m5007M] and the contribution from further derivatives in the NS’s frequency Taylor expansion. These modulations are taken into account during the analysis, but their effect is neglected when computing the metric since they have a negligible effect on the mismatch (defined later) with respect to the other effects. [^3]: The first derivative term is null since we are expanding around a local maximum. [^4]: Using the same notation introduced in \[sec:2\], given a signal $\ket{h}$, the SNR can be defined as $${\rm SNR}=\sqrt{\braket{h|h}}$$ [^5]: The decomposition is unique if the matrix is symmetric and positive definite, which is true for a metric. [^6]: $g^\varphi_{23}$ corresponds to $g_{f \dot{f}}$ of the narrow-band search example in Sec. \[sec:2\]; hence, for a reference time in the middle of the run, it is equal to 0. [^7]: By outliers we mean points in the parameter space $\vec{\lambda}$ which show a false alarm probability below a given threshold and need deeper study, or which could be due to a real CW signal.
[^8]: The integrated autocorrelation time is an estimator of how many samples are necessary in order to obtain two independent samples from an MCMC algorithm [@Sokal1996]. [^9]: The distribution of the statistic in the noise case is a $\chi^2$ with $4N$ degrees of freedom, where $N$ is the number of chunks used in the analysis. [^10]: This is a reasonable assumption since the sidereal responses are averaged over a very long integration time. [^11]: This is often a reasonable assumption for long integration times, since effects such as the Roemer delay are always large for signals in the range from the Hz to the kHz.
--- abstract: 'A new micro-irreversible $3D$ theory of quantum multichannel scattering in the three-body system is developed. The quantum approach is constructed on the generating trajectory tubes, which allow one to take into account the influence of classical non-integrability on the dynamical quantum system. When the volume of classical chaos in phase space is larger than the quantum cell in the corresponding quantum system, quantum chaos is generated. The probability of quantum transitions is constructed for this case. The collinear collision of the $Li+ (FH)\to (LiF) +H$ system is used for numerical illustration of a system generating quantum (wave) chaos.' author: - 'Ashot S. Gevorkyan' - 'Alexander V. Bogdanov' - Gunnar Nyman title: 'Regular and Chaotic Quantum Dynamics in Atom-Diatom Reactive Collisions' --- Introduction ============ In the early stages of the development of quantum mechanics, A. Einstein asked a question that has attracted close attention several decades later [@Ein]. The question was: what would be the analogue of a classical chaotic system in quantum mechanics? In particular he pointed to the three-body system, which in general is well known to have a chaotic nature. In an effort to formulate and solve the problem of quantum chaos, M. Gutzwiller tentatively divided all the existing knowledge of the dynamics of physical systems into three areas [@Gutz]: 1. Regular classical mechanics ($R$ area); 2. Classical chaotic systems, or dynamical Poincare systems ($P$ area); 3. Regular quantum mechanics ($Q$ area). These areas are connected by certain conditions. Thus, Bohr’s correspondence principle connects the $R$ and $Q$ areas, carrying quantum mechanics into classical Newtonian mechanics in the limit $\hbar\rightarrow0$. Areas $R$ and $P$ are connected by the Kolmogorov-Arnold-Moser (KAM) theorem. The general principle which could connect the $P$ and $Q$ areas has not yet been determined.
Regarding the fourth area, tentatively named the *quantum chaos* area $Q_{ch}$, M. Gutzwiller remarked that the concept of “quantum chaos” is more a puzzle than a well-formulated problem. It is evident that a task formulated correctly in the $Q_{ch}$ area is the most general one and, under specific conditions, must reduce to the aforementioned limiting areas. The observation of chaotic phenomena in the spectroscopy of atomic nuclei [@Brody], atoms, molecules [@Nemes] and in billiard systems [@Miln]-[@Dembr] has stimulated considerable interest in the quantum chaos problem in recent years. Irregular behavior of the wavefunction has been found in numerical calculations of the quantum mechanical stadium billiard problem [@McDonald]. It has been shown that the so-called scars which were observed have classical trajectory characteristics [@Heller]. It has been known for a long time that classical models of chemical reactions exhibit chaos [@Hamilton]. It was shown that the mixing properties of chaotic dynamics observed in unimolecular reactions can be explained by some statistical laws [@Marcus]. Recall that one major motivation for the continued classical investigation of the reactive scattering problem [@Kovacs]-[@Ott] is the several kinds of experiments on waves which have demonstrated the validity of the ideas of quantum chaotic scattering [@Smilansky]-[@Dorn]. Atomic systems are quantum objects and should thus be treated considering their quantum properties. The development of various semiclassical and mixed quantum-classical methods (see for example the detailed review [@Nyman]) can be considered a natural extension of the classical trajectory studies. This development has been motivated by the fact that the standard quantum approach is too demanding even for most few-body systems. For many problems various quasi-classical methods can give satisfactory results. The semiclassical methods, however, are restricted to relatively small systems.
The problem of *quantum chaos* and its connection with classical non-integrability was originally studied by the authors in the framework of a collinear three-body collision model [@Gev]. In the current article this approach is generalized to the $3D$ case. Formulation of scattering problem ================================= We are interested in the three-body reactive scattering process $ A+(BC)_n\to (ABC)^*\to (AB)_m+C$, where $A,B,$ and $C$ are atoms, $n$ and $m$ characterize the sets of quantum numbers of the diatomic states corresponding to the initial $(in)$ and final $(out)$ scattering arrangements, and $(ABC)^*$ denotes the resonance complex. Here $m_A,m_B$ and $m_C$ are the masses of the particles and ${\bf{r_{A}}}, {\bf{r_{B}}}$ and ${\bf{r_{C}}}$ are the column vectors describing their positions relative to an origin fixed in the laboratory system. The reactant arrangement is best described by mass-scaled reactant Jacobi coordinates, while the product arrangement is best described by mass-scaled product Jacobi coordinates. For the reactant arrangement we can write [@Delves; @Smith]: $$\begin{aligned} \label{01} {\bf{q}_{0\alpha}}=\lambda\,{\bf{R}}_\alpha,\qquad {\bf{q}_{1\alpha}}=\lambda^{-1}\,{\bf{r}}_\alpha,\end{aligned}$$ where $\bf{R}_\alpha$ and ${\bf{r}}_\alpha$ are the Jacobi coordinates of the reactant $(in)$ channel, and: $ \lambda=\bigl[m_A\bigl(1-m_A/{M}\bigr)/\mu\bigr]^{1/2}, \quad \mu=\bigl[m_Am_Bm_C/M\bigr]^{1/2},\quad M=m_A+m_B+m_C.$ In terms of the coordinates $({\bf{q}_{0\alpha}},{\bf{q}_{1\alpha}})$ the Hamiltonian of the three-body system takes the form: $$H\bigl({\bf{q}};{\bf{P_{{\bf{q}}}}}\bigr)=\bigl(1/2\mu\bigr) {\bf{P}}_{\bf{q}}^2+ V\bigl(q_0,q_1,\theta\bigr),\qquad \bf{q}=\bigl({\bf{q}_0},{\bf{q}_1})=\{q_k\}, \quad k=0,...,5.\label{02}$$ Note that here and in the following we omit the channel index for simplicity.
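For concreteness, the mass-scaling constants defined above are easy to evaluate. The sketch below does so for the $Li+(FH)$ system used later as a numerical illustration; the isotopic masses (in amu, for $^7$Li, $^{19}$F, $^1$H) are assumed values for this illustration only.

```python
import numpy as np

# assumed atomic masses in amu for Li, F, H (7Li, 19F, 1H)
mA, mB, mC = 7.016, 18.998, 1.008

M   = mA + mB + mC                       # total mass
mu  = np.sqrt(mA * mB * mC / M)          # three-body reduced mass
lam = np.sqrt(mA * (1.0 - mA / M) / mu)  # mass-scaling factor

# mass-scaled reactant Jacobi coordinates: q0 = lam * R, q1 = r / lam
# skewing angle of the intrinsic frame: cot(vartheta) = sqrt(mA*mC/(mB*M))
b = np.sqrt(mA * mC / (mB * M))
```

For this mass combination the scaling constants come out to $\mu \approx 2.23$ and $\lambda \approx 1.53$ (in the amu-based units of the sketch).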
In (\[02\]) $\mu$ and ${\bf{P}_{\bf{q}}}$ are the effective mass and momentum of the three-body system, $(q_0=|{\bf{\bf{q_0}}}|,q_1=|{\bf{\bf{q_1}}}|,\theta)$ are the intrinsic coordinates, and $\theta$ is the angle between the vectors ${\bf{\bf{q_0}}}$ and ${{\bf{q_1}}}$. The remaining coordinates $(q_3,q_4,q_5)$ are expressed via the Euler angles. The interaction potential between the atoms, $V\bigl(q_0,q_1,\theta\bigr)$, depends only on the intrinsic coordinates. ![a) [ Intrinsic Jacobi and local coordinate systems. The angle $\vartheta$ is defined from $\cot\vartheta=b=\sqrt{{m_Am_C}\bigl/{m_BM}}$. [b) The reaction path passes through the minima of the potential energy, while the reaction coordinate $\Im_{if}$ can be an arbitrary smooth curve connecting the $(in)$ and $(out)$ asymptotic channels]{}. The lower shaded area is the self-crossing region for the *natural collision coordinate* (NCC) system associated with the reaction path curve, while the upper one is the self-crossing region for the NCC system associated with the curve $\Im_{if}$.]{}](fig1){width="\textwidth"} Recall that the coordinate systems needed for reactants and products are different [@Nyman]. This fact creates certain mathematical and computational complexities for the investigation of the multichannel scattering problem. The way to overcome them is to turn to a special type of curvilinear coordinates, which are natural and suitable for the description of two (or more) asymptotic states $(in)$ and $(out)$ simultaneously. To satisfy these conditions in the collinear collision case, a smooth curve $\Im_{if}$ (the *reaction coordinate*) connecting the $(in)$ and $(out)$ asymptotic channels was introduced, along which a local orthogonal coordinate system $(u,v)$ was defined (see [@Marcus1; @Light]). In the $3D$ case too we can introduce the curve $\Im_{if}$, along which the NCC system is defined.
In this case $\Im_{if}$ is defined on the plane $(q_0,q_1,\theta=0)$ by the expression [@Gev4]: $$q_{0}^c= a\bigl/\bigl({q_1^c-q_{eq}^-}\bigr)+bq_1^c+q^+_{eq},\qquad\qquad \qquad q_{eq}^-<q_1^c<\infty,\label{03}$$ where $b$ is a constant and $a$ is an arbitrary constant, usually chosen so that the curve passes close to the saddle point of the reaction. In Eq. (\[03\]) $q^-_{eq}$ and $q^+_{eq}$ are the mass-scaled equilibrium bond lengths of the molecules in the $(in)$ and $(out)$ channels, respectively. The superscript $c$ over $q_0$ and $q_1$ underlines the fact that the point $(q_0^c,q_1^c)$ lies on the curve. The limit $(q_1=0,\,\,q_0\rightarrow{\infty})$ corresponds to the $(in)$ state, while the limit $(q_0=b q_1=q_1\cot\vartheta)$ corresponds to the $(out)$ state. The movement along the curve $\Im_{if}$ is described by the coordinate $u$: $$u=u_0-a\bigl/\bar{q}_1^c+b \bar{q}_1^c,\qquad \bar{q}_1^c=q_1^c-q_{eq}^-,\label{04}$$ where $u_0$ is some initial point on the curve $\Im_{if}$. The inverse transformations between the two pairs of coordinates $(q_0,q_1)\Leftrightarrow (u,v)$ are: $$\begin{aligned} \label{05} q_0(u,v)=q_0^c(u)-v\sin\phi(u),\quad q_1(u,v)=q_1^c(u)+v\cos\phi(u),\end{aligned}$$ where $v$ is the distance from the curve $\Im_{if}$. In Eq. (\[05\]) the angle $\phi(u)$ is determined by requiring orthogonality of the coordinate system $(u,v)$: $${dq_0^c}/{dq_1^c}\Bigl|_{(u,v=0)}=\cot\phi(u),\qquad \lim_{u\rightarrow+\infty}\cot\phi(u)=\cot\vartheta. \label{06}$$ Let us introduce the system of orthogonal local coordinates ${\bf{x}}\equiv{\bf{x}}(x^0,x^1,...x^5)$ along the curve $\Im_{if}$ using the transformations: $$\begin{aligned} x^0=u,\quad x^1=v,\quad\, x^2=f(u,v,\theta),\quad x^3=d_0\omega_1,\quad x^4=d_0\omega_2,\quad x^5=d_0\omega_3, \label{07}\end{aligned}$$ where the function $f(u,v,\theta)= \sqrt{(q_0)^2-2b q_0q_1\cos\theta+b^2(q_1)^2}$ is the mass-scaled distance between particles $A$ and $B$.
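To make these definitions concrete, the short sketch below evaluates the reaction coordinate curve of Eq. (\[03\]), the progress coordinate of Eq. (\[04\]) and the local frame angle of Eq. (\[06\]), and checks the two asymptotic limits numerically. The constants $a$, $b$ and the equilibrium bond lengths are illustrative values only, not fitted to any real potential surface.

```python
# illustrative constants (not from a real Li + FH surface)
a, b = 1.0, 0.5
q_eq_m, q_eq_p = 1.0, 1.2            # mass-scaled equilibrium lengths

def q0c(q1):
    # Eq. (03): the reaction coordinate curve in the (q0, q1) plane
    return a / (q1 - q_eq_m) + b * q1 + q_eq_p

def cot_phi(q1):
    # slope dq0c/dq1c, which fixes the local frame angle phi(u), Eq. (06)
    return -a / (q1 - q_eq_m) ** 2 + b

def u_of(q1, u0=0.0):
    # Eq. (04): progress along the curve
    return u0 - a / (q1 - q_eq_m) + b * (q1 - q_eq_m)

# (in) channel:  q1 -> q_eq^-   gives q0 -> infinity
# (out) channel: q1 -> infinity gives cot(phi) -> b = cot(vartheta)
```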
In some part of the $3D$ Cartesian configuration space these equations determine a one-to-one mapping between the two intrinsic coordinate systems $\{q_0,q_1,\theta\}$ and $\{x^0,x^1,x^2\}$. The coordinates $(\omega_1,\omega_2,\omega_3)$ are the three Euler angles, which orient the three-body system in the space-fixed frame [@Balint1], and $d_0$ is a constant with the dimension of length. Classical dynamics of three-body scattering system -------------------------------------------------- For the investigation of the ergodic properties of conservative dynamical systems, the geodesic flow method on Riemann surfaces was originally applied in [@Hoft]. Later this method was used and developed in investigations of the foundations of statistical physics [@Krylov]. The study of geodesic flow behavior on Lagrange surfaces provides an opportunity to observe important properties of classical dynamical systems [@katok]. Consider the $6D$ three-body classical problem on the Lagrange surface $S_{P}$: $$S_{P}=\bigl\{{\bf{x}};\,P^2(u,v,\theta)=2\mu\bigl[E-U(u,v,\theta)\bigr]>0\bigr\}, \label{08}$$ where $E$ is the total energy and $U(u,v,\theta)$ is the interaction potential of the three-body system. The metric on the surface $S_{P}$ is introduced in conformally Euclidean form: $$(ds)^2=\sum_{i,j}g_{ij}dx^idx^j,\qquad g_{ij}(u,v,\theta)= P^2(u,v,\theta)\delta_{ij},\quad i,j=0,1,...,5. \label{09}$$ Now we can write the geodesic trajectory problem for the reduced mass $\mu$: $$x^k_{;ss}+\Gamma^k_{ij}x^i_{;s}x^j_{;s}=0,\qquad i,j,k=0,1,...,5, \label{10}$$ where $s$ is a natural parameter (time or length of the geodesic trajectory) and $\Gamma^k_{ij}=\frac{1}{2}g^{kl}\Bigl(\frac{\partial g_{lj}}{\partial x^i }+\frac{\partial g_{il}}{\partial x^j}-\frac{\partial g_{ij}}{\partial x^l}\Bigr)$ is a Christoffel symbol.
Moreover $ x^i_{;s}=\frac{dx^i}{ds}$ and $x^i_{;ss}=\frac{d^2x^i}{ds^2}.$ The system of differential equations (\[10\]) is solved with the initial conditions: $$x^i_0=x^i(-\infty),\qquad \dot{x}^i_{0}=x^i_{;s}(-\infty), \label{11}$$ from which the geodesic trajectory $x^i(s)$ and the geodesic velocity $x^i_{;s}(s)$ are defined for any value of the natural parameter $s$. Using the relations in Eqs. (\[09\]) and (\[10\]) it is straightforward to obtain the following system of equations: $$\begin{aligned} x^0_{;ss}+\frac{1}{2}\frac{\partial{\chi}}{\partial{x^0}} \biggl\{\bigl(x_{;s}^0\bigr)^2-\bigl(x_{;s}^1\bigr)^2-\bigl(x_{;s}^2\bigr)^2 -\frac{I^2}{\mu^2 g_{00}^2}\biggr\}+\biggl\{\frac{\partial{\chi}} {\partial{x^1}}x^1_{;s}+\frac{\partial{\chi}} {\partial{x^2}}x^2_{;s}\biggr\}x_{;s}^0=0,\quad \nonumber\\ x^1_{;ss}+\frac{1}{2}\frac{\partial{\chi}}{\partial{x^1}} \biggl\{\bigl(x_{;s}^1\bigr)^2-\bigl(x_{;s}^0\bigr)^2-\bigl(x_{;s}^2\bigr)^2 -\frac{I^2}{\mu^2 g_{00}^2}\biggr\}+\biggl\{\frac{\partial{\chi}} {\partial{x^0}}x^0_{;s}+\frac{\partial{\chi}} {\partial{x^2}}x^2_{;s}\biggr\}x_{;s}^1=0,\quad \nonumber\\ x^2_{;ss}+\frac{1}{2}\frac{\partial{\chi}}{\partial{x^2}} \biggl\{\bigl(x_{;s}^2\bigr)^2-\bigl(x_{;s}^0\bigr)^2-\bigl(x_{;s}^1\bigr)^2 -\frac{I^2}{\mu^2 g_{00}^2}\biggr\}+\biggl\{\frac{\partial{\chi}} {\partial{x^0}}x^0_{;s}+\frac{\partial{\chi}} {\partial{x^1}}x^1_{;s}\biggr\}x_{;s}^2=0,\quad \label{12}\end{aligned}$$ where $\chi(x^0,x^1,x^2)=\ln g_{00}(x^0,x^1,x^2)$ and $I$ is the total angular momentum of the three-body system. It is convenient to carry out the subsequent calculations in the $(u,v,\theta)$ coordinate system. Because the explicit form of the equations in those coordinates is complicated, we do not write them down here. We have now formulated the reactive scattering problem in terms of classical dynamics on the Lagrange surface of the three-body system. Note that the system has one integral of motion (the total energy $E$) and three degrees of freedom.
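The exponential sensitivity that signals chaos in such a non-integrable flow is usually quantified by the largest Lyapunov exponent. The sketch below is a Benettin-type two-trajectory estimate applied, for brevity, to the Hénon map (a standard chaotic benchmark whose largest exponent is known numerically to be about $0.42$ for $a=1.4$, $b=0.3$) rather than to the geodesic system itself; the same renormalised-separation idea applies to any trajectory integrator.

```python
import numpy as np

def henon(x, y, a=1.4, b=0.3):
    # one iteration of the Henon map
    return 1.0 - a * x * x + y, b * x

def lyapunov_henon(n=100000, d0=1e-9):
    # Benettin-type estimate of the largest Lyapunov exponent: evolve two
    # nearby orbits, accumulate the log stretching of their separation,
    # and renormalise the separation back to d0 after every step
    x, y = 0.1, 0.1
    xp, yp = x + d0, y
    acc = 0.0
    for _ in range(n):
        x, y = henon(x, y)
        xp, yp = henon(xp, yp)
        dx, dy = xp - x, yp - y
        d = np.hypot(dx, dy)
        acc += np.log(d / d0)
        # rescale the separation along its current direction
        xp, yp = x + dx * d0 / d, y + dy * d0 / d
    return acc / n

lam = lyapunov_henon()
# a positive exponent signals exponential divergence of nearby trajectories
```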
According to Poincaré (see [@katok]), conservative dynamical systems can have regions of chaotic motion in their phase space provided that they are not integrable, i.e. they have fewer integrals of motion than degrees of freedom. This means that certain regions of phase space may exhibit instability and chaos may then be observed, i.e. the trajectory $x^j(s)$ becomes exponentially unstable with respect to changes in the initial condition $x^j(0)$: $${\partial{x^j(s)}}/{\partial{x^j(0)}}\sim\exp(\lambda_j{s}),\quad j=0,1,2 , \label{13}$$ where $\lambda_j$ describes the degree of instability and is called the Lyapunov exponent. Quantization of classical dynamical three-body scattering system ---------------------------------------------------------------- **Representation for regular case**: In some coordinate systems, such as the NCC system [@Gev4], the $3D$ quantum reactive scattering problem can be treated in the same way as an inelastic single-arrangement problem. The overall wavefunction of the three-body system can be represented as: $${\Phi}^{(+)J}_{K'\varrho}(u,v,\theta)=\sum_{\bar{n}\bar{j}} {G}^{(+)J}_{\bar{n}\bar{j}K'\,\varrho}(u) \Xi_{\bar{n}(\bar{j})}(v;u)\Theta_{\bar{j}K'}(\theta),\quad \varrho=(njK), \label{14}$$ where $(n,j,K)$ is a set of quantum numbers, $\Theta_{jK}(\theta)$ is a normalized associated Legendre polynomial, and $\Xi_{n(j)}(v;u)$ is the vibrational part of the wavefunction, which satisfies the equation: $$\begin{aligned} \biggl[-\frac{\,\hbar^2}{2\mu}\frac{d^{\,2}}{dv^2} + \overline{U}(u,v) + \frac{\hbar^2j(j+1)}{2\mu{v^2}} \biggr] \Xi_{n(j)}(v;u)=\epsilon_{n(j)}(u)\,\Xi_{n( j)}(v;u), \label{15}\end{aligned}$$ where $\overline{U}(u,v) =U(u,v,\theta)\bigl|_{\theta=0}-U_{eff}(u,v)$. The function $U(u,v,\theta)\bigl|_{\theta=0}$ describes the potential energy of the collinear collision.
$U_{eff}(u,v)$ is an effective potential: $$\begin{aligned} U_{eff}(u,v)=\frac{1}{4\eta^2} \biggl(\frac{\partial\eta}{\partial{v}}\biggr)^2-\frac{1}{2\eta^3} \frac{\partial^2\eta}{\partial{u^2}}+\frac{5}{4\eta^4} \biggl(\frac{\partial\eta}{\partial{u}}\biggr)^2,\quad \eta(u,v)=\bigl[1+K(u)v\bigr]\frac{ds}{du}.\quad \label{16}\end{aligned}$$ In eqn. (\[16\]), $K(u)$ is the curvature of $u$ and $s$ is the length along the curve $\Im_{ij}$: $$K(u)= 2a\frac{\bigl[F(q_1^c)\bigr]^{-3/2}}{\bigl(q_1^c-q_{eq}^-\bigr)^3},\quad \frac{ds}{du}=\frac{\sqrt{F(q_1^c)} }{b+a/{\bigl(q_1^c-q_{eq}^-\bigr)^2}},\quad F(q_1^c)=1+\Bigl[b- a/{\bigl(q_1^c-q_{eq}^-\bigr)^2}\Bigr]^2. \label{17}$$ Note, that the scattering function ${G}^{(+)J}_{\bar{n}\bar{j}K'\varrho}(u)$ satisfies the following equation: $$\begin{aligned} \Biggl\{\biggl[\delta_{n'\bar{n}} \frac{d^2}{d{u^2}}+2\biggl<\frac{\partial}{\partial{u}} -\frac{1}{\eta}\frac{\partial\eta}{\partial{u}}\biggr>_{n'\bar{n}}\frac{d}{d{u}} +\biggl<\frac{\partial^2}{\partial{u^2}} -\frac{2}{\eta} \frac{\partial\eta}{\partial{u}}\frac{\partial}{\partial{u}}\biggr>_{n'\bar{n}} +\frac{2\mu}{\hbar^2}\biggl<\eta^2\Bigl[E-E_{J\bar{K}}\quad \nonumber\\ -\epsilon_{\bar{n}(\bar{j}\,)}(u)+ U(u,v)+ \frac{\hbar^2\bar{j\,}(\bar{j\,}+1)} {2\mu{v^2}}\Bigr]\,\biggl>_{n'\bar{n}}\,\,\biggr]\delta_{j\,'\bar{j\,}}\delta_{K'\bar{K}} -\frac{2\mu}{\hbar^2}\Bigl<\eta^2\,U_{j\,'\bar{j\,}}^{\bar{K}}(u,v)\Bigr>_{n'\bar{n}}\, \delta_{K'\bar{K}}\quad \nonumber\\ +\Bigl<\,\frac{\eta^2}{q_0^2}\,\Bigr>_{n'\bar{n}}\,\delta_{j\,'\bar{j\,}} \left[\delta_{K'+1\,\bar{K}}\,{C_{J\bar{j\,}(\bar{K}-1)}^+ +\delta_{K'-1\,\bar{K}}\,C_{J\bar{j\,}(\bar{K}+1)}^-}\right] \Biggr\}{G}^{(+)J}_{\bar{n}\bar{j\,}\bar{K}\,\varrho}(u) =0,\qquad\quad \label{18}\end{aligned}$$ where $C^{\pm}_{JjK}=c^{\pm}_{JK}c^{\pm}_{jK}$, and moreover:$ c^{\pm}_{JK}=\bigl[J(J+1)-K(K\pm1)\bigr]^{1/2}, \quad E_{JK}(u,v)=(1/2){\hbar^2}{\mu}^{-1} q_0^{-2}(u,v) \bigl(J(J+1)-2K^2\bigr).$ The summations over the 
repeated indices $n'$ and $j\,'$ are implied, and we use the following notations for the matrix elements: $$\begin{aligned} \bigl<f(u)\bigr>_{nn'}=\int^{+\infty}_{-\infty} \Xi_{n(j)}(v;u)f(u,v)\Xi_{n'(j)}^\ast(v;u)dv, \nonumber\\ U^K_{jj\,'}(u,v)=\int^{\pi}_0{\Theta_{jK}(\theta)\,U(u,v,\theta) \, \Theta_{j' K}(\theta)}\sin\theta{d\theta}. \label{19}\end{aligned}$$ Note that the solution of Eq. (\[18\]) must satisfy the asymptotic condition: $$\label{20} \lim_{u\rightarrow-\infty}\sum_{n'j\,'}{G}^{(+)J}_{n'j\,'K'\,\varrho}(u) = \frac{1}{\sqrt{2 \pi}} \exp \left(-i p_{n j}^-\,u \right) \delta_{n n'} \delta_{j j\,'}\delta_{K K'}.$$ The exact $\emph{\textbf{S}}$-matrix elements can be constructed in terms of the stationary overall and asymptotic wavefunctions, considering that the variable $u$ plays the role of a *timing parameter* (which will later be called the *internal time*) [@Gev4]: $$\begin{aligned} \emph{S}_{n'j\,'K'\,\leftarrow\,njK}(E)= \sqrt{{p^{+}_{n'j\,'}}\bigl/{p^-_{nj}}} \,\lim_{u\rightarrow+\infty}\sum_{\bar{n}\bar{j}} G^{(+)J}_{\bar{n}\bar{j}K'\,\varrho}(u)\,W_{\bar{n}n'}(u) \Lambda_{\bar{j}K'\,\leftarrow\,jK'}, \label{21} \end{aligned}$$ where $ W_{n'\bar{n}}(u)=\bigl\langle\Xi_{\bar{n}(\bar{j\,})}(v;u) \Pi_{n'(j\,')}^{(out)}(v)\bigr\rangle_{v}, \quad \Lambda_{\bar{j}K'\,\leftarrow\,jK'}= \bigl<\Theta_{\bar{j\,}K'}(\theta)\Theta_{jK'} (\theta)\bigr>_{\theta}=\delta_{\bar{j\,}j}.$ The expression for the $\textbf{\emph{S}}$-matrix elements in Eq. (\[21\]) can be simplified if we take as a basis the functions $\Xi_{n(j)}(v;u)$, which in the limit $u\rightarrow+\infty$ coincide with the orthonormal basis of $(out)$ asymptotic wavefunctions $\Pi_{n(j)}^{(out)}(v)$.
In this case $\lim_{u\rightarrow+\infty}W_{n'\bar{n}}(u)=\delta_{n'\bar{n}}$ and the following expression holds for the $\textbf{\emph{S}}$-matrix elements: $$\begin{aligned} \emph{S}_{n'j\,'K'\,\leftarrow\,njK}(E) =\sqrt{{p^{+}_{n'j\,'\,}} \bigl/{p^-_{nj}}}\, {G}^{(+)J}_{\varrho\,'\varrho}(E;+\infty),\qquad \varrho=(njK). \label{22}\end{aligned}$$ **Representation for chaotic case**: It is well known that some chemical reactions, especially when highly excited, exhibit quantum chaotic behavior [@Gutz; @Honv], i.e., the statistical properties of the eigenenergies and eigenvectors are very similar to those of random matrix systems [@Hak; @Mehta]. For systems that are not too deep in the quantum regime, the quantum probability current is localized along the classical trajectory. In the chaotic case these classical trajectories diverge exponentially from each other, and from the quantum current tubes as well. This results in serious difficulties in describing chaotic reactive quantum processes in terms of standard quantum representations. To overcome this problem, a new quantization method based on the quasi-classical approach has been proposed [@Gev] for the three-body system. The idea is to carry out the quantization on separate classical trajectory tubes $\Re({\bf{x}}_3(s))$, where ${\bf{x}}_3(s)\equiv{\bf{x}}_3\bigl(u(s),v,\theta\bigr)$ and $u(s)$ is the solution of the geodesic equations (\[12\]), which varies along the curve $\Im_{if}$ and is called a *generating trajectory* (recall that it has the meaning of an *internal time*). Every solution $u(s)$ generates a topological trajectory tube, which can be described by the Schrödinger equation, in the present case Eq. (\[18\]). The summed contribution of all such tubes gives the whole quantum picture. The goal of the scattering problem is the calculation of the probability amplitudes for transitions between different asymptotic states.
In mathematical language this corresponds to the determination of the total mathematical expectation of the elementary quantum process in the three-body system. In the classical case the solution $u(s)$ depends on the initial scattering phase $\varphi=2\pi\{u/L\}$, and so therefore do the overall wavefunction and the $S$-matrix elements. Here $L$ is some period and $\{\cdot\}$ denotes the fractional part. This implies that the transition amplitude $\bigtriangleup_{\varrho\,'\,\varrho}= \bigl|\emph{S}_{\varrho\,'\,\leftarrow\,\varrho}(\varphi;E)\bigl|^2$ must be averaged over the $\varphi$ phase distribution: $$\label{23} W_{\varrho\,'\,\leftarrow\,\varrho}(E)=\frac{ \int\sigma(\varphi;E)\bigtriangleup_{\varrho\,'\,\,\varrho} (\varphi\,;E)\,d\varphi}{\int\sigma(\varphi;E)d\varphi},$$ where $\sigma(\varphi;E)$ is the distribution of classical trajectories, which will be determined in Section III. When the chaotic regions in the phase space of the classical system are smaller than the elementary quantum cell $\hbar^N$ ($N$ is the dimension of the configuration space), the transition amplitude $\bigtriangleup_{\varrho\,'\,\,\varrho} (\varphi\,;E)$ is independent of $\varphi$. Numerical experiment ==================== Numerical calculations are performed here for the collinear reaction $Li+(FH)_n\to(LiFH)^*\to(LiF)_m + H$. The LEPS-type potential energy surface of Carter and Murrell for this reaction was used [@carter]. The classical trajectory study was performed by solving Eq. (\[12\]) for total angular momentum quantum number $J=0$ and fixing the NCC angle (Jacobi angle) to $\theta=0$. ![*Geodesic trajectories and internal times depending on natural parameter* $s$.[]{data-label="f1"}](fig2){width="90.00000%"} In Fig. 2, three *generating trajectories* (*internal times*) $u(s)$ and their corresponding $v(u)$ graphs are shown for different initial phases $\varphi$ of the trajectories.
It is seen that the *generating trajectories* behave quite differently depending on the initial phase $\varphi$ for fixed energy $E$. Panel a) of Fig. 2 shows a direct exchange reaction, to which corresponds a monotonic, but not uniformly changing, internal time as a function of the natural parameter $s$ (the usual time). Panel b) shows a non-reactive trajectory, to which corresponds a non-monotonic internal time. In panel c) the geodesic trajectory again describes the exchange reaction, which here proceeds via a resonance $(ABC)^\ast$ state and for which the dependence of $u$ on the parameter $s$ is complicated. ![*Chaotic map of initial values of the total energy $E$ and initial phase $\varphi$ for passed through (white points) and reflected back (black points) geodesic trajectories.* []{data-label="f1"}](fig3){width="90.00000%"} Now the main task is the investigation of the behavior of the geodesic trajectory flow. Numerical calculations show that, for initial values corresponding to the chaotic regions mentioned above, the main Lyapunov exponent is positive and grows fast. This indicates an exponential divergence of the geodesic trajectories. In Fig. 3, the white points in the initial parameter space correspond to the transition from the reactant ($R_{(in)}^2$ subspace) to the product region ($R_{(out)}^2$ subspace), while the black points correspond to reflection back into the reactant region. The distribution of black and white points depends on the energy and the initial phase, for fixed initial vibrational coordinate $v_0$, and shows an irregular behavior. Recall that $v_0$ is the average equilibrium distance between the bound particles $B$ and $C$ in the ground state ($n=0$, where $n$ is the vibrational quantum number). Note that we obtain qualitatively the same picture for $v_n$ (the equilibrium distance in the excited quantum state $n$). One can see from the results of the calculations that the structure of the chaotic region is self-similar under scale transformations.
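The positivity of the main Lyapunov exponent can be illustrated on a far simpler system than the three-body geodesic flow. The sketch below estimates $\lambda$ for the chaotic logistic map (a stand-in, not the system studied here) by averaging $\ln|f'(x_n)|$ along an orbit, the discrete analogue of the divergence law of Eq. (\[13\]); for $r=4$ the known value is $\ln 2\approx 0.693$.

```python
import math

def lyapunov_logistic(r=4.0, n_steps=100_000, x0=0.1234):
    """Estimate the Lyapunov exponent of x -> r x (1 - x) by averaging
    ln|f'(x_n)| along the orbit, i.e. the growth rate of
    dx(n)/dx(0) ~ exp(lambda * n) (discrete analogue of Eq. (13))."""
    x = x0
    total = 0.0
    for _ in range(n_steps):
        total += math.log(abs(r * (1.0 - 2.0 * x)))  # ln|f'(x_n)|
        x = r * x * (1.0 - x)                        # iterate the map
    return total / n_steps
```

A positive estimate signals the same exponential sensitivity to initial conditions that the geodesic trajectories of the text exhibit in the chaotic regions of Fig. 3.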
![(a) - *dependence of the transition probabilities* $\Delta_{00},\, \Delta_{01}$ *and* $ \Delta_{02}$ *on the energy $E$ for a fixed phase* $\varphi$; (b) - *the same dependencies, calculated for a slightly different fixed phase,* $\varphi_a-\varphi_b=10^{-5} $.[]{data-label="f1"}](fig4){width="90.00000%"} Let us consider the influence of the chaotic behavior of the classical problem in Fig. 4, which shows the dependence on energy $E$ of the over-barrier transition probabilities in the $Li + FH$ system for fixed phases $\varphi_i$ and equilibrium distance $v_n$. It can be seen that a small change in the initial phase significantly changes the dependencies. In this connection the difficult problem arises of finding a measure for the space (map) of transmitted and reflected geodesic trajectories. To calculate the probability for a specific quantum transition at an energy $E$, one has to average the corresponding quantum probability with respect to $\bigl(E,\varphi\bigr)$ within the range $[\Delta{E},\Delta{\varphi}=2\pi]$, where $\Delta{E}$ is a small interval of energies near $E$ and $\Delta{\varphi}=2\pi$ is the period of the values of the initial vibrational phase. The averaging procedure consists of dividing the rectangle $[\Delta{E}\times2\pi]$ into $N\gg 1$ rectangles, each of them containing some phase point $\varphi_{i}$. Each rectangle is then subdivided by a grid with $M_i = [l_i \times k_i]$ nodes, $l_i$ and $k_i$ being the numbers of subdivision points of the $\Delta{E}$ and $2\pi/N$ intervals, respectively.
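The rectangle-averaging procedure described above amounts to a weighted mean of the quantum transition probability over the phase $\varphi$. The sketch below carries it out with hypothetical stand-ins $\sigma(\varphi)=1+\cos\varphi$ and $\Delta(\varphi)=\sin^2\varphi$ (neither comes from the actual trajectory calculations), for which the weighted average is exactly $1/2$.

```python
import math

def averaged_probability(delta, sigma, n=20_000):
    """Weighted phase average
    W = ( integral of sigma(phi) * delta(phi) ) / ( integral of sigma(phi) )
    over phi in [0, 2*pi), approximated by midpoint rectangle sums."""
    num = den = 0.0
    for i in range(n):
        phi = 2.0 * math.pi * (i + 0.5) / n
        w = sigma(phi)          # classical trajectory weight for this phase cell
        num += w * delta(phi)   # weighted quantum transition probability
        den += w
    return num / den

# hypothetical stand-ins for the quantum probability and the trajectory weight
W = averaged_probability(lambda p: math.sin(p) ** 2,
                         lambda p: 1.0 + math.cos(p))
```

The midpoint rule converges very quickly here because both integrands are smooth and $2\pi$-periodic.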
![*The transition probabilities after averaging with respect to the phase distribution $\sigma(\varphi)$.* []{data-label="f1"}](fig5){width="60.00000%"} The probability for a geodesic trajectory (*generating trajectory*) to pass through the $i$-th rectangle is calculated by the formula: $$\label{24} \sigma(\varphi_i,E)= \lim_{l_i,k_i\to\infty}N_i/M_i,$$ where $N_i$ counts how many times the *generating trajectory* passes into the subspace $R^2_{(in)}$. The exchange reaction probability is then calculated as the limit of a sum: $$\label{25} W_{\,n\to m}= \lim_{N\to\infty}\biggl\{\frac{\sum_{i=1}^N\sigma(\varphi_i)\big|S_{\,n\to m}(\varphi_i,E)\bigr|^2}{\sum_{i=1}^N\sigma(\varphi_i)}\biggr\} =\frac{\int_0^{2\pi} \sigma(\varphi)\Delta_{n\, m}(\varphi,E)d\varphi}{\int_0^{2\pi} \sigma(\varphi)d\varphi}.$$ In particular, using (\[25\]) for the reacting system $Li+(FH)\to(LiFH)^\ast\to(LiF)+H$ we can calculate the transition probabilities shown in Fig. 5. Conclusion ========== The $3D$ quantum theory, Eqs. (\[12\]) and (\[18\]), was constructed, which describes reactive scattering in the classically allowed region $P^2(u,v,\theta)>0$ of the three-body system, taking into account the influence of classical non-integrability on the quantum dynamics. By means of numerical modelling of the collinear reacting system $Li+(FH)_n\to(LiFH)^\ast\to(LiF)_m+H$ it was shown that when the chaotic region in the phase space of the classical system is larger than the quantum cell volume, chaos is generated in the corresponding quantum system as well. Calculations made for the reacting collinear systems $O+N_2,\, N+N_2,\, N + O_2$ show that the classical chaos is not sufficiently developed to generate quantum chaos. However, we do not exclude that these systems can be quantum chaotic in the $3D$ scattering case. Moreover, we expect that all $3D$ three-body scattering systems are chaotic to a greater or lesser degree.
Finally, we note that the developed approach makes it possible, under specified conditions, to pass to the $R$, $P$ and $Q$ regions of motion. When classical chaos is absent or insufficiently developed, this approach coincides with the standard quantum representation. Acknowledgments =============== This work was partially supported by INTAS Grant No. 03-51-4000, the Armenian Science Research Council and the Swedish Science Research Council. [99]{} A. Einstein, Zum Quantensatz von Sommerfeld und Epstein, Verh. Dtsch. Phys. Ges., **19**, 82, (1917). M. C. Gutzwiller, Chaos in Classical and Quantum Mechanics, Springer, Berlin, (1990). T. A. Brody et al., Rev. Mod. Phys. [**[53]{}**]{}, (1981) 385. H. Friedrich, D. Wintgen, Phys. Rep. [**[183]{}**]{}, (1989) 37. L. Nemes, Acta Phys. Hung. [**[73]{}**]{} (1993) 95. V. Milner et al., Phys. Rev. Lett. [**[86]{}**]{} (2001) 1514. N. Friedman, A. Kaplan, D. Carasso, N. Davidson, Phys. Rev. Lett. [**[86]{}**]{} (2001) 1518. C. Dembrowski et al., Phys. Rev. Lett. [**[86]{}**]{} (2001) 3284. S. W. McDonald, A. N. Kaufman, Phys. Rev. Lett. [**[42]{}**]{} (1979) 1189. E. J. Heller, Phys. Rev. Lett. [**[53]{}**]{} (1984) 1515. I. Hamilton and P. J. Brumer, Chem. Phys. [**[82]{}**]{}, (1985) 1937. D. M. Wardlaw and R.A. Marcus, Adv. Chem. Phys. 70-1, 231 (1988). Z. Kovacs, L. Wiesenfeld, Phys. Rev. E. [**[51]{}**]{}, (1995) 5476. E. Ott and T. Tel, Chaos [**[3]{}**]{}, (1993) 417. U. Smilansky, Chaos and Quantum Physics, Ed. by M. J. Giannoni, A. Varos, and J. Zinn-Justin (North-Holland, Amsterdam, 1991), p. 371. H. J. Stöckmann and J. Stein, Phys. Rev. Lett. [**[64]{}**]{}, (1990) 1255. E. Dorn and U. Smilansky, ibid [**[68]{}**]{}, (1992) 1255. G. Nyman and Yu Hua-Gen, Rep. Prog. Phys. [**[63]{}**]{}, (2000) 1001. A. V. Bogdanov et al., AMS/IP Studies in Adv. Math., [**[13]{}**]{}, (1999) 69. L. M. Delves, Nuclear Phys., [**[9]{}**]{}, (1959) 391. L. M. Smith, J. Chem. Phys., [**[31]{}**]{}, (1959) 1352. R. A. Marcus, J. Chem.
Phys., [**[45]{}**]{}, (1966) 4493. J. Light, Adv. Chem. Phys., [**[19]{}**]{}, (1971) 1. A. S. Gevorkyan, G. Balint-Kurti and G. Nyman, arXiv:physics/0607093. G. G. Balint-Kurti, L. Füsti-Molnár and A. Brown, Phys. Chem., [**[3]{}**]{}, (2001) 702. E. Hopf, Proc. of the National Academy of Sciences of the USA, [**[18]{}**]{}, (1932) 93. N. S. Krylov, [*Studies on the Foundation of Statistical Mechanics*]{}, Publ. AN SSSR, Leningrad, 1950. A. Katok and B. Hasselblatt, [*Introduction to the Modern Theory of Dynamical Systems*]{}, Cambridge University Press, 1996. P. Honvault, J.-M. Launay, Chem. Phys. Lett. [**[329]{}**]{}, (2000) 233-238. F. Haake, Quantum Signatures of Chaos, (Springer-Verlag, Heidelberg, 2001). M. L. Mehta, Random Matrices, Academic Press, New York, 1991. J. S. Carter, J. N. Murrell, Physics, **41**, 567, 1980.
--- abstract: 'We present a search for the decays $\Btaunu$ and $\BKnunu$ in a $253~\textrm{fb}^{-1}$ data sample collected at the $\Upsilon(4S)$ resonance with the Belle detector at the KEKB asymmetric-energy $B$ factory. Combinatorial and continuum backgrounds are suppressed by selecting a sample of events with one fully reconstructed $B$. The decay products of the $B$ on the other side of the event are analyzed to search for $\Btaunu$ and $\BKnunu$ decays. We find no significant evidence for a signal and set 90% confidence level upper limits of ${\cal B}(B^{-}\rightarrow\tau^{-}\overline{\nu}) < 1.8\times 10^{-4}$ and ${\cal B}(B^{-}\rightarrow K^{-}\nu\overline{\nu}) < 3.6\times 10^{-5}$. All results are preliminary.' title: | \ Search for $B \to \tau \nu$ and $B \to K \nu \bar{\nu}$ Decays with a Fully Reconstructed $B$ at Belle --- The purely leptonic decay $B^{-}\rightarrow\ell^{-}\overline{\nu}$ (charge conjugate states are implied throughout the paper) is of particular interest since it provides a direct measurement of the product of the Cabibbo-Kobayashi-Maskawa (CKM) matrix element $V_{ub}$ and the $B$ meson decay constant $f_{B}$. In the Standard Model (SM), the branching fraction of the decay $B^{-}\rightarrow\ell^{-}\overline{\nu}$ is given by $$\label{eq:BR_B_taunu} {\cal B}(B^{-}\rightarrow\ell^{-}\overline{\nu}) = \frac{G_{F}^{2}m_{B}m_{\ell}^{2}}{8\pi}\left(1-\frac{m_{\ell}^{2}}{m_{B}^{2}}\right)^{2}f_{B}^{2}|V_{ub}|^{2}\tau_{B}$$ where $G_{F}$ is the Fermi coupling constant, $m_{\ell}$ and $m_{B}$ are the charged lepton and $B$ meson masses, and $\tau_{B}$ is the $B^{-}$ lifetime. The dependence on the lepton mass arises from helicity conservation, which suppresses the muon and electron channels. The CKMfitter group predicts the $B^{-}\rightarrow \tau^{-}\bar{\nu}$ branching fraction to be $(9.3 ^{\,+ 3.4}_{\,- 2.3}) \times10^{-5}$ [@Charles:2004jd]. No evidence for an enhancement relative to the SM prediction was observed in previous experimental studies.
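Eq. (\[eq:BR\_B\_taunu\]) can be evaluated directly. The sketch below does so with approximate input values (the constants are assumptions of this illustration, roughly at the PDG level of that era, not numbers taken from this paper); it reproduces the $\sim 10^{-4}$ scale of the SM prediction and the helicity suppression of the muon channel.

```python
import math

# Illustrative inputs (assumptions, approximate values):
G_F   = 1.16637e-5   # Fermi constant [GeV^-2]
m_B   = 5.279        # B- mass [GeV]
f_B   = 0.19         # B decay constant [GeV], lattice-style estimate
V_ub  = 3.9e-3       # |V_ub|
tau_B = 1.64e-12     # B- lifetime [s]
hbar  = 6.582e-25    # reduced Planck constant [GeV s]

def br_b_to_lnu(m_l):
    """Branching fraction of B- -> l- nubar from Eq. (1):
    width * lifetime, with the helicity factor m_l^2 (1 - m_l^2/m_B^2)^2."""
    width = (G_F**2 * m_B * m_l**2 / (8.0 * math.pi)
             * (1.0 - m_l**2 / m_B**2)**2 * f_B**2 * V_ub**2)   # [GeV]
    return width * tau_B / hbar

br_tau = br_b_to_lnu(1.777)    # tau channel, ~1e-4
br_mu  = br_b_to_lnu(0.1057)   # muon channel, strongly helicity suppressed
```

With these inputs the tau channel comes out close to the quoted SM range, while the muon channel is suppressed by roughly $(m_\mu/m_\tau)^2$.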
The most stringent upper limit has been achieved by the BABAR Collaboration: ${\cal B}(\Btaunu) < 4.2\times 10^{-4}$ at $90\%$ confidence level (C.L.) [@Aubert:2004kz]. Flavor-changing neutral-current transitions such as $b\rightarrow s\nu\bar{\nu}$ occur in the SM via one-loop box or electroweak penguin diagrams with heavy particles in the loops. Because heavy non-SM particles could contribute additional loop diagrams, various new physics scenarios can lead to significant enhancements of the observed rates [@Grossman:1995gt; @Bird:2004ts]. The SM $B^{-}\rightarrow K^{-}\nu\bar{\nu}$ branching fraction has been estimated to be $(3.8_{-0.6}^{+1.2})\times 10^{-6}$ [@Faessler:2002ut; @Buchalla:2000sk], while the most stringent published experimental limit is ${\cal B}(\BKnunu) < 5.2\times 10^{-5}$ at $90\%$ C.L. [@Aubert:2004ws]. We use a $253~\textrm{fb}^{-1}$ data sample containing $275\times 10^{6}$ $B$ meson pairs collected with the Belle detector at the KEKB asymmetric-energy $e^{+}e^{-}$ ($3.5$ on $8$ GeV) collider [@Kurokawa:2003] operating at the $\Upsilon(4S)$ resonance ($\sqrt{s} = 10.58$ GeV). The Belle detector is a large-solid-angle magnetic spectrometer consisting of a three-layer silicon vertex detector, a $50$-layer central drift chamber (CDC), a system of aerogel threshold $\check{\textrm{C}}$erenkov counters (ACC), time-of-flight scintillation counters (TOF), and an electromagnetic calorimeter comprised of CsI(Tl) crystals (ECL), all located inside a superconducting solenoid coil that provides a $1.5$ T magnetic field. An iron flux-return located outside the coil is instrumented to identify $K_{L}^{0}$ mesons and muons. The detector is described in detail elsewhere [@belle_detector:2003]. The strategy adopted for this analysis is to exclusively reconstruct the decay of one of the $B$ mesons in the event and compare the properties of the remaining particle(s) in the event (referred to as the signal side) to those expected for signal and background.
All tracks and photon candidates in the event not used to reconstruct the $B$ are studied to search for $B^{-}\rightarrow\tau^{-}\overline{\nu}$ and $B^{-}\rightarrow K^{-}\nu\overline{\nu}$. The advantage of having a sample of fully reconstructed $B$ mesons is the strong suppression of combinatorial and continuum background events. The disadvantage is the low efficiency of full $B$ meson reconstruction (approximately $0.3\%$). Fully reconstructed $B$ mesons, $B_{\rm rec}$, are observed in the following decay modes: $B^{+}\rightarrow\overline{D}^{(*)0}\pi^{+}$, $\overline{D}^{(*)0}\rho^{+}$, $\overline{D}^{(*)0}a_{1}^{+}$ and $\overline{D}^{(*)0}D_{s}^{(*)+}$, where $\rho^{+}$ is reconstructed in the $\pi^{+}\pi^{0}$ mode ($|M_{\pi^{+}\pi^{0}}-M_{\rho^{+}}| < 0.3~\mbox{GeV}/c^{2}$) and $a_{1}^{+}$ is reconstructed as $a_{1}^{+}\rightarrow \rho^{0}\pi^{+}$ ($|M_{\rho^{0}\pi^{+}}-M_{a_{1}^{+}}| < 0.25~\mbox{GeV}/c^{2}$). $\overline{D}^{0}$ candidates are reconstructed as $\overline{D}^{0}\rightarrow K^{+}\pi^{-}$, $K^{+}\pi^{-}\pi^{0}$, $K^{+}\pi^{-}\pi^{+}\pi^{-}$, $K_{S}^{0}\pi^{0}$, $K_{S}^{0}\pi^{-}\pi^{+}$, $K_{S}^{0}\pi^{-}\pi^{+}\pi^{0}$ and $K^{-}K^{+}$. $\overline{D}^{*0}$ mesons are reconstructed by combining the $\overline{D}^{0}$ candidates with a pion or a photon. The invariant mass of $\overline{D}^{*0}$ candidates is required to be within $\pm 3~\mbox{MeV}/c^{2}$ (for $\overline{D}^{*0}\rightarrow\overline{D}^{0}\pi^{0}$) or $\pm 10~\mbox{MeV}/c^{2}$ (for $\overline{D}^{*0}\rightarrow\overline{D}^{0}\gamma$) of the nominal $\overline{D}^{*0}$ mass. $D_{s}^{+}$ candidates are reconstructed in the decay modes $D_{s}^{+}\rightarrow K_{S}^{0}K^{+}$ and $K^{+}K^{-}\pi^{+}$. The invariant mass of the $D_{s}^{+}$ candidates is required to be within a $\pm 15~\mbox{MeV}/c^{2}$ interval around the nominal $D_{s}^{+}$ mass.
$D_{s}^{*+}$ candidates are defined as $D_{s}^{+}\gamma$ combinations where the $D_{s}^{+}\gamma$ invariant mass lies within $\pm 15~\mbox{MeV}/c^{2}$ of the nominal $D_{s}^{*+}$ mass. Charged $B$ meson pairs are produced at the $\Upsilon(4S)$ resonance $(\sqrt{s}\sim10.58~\textrm{GeV})$, where the $B^{+}$ and $B^{-}$ are produced with specific momentum and energy. Selection of the fully reconstructed $B$ candidates is made according to the values of two variables: the beam-constrained mass $M_{\rm bc}\equiv\sqrt{E_{\rm beam}^{2} - p_{B}^{2}}$ and the energy difference $\Delta E\equiv E_{B} - E_{\rm beam}$. Here, $E_{B}$ and $p_{B}$ are the reconstructed energy and momentum of the fully reconstructed $B$ candidate in the center-of-mass (CM) system, and $E_{\rm beam}$ is the beam energy in the CM frame. The signal region for tagging $B$ candidates is defined as $M_{\rm bc}>5.27~\mbox{GeV}/c^{2}$ and $-80~\mbox{MeV}<\Delta E< 60~\mbox{MeV}$. The $M_{\rm bc}$ distribution of reconstructed $B$ candidates is fit with the sum of an Argus function [@Albrecht:1986nr] and a Crystal Ball function [@Bloom:1983pc]. The Argus function models the continuum and combinatorial background, whereas the Crystal Ball function models the signal component, which peaks at the $B$ mass. The purity is defined as $S/(S+B)$, where $S~(B)$ is the number of signal (background) events for $M_{\rm bc} > 5.27~\textrm{GeV}/c^{2}$, as determined from the fit. Fig. \[mbc\_fit\_ver3\_dE008\] shows the $M_{\rm bc}$ distribution for all $B_{\rm rec}$ candidates in our data set in the signal $\Delta E$ region. In this sample there are cross-feed effects between charged and neutral $B$ mesons. From the Monte Carlo simulation, we estimate the fraction of $B^{0}~(B^{+})$ events in the reconstruction of $B^{+}~(B^{0})$ to be $0.095~(0.090)$. We then obtain $N_{B^{+}B^{-}} = (4.00\pm 0.24)\times 10^{5}$ and a purity of $0.55$, where the uncertainty on $N_{B^{+}B^{-}}$ is dominated by systematic errors.
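The two tagging variables can be computed directly from a candidate's CM energy and momentum. A minimal sketch, with a hypothetical candidate's kinematics (the numbers below are illustrative, not from the data sample):

```python
import math

def mbc_and_de(e_beam, p_b, e_b):
    """Beam-constrained mass and energy difference in the CM frame,
    as defined in the text: M_bc = sqrt(E_beam^2 - p_B^2), dE = E_B - E_beam."""
    m_bc = math.sqrt(e_beam**2 - p_b**2)
    delta_e = e_b - e_beam
    return m_bc, delta_e

# hypothetical candidate: E_beam = 5.29 GeV, |p_B| = 0.32 GeV/c, E_B = 5.28 GeV
m_bc, de = mbc_and_de(5.29, 0.32, 5.28)
in_signal_region = (m_bc > 5.27) and (-0.080 < de < 0.060)
```

Substituting the beam energy for the measured candidate energy is what makes $M_{\rm bc}$ peak so much more sharply at the $B$ mass than the raw invariant mass would.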
In events where a $B_{\rm rec}$ is reconstructed, we search for decays into a $\tau$ plus a neutrino or a $K$ plus two neutrinos. Candidate events are required to have one or three signal-side charged track(s) with the total charge opposite to that of the reconstructed $B$. The $\tau$ lepton is identified in the following decay channels: $\tau^{-}\rightarrow\mu^{-}\nu\bar{\nu}$, $\tau^{-}\rightarrow e^{-}\nu\bar{\nu}$, $\tau^{-}\rightarrow\pi^{-}\nu$, $\tau^{-}\rightarrow\pi^{-}\pi^{0}\nu$, and $\tau^{-}\rightarrow\pi^{-}\pi^{+}\pi^{-}\nu$. We require the charged particles to be identified as leptons, pions or kaons. The event is required to have zero net charge and $E_{\rm ECL}$ less than $0.3~\textrm{GeV}$, where $E_{\rm ECL}$ is the remaining energy calculated by adding the energies of the photons that are not associated with either the $B_{\rm rec}$ or the $\pi^{0}$ candidate from the $\tau^{-}\rightarrow \pi^{-}\pi^{0}\nu$ decay. For all modes except the $\tau^{-}\rightarrow\pi^{-}\pi^{0}\nu$ mode, we reject events with $\pi^{0}$ mesons in the recoil against $B_{\rm rec}$. We place the following requirements on the momentum of the track(s) in the CM: $p_{\pi^{-}} > 0.8~\textrm{GeV}/c$ for $\tau^{-}\rightarrow\pi^{-}\nu$, $p_{\pi^{-}\pi^{0}} > 1.2~\textrm{GeV}/c$ for $\tau^{-}\rightarrow\pi^{-}\pi^{0}\nu$, $p_{\pi^{-}\pi^{+}\pi^{-}} > 1.4~\textrm{GeV}/c$ for $\tau^{-}\rightarrow\pi^{-}\pi^{+}\pi^{-}\nu$, and $p_{K^{-}} > 1.2~\textrm{GeV}/c$ for $B^{-}\rightarrow K^{-}\nu\bar{\nu}$. The total missing momentum of the event is required to be greater than $0.2~\textrm{GeV}/c$ for all modes except the leptonic decay modes, and the direction of the missing momentum must satisfy $-0.86 < \cos\theta_{\rm miss}^{*} < 0.95$ in the CM frame. Further requirements are made on the invariant mass of the two or three pions: $|M_{\pi\pi}-M_{\rho}| < 0.15~\mbox{GeV}/c^{2}$ and $|M_{\pi\pi\pi}-M_{a_{1}^{+}}| < 0.2~\mbox{GeV}/c^{2}$.
The selection efficiencies for each decay mode considered in this analysis are determined from a large sample of GEANT-based Monte Carlo simulations [@GEANT] of $B^{-}\rightarrow\tau^{-}\bar{\nu}$ and $B^{-}\rightarrow K^{-}\nu\bar{\nu}$ events generated with the EvtGen decay package [@EvtGen]. We compute the efficiency as the ratio of the number of events surviving each of our selections to the number of fully reconstructed $B^{\pm}$. The most powerful variable for separating signal and background is the remaining energy $E_{\rm ECL}$. We use different energy thresholds for the neutral clusters contributing to $E_{\rm ECL}$ in the barrel and end-cap parts, since the beam background is more severe in the end-caps. For signal events the neutral clusters contributing to $E_{\rm ECL}$ can only come from beam background; therefore the signal events peak at low $E_{\rm ECL}$, while the background events, which contain additional sources of neutral clusters, are distributed toward higher $E_{\rm ECL}$ values. The $E_{\rm ECL} < 0.3~\mbox{GeV}$ region is defined as the signal region and the $0.45 < E_{\rm ECL} < 1.5~\mbox{GeV}$ region is defined as the sideband region. The $E_{\rm ECL}$ shape in the MC distribution is used to extrapolate the sideband data to the signal region. The numbers of MC events in the signal region and sideband are counted and their ratio ($r_{MC}$) is obtained. Using the number of data events in the sideband and the ratio $r_{MC}$, the number of expected background events in the signal region is estimated. The background estimates for the different subdecay modes from the $E_{\rm ECL}$ sideband extrapolation are shown in Table \[tab:bg\_sg\_extrapolate\]. The number of events in the sideband region agrees well between MC and data. To obtain the background expected from the MC simulation, $B\overline{B}$ and $e^+e^-\rightarrow u\bar{u},~d\bar{d},~s\bar{s},~c\bar{c}$ events are scaled to the equivalent luminosity in data.
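The sideband extrapolation is a simple scaling: the MC ratio $r_{MC}$ of signal-region to sideband counts multiplies the observed sideband data. A sketch using the $\tau^{-}\rightarrow\mu^{-}\nu\bar{\nu}$ numbers from Table \[tab:bg\_sg\_extrapolate\]:

```python
def expected_background(n_sideband_data, n_mc_signal_region, n_mc_sideband):
    """Scale the sideband data into the signal region with the MC ratio
    r_MC = N_MC(signal region) / N_MC(sideband), as described in the text."""
    r_mc = n_mc_signal_region / n_mc_sideband
    return r_mc, r_mc * n_sideband_data

# tau -> mu nu nubar row: 70 sideband data events, MC 10.7 (signal) / 63.9 (sideband)
r_mc, n_bg = expected_background(70, 10.7, 63.9)
```

This reproduces the tabulated $r_{MC}\simeq 0.17$ and expected background of about $11.8$ events (central values only; the propagated uncertainties of the table are not sketched here).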
| Mode | $r_{MC}$ | Sideband Data | Sideband MC | MC signal region | Expected BG |
|-----------------------------------------------|--------|------|---------------|---------------|----------------|
| $\tau^{-}\rightarrow\mu^{-}\nu\bar{\nu}$ | $0.17$ | $70$ | $63.9\pm 7.2$ | $10.7\pm 2.8$ | $11.8 \pm 3.6$ |
| $\tau^{-}\rightarrow e^{-}\nu\bar{\nu}$ | $0.14$ | $67$ | $62.3\pm 8.1$ | $8.9\pm 2.6$ | $9.5 \pm 3.2$ |
| $\tau^{-}\rightarrow\pi^{-}\nu$ | $0.07$ | $47$ | $48.4\pm 7.7$ | $3.6\pm 1.6$ | $3.5 \pm 1.7$ |
| $\tau^{-}\rightarrow\pi^{-}\pi^{0}\nu$ | $0.23$ | $13$ | $19.2\pm 4.0$ | $4.4\pm 2.1$ | $3.0 \pm 1.8$ |
| $\tau^{-}\rightarrow\pi^{-}\pi^{+}\pi^{-}\nu$ | $0.23$ | $16$ | $23.0\pm 7.2$ | $5.2\pm 2.5$ | $3.6 \pm 2.2$ |
| $B^{-}\rightarrow K^{-}\nu\bar{\nu}$ | $0.15$ | $17$ | $18.9\pm 4.7$ | $2.9\pm 1.4$ | $2.6 \pm 1.6$ |

: Expected background in the signal region for the different selection modes.[]{data-label="tab:bg_sg_extrapolate"}

Double-tag events, in which one of the $B$ mesons is fully reconstructed and the other $B$ meson is reconstructed in the decay modes $B^{-} \rightarrow D^{0}\ell^{-}\bar{\nu}$, where $\ell$ is a muon or an electron and the $D^{0}$ is reconstructed in the two modes $D^{0} \rightarrow K^{+}\pi^{-}$ and $K^{+}\pi^{-}\pi^{+}\pi^{-}$, are used as a control sample to validate the $E_{\rm ECL}$ simulation. The sources affecting $E_{\rm ECL}$ in double-tag events are similar to those affecting the $E_{\rm ECL}$ distribution in the signal MC simulation. The agreement of the $E_{\rm ECL}$ distribution between data and MC simulation for the double-tag sample, shown in Fig. \[ecl\_semilepton\], is used as a validation of the $E_{\rm ECL}$ simulation in the signal MC. The main sources of uncertainty we consider in the determination of ${\cal B}(B^{-}\rightarrow\tau^{-}\overline{\nu})$ and ${\cal B}(B^{-}\rightarrow K^{-}\nu\overline{\nu})$ are the number of $B^{+}B^{-}$ events with one reconstructed $B$, the determination of the signal efficiency and the determination of the number of expected background events.
The number of $B^{+}B^{-}$ events is determined as the area of the Crystal Ball function fitted to the $M_{\rm bc}$ distribution. Using a Gaussian function as an alternative fitting function, we obtain a relative change in the number of events, and this difference is assigned as the systematic uncertainty on the number of $B^{+}B^{-}$ events. The main contributions to the systematic uncertainties in the determination of the efficiencies come from the uncertainty on the tracking efficiency, Monte Carlo statistics and particle identification. The uncertainty in the expected background comes from Monte Carlo statistics and the statistics of the sideband data events. Estimates of the systematic uncertainties (in %) are summarized in Table \[tab:total\_systematics\].

| Source | $\mu^{-}\nu\bar{\nu}$ | $e^{-}\nu\bar{\nu}$ | $\pi^{-}\nu$ | $\pi^{-}\pi^{0}\nu$ | $\pi^{+}\pi^{-}\pi^{+}\nu$ | $K^{-}\nu\bar{\nu}$ |
|------------------------------|--------|--------|--------|--------|--------|--------|
| Number of $B^{+}B^{-}$ | | | | | | |
| tracking | $2.0$ | $2.0$ | $2.0$ | $2.0$ | $6.0$ | $2.0$ |
| $\tau$ decay BR | $0.3$ | $0.3$ | $1.0$ | $0.6$ | $1.1$ | |
| MC statistics | $0.6$ | $0.6$ | $0.7$ | $1.0$ | $2.0$ | $4.1$ |
| Lepton identification | $2.2$ | $2.2$ | | | | |
| $\pi^{0}$ identification | | | | $2.6$ | | |
| $\pi^{\pm}$ identification | | | $1.7$ | $1.7$ | $5.1$ | |
| Total Efficiency Uncertainty | $2.9$ | $2.9$ | $3.0$ | $4.0$ | $8.8$ | $5.0$ |
| MC statistics | $28.2$ | $31.8$ | $47.5$ | $52.0$ | $57.1$ | $55.8$ |
| Data in sideband | $12.0$ | $12.2$ | $14.6$ | $27.2$ | $25.0$ | $24.3$ |
| Total Background Uncertainty | $30.6$ | $34.1$ | $49.7$ | $58.7$ | $62.3$ | $60.9$ |

: Systematic uncertainties for the number of $B^{+}B^{-}$ events with one reconstructed $B$, the determination of the efficiency and the determination of the number of expected background events for the different decay channels. []{data-label="tab:total_systematics"}

After finalizing the signal selection criteria, the signal region ($E_{\rm ECL} < 0.3~\mbox{GeV}$) in the on-resonance data is examined.
Table \[tab:observed\_events\] lists the number of observed events in data in the signal region, together with the expected number of signal and background events in the signal region. Fig. \[ecl\_opened\] shows the $E_{\rm ECL}$ distributions in the data after all selection requirements except the one on $E_{\rm ECL}$ have been applied, compared with the expected background. Each distribution refers to a different mode.

  Mode                                            Signal Efficiency$(\%)$   Signal Expected   Background Expected   Observed Events
  ----------------------------------------------- ------------------------- ----------------- --------------------- -----------------
  $\tau^{-}\rightarrow\mu^{-}\nu\bar{\nu}$        $9.8\pm 0.1$              $3.9\pm 0.1$      $11.8\pm 3.6$         $8$
  $\tau^{-}\rightarrow e^{-}\nu\bar{\nu}$         $9.4\pm 0.1$              $3.8\pm 0.1$      $9.5\pm 3.2$          $10$
  $\tau^{-}\rightarrow\pi^{-}\nu$                 $8.4\pm 0.1$              $3.4\pm 0.1$      $3.5\pm 1.7$          $11$
  $\tau^{-}\rightarrow\pi^{-}\pi^{0}\nu$          $3.5\pm 0.1$              $1.4\pm 0.1$      $3.0\pm 1.8$          $4$
  $\tau^{-}\rightarrow\pi^{-}\pi^{+}\pi^{-}\nu$   $2.6\pm 0.1$              $1.0\pm 0.1$      $3.6\pm 2.2$          $6$
  All $\tau^{-}$ modes combined                   $33.7\pm 1.4$             $13.5\pm 0.2$     $31.4\pm 5.9$         $39$
  $B^{-}\rightarrow K^{-}\nu\bar{\nu}$            $42.8\pm 1.8$             $0.70\pm 0.03$    $2.6\pm 1.6$          $4$

  : Number of observed data events in the signal region, together with the number of expected signal and background events. The errors on the background expectation include both statistical and systematic contributions. The numbers of expected signal events are obtained by assuming ${\cal B}(\Btaunu)=10^{-4}$ and ${\cal B}(\BKnunu)=4\times10^{-6}$.[]{data-label="tab:observed_events"}

Since we do not observe a significant excess over the expected background, we set upper limits. To extract the upper limits on ${\cal B}(B^{-}\rightarrow\tau^{-}\overline{\nu})$ and ${\cal B}(B^{-}\rightarrow K^{-}\nu\overline{\nu})$, we fit the observed $E_{\rm ECL}$ distributions to the expected background and signal, using a maximum likelihood method.
The likelihood function ${\cal L}$ is defined as $${\cal L} = \frac{1}{\sqrt{2\pi}\,\sigma_{b}}e^{-(n_{b}-N_{b})^{2}/2\sigma_{b}^{2}} \cdot \frac{e^{-(n_{s}+n_{b})}(n_{s}+n_{b})^{N}}{N!}\prod_{i=1}^{N}\frac{n_{s}f_{s}(i)+n_{b}f_{b}(i)}{n_{s}+n_{b}},$$ where $n_{b}$ and $n_{s}$ represent the number of background and signal events, respectively, $N$ is the number of observed events, $N_{b}$ is the calculated number of background events, and $\sigma_{b}$ is the calculated background uncertainty. The variable $f_{s}$ is the normalized signal $E_{\rm ECL}$ distribution and $f_{b}$ is the normalized background $E_{\rm ECL}$ distribution. The negative log likelihood function is minimized using MINUIT [@James:1975dr] with respect to $n_{b}$ for each $n_{s}~(=\varepsilon_{i}\cdot N_{B^{+}B^{-}}\cdot {\cal B})$. The $90\%$ C.L. upper limit on the branching fraction ${\cal B}$ is calculated from $$\label{eq:alpha_integrate} 0.9 = \frac{\int_{0}^{{\cal B}_{90}}{\cal L}({\cal B})d{\cal B}} {\int_{0}^{\infty}{\cal L}({\cal B})d{\cal B}}.$$ For $\Btaunu$, we calculate the likelihood function for each decay mode separately (${\cal L}_{i}({\cal B})$). The total likelihood function is defined by $$\label{eq:L_alpha} {\cal L}({\cal B}) = \prod_{i=1}^{n_{ch}} {\cal L}_{i}({\cal B}),$$ where $n_{ch}$ is the number of decay modes for $\Btaunu$. The full systematic uncertainty must be incorporated into the likelihood function. We convolve the systematic uncertainty into the likelihood function, ${\cal L}({\cal B})$, by replacing each point of ${\cal L}({\cal B})$ with a Gaussian distribution centered at that point with width $\Delta{\cal B}$, determined from the systematic uncertainty study. To obtain the value of a particular point of the smeared likelihood function, we integrate all the contributions from the Gaussian-replaced points of the unsmeared likelihood function.
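The limit-setting integral of Eq. (\[eq:alpha\_integrate\]) and the Gaussian smearing described above can be sketched numerically. The likelihood below is a toy stand-in for the fitted ${\cal L}({\cal B})$; the grid range, peak position, and widths are illustrative choices, not values from the analysis:

```python
import numpy as np

def upper_limit_90(B, L):
    """Solve 0.9 = int_0^B90 L dB / int_0^inf L dB on a grid."""
    cdf = np.cumsum(L)
    cdf /= cdf[-1]
    return float(np.interp(0.9, cdf, B))

def smear(B, L, dB):
    """Replace each point of L by a Gaussian of width dB centered there
    and sum the contributions (the convolution described in the text)."""
    Ls = np.zeros_like(L)
    for b0, l0 in zip(B, L):
        Ls += l0 * np.exp(-0.5 * ((B - b0) / dB) ** 2)
    return Ls

# toy likelihood in the branching fraction (illustrative, not the analysis fit)
B = np.linspace(0.0, 10e-4, 2001)
L = np.exp(-0.5 * ((B - 0.5e-4) / 0.8e-4) ** 2)

ul_stat = upper_limit_90(B, L)
ul_syst = upper_limit_90(B, smear(B, L, 0.3e-4))
print(ul_stat, ul_syst)   # the smeared (systematics-included) limit is looser
```

Because the smearing spreads likelihood toward larger ${\cal B}$ (the physical region is bounded below by zero), the limit after smearing is necessarily weaker than the purely statistical one.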
To combine the likelihood functions of the five decay modes for $\Btaunu$, we simply multiply them to produce the combined likelihood. The only complication is that there are common sources of systematic uncertainty, and therefore correlated uncertainties between the samples. We draw separate Gaussian random numbers for the correlated and the uncorrelated systematic sources, so that the correlated systematics are smeared in the same direction for all modes. We obtain the final smeared likelihood function by multiplying the smeared likelihood functions for $\Btaunu$. For the $\BKnunu$ decay, we smear the likelihood function by the total systematic uncertainty. Fig. \[taunu\_like\_landau\] shows the likelihood functions for the fit to data after smearing for $\Btaunu$ (left) and $\BKnunu$ (right). We obtain upper limits on the branching fractions at the 90% confidence level (C.L.) of $${\cal B}(B^{-}\rightarrow\tau^{-}\overline{\nu}) < 1.8\times 10^{-4},$$ $${\cal B}(B^{-}\rightarrow K^{-}\nu\overline{\nu}) < 3.6\times 10^{-5}.$$ In extensions of the Standard Model, one expects significant modifications to the $\Btaunu$ branching fraction. In the two-Higgs-doublet model, the decay can occur via a charged Higgs particle, and the $\Btaunu$ branching fraction is given as $${\cal B}(\Btaunu) = {\cal B}(\Btaunu)_{\rm SM} \times r_{H},$$ where $r_{H}$ is defined as $$r_{H} = \left(1-\frac{\tan^{2}\beta}{m_{H}^{2}}m_{B}^{2}\right)^{2}, \label{eq:r_H}$$ $m_{H}$ is the charged Higgs mass and $\tan\beta$ is the ratio of the vacuum expectation values of the two Higgs doublets [@Hou:1992sy]. ${\cal B}(\Btaunu)_{\rm SM}$ represents the SM contribution given by Eq.(\[eq:BR\_B\_taunu\]). Given an upper limit on ${\cal B}(\Btaunu)$, we can place a constraint on $\tan\beta$ and $m_{H}$. Fig. \[Mh\_tanb\_LP05\] shows the $90\%$ C.L.
exclusion boundaries in the $[m_{H}, \tan\beta]$ plane obtained with $m_{B}=5279~\textrm{MeV}/c^{2}$ and ${\cal B}(\Btaunu)_{\rm SM} = 0.93\times 10^{-4}$ from the CKMfitter prediction, compared with other experimental searches at LEP [@Bock:2000gk], at the Tevatron [@Abazov:2001md], and at BABAR. In conclusion, we have performed a search for the $B^{-}\rightarrow\tau^{-}\overline{\nu}$ and $B^{-}\rightarrow K^{-}\nu\overline{\nu}$ decays in a fully reconstructed $B$ sample. Upper limits have been set: $${\cal B}(B^{-}\rightarrow\tau^{-}\overline{\nu}) < 1.8\times 10^{-4}~~(90\%~\textrm{C.L.}),$$ $${\cal B}(B^{-}\rightarrow K^{-}\nu\overline{\nu}) < 3.6\times 10^{-5}~~(90\%~\textrm{C.L.}),$$ which are the most stringent upper limits on these processes to date. We thank the KEKB group for the excellent operation of the accelerator, the KEK cryogenics group for the efficient operation of the solenoid, and the KEK computer group and the National Institute of Informatics for valuable computing and Super-SINET network support. We acknowledge support from the Ministry of Education, Culture, Sports, Science, and Technology of Japan and the Japan Society for the Promotion of Science; the Australian Research Council and the Australian Department of Education, Science and Training; the National Science Foundation of China under contract No. 10175071; the Department of Science and Technology of India; the BK21 program of the Ministry of Education of Korea and the CHEP SRC program of the Korea Science and Engineering Foundation; the Polish State Committee for Scientific Research under contract No. 2P03B 01324; the Ministry of Science and Technology of the Russian Federation; the Ministry of Higher Education, Science and Technology of the Republic of Slovenia; the Swiss National Science Foundation; the National Science Council and the Ministry of Education of Taiwan; and the U.S. Department of Energy. [99]{} J. Charles [*et al.*]{} (CKMfitter Group), Eur. Phys. J.
C [**41**]{}, 1 (2005) and the updated results presented at the CKM2005 workshop.
B. Aubert (BABAR Collaboration), arXiv:hep-ex/0407038.
Y. Grossman, Z. Ligeti and E. Nardi, Nucl. Phys. B [**465**]{}, 369 (1996) \[Erratum-ibid. B [**480**]{}, 753 (1996)\] \[arXiv:hep-ph/9510378\].
C. Bird, P. Jackson, R. Kowalewski and M. Pospelov, Phys. Rev. Lett. [**93**]{}, 201803 (2004) \[arXiv:hep-ph/0401195\].
A. Faessler, T. Gutsche, M. A. Ivanov, J. G. Korner and V. E. Lyubovitskij, Eur. Phys. J. direct C [**4**]{}, 18 (2002) \[arXiv:hep-ph/0205287\].
G. Buchalla, G. Hiller and G. Isidori, Phys. Rev. D [**63**]{}, 014015 (2001) \[arXiv:hep-ph/0006136\].
B. Aubert [*et al.*]{} (BABAR Collaboration), Phys. Rev. Lett. [**94**]{}, 101801 (2005) \[arXiv:hep-ex/0411061\].
S. Kurokawa and E. Kikutani, Nucl. Instrum. Methods Phys. Res., Sect. A [**499**]{}, 1 (2003).
A. Abashian [*et al.*]{} (Belle Collaboration), Nucl. Instrum. Methods Phys. Res., Sect. A [**479**]{}, 117 (2002).
H. Albrecht [*et al.*]{} (ARGUS Collaboration), Phys. Lett. B [**185**]{}, 218 (1987).
E. D. Bloom and C. Peck, Ann. Rev. Nucl. Part. Sci. [**33**]{}, 143 (1983).
R. Brun [*et al.*]{}, GEANT3.21, CERN Report DD/EE/84-1 (1984).
See the EvtGen package home page, http://www.slac.stanford.edu/~lange/EvtGen/.
F. James and M. Roos, Comput. Phys. Commun. [**10**]{}, 343 (1975).
W. S. Hou, Phys. Rev. D [**48**]{}, 2342 (1993).
P. Bock [*et al.*]{} (ALEPH, DELPHI, L3 and OPAL Collaborations), CERN-EP-2000-055.
V. M. Abazov [*et al.*]{} (D0 Collaboration), Phys. Rev. Lett. [**88**]{}, 151803 (2002) \[arXiv:hep-ex/0102039\].
--- abstract: 'We employ the Hartree-Fock approximation to identify the magnetic ground state of the Hubbard model on a frustrated square lattice. We investigate the phase diagram as a function of the Coulomb repulsion’s strength $U$, and the ratio $t'/t$ between the nearest and next nearest neighbor hoppings $t$ and $t'$. At half-filling and for a sufficiently large $U$, an antiferromagnetic chiral spin density wave order with nonzero spin chirality emerges as the ground state in a wide regime of the phase diagram near $t'/t=1/\sqrt{2}$, where the Fermi surface is well-nested for both $(\pi,\pi)$ and $(\pi,0)/(0,\pi)$ wave vectors. This triple-${\bf Q}$ chiral phase is sandwiched between a single-${{\bf Q}}$ Néel phase and a double-${\bf Q}$ coplanar spin-vortex crystal phase, at smaller and larger $t'/t$, respectively. The energy spectrum in the chiral spin density wave phase consists of four pairs of degenerate bands. These give rise to two pairs of Dirac cones with the same chirality at the point $({\pi \over 2},{\pi\over 2})$ of the Brillouin zone. We demonstrate that the application of a diagonal strain induces a $d_{xy}$-wave next nearest neighbor hopping which, in turn, opens gaps in the two Dirac cones with opposite masses. As a result, four pairs of well-separated topologically-nontrivial bands emerge, and each pair of those contributes with a Chern number $\pm1$. At half-filling, this leads to a zero total Chern number and renders the topologically-nontrivial properties observable only in the ac response regime. Instead, we show that at $3/4$ filling, the triple-${\bf Q}$ chiral phase yields a Chern insulator exhibiting the quantum anomalous Hall effect.'
author:
- 'Yun-Peng Huang'
- 'Jin-Wei Dong'
- Panagiotis Kotetes
- Sen Zhou
title: |
  Antiferromagnetic chiral spin density wave and strain-induced Chern insulator\
  in the square lattice Hubbard model with frustration
---

Introduction \[sec0\]
=====================

The chiral spin density wave ($\chi$SDW) has attracted much attention in condensed matter physics, as it is distinguished by the net spin chirality [@Wen89] $\chi_{ijk}=\langle{{\bf S}}_i\cdot({{\bf S}}_j\times{{\bf S}}_k)\rangle\neq0$ that it threads through a triangular plaquette defined by the three lattice sites $\bm{R}_{i,j,k}$. When itinerant electrons move under its influence, they feel a spontaneous gauge flux that leads to the accumulation of a nonzero Berry phase [@Berry], which in turn gives rise to an anomalous contribution to the Hall coefficient [@Karplus1954; @Ye1999; @Ohgushi2000; @Taguchi2001; @Shindou2001; @Nagaosa2010; @Niu]. Remarkably, this already takes place in the absence of an external magnetic field. Even more, when the bulk energy spectrum is fully gapped, such an anomalous contribution takes quantized values as a result of the nonzero total Chern number [@Thouless; @TKNN; @Niu85] ${\cal C}$ of the occupied bands. In this manner, it opens perspectives for a topologically-nontrivial Chern insulator, i.e. with ${\cal C}\neq0$, which leads to the quantum anomalous Hall effect [@Ohgushi2000; @Nagaosa2010; @Niu] (QAHE).
Besides the QAHE, the breaking of both parity and time-reversal symmetries in the $\chi$SDW phase brings about a number of intriguing phenomena [@WeinbergBook; @VolovikBook; @FradkinBook], such as the occurrence of the parity anomaly [@NiemiSemenoff; @Redlich; @Semenoff; @Haldane; @Yakovenko90], anyon superconductivity [@Laughlin1988], anomalous thermoelectricity [@Niu], and the polar Kerr effect [@Bennett; @Kapitulnik], which constitute characteristic features of systems belonging to the anomalous Hall metal and insulator classes [@Raghu; @Tewari2008; @Kotetes2008; @CZhang; @Kotetes2010; @Maciejko; @YuScience; @Chakravarty; @Kotetes2014]. Among the various candidates for a $\chi$SDW, the so-called antiferromagnetic (AFM) $\chi$SDW with zero net magnetization ($\sum_i{{\bf S}}_i=\bm{0}$) is particularly interesting, since AFM spin couplings and magnetic orders are ubiquitous in correlated electronic systems. In fact, the AFM $\chi$SDW order has been experimentally discovered in the NiS$_2$ [@Miyadai1975; @Kikuchi1978; @Kikuchi1978b; @Matsuura2003] and FeMn [@Endoh1971; @Tajima1976; @Kennedy1987; @Kawarazaki1990] antiferromagnets on the frustrated face-centered-cubic (fcc) lattice. Neutron scattering experiments [@Kikuchi1978; @Matsuura2003; @Miyadai1975; @Endoh1971; @Tajima1976; @Kikuchi1978b] observed a noncoplanar AFM order with a four-sublattice structure and three magnetic ordering wave vectors. Moreover, it was inferred that the ordered spin moments on the four sublattices form a tetrahedron in spin space.
On the theoretical side, such noncoplanar and chiral magnetic orders have been intensively explored in the context of the Kondo lattice model [@Akagi2010; @Akagi2012; @Akagi2013; @Barros2013; @Kato2010; @Martin2008; @Ozawa2014; @Hayami2015; @Chern2010; @Barros2014; @Chern2014; @Ghosh2016; @Hayami2016; @Agterberg2000], the Hubbard model [@Martin2008; @Venderbos2012; @Ran2014; @Jiang2015; @Kiesel2012; @Li2012; @Wang2012], and Heisenberg spin models [@Kumar2010; @Ran2014; @Venderbos2012] on various two-dimensional and three-dimensional lattice structures. Specifically, it has been suggested that an AFM $\chi$SDW order can be stabilized on the triangular [@Martin2008; @Kato2010; @Akagi2010; @Akagi2012; @Barros2013; @Rahmani2013; @Akagi2013; @Ozawa2014; @Kumar2010; @Venderbos2012; @Chern2012; @Hayami2016], honeycomb [@Kiesel2012; @Wang2012; @Ran2014; @Li2012; @Nandkishore2012; @Jiang2015], kagome [@Chern2014; @Barros2014; @Ghosh2016], pyrochlore [@Chern2010], cubic [@Hayami2014], and fcc [@Yosida1981; @Yoshimori1981; @Hirai1985; @Shindou2001; @Agterberg2000] lattices. In these systems, the three ordering wave vectors of the $\chi$SDW phase are equivalent by means of the point group symmetry of the crystal. Moreover, these ordering vectors are half of the fundamental reciprocal lattice vectors of the system. Interestingly, numerical calculations found that an AFM $\chi$SDW can develop even in some decorated variants of the square lattice, e.g. the checkerboard [@Venderbos2012b] and square-to-triangular lattices [@Hayami2015], which do not support three equivalent wave vectors. While a microscopic theory for the AFM $\chi$SDW is currently lacking, it is generally believed that both electron correlation and geometric frustration play important roles in its stabilization. 
In this work, we explore the magnetic states and discuss the possible realization of the AFM $\chi$SDW phase in the Hubbard model on a frustrated square lattice, with the frustration introduced by considering both the nearest neighbor (NN) hopping $t$ and the next nearest neighbor (NNN) hopping $t'$ in the kinetic energy part of the model. Motivated by the experimental findings, we here consider that the ordering wave vectors of the AFM $\chi$SDW are half of the fundamental reciprocal lattice vectors. Therefore, our general investigation is focused on magnetic states with ordering wave vectors ${{\bf Q}}_1=(\pi,0)$, ${{\bf Q}}_2=(0,\pi)$, and ${{\bf Q}}_3=(\pi,\pi)$ on the square lattice with the above type of hopping-induced frustration. At half-filling, we find that the desired triple-${{\bf Q}}$ AFM $\chi$SDW phase is the ground state in an extended regime of the $(U,t'/t)$ phase diagram. The resulting band dispersions exhibit two twofold-degenerate Dirac cones possessing the same chirality. These are located at the N=$({\pi\over 2},{\pi\over 2})$ point of the Brillouin zone (BZ). Remarkably, in this case, the system is an insulator despite the presence of the two Dirac points where multiple band touchings occur. This is because the Dirac points are split in energy and are found above and below the Fermi level while, at the same time, the maximum bandwidth of the reconstructed bands is smaller than this energy splitting. Given this spontaneously developed magnetic ground state and the resulting band structure, we propose a mechanism that gaps out the Dirac points, and thus renders the system a Chern insulator. As we discuss in detail, this is possible by considering the effect of strain along the diagonal direction. The latter introduces a $d_{xy}$-wave NNN hopping $\tilde{t}$, which in turn gaps out the Dirac points by inducing mass terms of opposite signs. 
As a result, each pair of degenerate bands contributes with a Chern number of $\pm 1$, which implies that the total Chern number at half-filling is zero and a QAHE is unobservable. However, other electron filling fractions can support the QAHE. We explicitly demonstrate that this is the case for a 3/4 filling factor. Before proceeding with our main analysis, we remark that the present work is restricted to the interplay of magnetic instabilities generated by Fermi surface (FS) nestings only at wave vectors ${{\bf Q}}_{1,2,3}$. While in this manner possible magnetic instabilities at other wave vectors are neglected, we argue that our strategy is still valid and worthwhile to pursue. First of all, our approach is justified by the actual experimental observation of such a triple-${{\bf Q}}$ AFM order, and the fact that we here attempt a qualitative exploration of the possible magnetic orders that become accessible in such a setting. In the same spirit, the tight-binding model employed here mainly serves the purpose of investigating the desired triple-${{\bf Q}}$ degeneracy and interplay, and is not targeted to make a strong connection to the band structure of a specific material. Even more, the results and discussion presented in this work would not change qualitatively upon the modification of the tight-binding parameters, since one of our main goals is to highlight the magnetic phases which become accessible upon such a coexistence. Finally, restricting our study to these three wave vectors also appears as the natural assumption when approaching the problem using a strong-coupling model with an AFM superexchange [@SuperAnderson] coupling $J$, since the short-ranged nature of the coupling favors ordering at these wave vectors. The presentation of our methods and results is organized in the remaining five sections.
Section \[sec1\] introduces the Hubbard model considered throughout, and highlights the rich physics emerging from it, thus supporting our motivation to study this problem. This is achieved by exposing the FS nesting properties of the band structure in the nonmagnetic phase and the resulting behavior of the bare static spin susceptibility for various parameter values. In Sec. \[sec2\], we derive the respective mean-field Hamiltonian by treating the local Coulomb interaction within the Hartree-Fock approximation. In addition, we present the possible magnetic ground states within the restricted subspace of magnetic wave vectors. Section \[sec3\] contains our numerically obtained magnetic phase diagrams in the $(U,t'/t)$ parameter plane at half-filling, where we find a wide regime where the AFM $\chi$SDW insulator is the ground state. In Sec. \[sec4\], we consider the situation of an electron filling fraction of 3/4, where a topologically-nontrivial Chern insulator featuring the QAHE is realized by introducing diagonal strain. Section \[sec5\] contains our conclusions.

Fig. 1. (Color online) The static bare spin susceptibilities $\chi_0({{\bf q}})$ at the three ordering wave vectors ${{\bf Q}}_1=(\pi,0)$, ${{\bf Q}}_2=(0,\pi)$, and ${{\bf Q}}_3=(\pi,\pi)$ as functions of the ratio $t'/t$ or $t/t'$. The energy unit is set to be the NN (NNN) hopping $t=1$ ($t'=1$) in the left (right) panel. In the left (right) panel we have included two insets, which depict the Fermi surface topology appearing for $t'/t=0$ ($t/t'=0$), as well as the respective Néel (stripe) phase established. As the first inset of the right panel shows, the original square lattice is divisible into two enlarged square lattices which are illustrated by red and blue solid lines. In this manner, the $\bm{Q}_1$ stripe can be viewed as two antialigned Néel orders on the enlarged sublattices.[]{data-label="fig1"}

Hubbard model and bare spin susceptibility analysis {#sec1}
===================================================

We start with the Hubbard model on the square lattice, described by the following Hamiltonian $$\mathcal{H}=-\sum_{i,j}t_{ij}c^\dagger_{i\alpha}c_{j\alpha}+U\sum_i\hat{n}_{i\uparrow}\hat{n}_{i\downarrow}, \label{Hhub}$$ where $c^\dagger_{i\alpha} (c_{i\alpha})$ creates (annihilates) an electron with spin $\alpha=(\uparrow, \downarrow)$ at lattice site $i$, and $\hat{n}_{i\alpha} = c^\dagger_{i\alpha} c_{i\alpha}$ is the corresponding particle number operator. In the kinetic energy part, we consider both the NN and NNN hoppings to introduce frustration. Their combined presence leads to the following tight-binding energy dispersion $$\epsilon_{\bf k}=-2t\left(\cos k_{x}+\cos k_{y}\right)-4t'\cos k_{x} \cos k_{y},\label{ek}$$ with the lattice constant set equal to unity and $t'$ considered to be positive throughout. Note that our sign choice for $t'$ becomes irrelevant at half-filling due to an emergent particle-hole symmetry. The nesting properties of the FS resulting from $\epsilon_{{\bf k}}$ play an important role in understanding the structure of the long-range magnetic order that develops due to the presence of Coulomb repulsion of strength $U$, in the weak-coupling limit. The emergence of nesting at the three wave vectors of interest ${{\bf Q}}_{1,2,3}$ is reflected in the behavior of the bare static spin susceptibility $\chi_0({{\bf Q}}_{1,2,3})$. Figure \[fig1\] depicts $\chi_0({{\bf Q}}_{1,2,3})$ as a function of the $t'/t$ or $t/t'$ ratio at half-filling, where the NN (NNN) hopping $t$ ($t'$) is set as the reference energy unit in the left (right) panel.
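The behavior summarized in Fig. \[fig1\] can be reproduced qualitatively with a direct evaluation of the Lindhard function $\chi_0({{\bf q}})=\frac{1}{N}\sum_{{\bf k}}\frac{f(\epsilon_{{\bf k}})-f(\epsilon_{{\bf k}+{{\bf q}}})}{\epsilon_{{\bf k}+{{\bf q}}}-\epsilon_{{\bf k}}}$ at half-filling. A minimal sketch; the grid size and the smearing temperature are illustrative numerical choices:

```python
import numpy as np

def chi0(q, tp, t=1.0, nk=120, T=0.05):
    """Bare static spin susceptibility (Lindhard function) at wave vector q
    for eps_k = -2t(cos kx + cos ky) - 4 t' cos kx cos ky at half-filling."""
    def eps(kx, ky):
        return -2*t*(np.cos(kx) + np.cos(ky)) - 4*tp*np.cos(kx)*np.cos(ky)

    def fermi(e, mu):
        # numerically stable form of 1/(exp((e-mu)/T)+1)
        return 0.5 * (1.0 - np.tanh((e - mu) / (2.0 * T)))

    k = 2.0 * np.pi * (np.arange(nk) + 0.5) / nk
    kx, ky = np.meshgrid(k, k, indexing="ij")
    e1 = eps(kx, ky)

    # chemical potential at half-filling by bisection on the filling
    lo, hi = e1.min(), e1.max()
    for _ in range(60):
        mu = 0.5 * (lo + hi)
        if fermi(e1, mu).mean() < 0.5:
            lo = mu
        else:
            hi = mu

    e2 = eps(kx + q[0], ky + q[1])
    de = e2 - e1
    f1, f2 = fermi(e1, mu), fermi(e2, mu)
    with np.errstate(divide="ignore", invalid="ignore"):
        ker = np.where(np.abs(de) > 1e-7,
                       (f1 - f2) / de,          # Lindhard kernel
                       f1 * (1.0 - f1) / T)     # -df/de limit for e1 ~ e2
    return ker.mean()

Q1, Q3 = (np.pi, 0.0), (np.pi, np.pi)
# Neel (Q3) nesting dominates well below t'/t = 1/sqrt(2), stripe (Q1) above it
print(chi0(Q3, tp=0.3) > chi0(Q1, tp=0.3))
print(chi0(Q1, tp=1.0) > chi0(Q3, tp=1.0))
```

Scanning `tp` across $1/\sqrt{2}$ reproduces the crossing of the two susceptibilities described in the caption.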
When we consider only the NN hopping $t$ at half-filling, *i.e.*, $t'/t=0$, the corresponding FS is perfectly nested with ${{\bf Q}}_3$. The presence of perfect nesting leads to a logarithmic divergence in $\chi_{0}({{\bf Q}}_3)$, as shown in Fig. \[fig1\]. Due to this divergence, a collinear magnetic order with ordering wave vector ${{\bf Q}}_3$ develops even for an arbitrarily small $U$, and stabilizes the standard Néel phase. In the inverse limit, where the NN hopping $t$ is zero, *i.e.*, $t/t'=0$, the square lattice can be divided into two decoupled unfrustrated square lattices with a lattice constant enlarged by a factor of $\sqrt{2}$. The FSs for these two decoupled square lattices are perfectly nested with the wave vectors $(\pi,\pi)$ in the corresponding reduced Brillouin zones (RBZs). The latter wave vectors correspond to ${{\bf Q}}_1$ and ${{\bf Q}}_2$ in the original BZ. Consequently, two decoupled Néel AFM orders develop at any nonzero $U$ for both enlarged square lattices. Introducing a small amount of NN hopping $t$ couples the two Néel AFM orders, and the resulting FS does not show perfect nesting features any longer. As a result, a threshold strength of $U$ is now required for the emergence of a magnetic order with ordering wave vector ${{\bf Q}}_1$ and/or ${{\bf Q}}_2$. More interestingly, since the FS is now nested simultaneously by two wave vectors which are equivalent by virtue of the tetragonal symmetry, a number of double-${{\bf Q}}$ magnetic orders become accessible [@lorenzana08; @eremin; @brydon; @giovannetti; @gastiasoro15; @wang15; @kang15a; @christensen15; @Scherer2016; @christensen17; @christensen18; @Fernandes2016]. For the ${{\bf Q}}_{1,2}$ wave vectors discussed here, there exist two possible double-${{\bf Q}}$ phases[@lorenzana08], *i.e.*, a collinear charge- and spin-ordered density wave (CSDW) phase, and a coplanar so-called spin-vortex crystal (SVC) phase where the moments on neighboring sites are at right angles to each other. 
Evidence for both of these phases has recently been recorded experimentally in Fe-based materials [@hassinger; @avci14a; @wasser15; @bohmer15a; @allred15a; @wang16a; @zheng16a; @malletta; @mallettb; @allred16a; @meier17; @Wang2019]. The tetragonal symmetry of the nonmagnetic phase further implies that the Stoner criteria for the single- and double-${{\bf Q}}$ phases are satisfied simultaneously. Thus, from a Landau theory perspective, both kinds of magnetic orders are degenerate at the quadratic level of the free-energy expansion in terms of the magnetic order parameter. All degeneracies are however lifted when considering the fourth-order contributions to the free energy [@Agterberg2000; @Hayami2014; @lorenzana08; @christensen18]. When the frustration is strong, *i.e.*, $t \sim t'$, the bare static spin susceptibilities $\chi_0 ({{\bf q}})$ at ${{\bf Q}}_{1,2}$ and ${{\bf Q}}_3$ become comparable, as shown in Fig. \[fig1\]. In particular, their values are exactly the same at $t'/t = 1/\sqrt{2}\simeq 0.71$, where a Lifshitz [@Lifshitz] transition modifies the FS topology. This implies that the FS is simultaneously nested by the three wave vectors ${{\bf Q}}_{1,2,3}$, although these three wave vectors are not equivalent by means of the square lattice symmetry. The diversity of magnetic order scenarios revealed from the above susceptibility analysis further supports our motivation to study here the interplay between phases originating from the FS nesting with the three wave vectors, and to explore the possible emergence of magnetic phases beyond the well-discussed single-${{\bf Q}}$ collinear stripe, and double-${{\bf Q}}$ CSDW and SVC orders [@lorenzana08; @eremin; @brydon; @giovannetti; @gastiasoro15; @wang15; @kang15a; @christensen15; @Scherer2016; @christensen17; @Fernandes2016].
Notably, when the magnetic moments order at the three vectors simultaneously, the long-sought-after AFM $\chi$SDW phase becomes accessible in the frustrated regime of the present model, and opens perspectives for realizing a topologically-nontrivial AFM Chern insulator exhibiting the QAHE.

Mean-field theory\[sec2\]
=========================

To study the ground state properties of the Hubbard model, the Coulomb repulsion term in Eq.  is treated within the Hartree-Fock approximation, which preserves the SU(2) spin-rotational symmetry of the interaction. The resulting mean-field Hamiltonian reads $$\begin{aligned} \mathcal{H}_\text{HF} =&-\sum_{i,j,\alpha} t_{ij} c^\dagger_{i\alpha}c_{j\alpha}+{U\over 4} \sum_i \big( 2 n_i \hat{n}_i -n^2_i \big) \nonumber \\ &- {U\over 4} \sum_i \big( 2{\bm m}_i \cdot \hat{\bm m}_i-{\bm m}^2_i\big),\label{Hmf}\end{aligned}$$ where ${\bm \sigma}=(\sigma_x,\sigma_y,\sigma_z)$ defines the vector of the Pauli matrices. In the above, $n_i$ and ${\bm m}_i$ denote the mean fields of the local particle density $\hat{n}_i=\sum_\alpha \hat{n}_{i\alpha}$ and magnetic moment $\hat{\bm m}_i=\sum_{\alpha\beta} c^\dagger_{i\alpha}{\bm \sigma}_{\alpha\beta} c_{i\beta}$ operators, respectively. These are obtained from the statistical average of the corresponding operators with respect to the single-particle mean-field Hamiltonian in Eq. , and are generally expressed as $n_i={\bar n}+\sum_\eta\widetilde{\mathcal{N}}_\eta \cos ({{\bf Q}}_\eta \cdot {\bf r}_i + \theta_\eta )$ and ${\bm m}_i = \sum_\eta \widetilde{\bm M}_\eta \cos ({{\bf Q}}_\eta \cdot {\bf r}_i + \theta'_\eta )$, with $\widetilde{\mathcal{N}}_\eta~(\widetilde{\bm M}_\eta )$ the charge (magnetic) order parameters with ordering wave vector ${{\bf Q}}_\eta$ and $\theta_\eta~(\theta'_\eta)$ the relative phases. $\bar n$ is the average particle density per site, which equals $1$ at half-filling.
When the allowed ordering wave vectors ${{\bf Q}}_\eta$ are limited to ($\pi$, 0), (0, $\pi$), and ($\pi$, $\pi$), and the lattice symmetry is tetragonal, the expressions simplify to $$\{n_i,{\bm m}_i\} = \{\bar{n},\bm{0}\}+\sum_\eta\{\mathcal{N}_\eta,{\bm M}_\eta\}\cos{({\bf Q}_\eta\cdot{{\bf r}}_i)},$$ with the order parameters $\mathcal{N}_\eta = \widetilde{\mathcal{N}}_\eta \cos \theta_\eta$ and ${\bm M}_\eta = \widetilde{\bm M}_\eta\cos\theta'_\eta$. As a result, there exist four inequivalent lattice sites in the ordered phase, which lead to a $2\times 2$ enlarged unit cell. On the four inequivalent sites, the local particle density is $n_i= \{\bar n +\mathcal{N}_1 +\mathcal{N}_2 +\mathcal{N}_3, \bar n -\mathcal{N}_1 +\mathcal{N}_2 -\mathcal{N}_3, \bar n -\mathcal{N}_1 -\mathcal{N}_2 +\mathcal{N}_3, \bar n +\mathcal{N}_1 -\mathcal{N}_2 -\mathcal{N}_3\}$, and the local magnetic moment reads ${\bm m}_i =\{{\bm M}_1 +{\bm M}_2 +{\bm M}_3, -{\bm M}_1 +{\bm M}_2 -{\bm M}_3, -{\bm M}_1 -{\bm M}_2 +{\bm M}_3, {\bm M}_1 -{\bm M}_2 -{\bm M}_3\}$. It is important to note that the energy contribution per site from the charge order is $U\sum_\eta \mathcal{N}^2_\eta$. Therefore, any type of charge order is energetically costly and disfavored and, as a result, the ground state of the Hubbard model is expected to be a state of a uniform charge density with all $\mathcal{N}_\eta=0$. For the magnetic order, the fourth-order expansion of the magnetic free energy, which we discuss in Appendix \[app1\], shows that if multi-${{\bf Q}}$ ordering takes place, the ordered moments of the different order parameters develop in a pairwise parallel or perpendicular fashion. This is because only such configurations minimize the free energy. When two or three magnetic order parameters have moments which are aligned in parallel, they give rise to local moments that have different amplitudes $|{\bm m}_i|$ on the four inequivalent sites. 
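The four-sublattice sign structure above is easy to verify numerically: mutually perpendicular order parameters yield equal moment amplitudes $|{\bm m}_i|$ on the four sites, whereas parallel ones do not (and would therefore induce a charge modulation). A minimal sketch with illustrative amplitudes:

```python
import numpy as np

def sublattice_moments(M1, M2, M3):
    """Local moments m_i = s1*M1 + s2*M2 + s3*M3 on the four inequivalent
    sites of the 2x2 magnetic unit cell, with the sign pattern of the text."""
    signs = [(+1, +1, +1), (-1, +1, -1), (-1, -1, +1), (+1, -1, -1)]
    return [s1*np.asarray(M1) + s2*np.asarray(M2) + s3*np.asarray(M3)
            for s1, s2, s3 in signs]

# mutually perpendicular order parameters -> equal |m_i| on all four sites
perp = sublattice_moments((1, 0, 0), (0, 1, 0), (0, 0, 0.5))
print([round(float(np.linalg.norm(m)), 3) for m in perp])        # four equal values

# parallel order parameters -> unequal |m_i|, i.e. a CSDW-like charge modulation
para = sublattice_moments((1, 0, 0), (0.5, 0, 0), (0, 0, 0))
print(sorted(round(float(np.linalg.norm(m)), 3) for m in para))  # [0.5, 0.5, 1.5, 1.5]
```

This is exactly the dichotomy invoked in the following paragraph when charge order is argued to be energetically disfavored.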
Thus, this induces some sort of charge order and leads to a CSDW-type of phase, which is not a likely ground state of the Hubbard model in Eq. . This is due to the unavoidable energy penalty for developing the charge order. Note, however, that there exists numerical evidence for the CSDW phase in extended Hubbard models with additional interactions that are not considered here [@lorenzana08; @eremin; @brydon; @giovannetti; @gastiasoro15; @wang15; @kang15a; @christensen15; @Scherer2016; @christensen17; @christensen18; @Fernandes2016], as well as experimental proof that this phase is realized in certain Fe-based compounds [@allred16a]. The above considerations corroborate that the ground state of the Hubbard model in Eq.  should be a phase with uniform charge and ordered magnetic moments which, when they arise, align perpendicular to each other. In fact, this feature is confirmed by our unrestricted numerical calculations. Hence, we hereinafter restrict our discussion to states of this kind, with uniform charge density and ordered moments on the four sublattices. By virtue of the global SO(3) spin rotational symmetry of the model, we further fix the directions of ${\bm M}_1$, ${\bm M}_2$, and ${\bm M}_3$ to be along the $x$, $y$, and $z$ spin axis, respectively. Thus, without loss of generality, the ordered moments read ${\bm M}_1=(M_1,0,0)$, ${\bm M}_2=(0,M_2,0)$, and ${\bm M}_3=(0,0,M_3)$, and yield ${\bm m}_1=(M_1,M_2,M_3)$, ${\bm m}_2=(-M_1,M_2,-M_3)$, ${\bm m}_3=(-M_1,-M_2,M_3)$, and ${\bm m}_4=(M_1,-M_2,-M_3)$. The commensurate character of the magnetic wave vectors allows us to more conveniently treat the problem in ${\bf k}$ (wave vector) space. Specifically, since ${\bf k}+2{{\bf Q}}_{1,2}={\bf k}$ and ${{\bf Q}}_3={{\bf Q}}_1+{{\bf Q}}_2$, we find ${{\bf Q}}_\eta$=$-{{\bf Q}}_\eta$ and $\pm{{\bf Q}}_1\pm{{\bf Q}}_2\pm{{\bf Q}}_3=\bm{0}$. The above relations hold modulo a shift by a reciprocal lattice vector, and imply that the ${\bf k}$-space mean-field Hamiltonian of Eq.
becomes $$\begin{aligned} \mathcal{H}_\text{HF}= & \sum_{{\bf k},\alpha} \epsilon_{\bf k} c_{\bf k \alpha}^{\dagger} c_{\bf k\alpha} -{U \over 2} \sum_{{\bf k},\alpha,\beta,\eta} M_\eta c^\dagger_{\bf k \alpha} \sigma^\eta_{\alpha\beta} c_{{\bf k}+{\bf Q}_{\eta},\beta} \nonumber \\ +& {1\over 4}NU\sum_\eta M^2_\eta\,. \label{Hk}\end{aligned}$$ Interestingly, since the magnetic order parameters considered here are invariant under a lattice translation combined with a spin rotation [@Martin2008], the Hamiltonian in Eq.  can be split into the following two identical disjoint parts $$\mathcal{H}_\text{HF}=\sum_{{{\bf k}}\in {\rm RBZ}}\left({\bm \Psi}^\dagger_{{\bf k}}\hat{H}_{{\bf k}}{\bm\Psi}_{{\bf k}}+{\bm\Phi}^\dagger_{{\bf k}}\hat{H}_{{\bf k}}{\bm\Phi}_{{\bf k}}\right),\label{Hmatrix}$$ with the spinors ${\bm \Psi}_{{\bf k}}$=$(c_{{{\bf k}}\uparrow}$, $c_{{{\bf k}}+{{\bf Q}}_1 \downarrow}$, $c_{{{\bf k}}+{{\bf Q}}_2 \downarrow}$, $c_{{{\bf k}}+{{\bf Q}}_3 \uparrow})^\intercal$, ${\bm \Phi}_{{\bf k}}$=$(c_{{{\bf k}}\downarrow}$, $c_{{{\bf k}}+{{\bf Q}}_1 \uparrow}$, $-c_{{{\bf k}}+{{\bf Q}}_2 \uparrow}$, $-c_{{{\bf k}}+{{\bf Q}}_3 \downarrow})^\intercal$, and the ${{\bf k}}$-dependent $4\times 4$ Hamiltonian matrix $$\hat{H}_{{\bf k}}=\left(\begin{array}{cccc} \epsilon_{{\bf k}}& -{1\over 2} UM_1 & {i\over 2}UM_2 & -{1\over 2} UM_3\\ -{1\over 2}UM_1 & \epsilon_{{{\bf k}}+{{\bf Q}}_1} & {1\over 2}UM_3 & -{i\over 2}UM_2 \\ -{i\over 2}UM_2 & {1\over 2}UM_3 & \epsilon_{{{\bf k}}+{{\bf Q}}_2} & -{1\over 2}UM_1\\ -{1\over 2} UM_3 & {i\over 2}UM_2 & -{1\over 2}UM_1 & \epsilon_{{{\bf k}}+{{\bf Q}}_3} \end{array}\right). \label{matrix}$$ As a result, the reconstructed band structure consists of four pairs of degenerate bands. Note that the wave vector summation in Eq.  is over the RBZ, which corresponds to one quarter of the original BZ, due to the four-sublattice structure of the direct lattice. 
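For concreteness, the matrix of Eq.  and the folding property just described can be checked numerically. The Python sketch below assembles $\hat{H}_{{\bf k}}$, using the tight-binding dispersion of Appendix \[app2\] in the $t''=\tilde{t}=0$ limit and illustrative (not self-consistent) moment values, and verifies that it is Hermitian and that its spectrum is invariant under ${{\bf k}}\rightarrow{{\bf k}}+{{\bf Q}}_\eta$, which is what permits restricting the wave vector summation to the RBZ:

```python
import numpy as np

def eps(kx, ky, t=1.0, tp=0.74):
    # NN + NNN tight-binding dispersion (the t'' = t~ = 0 limit of Appendix app2)
    return -2*t*(np.cos(kx) + np.cos(ky)) - 4*tp*np.cos(kx)*np.cos(ky)

def H_k(kx, ky, M1, M2, M3, U=6.0):
    """4x4 mean-field matrix of Eq. (matrix) in the basis
    (c_k, c_{k+Q1}, c_{k+Q2}, c_{k+Q3}) of one spinor sector."""
    e = [eps(kx + qx, ky + qy)
         for qx, qy in [(0, 0), (np.pi, 0), (0, np.pi), (np.pi, np.pi)]]
    u = U / 2
    return np.array([
        [e[0],      -u*M1,     1j*u*M2,  -u*M3   ],
        [-u*M1,      e[1],     u*M3,     -1j*u*M2],
        [-1j*u*M2,   u*M3,     e[2],     -u*M1   ],
        [-u*M3,      1j*u*M2,  -u*M1,     e[3]   ],
    ])

# Hermiticity, and invariance of the spectrum under k -> k + Q_eta:
kx, ky = 0.31, -0.57                       # arbitrary test point
H = H_k(kx, ky, 0.6, 0.6, 0.45)
assert np.allclose(H, H.conj().T)
w0 = np.linalg.eigvalsh(H)
for Qx, Qy in [(np.pi, 0), (0, np.pi), (np.pi, np.pi)]:
    assert np.allclose(w0, np.linalg.eigvalsh(H_k(kx + Qx, ky + Qy, 0.6, 0.6, 0.45)))
```

The invariance holds because $\hat{H}_{{{\bf k}}+{{\bf Q}}_\eta}$ and $\hat{H}_{{\bf k}}$ are related by a permutation of the basis components combined with sign changes, i.e., by a unitary transformation.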
To investigate the ground state properties, we minimize the energy by obtaining the magnetic order parameter associated with each ordering wave vector ${{\bf Q}}_\eta$ self-consistently at zero temperature, via the relations $$M_\eta={1\over N}\sum_{{{\bf k}},\alpha,\beta} \left\langle c^\dagger_{{{\bf k}}\alpha} \sigma^\eta_{\alpha\beta}c_{{{\bf k}}+{{\bf Q}}_\eta,\beta}\right\rangle,$$ with $N$ denoting the number of ${{\bf k}}$-points in the RBZ. When a magnetic order at a single ${{\bf Q}}$ develops, it gives rise to a collinear state which is termed the Néel or the stripe phase, depending on whether the ordering vector is ${{\bf Q}}_3$ or ${{\bf Q}}_{1,2}$. The ${{\bf Q}}_{1,2}$ stripe phases are equivalent by means of the fourfold rotational symmetry of the energy dispersion. The coplanar SVC magnetic phase carrying a vector chirality $\bm{\chi}_{ij}=\langle{{\bf S}}_i\times{{\bf S}}_j\rangle\neq\bm{0}$ is achieved when two magnetic orders emerge at the same time. All coplanar phases obtained in this work exhibit ordering at ${{\bf Q}}_1$ and ${{\bf Q}}_2$, and have moments of equal amplitude which lie in the $xy$ spin plane. When all three magnetic orders coexist, the AFM $\chi$SDW phase is realized with the spin-chirality value $\chi={1\over 2} M_1 M_2 M_3$ per unit cell. Note that the tetragonal point group symmetry enforces $|M_1|=|M_2|$ in the $\chi$SDW phase. In summary, besides the paramagnetic (PM) metal phase with all $M_\eta=0$, which is obtained at small strengths of Coulomb repulsion, the minimization of the energy leads to four possible magnetic ground states: (a) the Néel phase with $(M_1,M_2,M_3)=(0,0,M)$, (b) the stripe phase with $(M_1,M_2,M_3)=(M,0,0)$ or, equivalently, $(0,M,0)$, (c) the SVC phase with $(M_1,M_2,M_3)=(M,M,0)$, and (d) the $\chi$SDW with $(M_1,M_2,M_3)=(M,M,M')$. 
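A minimal sketch of the zero-temperature self-consistency loop is given below. It is an illustration under stated assumptions rather than the production code used for the phase diagrams: it presumes the insulating regime at half-filling (the two lowest bands of the $4\times 4$ sector are filled at every ${{\bf k}}$), uses a coarse RBZ grid with simple linear mixing, and is run at $t'=0$, where it should converge toward the Néel solution with a dominant $M_3$. The matrices `T` record where each $M_\eta$ enters the mean-field matrix, so that $\hat{H}_{{\bf k}}=\hat{H}_0({{\bf k}})-\frac{U}{2}\sum_\eta M_\eta T_\eta$:

```python
import numpy as np

t, tp, U = 1.0, 0.0, 6.0    # t' = 0: Neel-dominated regime, chosen for illustration

def eps(kx, ky):
    return -2*t*(np.cos(kx) + np.cos(ky)) - 4*tp*np.cos(kx)*np.cos(ky)

Q = [(0, 0), (np.pi, 0), (0, np.pi), (np.pi, np.pi)]

# T_eta mark where each M_eta enters Eq. (matrix)
T = [np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], complex),
     np.array([[0, 0, -1j, 0], [0, 0, 0, 1j], [1j, 0, 0, 0], [0, -1j, 0, 0]]),
     np.array([[0, 0, 0, 1], [0, 0, -1, 0], [0, -1, 0, 0], [1, 0, 0, 0]], complex)]

def scf(M0=(0.02, 0.02, 0.4), nk=12, mix=0.5, iters=100):
    """Fixed-point iteration of the self-consistency relations at half-filling.
    Filling the two lowest bands at every k presupposes an insulating state."""
    ks = np.linspace(-np.pi/2, np.pi/2, nk, endpoint=False)   # RBZ grid
    M = np.array(M0, float)
    for _ in range(iters):
        Mnew = np.zeros(3)
        for kx in ks:
            for ky in ks:
                H0 = np.diag([eps(kx + qx, ky + qy) for qx, qy in Q]).astype(complex)
                w, v = np.linalg.eigh(H0 - 0.5*U*sum(m*Tm for m, Tm in zip(M, T)))
                occ = v[:, :2]
                for e in range(3):
                    Mnew[e] += np.real(np.trace(occ.conj().T @ T[e] @ occ))
        # both degenerate spinor sectors contribute equally (factor 2);
        # normalize per lattice site (4 sites per RBZ k-point): M = 2*sum/(4 nk^2)
        M = (1 - mix)*M + mix*Mnew/(2*nk*nk)
    return M

M = scf()   # expected to flow toward the Neel solution (M1, M2 -> 0) at t' = 0
```

Restarting the loop from initial conditions biased toward another configuration (e.g., `M0=(0.4, 0.02, 0.02)` for a stripe-like start) probes the metastable solutions whose energies are then compared.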
Finally, we note that, for our numerical simulations, we employ different initial conditions when solving the self-consistency equations for a given choice of Hamiltonian parameters. When different initial conditions converge to different states, we compare the energies of the resulting configurations in order to identify the true ground state. AFM $\bm \chi$SDW at half-filling \[sec3\] ========================================== We first explore the magnetic orders of the frustrated square lattice Hubbard model at half-filling with $\bar n=1$. Note that our study goes beyond the exploration of previous related works [@Yu2010; @Zou2012; @Yamada2013; @Mizusaki2006; @Nevidomskyy2008; @Tocchio2008], which did not consider the possibility of the $\chi$SDW phase. Phase diagram and phase transitions ----------------------------------- The magnetic phase diagram at half-filling is presented in Fig. \[fig2\] as a function of the Hubbard repulsion strength $U$ and the $t'/t$ ratio. Hereafter, we take the strength of the NN hopping $t$ as the energy unit. The possible ground states of the phase diagram are the PM metal, the single-${{\bf Q}}$ Néel phase, the double-${{\bf Q}}$ coplanar SVC phase, and the triple-${{\bf Q}}$ $\chi$SDW phase. The boundaries between these phases are determined by comparing the state energies of different phases. The solid and dashed lines denote, respectively, a first-order and a continuous phase transition between two neighboring phases. When $U$ is not sufficiently strong to stabilize any long-range magnetic order, the ground state is a PM metal with all $M_\eta = 0$. Magnetic ordering generally emerges only above a critical $U_c$, and the precise structure of the magnetic order is governed by the $t'/t$ ratio. At small $t'/t$, the nesting at wave vector ${{\bf Q}}_3$ is much stronger than that at ${{\bf Q}}_{1,2}$, and leads to the Néel phase. 
Note that an infinitesimally weak $U$ is capable of driving a Néel phase at $t'/t=0$, due to the divergence of the spin susceptibility at ${{\bf Q}}_3$. As the $t'/t$ ratio increases, a higher critical $U_c$ is required. The situation is different at large $t'/t$, where the nesting is stronger at ${{\bf Q}}_{1,2}$. Our numerical calculations reveal that the ground state is the double-${{\bf Q}}$ coplanar SVC phase, which takes advantage of the nestings at both wave vectors. The ${{\bf Q}}_1$ or ${{\bf Q}}_2$ stripe phases lie higher in energy. The critical $U_c$ decreases as $t'/t$ increases, and is expected to reach zero in the limit of $t'/t \rightarrow \infty$ (*i.e.*, $t/t' = 0$), where the susceptibility $\chi_0({{\bf Q}}_{1,2})$ diverges. Remarkably, in a sizable regime around $t'/t=1/\sqrt{2}$, where the nestings at ${{\bf Q}}_{1,2}$ and ${{\bf Q}}_3$ are comparable in strength, the AFM $\chi$SDW emerges as the ground state. All three ordered moments develop simultaneously to lower the state energy, giving rise to a nonzero spin chirality which opens up the prospect of an anomalous Hall response. [![(Color online) Phase diagram of the half-filled square lattice Hubbard model with frustration. Solid and dashed lines denote, respectively, phase boundaries of first-order and continuous transitions.[]{data-label="fig2"}](fig2.eps "fig:"){width="3.4in"}]{} All magnetic ground states presented in Fig. \[fig2\] are insulating and, as $U$ decreases, they give way to a PM metal phase via first-order transitions. To investigate the phase transitions between the different magnetic ground states, we consider $U=6t$, and monitor the evolution of the four distinct magnetic phases of interest as a function of the $t'/t$ ratio, with a focus on the transitions. The results are summarized in Fig. \[fig3\]. The ordered magnetic moments for the four magnetic phases are plotted in Fig. \[fig3\](a), and their energies per site are compared in Fig. \[fig3\](b). 
The amplitudes of the ordered moments in the four phases show only a weak dependence on the frustration ratio $t'/t$, since $U$ is quite strong in the regime displayed in Fig. \[fig3\]. [![(Color online) (a) The ordered magnetic moments and (b) the state energy per site of the four distinct magnetic phases as a function of the $t'/t$ ratio at half-filling when the strength of the Coulomb repulsion is $U=6t$.[]{data-label="fig3"}](fig3.eps "fig:"){width="3.2in"}]{} We find that the Néel phase is lower in energy than the stripe and coplanar phases at small $t'/t$. Note that although the ordered moment has a slightly larger amplitude in the stripe phase (see for instance $M_1$ in the ${{\bf Q}}_1$-ordered stripe phase) than in the coplanar phase ($\sqrt{M^2_1+M^2_2}$), the stripe phase is always higher in energy than the coplanar SVC phase, as the latter utilizes both ordering wave vectors. As $t'/t$ increases, the in-plane magnetic orders $M_1$ and $M_2$ become favored, and emerge at $t'/t \simeq 0.69$, where the out-of-plane magnetic order $M_3$ starts to decrease, thus reflecting the competition between the FS nestings at wave vectors ${{\bf Q}}_{1,2}$ and ${{\bf Q}}_3$. A further increase of $t'/t$ leads to the complete suppression of $M_3$ at $t'/t\simeq 0.78$, and the ground state converges to the double-${{\bf Q}}$ coplanar SVC phase. The AFM $\chi$SDW phase is obtained in the regime of $0.69\lesssim t'/t \lesssim 0.78$, where all three magnetic orders coexist. When approaching its phase boundaries, the energy of the AFM $\chi$SDW gradually becomes equal to that of the Néel and coplanar SVC phases, as shown in Fig. \[fig3\](b). The transitions between the three magnetic phases in the phase diagram are therefore continuous, with the boundaries denoted by the dashed lines in Fig. \[fig2\]. Band dispersion and Dirac cones ------------------------------- The band dispersions are readily obtained by diagonalizing the matrix Hamiltonian of Eq. . 
The spectrum of each magnetic phase has four pairs of twofold degenerate bands, with two of them above and the other two below the Fermi energy. For a direct comparison of the band dispersions arising in the four distinct magnetic phases, we consider the parameter values $(U,t')=(6,0.74)t$, and depict the resulting energy dispersions in Fig. \[fig4\]. For these parameter values, the AFM $\chi$SDW constitutes the ground state, while the remaining three can be viewed as metastable phases corresponding to local minima of the free energy. To obtain the displayed band structures for the metastable phases, we use the magnetic moments obtained self-consistently upon restricting the evaluation to the vicinity of the respective local minima. Remarkably, in the AFM $\chi$SDW and coplanar SVC phases, we find two pairs of Dirac cones at the ${{\bf k}}$-space point N=$({\pi\over2},{\pi\over2})$ as shown in Figs. \[fig4\](c) and \[fig4\](d). The appearance of the Dirac cones in the coplanar SVC and AFM $\chi$SDW phases is much more evident in a rotated basis, with the unitary transformation matrix explicitly given in Appendix \[app2\]. In the rotated basis, the Hamiltonian $\hat{H}_{{\bf k}}$ of Eq.  becomes $$\tilde{H}_{{\bf k}}= \left(\begin{array}{cc} H_+ & H_w \\ H^\dagger_w & H_- \end{array}\right), \label{matrix2}$$ with the ${{\bf k}}$-dependent $2\times 2$ matrices $H_\pm$ and $H_w$ correspondingly given by $$\begin{aligned} H_{\pm}= & \pm \big(\Gamma_0 \sigma_0 -\Gamma' \cos \theta \sigma_z \big) \nonumber \\ &+\Gamma_x \sin\theta \sin\varphi \sigma_x -\Gamma_y \sin\theta \cos\varphi \sigma_y, \label{Hpm} \\ H_{w}= & -\Gamma' \sin\theta \sigma_0 -\Gamma_x \big( \cos\varphi -i\cos\theta \sin\varphi \big) \sigma_y \nonumber\\ & +\Gamma_y \big( \sin\varphi +i\cos\theta \cos\varphi \big) \sigma_x. 
\label{Hw}\end{aligned}$$ Here $\Gamma_0=\frac{1}{2} U M$, $\Gamma_x =2t\cos k_x$, $\Gamma_y= 2t\cos k_y$, $\Gamma' =4t' \cos k_x \cos k_y$, and $M$, $\theta$, $\varphi$ are, respectively, the radial distance, polar angle, and azimuthal angle in spherical coordinates for the magnetic moment on the first sublattice site ${\bm m}_1=(M_1,M_2,M_3)$. At the N point, where $\cos k_x=\cos k_y=0$, $\Gamma_x=\Gamma_y=\Gamma'=0$, the Hamiltonian matrix $\tilde{H}_{{\bf k}}$ becomes diagonal with two twofold degenerate eigenvalues $\pm\Gamma_0$. We expand the Hamiltonian around the N point and set ${{\bf k}}=({\pi\over 2},{\pi\over 2})+{{\bf p}}$. At leading order in ${{\bf p}}$, we find: $$\begin{aligned} H_\pm=&\pm\Gamma_0\sigma_0+2t\sin\theta\big(\sin\varphi p_x\sigma_x-\cos\varphi p_y\sigma_y\big),\label{Hpm2} \\ H_w=&-2t\big(\cos\varphi-i\cos\theta\sin\varphi\big)p_x\sigma_y \nonumber\\ &+2t\big(\sin\varphi+i\cos\theta \cos\varphi\big) p_y \sigma_x.\label{Hw2}\end{aligned}$$ Note that neither the diagonal nor the off-diagonal blocks contain $\Gamma'$, as this appears only at quadratic order in ${{\bf p}}$. [![(Color online) Band dispersions in the four magnetically-ordered phases, (a) Néel, (b) stripe, (c) coplanar SVC, and (d) $\chi$SDW, along the high-symmetry path in the RBZ at half-filling and $(U,t')=(6, 0.74)t$, where the $\chi$SDW is the ground state and the other three are metastable states. $\Gamma=(0,0)$, $\text{Y}=(0,{\pi\over2})$, $\text{X}=({\pi\over2},0)$, and $\text{N}=({\pi\over 2}, {\pi\over 2})$. The blue and red lines denote, respectively, the band dispersion with the $d_{xy}$-wave NNN hopping $\tilde{t}=0$ and $\tilde{t}=0.1t$. The numbers in (d) denote the Chern numbers of each pair of bands with $\tilde{t}=0.1t$. 
Insets in (b) and (c) are enlargements of the band dispersions near the N point.[]{data-label="fig4"}](fig4.eps "fig:"){width="3.4in"}]{} In the AFM $\chi$SDW phase, where $\sin\theta$ is nonzero and $\varphi ={\pi\over 4}$, the diagonal blocks $H_\pm$ give rise to two isotropic Dirac cones at energies $\pm \Gamma_0$, with the same chirality and a velocity $\sqrt{2}t\sin\theta$. Moreover, the off-diagonal blocks $H_w$ and $H^\dagger_w$ are linear in ${{\bf p}}$ and vanish at the N point, thus preserving the Dirac cones. In a similar fashion, the spectrum of the coplanar SVC phase with $\theta={\pi\over2}$ and $\varphi={\pi\over4}$ also exhibits two Dirac cones at N, as shown in Fig. \[fig4\](c). In the single-${{\bf Q}}$ stripe phase (take the ${{\bf Q}}_1$-ordered phase, for example), $\theta={\pi\over 2}$ and $\varphi=0$. The diagonal blocks thus become $H_\pm=\pm\Gamma_0\sigma_0-2t p_y\sigma_y$, which do not lift the twofold degeneracy along the N-Y RBZ line, where $p_y=0$ (*i.e.*, $k_y = {\pi\over 2}$), and fail to generate Dirac cones in the spectrum. Taking into account the off-diagonal blocks $H_w=-2tp_x\sigma_y$, the dispersions of the four pairs of bands in the stripe phase are given by $\pm \sqrt{\Gamma^2_0 +4t^2 p^2_x } \pm 2t p_y$, producing the spectrum shown in Fig. \[fig4\](b). In the ${{\bf Q}}_3$-ordered Néel phase, $\theta=0$, and thus the diagonal and off-diagonal blocks read $H_\pm=\pm\Gamma_0\sigma_0$, $H_w=2te^{i\varphi}(p_y\sigma_x+ip_x\sigma_y)$, therefore leading to the band dispersion $\pm\sqrt{\Gamma^2_0+4t^2(p_x\pm p_y)^2}$. The bands are fourfold degenerate along both the N-X and N-Y directions, where either $p_x$ or $p_y$ equals zero, as shown in Fig. \[fig4\](a). 
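The cone structure at N can also be verified directly on the unrotated $4\times 4$ matrix. The sketch below uses illustrative $\chi$SDW-type moments of the $(M,M,M')$ form (not self-consistent values) and checks that the spectrum at N consists of the two twofold degenerate levels $\pm\Gamma_0$, and that the linear splitting away from N reproduces the velocity $\sqrt{2}t\sin\theta$:

```python
import numpy as np

t, tp, U = 1.0, 0.74, 6.0
M1, M2, M3 = 0.6, 0.6, 0.45          # illustrative chiSDW-type moments (M, M, M')
Mtot = np.sqrt(M1**2 + M2**2 + M3**2)
Gamma0 = 0.5*U*Mtot
sin_th = np.sqrt(M1**2 + M2**2)/Mtot

def eps(kx, ky):
    return -2*t*(np.cos(kx) + np.cos(ky)) - 4*tp*np.cos(kx)*np.cos(ky)

def H_k(kx, ky):
    e = [eps(kx + qx, ky + qy)
         for qx, qy in [(0, 0), (np.pi, 0), (0, np.pi), (np.pi, np.pi)]]
    u = U/2
    return np.array([[e[0], -u*M1, 1j*u*M2, -u*M3],
                     [-u*M1, e[1], u*M3, -1j*u*M2],
                     [-1j*u*M2, u*M3, e[2], -u*M1],
                     [-u*M3, 1j*u*M2, -u*M1, e[3]]])

# At N the spectrum is two twofold degenerate levels +-Gamma0 ...
wN = np.linalg.eigvalsh(H_k(np.pi/2, np.pi/2))
assert np.allclose(wN, [-Gamma0, -Gamma0, Gamma0, Gamma0])

# ... which split linearly with momentum p, with cone velocity sqrt(2) t sin(theta)
p = 1e-3
w = np.linalg.eigvalsh(H_k(np.pi/2 + p, np.pi/2))
v_num = (w[3] - w[2]) / (2*p)        # splitting of the upper cone
assert abs(v_num - np.sqrt(2)*t*sin_th) < 5e-3
```

The off-diagonal blocks $H_w$ only shift these eigenvalues at second order in ${{\bf p}}$, which is why the numerically extracted velocity agrees with the value read off from Eq.  to within the stated tolerance.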
Strain-induced gap opening and band topology -------------------------------------------- To lift the additional degeneracy at the N point and thus gap out the Dirac cones, we consider the presence of strain applied along the diagonal direction, which breaks the fourfold rotational symmetry. This symmetry breaking introduces a $d_{xy}$-wave hopping $\tilde{t}$ on the NNN bonds. When $\tilde{t}=t'$ ($\tilde{t}=t'=t/2$), the lattice effectively becomes square-to-triangular [@Hayami2015] (triangular [@Martin2008]), and the spectrum becomes fully-gapped in the AFM $\chi$SDW phase. The $d_{xy}$-wave NNN hopping modifies the tight-binding dispersion $\epsilon_{{\bf k}}$ in Eq. , since its effect is reflected in the addition of the term $4\tilde{t}\sin k_x\sin k_y$. Consequently, the term $\Gamma'$ of Eqs. - acquires an extra contribution of $-4\tilde{t}\sin k_x\sin k_y$, which is of order unity near the N point, and is therefore no longer negligible when the Hamiltonian is expanded in terms of ${{\bf p}}$. Indeed, the diagonal blocks $H_\pm$ of Eq.  and the off-diagonal blocks $H_w$ of Eq.  receive, respectively, the additional terms $\pm 4\tilde{t} \cos\theta \sigma_z$ and $-4\tilde{t}\sin\theta\sigma_0$. As a result, in the triple-${{\bf Q}}$ chiral phase, where $\cos\theta\neq 0$, two mass terms with opposite masses $\pm 4\tilde{t}\cos\theta$ are introduced to the two Dirac cones stemming from the diagonal blocks. The Dirac cones are therefore gapped out, as shown in Fig. \[fig4\](d), where the AFM $\chi$SDW phase is obtained self-consistently in the presence of the $d_{xy}$-wave NNN hopping $\tilde{t}$. On the other hand, the diagonal blocks $H_\pm$ describing the Dirac cones are unaltered in the double-${{\bf Q}}$ coplanar SVC phase with $\theta={\pi\over 2}$. 
Specifically, although the term $-4\tilde{t}\sigma_0$ is added to the off-diagonal blocks, it merely shifts the energies of the two Dirac cones from the initial $\pm\Gamma_0$ to the final $\pm\sqrt{\Gamma^2_0+16\tilde{t}^2}$ values. The full gap that is induced in the $\chi$SDW phase due to the addition of the $d_{xy}$-wave NNN hopping generally gives rise to a nonzero total Chern number ${\cal C}$, and a concomitant QAHE at zero temperature with Hall conductance: $$\sigma_{xy}=-{e^2\over h}{\cal C}\quad{\rm where}\quad{\cal C}=\sum_n^{\rm occupied}\int\frac{d^2k}{2\pi}\Omega_{n{{\bf k}}}\,.$$ In the above, $\Omega_{n{{\bf k}}}=i\varepsilon_{zij}\big<\partial_{k_i}\bm{u}_{n{{\bf k}}}\big|\partial_{k_j}\bm{u}_{n{{\bf k}}}\big>$ ($i,j=x,y$) denotes the Berry curvature [@Niu] of the $n$-th occupied quasiparticle reconstructed band with eigenvector $\big|\bm{u}_{n{{\bf k}}}\big>$. Note that we employed the Einstein summation convention, introduced the totally-antisymmetric symbol $\varepsilon_{zij}$, and converted the summation to an integration by considering the continuum $N\rightarrow\infty$ limit. We find that the strain-induced mass term has an opposite sign on the two Dirac cones at the N point. This implies that the pairs of degenerate bands, from top to bottom, carry the Chern numbers $\{+1,-1,-1,+1\}$, as indicated in Fig. \[fig4\](d). Although the dc Hall conductance is zero at half-filling, as the two pairs of occupied bands have opposite Chern numbers, the Chern bands shown in Fig. \[fig4\](d) still allow for an anomalous Hall effect in the ac regime and the emergence of anomalous optical dichroism [@Jiang2015], which becomes accessible via interband transitions. Strain-induced Chern insulator at 3/4 filling \[sec4\] ====================================================== [![(Color online) Phase diagram of the frustrated square lattice Hubbard model at 3/4 filling. 
Additional third NN hopping with $t''=0.4t'$ is introduced in the model to stabilize the AFM $\chi$SDW as a ground state in the phase diagram.[]{data-label="fig5"}](fig5.eps "fig:"){width="3.4in"}]{} In order to obtain an AFM Chern insulator which exhibits the QAHE at zero temperature, it is required to modify the occupation of the bands, so that an odd number of them is filled. We find that this becomes possible when doping the system away from half-filling to 3/4 filling. We note that the AFM $\chi$SDW phase does not emerge as a ground state of the 3/4-filled Hubbard model when only the NN and NNN hoppings are considered. However, adding a third NN hopping $t''$ stabilizes it. Therefore, in this section, we introduce $t''$ and fix it to the value $t''=0.4 t'$. The third NN hopping $t''$ modifies the tight-binding dispersion $\epsilon_{{\bf k}}$ by introducing an extra term $-\Gamma''$, with $\Gamma''=2t''(\cos 2k_x+\cos 2k_y)$. Consequently, the diagonal blocks $H_\pm$ receive an additional term $-\Gamma'' \sigma_0$ in the rotated basis, which shifts the energies of the Dirac cones but does not lift any degeneracy. In Fig. \[fig5\] we present the $(U,t'/t)$ phase diagram at 3/4 filling. All phase boundaries correspond to critical lines of first-order transitions. The ground state is a PM metal at small $U$, while upon its increase, a coplanar SVC phase is established. In particular, the coplanar SVC phase directly succeeds the PM phase in the window $0.6< t'/t<0.95$, while for $t'/t < 0.6$ and $t'/t > 0.95$, an intermediate stripe phase appears. The triple-${{\bf Q}}$ $\chi$SDW phase is stabilized in the upper-right corner of the phase diagram, where $t'\sim t$ and $U$ is large. Notably, due to the fourfold degeneracy at the N point, the magnetically ordered phases are metallic at 3/4 filling. In particular, the double-${{\bf Q}}$ coplanar phase and the triple-${{\bf Q}}$ $\chi$SDW phase give rise to a Dirac semimetal. See also Fig. \[fig6\]. 
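Chern numbers such as those quoted in Figs. \[fig4\](d) and \[fig6\] can be obtained numerically with the lattice (Fukui-Hatsugai) method. The sketch below is an illustration under stated assumptions rather than a reproduction of the paper's calculation: it builds the $8\times 8$ Bloch Hamiltonian of the $2\times 2$ magnetic unit cell in a periodic gauge (so that it is exactly periodic over the RBZ torus), takes the half-filled parameters of Fig. \[fig4\](d) with representative $(M,M,M')$ moments in place of the self-consistent ones, and computes the Chern number of each twofold degenerate pair of bands from non-Abelian link determinants, which remain well-defined despite the exact two-sector degeneracy:

```python
import numpy as np

t, tp, tt, U = 1.0, 0.74, 0.1, 6.0       # t, t', strain t~, Hubbard U (Fig. 4(d) values)
M1, M2, M3 = 0.6, 0.6, 0.45              # representative (M, M, M') moments, not self-consistent
cos_th = M3 / np.sqrt(M1**2 + M2**2 + M3**2)

sites = [(0, 0), (1, 0), (1, 1), (0, 1)]  # 2x2 magnetic cell; sublattices carry m_1..m_4
hops = {(1, 0): -t, (-1, 0): -t, (0, 1): -t, (0, -1): -t,
        (1, 1): -tp - tt, (-1, -1): -tp - tt,     # d_xy-wave strain term on the NNN bonds
        (1, -1): -tp + tt, (-1, 1): -tp + tt}
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], complex)

def bloch_H(kx, ky):
    """8x8 Bloch Hamiltonian (4 sublattices x spin) in the periodic gauge,
    exactly periodic under k -> k + (pi, 0) and k -> k + (0, pi)."""
    H = np.zeros((8, 8), complex)
    for a, (xa, ya) in enumerate(sites):
        m = [M1*(-1)**xa, M2*(-1)**ya, M3*(-1)**(xa + ya)]   # on-site moment m_i
        H[2*a:2*a+2, 2*a:2*a+2] -= 0.5*U*(m[0]*sx + m[1]*sy + m[2]*sz)
        for (dx, dy), amp in hops.items():
            x, y = xa + dx, ya + dy
            b = sites.index((x % 2, y % 2))
            Rx, Ry = x - x % 2, y - y % 2                    # magnetic-cell offset
            H[2*a:2*a+2, 2*b:2*b+2] += amp*np.exp(1j*(kx*Rx + ky*Ry))*np.eye(2)
    return H

def pair_cherns(nk=24):
    """Fukui-Hatsugai Chern number of each twofold degenerate pair of bands."""
    ks = np.linspace(-np.pi/2, np.pi/2, nk, endpoint=False)  # RBZ grid
    V = np.array([[np.linalg.eigh(bloch_H(kx, ky))[1] for ky in ks] for kx in ks])
    C = []
    for p in range(4):
        bands = slice(2*p, 2*p + 2)
        F = 0.0
        for i in range(nk):
            for j in range(nk):
                u = [V[i, j][:, bands], V[(i+1) % nk, j][:, bands],
                     V[(i+1) % nk, (j+1) % nk][:, bands], V[i, (j+1) % nk][:, bands]]
                link = 1.0
                for a in range(4):
                    link *= np.linalg.det(u[a].conj().T @ u[(a+1) % 4])
                F += np.angle(link)
        C.append(F / (2*np.pi))
    return C

C = pair_cherns()
```

The same routine also exposes the strain-induced gap at N, which for small $\tilde{t}$ scales with the mass $4\tilde{t}\cos\theta$ discussed in the previous section.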
As put forward in the previous section, the Dirac cones at the N point can be gapped out by applying strain along the diagonal direction, since this brings about a $d_{xy}$-wave NNN hopping. The band dispersion of the $\chi$SDW phase with $\tilde{t}=0.1t$ is shown in Fig. \[fig6\]. Indeed, the Dirac cones are gapped out and all four pairs of bands are well-separated from each other. The pairs of bands, from top to bottom, carry the Chern numbers $\{+1, -1, -1, +1\}$. Remarkably, since only the lower three pairs of bands are occupied at 3/4 filling, the total Chern number is $-1$, giving rise to a topologically-nontrivial Chern insulator which features a QAHE at zero temperature. [![(Color online) Band dispersion in the $\chi$SDW phase along a high-symmetry path in the RBZ at 3/4 filling and $(U, t')=(9,0.84)t$. The blue and red lines denote the band dispersion with the $d_{xy}$-wave NNN hopping $\tilde{t}=0$ and $\tilde{t}=0.1 t$, respectively. The numbers in the figure denote the Chern numbers of each pair of bands with $\tilde{t}=0.1t$.[]{data-label="fig6"}](fig6.eps "fig:"){width="3.4in"}]{} Conclusions \[sec5\] ==================== In this work, we investigate the magnetic orders and phase transitions which arise in the square lattice Hubbard model with frustration, where the allowed ordering wave vectors are restricted to ${{\bf Q}}_1 =(\pi,0)$, ${{\bf Q}}_2 =(0,\pi)$, and ${{\bf Q}}_3 =(\pi,\pi)$. To study the ground state properties, the Hartree-Fock approximation is applied to the local Coulomb interaction. When the strength of the interaction is sufficiently strong, the ground state at half-filling is a ${{\bf Q}}_3$-ordered Néel phase at small $t'/t$, and a double-${{\bf Q}}$ coplanar SVC phase at large $t'/t$. Interestingly, an AFM $\chi$SDW phase is stabilized in a wide regime of the phase diagram near $t'/t=1/\sqrt{2}$, where the nestings of the Fermi surface at wave vectors ${{\bf Q}}_{1,2}$ and ${{\bf Q}}_3$ are comparable in strength. 
Here, the three magnetic orders coexist so as to simultaneously exploit the nestings at the three wave vectors, and give rise to a noncoplanar magnetic phase with a nonzero spin chirality. The phase transitions from the Néel to the $\chi$SDW and from the $\chi$SDW to the coplanar SVC phase are continuous. We find that the energy spectrum of the $\chi$SDW phase contains two pairs of Dirac cones, which are located at the Brillouin zone point N=$({\pi \over 2},{\pi \over 2})$ and possess the same chirality. We show that applying strain along the diagonal direction introduces a $d_{xy}$-wave next-nearest neighbor hopping, which in turn gaps out the two Dirac cones with opposite masses. This gives rise to four pairs of well-separated topologically-nontrivial bands. Each pair of bands carries a Chern number $\pm1$, and the total Chern number is zero at half-filling. Finally, we show that doping the system to 3/4 filling with a nonzero third nearest neighbor hopping stabilizes the $\chi$SDW phase and leads to a topologically-nontrivial Chern insulator which features the quantum anomalous Hall effect. Acknowledgments =============== This work is supported by the CAS Key Research Program of Frontier Sciences (Grant No. QYZDB-SSW-SYS012), the Strategic Priority Research Program of CAS (Grant No. XDB28000000), and the National Natural Science Foundation of China (Grants No. 11747601 and No. 11974362). Numerical calculations were performed on the HPC Cluster of ITP-CAS. 
Landau-type analysis of the magnetic free energy {#app1} ================================================ To perform a Landau-type magnetic instability analysis for the model under consideration, we obtain the free energy density up to quartic order in terms of the magnetic order parameters ${\bm M}_{1,2,3}$ with ordering wave vectors ${{\bf Q}}_{1,2,3}$, which reads: $$\begin{aligned} F & =\alpha \left( \bm{M}_1^2 +\bm{M}_2^2 +\bm{M}_3^2 \right)+\beta\left(\bm{M}_1^4+ \bm{M}_2^4 +\bm{M}_3^4 \right)\nonumber \\ & +\gamma \left( \bm{M}_1^2 \bm{M}_2^2 +\bm{M}_1^2 \bm{M}_3^2 +\bm{M}_2^2 \bm{M}_3^2 \right) \nonumber \\ & +\eta \left[ \left( \bm{M}_1 \cdot \bm{M}_2 \right)^2 +\left( \bm{M}_1 \cdot \bm{M}_3 \right)^2 +\left( \bm{M}_2 \cdot \bm{M}_3 \right)^2 \right] \nonumber \\ & +\delta \alpha \bm{M}_3^2 +\delta \beta \bm{M}_3^4 +\delta \gamma \bm{M}_1^2 \bm{M}_2^2 +\delta \eta \left( \bm{M}_1 \cdot \bm{M}_2 \right)^{2}. \nonumber\end{aligned}$$ The anisotropic terms in the last line of the above expression are present to account for the fact that ${{\bf Q}}_3$ is inequivalent to ${{\bf Q}}_{1,2}$ given the square lattice symmetry. The Landau expansion contains all symmetry-allowed terms, and therefore enables the discussion of all possible magnetic instabilities, independently of the underlying microscopic mechanism. We proceed by parametrizing the magnetic order parameters by $\bm{M}_1 = M \sin \theta \cos \varphi \hat{\bm n}_1$, $\bm{M}_2 = M \sin \theta \sin \varphi \hat{\bm n}_2$, and $\bm{M}_3 = M \cos \theta \hat{\bm n}_3$, with $M= (\bm{M}^2_1 +\bm{M}^2_2 +\bm{M}^2_3 )^{1/2}$, and the angles $\theta \in [0, \pi]$, $\varphi \in [0, 2\pi)$. The unit vectors along the direction of $\bm{M}_{1,2,3}$ are denoted as $\hat{\bm n}_{1,2,3}$, with the angle $\phi_{ij}$ between $\hat{\bm n}_i$ and $\hat{\bm n}_j$ given by $\cos \phi_{ij} =\hat{\bm n}_i \cdot \hat{\bm n}_j$. 
In terms of these newly defined parameters, the Landau free energy becomes $$\begin{aligned} F& =(\alpha +\delta\alpha \cos^2 \theta ) M^2 \nonumber \\ &+\left[ \beta \sin^4 \theta \left( \cos^4 \varphi +\sin^4 \varphi \right) +\left( \beta +\delta \beta \right) \cos^4\theta\right] M^4 \nonumber \\ &+{1\over 4}\left[ (\gamma+\delta \gamma) \sin^4 \theta \sin^2 2\varphi + \gamma \sin^2 2\theta \right] M^4 \nonumber \\ &+{1\over 4} \big[ (\eta +\delta \eta)\sin^4 \theta\sin^2 2\varphi \cos^2 \phi_{12} \nonumber\\ &+\eta \sin^2 2\theta (\cos^2 \varphi \cos^2 \phi_{13} + \sin^2 \varphi \cos^2\phi_{23})\big] M^4. \nonumber\end{aligned}$$ Minimizing the Landau free energy with respect to the angles $\phi_{12}$, $\phi_{13}$, and $\phi_{23}$, yields the following three stationarity conditions $$\begin{aligned} \sin^4 \theta \sin^2(2\varphi)\sin(2\phi_{12})&=0,\nonumber \\ \sin^2 (2\theta)\cos^2\varphi\sin(2\phi_{13})&=0,\nonumber \\ \sin^2 (2\theta)\sin^2\varphi\sin(2\phi_{23})&=0.\nonumber\end{aligned}$$ For generic angles $\theta$ and $\varphi$, the above equations are satisfied only when the $\phi$ angles are multiples of ${\pi \over 2}$. The latter implies that the ordered magnetic moments, when developed, are pairwise parallel or perpendicular to each other.\ Transformation of the Hamiltonian\[app2\] ========================================= For generality, we consider here the case with the third NN hopping $t''$ and the strain-induced $d_{xy}$-wave NNN hopping $\tilde{t}$. The tight-binding dispersion entering the Hamiltonian matrix of Eq.  becomes: $\epsilon_{{\bf k}}= -\Gamma_x -\Gamma_y -\Gamma' -\Gamma'', $ with $\Gamma_x =2t\cos k_x$, $\Gamma_y =2t\cos k_y$, $\Gamma' =4t' \cos k_x \cos k_y - 4\tilde{t} \sin k_x \sin k_y$, and $\Gamma'' = 2t''(\cos 2k_x + \cos 2k_y)$. 
Parametrizing the magnetic order parameters by $M_1 = M \sin \theta \cos \varphi$, $M_2 = M \sin \theta \sin \varphi$, and $M_3 = M \cos \theta$, with $M= (M^2_1 +M^2_2 +M^2_3 )^{1/2}$, the Hamiltonian matrix can be rewritten as $H=H_t+H_m$ with: $$\begin{aligned} H_{t}=&-\Gamma_x \tau_0 \otimes \sigma_z -\Gamma_y \tau_z \otimes \sigma_0 -\Gamma' \tau_z \otimes \sigma_z -\Gamma'' \tau_0 \otimes \sigma_0,\nonumber\\ H_{m}=&-\Gamma_0 \sin \theta \cos \varphi \tau_0 \otimes \sigma_x -\Gamma_0 \sin \theta \sin \varphi \tau_y \otimes \sigma_z \nonumber\\ &+\Gamma_0 \cos \theta \tau_y \otimes \sigma_y,\nonumber\end{aligned}$$ where $\Gamma_0 = {1\over 2} U M$, and $\sigma_{0,x,y,z}$, $\tau_{0,x,y,z}$ are the $2\times2$ identity matrices and Pauli matrices. Under a unitary transformation of $U=U_1e^{i{\varphi\over2}\tau_z\otimes\sigma_z} e^{i {\theta \over 2}\tau_z\otimes\sigma_y}U_2,$ with $$U_1= {1\over 2}\left(\begin{array}{cccc} 1 & -i & -i & -1\\ -i & 1 & -1 & -i\\ i & 1 & -1 & i\\ 1 & i & i & -1 \end{array}\right),\, U_2= {1\over \sqrt{2}}\left(\begin{array}{cccc} 0 & 0 & 1 & -1\\ 1 & 1 & 0 & 0\\ 1 & -1 & 0 & 0\\ 0 & 0 & 1 & 1 \end{array}\right)\nonumber$$ the magnetic Hamiltonian becomes diagonal, *i.e.*, $H_m= \Gamma_0 \tau_z \otimes \sigma_0$, and the kinetic Hamiltonian $H_t$ becomes $$\begin{aligned} H_{t} & =\Gamma_x \sin \theta \sin \varphi \tau_0 \otimes \sigma_x -\Gamma_y \sin \theta \cos \varphi \tau_0 \otimes \sigma_y\nonumber\\ & -\Gamma' \cos \theta \tau_z \otimes \sigma_z -\Gamma' \sin \theta \tau_x \otimes \sigma_0 \nonumber\\ & -\Gamma_x \cos \theta \sin \varphi \tau_y \otimes \sigma_y -\Gamma_x \cos \varphi \tau_x \otimes \sigma_y \nonumber\\ & -\Gamma_y \cos \theta \cos \varphi \tau_y \otimes \sigma_x +\Gamma_y \sin \varphi \tau_x \otimes \sigma_x-\Gamma'' \tau_0 \otimes \sigma_0. \nonumber\end{aligned}$$ Sorting into $2\times 2$ diagonal and off-diagonal blocks, the Hamiltonian matrix in Eq.  
reads in the rotated basis, $$\tilde{H} = \left(\begin{array}{cc} H_+ & H_w \\ H^\dagger_w & H_- \end{array}\right), \nonumber$$ with the wave-vector dependent $2\times 2$ blocks $H_\pm$ and $H_w$ given by, respectively, $$\begin{aligned} H_{\pm}= & \pm \left(\Gamma_0 \sigma_0 -\Gamma' \cos \theta \sigma_z \right) -\Gamma'' \sigma_0 \nonumber \\ &+\Gamma_x \sin\theta \sin\varphi \sigma_x -\Gamma_y \sin\theta \cos\varphi \sigma_y, \nonumber \\ H_{w}= & -\Gamma' \sin\theta \sigma_0 -\Gamma_x \left( \cos\varphi -i\cos\theta \sin\varphi \right) \sigma_y \nonumber\\ & \hspace{1.85cm} +\Gamma_y \left( \sin\varphi +i\cos\theta \cos\varphi \right) \sigma_x. \nonumber\end{aligned}$$ [99]{} X. G. Wen, F. Wilczek, and A. Zee, Chiral spin states and superconductivity, Phys. Rev. B [**39**]{}, 11413 (1989). M. V. Berry, Quantal phase factors accompanying adiabatic changes, Proc. R. Soc. London A [**392**]{}, 45 (1984). R. Karplus and J. M. Luttinger, Hall Effect in Ferromagnetics, Phys. Rev. [**95**]{}, 1154 (1954). J. Ye, Y. B. Kim, A. J. Millis, B. I. Shraiman, P. Majumdar, and Z. Tesanovic, Berry Phase Theory of the Anomalous Hall Effect: Application to Colossal Magnetoresistance Manganites, Phys. Rev. Lett. [**83**]{}, 3737 (1999). Y. Taguchi, Y. Oohara, H. Yoshizawa, N. Nagaosa, and Y. Tokura, Spin Chirality, Berry Phase, and Anomalous Hall Effect in a Frustrated Ferromagnet, Science [**291**]{}, 2573 (2001). R. Shindou and N. Nagaosa, Orbital Ferromagnetism and Anomalous Hall Effect in Antiferromagnets on the Distorted fcc Lattice, Phys. Rev. Lett. [**87**]{}, 116801 (2001). K. Ohgushi, S. Murakami, and N. Nagaosa, Spin anisotropy and quantum Hall effect in the kagomé lattice: Chiral spin state based on a ferromagnet, Phys. Rev. B [**62**]{}, R6065(R) (2000). N. Nagaosa, J. Sinova, S. Onoda, A. H. MacDonald, and N. P. Ong, Anomalous Hall effect, Rev. Mod. Phys. [**82**]{}, 1539 (2010). D. Xiao, M.-C. Chang, and Q. Niu, Berry phase effects on electronic properties, Rev. 
--- abstract: 'Considering the class $G$ of $g$-natural metrics on the tangent bundle of a Riemannian manifold $(M, g)$, it is shown that the flatness of $g$ is a necessary and sufficient condition for the weak symmetry (recurrence or pseudo-symmetry) of $G$. In particular, the case of the weakly symmetric Sasakian lift metric studied by Bejan and Crasmareanu and that of the recurrent or pseudo-symmetric Sasakian lift metric studied by Binh and Tamássy are recovered.' author: - 'E. Peyghan' title: 'Weak symmetry of a class of $g$-natural metrics on tangent bundles' --- **Keywords:** $g$-natural metric, Weakly symmetric Riemannian manifold. Introduction ============ In [@TB], Tamássy and Binh introduced the notion of a weakly symmetric Riemannian manifold, which generalizes recurrent and pseudo-symmetric manifolds. They then studied the weak symmetries of Einstein and Sasakian manifolds in [@TB1]. Recent studies show that the notion of weak symmetry plays an important role in Riemannian geometry [@BC]-[@UL]. In [@BC], Bejan and Crasmareanu considered the Sasakian lift $g^s$ to the tangent bundle of a Riemannian manifold $(M, g)$ and proved that the weak symmetry of $g^s$ is equivalent to the flatness of $g$ and $g^s$. Indeed, they extended the result obtained by Tamássy and Binh [@BT] for recurrent and pseudo-symmetric manifolds. Moreover, in [@BC] the authors posed the following open problem: to extend this result to other classes of metrics on tangent bundles. To solve this open problem, we consider the metric $G=ag^s+bg^h+cg^v$ ($a$, $b$, $c$ constants), which belongs to the class of $g$-natural metrics introduced by Abbassi and Sarih in [@AS], and we show that $(TM, G)$ is a weakly symmetric (recurrent or pseudo-symmetric) Riemannian manifold if and only if $(M, g)$ is flat. Preliminaries ============= Let $(M, g)$ be a Riemannian manifold with dimension $n\geq3$ and $TM$ its tangent bundle.
If we consider a coordinate system $x=(x^i)$ on the base manifold $M$ and corresponding coordinates $(x, y)=(x^i, y^i)$ on $TM$, then the metric $g$ has the local coefficients $g_{ij}=g(\frac{\partial}{\partial x^i}, \frac{\partial}{\partial x^j})$. Let $\nabla$ be the Riemannian connection on $M$ with coefficients $\Gamma_{ij}^{k}$, where $1\leq i, j, k\leq n$. The Riemannian curvature tensor is defined by $$R( X, Y)Z=\nabla_{X}\nabla_{Y}Z-\nabla_{Y}\nabla_{X}Z-\nabla_{[X, Y]}Z , \ \ \ \forall X, Y, Z \in {\cal X}(M).$$ Let $\pi$ be the natural projection from $TM$ to $M$. Consider $\pi_{*}: TTM \longmapsto TM$ and put $$ker{\pi_{*}}_{v}=\{z\in TTM|{\pi_{*}}_{v}( z)=0 \},\ \ \ \ \ \ \ \ \forall v\in TM.$$ Then the vertical vector bundle over $TM$ is defined by $VTM = \bigcup_{_{v\in TM}}ker{\pi_{*}}_{v}$. A *horizontal distribution* on $TM$ is a complementary distribution $HTM$ for $VTM$ on $TTM$. It is clear that $HTM$ is a horizontal vector bundle. By definition, we have the decomposition $$\label{dd} TTM =VTM\oplus HTM.$$ Using the induced coordinates $( x^{i}, y^{i})$ on $TM$, we can choose a local field of frames $\{\frac{\delta}{\delta x^i}, \frac{\partial} {\partial y^{i}}\}$ adapted to the above decomposition, namely $\frac{\delta}{\delta x^i}\in {\cal X}( HTM)$ and $\frac{\partial}{\partial y^{i}}\in {\cal X}( VTM)$ are sections of the horizontal and vertical sub-bundles $HTM$ and $VTM$, respectively, defined by $$\frac{\delta}{\delta x^i}=\frac{\partial}{\partial x^{i}}-y^a\Gamma_{ai}^j\frac{\partial} {\partial y^{j}}.\label{decomp2}$$ According to (\[dd\]), every vector field $\widetilde{X}$ on $TM$ has the decomposition $\widetilde{X}=h\widetilde{X}+v\widetilde{X}$. Moreover, a vector field $X=X^i\frac{\partial}{\partial x^i}$ on $M$ has the vertical lift $X^v=X^i\frac{\partial}{\partial y^i}$ and the horizontal lift $X^h=X^i\frac{\delta}{\delta x^i}$.
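As a concrete illustration (not part of the original text), the Christoffel symbols and the horizontal frame (\[decomp2\]) can be checked symbolically. The sketch below assumes, purely as an example, the metric $g = dx^2 + e^{2x}\,dy^2$ on $M=\mathbb{R}^2$; the helper names `christoffel` and `horizontal_correction` are illustrative, not taken from any cited reference.

```python
import sympy as sp

# Base coordinates (x, y) on M = R^2 and fibre coordinates (u, v) on TM.
x, y, u, v = sp.symbols('x y u v')
coords = [x, y]
fibre = [u, v]

# Example metric g = dx^2 + e^(2x) dy^2 (chosen only for illustration).
g = sp.Matrix([[1, 0], [0, sp.exp(2 * x)]])
ginv = g.inv()

def christoffel(g, ginv, coords):
    """Gamma^k_{ij} = (1/2) g^{kl} (d_i g_{lj} + d_j g_{li} - d_l g_{ij})."""
    n = len(coords)
    return [[[sp.simplify(sum(sp.Rational(1, 2) * ginv[k, l] *
                              (sp.diff(g[l, j], coords[i]) +
                               sp.diff(g[l, i], coords[j]) -
                               sp.diff(g[i, j], coords[l]))
                              for l in range(n)))
              for j in range(n)] for i in range(n)] for k in range(n)]

Gamma = christoffel(g, ginv, coords)

def horizontal_correction(i):
    """Coefficients of -y^a Gamma^j_{ai} on d/dy^j in delta/delta x^i."""
    return [sp.simplify(-sum(fibre[a] * Gamma[j][a][i] for a in range(2)))
            for j in range(2)]

# For this metric Gamma^2_{12} = 1 and Gamma^1_{22} = -e^(2x),
# so delta/delta x^1 = d/dx - v d/dv in the adapted frame.
assert sp.simplify(Gamma[1][0][1] - 1) == 0
assert sp.simplify(Gamma[0][1][1] + sp.exp(2 * x)) == 0
assert sp.simplify(horizontal_correction(0)[1] + v) == 0
```

For a flat metric the correction terms vanish and $\frac{\delta}{\delta x^i}=\frac{\partial}{\partial x^i}$, consistent with the decomposition (\[dd\]).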
A class of $g$-natural metrics on tangent bundle ------------------------------------------------ Let $g$ be a Riemannian metric on a manifold $M$. The Sasaki lift $g^s$ of $g$ is defined by $$\left\{ \begin{array}{cc} &\hspace{-17mm}g^s_{(x, y)}(X^h, Y^h)=g_x(X, Y),\ \ \ g^s_{(x, y)}(X^h, Y^v)=0,\\\\ &\hspace{-5mm}g^s_{(x, y)}(X^v, Y^h)=0,\ \ \ \ \ \ \ \ \ \ \ \ \ g^s_{(x, y)}(X^v, Y^v)=g_x(X, Y). \end{array} \right.$$ Also, the horizontal lift $g^h$ and the vertical lift $g^v$ of $g$ are defined as follows [@AS] $$\left\{ \begin{array}{cc} &\hspace{-5mm}g^h_{(x, y)}(X^h, Y^h)=0,\ \ \ \ \ \ \ \ \ \ \ \ g^h_{(x, y)}(X^h, Y^v)=g_x(X, Y),\\\\ &\hspace{-16mm}g^h_{(x, y)}(X^v, Y^h)=g_x(X, Y),\ \ \ g^h_{(x, y)}(X^v, Y^v)=0,\\ \end{array} \right.$$ $$\left\{ \begin{array}{cc} &\hspace{-5mm}g^v_{(x, y)}(X^h, Y^h)=g_x(X, Y),\ \ \ g^v_{(x, y)}(X^h, Y^v)=0,\\\\ &\hspace{-5mm}g^v_{(x, y)}(X^v, Y^h)=0,\ \ \ \ \ \ \ \ \ \ \ \ \ g^v_{(x, y)}(X^v, Y^v)=0. \end{array} \right.$$ Now we consider the metric $G=ag^s+bg^h+cg^v$, where $a$, $b$, $c$ are constants. Indeed we can present $G$ as follows $$\label{metr} \left\{ \begin{array}{cc} &\hspace{-7mm}G_{(x, y)}(X^h, Y^h)=(a+c)g_x(X, Y),\ \ \ G_{(x, y)}(X^h, Y^v)=bg_x(X, Y),\\\\ &\hspace{-5mm}G_{(x, y)}(X^v, Y^h)=bg_x(X, Y),\ \ \ \ \ \ \ \ \ \ \ \ \ G_{(x, y)}(X^v, Y^v)=ag_x(X, Y). \end{array} \right.$$ This metric is a class of $g$-natural metrics and it is Riemannian if and only if $a>0$ and $\alpha=a(a+c)-b^2>0$ hold. Also, for $a=1$ and $b=c=0$, the metric $G$ reduces to the Sasaki lift metric (See [@AS]). Let $\widetilde{\nabla}$ be the Levi-Civita connection of $G$. 
Then it is characterized by [@AS] $$\left\{ \begin{array}{cc} &\hspace{-12mm}(\widetilde{\nabla}_{X^h}Y^h)|_t=(\nabla_XY)^h|_t+(A(t, X, Y))^h+(B(t, X, Y))^v,\\ &\hspace{-12mm}(\widetilde{\nabla}_{X^h}Y^v)|_t=(\nabla_XY)^v|_t+(C(t, X, Y))^h+(D(t, X, Y))^v,\\ &\hspace{-6mm}(\widetilde{\nabla}_{X^v}Y^h)|_t=(C(t, Y, X))^h+(D(t, Y, X))^v,\ \ (\widetilde{\nabla}_{X^v}Y^v)|_t=0, \end{array} \right.$$ for all vector fields $X$, $Y$ on $M$, where $A$, $B$, $C$, $D$ are the tensor fields of type (1, 2) on $M$ defined by $$\begin{aligned} A(t, X, Y)&=-\frac{ab}{2\alpha}[R(X, t)Y+R(Y, t)X],\\ B(t, X, Y)&=\frac{b^2}{\alpha}R(X, t)Y-\frac{a(a+c)}{2\alpha}R(X, Y)t,\\ C(t, X, Y)&=-\frac{a^2}{2\alpha}R(Y, t)X,\ \ \ D(t, X, Y)=\frac{ab}{2\alpha}R(Y,t)X,\end{aligned}$$ where $t$ is thought of as a vector field on $M$ with local expression $t=y^i\frac{\partial}{\partial x^i}$. Moreover, $t^v=y^i\frac{\partial}{\partial y^i}$ is the Liouville vector field and $t^h=y^i\frac{\delta}{\delta x^i}$ is the geodesic spray of the metric $g$. \[TH\] Let $(M, g)$ be a Riemannian manifold and $G$ be the Riemannian metric given by (\[metr\]) on $TM$.
Then the Riemannian curvature tensor $\widetilde{R}$ of $(TM, G)$ is completely determined by $$\begin{aligned} \widetilde{R}(X^v, Y^v)Z^v&=0,\\ \widetilde{R}(X^v, Y^v)Z^h&=\{\frac{a^2}{\alpha}R(X, Y)Z+\frac{a^2}{4\alpha^2}[R(X, t)R(Y, t)Z-R(Y, t)R(X, t)Z]\}^h\\ &\ \ +\{\frac{ab}{\alpha}R(Y, X)Z+\frac{a^3b}{4\alpha^2}[R(Y, t)R(X, t)Z-R(X, t)R(Y, t)Z]\}^v,\end{aligned}$$ $$\begin{aligned} \widetilde{R}(X^h, Y^v)Z^v&=\{\frac{a^2}{2\alpha}R(Z, Y)X-\frac{a^4}{4\alpha^2}R(Y, t)R(Z, t)X\}^h\\ &\ \ +\{\frac{a^3b}{4\alpha^2}R(Y, t)R(Z, t)X-\frac{ab}{2\alpha}R(Z, Y)X\}^v,\end{aligned}$$ $$\begin{aligned} \widetilde{R}(X^h, Y^h)Z^v&=\{\frac{a^2}{2\alpha}[(\nabla_YR)(Z, t)X-(\nabla_XR)(Z, t)Y]\\ &\ \ +\frac{a^3b}{4\alpha^2}[R(X, t)R(Z, t)Y-R(Y, t)R(Z, t)X]\}^h\\ &\ \ +\{R(X, Y)Z+\frac{ab}{2\alpha}[(\nabla_XR)(Z, t)Y-(\nabla_YR)(Z, t)X]\\ &\ \ +\frac{a^2}{4\alpha}[R(X, R(Z, t)Y)t-R(Y, R(Z, t)X)t]\\ &\ \ +\frac{a^2b^2}{4\alpha^2}[R(Y, t)R(Z, t)X-R(X, t)R(Z, t)Y]\}^v,\end{aligned}$$ $$\begin{aligned} \widetilde{R}(X^h, Y^h)Z^h&=\{R(X, Y)Z+\frac{ab}{2\alpha}[2(\nabla_tR)(X, Y)Z-(\nabla_ZR)(X, Y)t]\\ &\ \ +\frac{a^2}{4\alpha}[R(R(Y, Z)t, t)X-R(R(X, Z)t, t)Y]\\ &\ \ +\frac{a^2b^2}{4\alpha^2}[R(X, t)R(Y, t)Z+R(X, t)R(Z, t)Y\\ &\ \ -R(Y, t)R(X, t)Z-R(Y, t)R(Z, t)X]-\frac{a^2}{2\alpha}R(R(X, Y)t, t)Z\}^h\\ &\ \ +\{-\frac{b^2}{\alpha}(\nabla_t R)(X, Y)Z+\frac{a(a+c)}{2\alpha}(\nabla_ZR)(X, Y)t\\ &\ \ +\frac{ab^3}{2\alpha^2}[R(R(Y, t)Z, X)t-R(X, t)R(Z, t)Y\\ &\ \ -R(R(X, t)Z, Y)t+R(Y, t)R(Z, t)X]\\ &\ \ +\frac{a^2b(a+c)}{4\alpha^2}[R(X, R(Y, t)Z)t+R(X, R(Z, t)Y)t\\ &\ \ -R(Y, R(X, t)Z)t-R(Y, R(Z, t)X)t\\ &\ \ -R(R(Y, Z)t, t)X+R(R(X, Z)t, t)Y]\\ &\ \ +\frac{ab}{2\alpha}R(R(X, Y)t, t)Z\}^v,\end{aligned}$$ $$\begin{aligned} \widetilde{R}(X^h, Y^v)Z^h&=\{-\frac{a^2}{2\alpha}(\nabla_XR)(Y, t)Z+\frac{a^3b}{4\alpha^2}[R(X, t)R(Y, t)Z\\ &\ \ -R(Y, t)R(Z, t)X-R(Y, t)R(X, t)Z]+\frac{ab}{2\alpha}[R(X, Y)Z\\ &\ \ +R(Z, Y)X]\}^h+\{\frac{ab}{2\alpha}(\nabla_XR)(Y, t)Z-\frac{a^2b^2}{4\alpha^2}[R(X, 
t)R(Y, t)Z\\ &\ \ -R(Y, t)R(Z, t)X-R(Y, t)R(X, t)Z]+\frac{a^2}{4\alpha}R(X, R(Y, t)Z)t\\ &\ \ -\frac{b^2}{\alpha}R(X, Y)Z+\frac{a(a+c)}{2\alpha}R(X, Z)Y\}^v.\end{aligned}$$ The proof is a special case of the proof of Proposition 2.9 of [@AS]. Weakly symmetric Riemannian manifold $(TM, G)$ ============================================== Let $(M, g)$ be a Riemannian manifold. If there exist 1-forms $\alpha_1$, $\alpha_2$, $\alpha_3$, $\alpha_4$ and a vector field $A$ on $M$ such that $$\begin{aligned} (\nabla_WR)(X, Y, Z)&=\alpha_1(W)R(X, Y)Z+\alpha_2(X)R(W, Y)Z+\alpha_3(Y)R(X, W)Z\nonumber\\ &\ \ \ +\alpha_4(Z)R(X, Y)W+g(R(X, Y)Z, W)A,\end{aligned}$$ then $(M, g)$ is called weakly symmetric. In [@DB], the authors proved that the relations $\alpha_2=\alpha_3=\alpha_4$ and $A=(\alpha_2)^\sharp$ are necessary conditions for the weak symmetry of $g$. Thus a weakly symmetric manifold $(M, g)$ is characterized by: $$\begin{aligned} \label{weak} (\nabla_WR)(X, Y, Z)&=\alpha_1(W)R(X, Y)Z+\alpha_2(X)R(W, Y)Z+\alpha_2(Y)R(X, W)Z\nonumber\\ &\ \ \ +\alpha_2(Z)R(X, Y)W+g(R(X, Y)Z, W)(\alpha_2)^\sharp.\end{aligned}$$ Let $(M, g)$ be a Riemannian manifold and $TM$ be its tangent bundle with Riemannian metric $G$ given by (\[metr\]). Then $(TM, G)$ is weakly symmetric if and only if $(M, g)$ is flat. In this case, $(TM, G)$ is also flat. If $R=0$, then from Theorem \[TH\], we conclude that $\widetilde{R}=0$ and so we have (\[weak\]). Now let $(TM, G)$ be a weakly symmetric manifold. Then we have (\[weak\]) for all vector fields $\widetilde{X}$, $\widetilde{Y}$, $\widetilde{Z}$ and $\widetilde{W}$ on $TM$.
If we suppose $\widetilde{X}=X^h$, $\widetilde{Y}=Y^v$, $\widetilde{Z}=Z^v$ and $\widetilde{W}=W^h$, then the right side of (\[weak\]) has the following vertical part $$\begin{aligned} \label{5} &v\{\alpha_1(W^h)\widetilde{R}(X^h, Y^v)Z^v+\alpha_2(X^h)\widetilde{R}(W^h, Y^v)Z^v\nonumber\\ &+\alpha_2(Y^v)\widetilde{R}(X^h, W^h)Z^v+\alpha_2(Z^v)\widetilde{R}(X^h, Y^v)W^h\nonumber\\ &+G(\widetilde{R}(X^h, Y^v)Z^v, W^h)(\alpha_2)^\sharp\}=\{\alpha_1(W^h)[\frac{a^3b}{4\alpha^2}R(Y, t)R(Z, t)X\nonumber\\ &-\frac{ab}{2\alpha}R(Z, Y)X]+\alpha_2(X^h)[\frac{a^3b}{4\alpha^2}R(Y, t)R(Z, t)W-\frac{ab}{2\alpha}R(Z, Y)W]\nonumber\\ &+\alpha_2(Y^v)\{R(X, W)Z+\frac{ab}{2\alpha}[(\nabla_XR)(Z, t)W-(\nabla_WR)(Z, t)X]\nonumber\\ &+\frac{a^2}{4\alpha}[R(X, R(Z, t)W)t-R(W, R(Z, t)X)t]+\frac{a^2b^2}{4\alpha^2}[R(W, t)R(Z, t)X\nonumber\\ &-R(X, t)R(Z, t)W]\}+\alpha_2(Z^v)\{\frac{ab}{2\alpha}(\nabla_XR)(Y, t)W-\frac{b^2}{\alpha}R(X, Y)W\nonumber\\ &-\frac{a^2b^2}{4\alpha^2}[R(X, t)R(Y, t)W-R(Y, t)R(W, t)X-R(Y, t)R(X, t)W]\nonumber\\ &+\frac{a^2}{4\alpha}R(X, R(Y, t)W)t+\frac{a(a+c)}{2\alpha}R(X, W)Y\}\nonumber\\ &+(a+c)[-\frac{a^4}{4\alpha^2}g(R(Y, t)R(Z, t)X, W)+\frac{a^2}{2\alpha}g(R(Z, Y)X, W)]{\alpha_2}^\sharp\nonumber\\ &+b[\frac{a^3b}{4\alpha^2}g(R(Y, t)R(Z, t)X, W)-\frac{ab}{2\alpha}g(R(Z, Y)X, W)]{\alpha_2}^\sharp\}^v.\end{aligned}$$ Now, we compute the vertical part of the left side of (\[weak\]).
Using Theorem \[TH\] we obtain $$\begin{aligned} \label{1} v(\widetilde{\nabla}_{W^h}\widetilde{R}(X^h, Y^v)Z^v)&=\{\frac{a^3b}{4\alpha^2}\nabla_W(R(Y, t)R(Z, t)X)-\frac{ab}{2\alpha}\nabla_WR(Z, Y)X\nonumber\\ &\hspace{-2.4cm}+\frac{a^5(a+c)}{8\alpha^3}R(W, R(Y, t)R(Z, t)X)t+\frac{a^3(a+c)}{4\alpha^2}R(W, R(Z, Y)X)t\nonumber\\ &\hspace{-2.4cm}-\frac{a^4b^2}{4\alpha^3}R(W, t)R(Y, t)R(Z, t)X-\frac{a^2b^2}{2\alpha^2}R(W, t)R(Z, Y)X\nonumber\\ &\hspace{-2.4cm}+\frac{a^4b^2}{8\alpha^3}R(R(Y, t)R(Z, t)X, t)W-\frac{a^2b^2}{4\alpha^2}R(R(Z, Y)X, t)W\}^v,\end{aligned}$$ $$\begin{aligned} \label{2} v(\widetilde{R}(\widetilde{\nabla}_{W^h}X^h, Y^v)Z^v)&=\{\frac{a^3b}{4\alpha^2}R(Y, t)R(Z, t)\nabla_WX-\frac{ab}{2\alpha}R(Z, Y)\nabla_WX\nonumber\\ &\hspace{-2.4cm}+\frac{a^3}{4\alpha^2}R(Y, t)R(Z, t)A(t, W, X)-\frac{ab}{2\alpha}R(Z, Y)A(t, W, X)\}^v,\end{aligned}$$ $$\begin{aligned} \label{3} v(\widetilde{R}(X^h, \widetilde{\nabla}_{W^h}Y^v)Z^v)&=\{\frac{ab}{2\alpha}[(\nabla_XR)(Z, t)C(t, W, Y)-(\nabla_{C(t, W, Y)}R)(Z, t)X]\nonumber\\ &\hspace{-2.4cm}+\frac{a^3b}{4\alpha^2}R(\nabla_WY, t)R(Z, t)X-\frac{ab}{2\alpha}R(Z, \nabla_WY)X+R(X, C(t, W, Y))Z\nonumber\\ &\hspace{-2.4cm}+\frac{a^2b^2}{4\alpha^2}[R(C(t, W, Y), t)R(Z, t)X-R(X, t)R(Z, t)C(t, W, Y)]\nonumber\\ &\hspace{-2.4cm}+\frac{a^2}{4\alpha}[R(X, R(Z, t)C(t, W, Y))t-R(C(t, W, Y), R(Z, t)X)t]\nonumber\\ &\hspace{-2.4cm}+\frac{a^3b}{4\alpha^2}R(D(t, W, Y), t)R(Z, t)X-\frac{ab}{2\alpha}R(Z, D(t, W, Y))X\}^v,\end{aligned}$$ $$\begin{aligned} \label{4} v(\widetilde{R}(X^h, Y^v)\widetilde{\nabla}_{W^h}Z^v)&=\{\frac{a^3b}{4\alpha^2}R(Y, t)R(\nabla_WZ, t)X-\frac{ab}{2\alpha}R(\nabla_WZ, Y)X\nonumber\\ &\hspace{-2.4cm}+\frac{a^3b}{4\alpha^2}R(Y, t)R(D(t, W, Z), t)X+\frac{ab}{2\alpha}(\nabla_XR)(Y, t)C(t, W, Z)\nonumber\\ &\hspace{-2.4cm}-\frac{ab}{2\alpha}R(D(t, W, Z), Y)X+\frac{a^2}{4\alpha}R(X, R(Y, t)C(t, W, Z))t\nonumber\\ &\hspace{-2.4cm}-\frac{b^2}{\alpha}R(X, Y)C(t, W, Z)+\frac{a(a+c)}{2\alpha}R(X, C(t, W, Z))Y\nonumber\\
&\hspace{-2.4cm}-\frac{a^2b^2}{4\alpha^2}[R(X, t)R(Y, t)C(t, W, Z)-R(Y, t)R(C(t, W, Z), t)X\nonumber\\ &\hspace{-2.4cm}-R(Y, t)R(X, t)C(t, W, Z)]\}^v.\end{aligned}$$ Using (\[1\])-(\[4\]) we have $v((\widetilde{\nabla}_{W^h}\widetilde{R})(X^h, Y^v)Z^v)$. Now we consider the following $$\label{IM} v((\widetilde{\nabla}_{W^h}\widetilde{R})(X^h, Y^v)Z^v)=(\ref{5}).$$ Setting $Y=t$ in the above equation implies $$\begin{aligned} \label{3.6} &-\frac{ab}{2\alpha}\alpha_1(W^h)R(Z, t)X-\frac{ab}{2\alpha}\alpha_2(X^h)R(Z, t)W+\alpha_2(t^v)\{R(X, W)Z\nonumber\\ &+\frac{ab}{2\alpha}[(\nabla_XR)(Z, t)W-(\nabla_WR)(Z, t)X]+\frac{a^2}{4\alpha}[R(X, R(Z, t)W)t\nonumber\\ &-R(W, R(Z, t)X)t]+\frac{a^2b^2}{4\alpha^2}[R(W, t)R(Z, t)X-R(X, t)R(Z, t)W]\}\nonumber\\ &+\alpha_2(Z^v)[\frac{a(a+c)}{2\alpha}R(X, W)t-\frac{b^2}{\alpha}R(X, t)W]+\frac{a}{2}g(R(Z, t)X, W)\alpha_2^\sharp\nonumber\\ &=\frac{a^2b^2}{2\alpha^2}R(W, t)R(Z, t)X-\frac{a^3(a+c)}{4\alpha^2}R(W, R(Z, t)X)t\nonumber\\ &-\frac{ab}{2\alpha}(\nabla_WR)(Z, t)X-\frac{a^2b^2}{4\alpha^2}R(R(Z, t)X, t)W+\frac{ab}{2\alpha}R(Z, t)A(t, W, X)\nonumber\\ &+\frac{ab}{2\alpha}R(D(t, W, Z), t)X-\frac{a(a+c)}{2\alpha}R(X, C(t, W, Z))t\nonumber\\ &+\frac{b^2}{\alpha}R(X, t)C(t, W, Z).\end{aligned}$$ Similarly, setting $Z=t$ in (\[IM\]) gives us $$\begin{aligned} &-\frac{ab}{2\alpha}\alpha_1(W^h)R(t, Y)X-\frac{ab}{2\alpha}\alpha_2(X^h)R(t, Y)W+\alpha_2(Y^v)R(X, W)t\nonumber\\ &+\alpha_2(t^v)\{\frac{ab}{2\alpha}(\nabla_XR)(Y, t)W-\frac{a^2b^2}{4\alpha^2}[R(X, t)R(Y, t)W-R(Y, t)R(W, t)X\nonumber\\ &-R(Y, t)R(X, t)W]+\frac{a^2}{4\alpha}R(X, R(Y, t)W)t+\frac{a(a+c)}{2\alpha}R(X, W)Y\nonumber\\ &-\frac{b^2}{\alpha}R(X, Y)W\}+\frac{a}{2}g(R(t, Y)X, W)\alpha_2^\sharp\nonumber\\ &=\frac{a^2b^2}{2\alpha^2}R(W, t)R(t, Y)X-\frac{a^3(a+c)}{4\alpha^2}R(W, R(t, Y)X)t\nonumber\\ &-\frac{ab}{2\alpha}(\nabla_WR)(t, Y)X-\frac{a^2b^2}{4\alpha^2}R(R(t, Y)X, t)W+\frac{ab}{2\alpha}R(t, Y)A(t, W, X)\nonumber\\ &+\frac{ab}{2\alpha}R(t, D(t, W, Y))X-R(X, C(t, W,
Y))t.\end{aligned}$$ Setting $Y=Z$ in the above equation and then summing it with (\[3.6\]) yields $$\begin{aligned} \label{N} & \alpha_2(Z^v)\{-\frac{b^2}{\alpha}R(X, t)W+\frac{a(a+c)+2\alpha}{2\alpha}R(X, W)t\}\nonumber\\ &+\alpha_2(t^v)\{R(X,W)Z+ \frac{ab}{2\alpha}[2(\nabla_XR)(Z, t)W-(\nabla_WR)(Z, t)X]\nonumber\\ &+\frac{a^2}{4\alpha}[2R(X, R(Z, t)W)t-R(W, R(Z, t)X)t]\nonumber\\ &+\frac{a^2b^2}{4\alpha^2}[R(W, t)R(Z, t)X-2R(X, t)R(Z, t)W+R(Z, t)R(W, t)X\nonumber\\ &+R(Z, t)R(X, t)W]-\frac{b^2}{\alpha}R(X, Z)W+\frac{a(a+c)}{2\alpha}R(X,W)Z\} \nonumber\\ &=\frac{b^2}{\alpha}R(X,t)C(t, W, Z)-\frac{a(a+c)+2\alpha}{2\alpha}R(X,C(t,W,Z))t.\end{aligned}$$ Putting $Z=t$ in the above equation we get $$\label{m} \alpha_2(t^v)[\frac{b^2}{\alpha}R(t, X)W+\frac{a(a+c)+2\alpha}{2\alpha}R(X, W)t]=0.$$ Interchanging $X$ and $W$ in the above equation yields $$\alpha_2(t^v)\{\frac{b^2}{\alpha}R(t, W)X+\frac{a(a+c)+2\alpha}{2\alpha}R(W, X)t\}=0.$$ By subtracting the above equation from (\[m\]) we get $$\alpha_2(t^v)\{\frac{b^2}{\alpha}[R(t, X)W+R(W, t)X]+\frac{a(a+c)+2\alpha}{\alpha}R(X, W)t\}=0.$$ Using the Bianchi identity in the above relation gives us $$\alpha_2(t^v)R(X, W)t=0.$$ If $\alpha_2(t^v)\neq 0$, then $R(X, W)t=0$ for all $X$, $W$ and $t$, which gives $R=0$ and we have the conclusion.
Now let $\alpha_2(t^v)=0$; then from (\[N\]) we have $$\begin{aligned} \label{m1} &\alpha_2(Z^v)\{\frac{a(a+c)+2\alpha}{2\alpha}R(X, W)t-\frac{b^2}{\alpha}R(X, t)W\}\nonumber\\ &=-\frac{a(a+c)+2\alpha}{2\alpha}R(X, C(t, W, Z))t+\frac{b^2}{\alpha}R(X, t)C(t, W, Z).\end{aligned}$$ Interchanging $X$ and $W$ in the above equation we obtain $$\begin{aligned} &\alpha_2(Z^v)\{\frac{a(a+c)+2\alpha}{2\alpha}R(W, X)t-\frac{b^2}{\alpha}R(W, t)X\}\nonumber\\ &=-\frac{a(a+c)+2\alpha}{2\alpha}R(W, C(t, X, Z))t+\frac{b^2}{\alpha}R(W, t)C(t, X, Z).\end{aligned}$$ By subtracting the above equation from (\[m1\]) we get $$\begin{aligned} &\alpha_2(Z^v)\{\frac{a(a+c)+2\alpha}{\alpha}R(X, W)t+\frac{b^2}{\alpha}[R(W, t)X-R(X, t)W]\}\nonumber\\ &=\frac{a(a+c)+2\alpha}{2\alpha}[R(W, C(t, X, Z))t-R(X, C(t, W, Z))t]\nonumber\\ &+\frac{b^2}{\alpha}[R(X, t)C(t, W, Z)-R(W, t)C(t, X, Z)].\end{aligned}$$ Using the Bianchi identity in the above relation we conclude $$\begin{aligned} 3\alpha_2(Z^v)R(X, W)t&=\frac{a(a+c)+2\alpha}{2\alpha}[R(W, C(t, X, Z))t\nonumber\\ &\hspace{-20mm}-R(X, C(t, W, Z))t]+\frac{b^2}{\alpha}[R(X, t)C(t, W, Z)\nonumber\\ &\hspace{-20mm}-R(W, t)C(t, X, Z)].\end{aligned}$$ Now, taking the $g$-product with $t$, we obtain $$\begin{aligned} &0=\frac{a(a+c)+2\alpha}{2\alpha}[g(R(W, C(t, X, Z))t, t)-g(R(X, C(t, W, Z))t, t)]\nonumber\\ &+\frac{b^2}{\alpha}[g(R(X, t)C(t, W, Z), t)-g(R(W, t)C(t, X, Z), t)]\nonumber\\ &=-\frac{a^2b^2}{2\alpha^2}[g(R(X, t)R(Z, t)W, t)-g(R(W, t)R(Z, t)X, t)].\end{aligned}$$ Setting $W=t$ and $Z=X$ in the above equation we obtain $$0=\frac{a^2b^2}{2\alpha^2}g(R(X, t)t, R(X, t)t).$$ If $b\neq0$, then the above equation yields $R(X, t)t=0$, which gives us $R=0$. Now let $b=0$.
In this case we have $\alpha=a(a+c)$ and then from (\[m1\]) we get $$\alpha_2(Z^v)R(X, W)t=-R(X, C(t, W, Z))t.$$ Setting $W=X$ in the above equation gives us $$R(X, C(t, X, Z))t=0,$$ and consequently $$\frac{a^2}{2\alpha}R(X, R(t, Z)X)t=0.$$ Taking the $g$-product with $Z$ we have $g(R(X, R(t, Z)X)t, Z)=0$, which gives us $$R(t, Z)X=0.$$ Thus $R=0$, i.e. $(M, g)$ is flat. For $\alpha_2=0$, respectively $\alpha_1=2\alpha_2$, in (\[weak\]) we get the following result. Let $(M, g)$ be a Riemannian manifold and $TM$ be its tangent bundle with Riemannian metric $G$ given by (\[metr\]). Then $(TM, G)$ is recurrent or pseudo-symmetric or locally symmetric if and only if $(M, g)$ is flat. Hence $(TM, G)$ is flat. Considering $a=1$ and $b=c=0$ in (\[metr\]) we get the results of [@BC], [@BT] for the Sasakian lift metric $g^s$. $(TM, g^s)$ is a weakly symmetric (recurrent or pseudo-symmetric or locally symmetric) Riemannian manifold if and only if the base manifold $(M, g)$ is flat. Hence $(TM, g^s)$ is flat. M. T. K. Abbassi and M. Sarih, [*On Riemannian $g$-natural metrics of the form $ag^s+bg^h+cg^v$ on the tangent bundle of a Riemannian manifold $(M, g)$*]{}, Mediterr. J. Math., [**2**]{}(2003), 19-43. C. L. Bejan and M. Crasmareanu, [*Weakly-symmetry of the Sasakian lifts on tangent bundles*]{}, Publ. Math. Debrecen, [**83**]{}(2013), 1-7. T. Q. Binh and L. Tamássy, [*On recurrence or pseudo-symmetry of the Sasakian metric on the tangent bundle of a Riemannian manifold*]{}, Indian J. Pure Appl. Math., [**35**]{}(2004), 555-560. U. C. De and S. Bandyopadhyay, [*On weakly symmetric Riemannian spaces*]{}, Publ. Math. Debrecen, [**54**]{}(1999), 377-381. M. Prvanović, [*On weakly symmetric Riemannian manifolds*]{}, Publ. Math. Debrecen, [**46**]{}(1995), 19-25. A. A. Shaikh and S. K. Jana, [*On weakly symmetric Riemannian manifolds*]{}, Publ. Math. Debrecen, [**71**]{}(2007), 27-41. H. Singh and Q. Khan, [*On special weakly symmetric Riemannian manifolds*]{}, Publ. Math. 
Debrecen, [**58**]{}(2001), 523-536. L. Tamássy and T. Q. Binh, [*On weakly symmetric and weakly projective symmetric Riemannian manifolds*]{}, Differential geometry and its applications (Eger, 1989), North-Holland, Amsterdam, Colloq. Math. Soc. János Bolyai, [**56**]{}(1992), 663-670. L. Tamássy and T. Q. Binh, [*On weak symmetries of Einstein and Sasakian manifolds*]{}, Tensor, [**53**]{}(1993), 140-148. S. A. Uysal and R. Ö. Laleoglu, [*On weakly symmetric spaces with semi-symmetric metric connection*]{}, Publ. Math. Debrecen, [**67**]{}(2005), 145-154. Esmaeil Peyghan\ Department of Mathematics, Faculty of Science\ Arak University\ Arak 38156-8-8349, Iran\ Email: epeyghan@gmail.com
--- abstract: 'Titanate compounds have been recognized as key materials for understanding the coupling of magnetism and orbitals in strongly correlated electron systems. In the perovskite Ti oxide $R$TiO$_3$ ($R=$trivalent rare-earth ions), which is a typical Mott-Hubbard insulator, the Ti $t_{2g}$ orbitals and spins in the $3d^1$ state couple to each other through the strong electron correlations, resulting in a rich variety of orbital-spin phases. One way of controlling the coupling is to change the tiltings of the TiO$_6$ octahedra (namely the GdFeO$_3$-type distortion) by varying the $R$ ions, through which the relative ratio of the electron bandwidth to the Coulomb interaction is controlled. With this control, these Mott insulators exhibit an antiferromagnetic-to-ferromagnetic (AFM-FM) phase transition, which has turned out to be a consequence of rich orbital physics in these materials. The origin and nature of orbital-spin structures of these Mott insulators have been intensively studied both experimentally and theoretically. When the Mott insulators are doped with carriers, the titanates show touchstone properties of the filling-controlled Mott transition. In this article, we first review the state of the art of studies aimed at understanding the physics contained in the properties of the perovskite titanates. Regarding the properties of the insulators, we focus on the following three topics: (1) the origin and nature of the ferromagnetism as well as the orbital ordering in the compounds with relatively small $R$ ions such as GdTiO$_3$ and YTiO$_3$, (2) the origin of the G-type antiferromagnetism and the orbital state in LaTiO$_3$, and (3) the orbital-spin structures in other AFM(G) compounds with relatively large $R$ ions ($R=$Ce, Pr, Nd and Sm). On the basis of these discussions, we then discuss the whole phase diagram together with mechanisms of the magnetic phase transition. 
On the basis of the microscopic understanding of the orbital-spin states, we show that the Ti $t_{2g}$ degeneracy is inherently lifted in the titanates, which allows the single-band descriptions of the ground-state and the low-energy electronic structures as a good starting point. Our analyses indicate that these compounds offer good touchstone materials described by the single-band Hubbard model on the cubic lattice. From this insight, we also reanalyze the hole-doped titanates $R_{1-x}A_x$TiO$_3$ ($A=$divalent alkaline-earth ions). Experimentally revealed filling-dependent and bandwidth-dependent properties and the critical behavior of the metal-insulator transitions are discussed in the light of theories based on the single-band Hubbard models.' author: - 'Masahito [Mochizuki]{}$^1$ and Masatoshi [Imada]{}$^{2,3}$' date: today title: Orbital Physics in the Perovskite Ti Oxides ---
--- abstract: 'The ability to trap and guide coherent electrons is gaining importance in fundamental as well as in applied physics. In this regard novel quantum devices are currently being developed that may operate under low vacuum conditions. Here we study the loss of electron coherence with increasing background gas pressure. Thereby, helium, hydrogen or nitrogen is optionally introduced into a biprism interferometer where the interference contrast is a measure for the coherence of the electrons. The results indicate a constant contrast that is not decreasing in the examined pressure range between and . Therefore, no decoherence was observed even under poor vacuum conditions. Due to scattering of the electron beam with background H$_2$-molecules a signal loss of was determined. The results may lower the vacuum requirements for novel quantum devices with free coherent electrons.' author: - 'G. Schütz$^1$, A. Rembold$^1$, A. Pooch$^1$, W.T. Chang$^2$ and A. Stibor' title: Electron matter wave interferences at high vacuum pressures --- Introduction ============ The coherent control and interference of free electrons has a long history. In the 1950s a major scientific breakthrough happened with the development of biprism electron interferometers [@Moellenstedt1955]. A variety of experiments for free electrons were accomplished in the following decades proving, e.g., the magnetic Aharonov-Bohm effect [@Moellenstedt1962], the Sagnac effect [@Hasselbach1993] or Hanbury Brown-Twiss anticorrelations [@Kiesel2002]. In recent years the coherent control of free electrons has again been gaining importance, both for fundamental research and from a technical point of view. 
This can be observed in decoherence studies of electrons near semiconducting surfaces [@Sonnentag2007] and developments such as a field emission source for free electron femtosecond pulses [@Hommelhoff2006a; @Hommelhoff2006b], surface-electrode chips [@Hammer2014] or a biprism electron interferometer with a single atom tip source [@Schuetz2014]. New quantum devices with coherent electrons are currently being implemented, such as a recently proposed noninvasive quantum electron microscope [@Putnam2009]. Due to the quantum Zeno effect it potentially reduces the electron radiation exposure during scanning of fragile biological samples by two orders of magnitude.\ \ Some of these applications may operate under low vacuum conditions or are technically less demanding to realize if an ultra-high vacuum (UHV) environment is not needed. The question arises of how background gases influence the properties of the matter wave. Important applications such as reflection high energy electron diffraction (RHEED) for in situ monitoring of the growth of thin films on surfaces [@Rijnders1997] or ultrafast electron diffraction (UED) at molecular beams for direct imaging of transient molecular structures [@Goodson2003; @Ihee2001] are known to work at high background gas pressures.\ However, it has not yet been studied how the coherence of an electron beam is influenced by increasing background gas pressure. The gradual loss of coherence through collisions with background gases was analyzed for neutral C$_{60}$-fullerenes in a near field Talbot-Lau matter wave interferometer [@Hornberger2003]. Thereby, decoherence was observed at a gas pressure of $\sim$ .\ \ In this work we study the possible loss of coherence for electron matter waves in a biprism interferometer in the presence of helium (He), nitrogen (N$_2$) or hydrogen (H$_2$) background gas. Our instrument is able to generate interferograms with high interference contrast in a pressure range between $10^{-9}$ and . 
It is only limited by the vacuum specifications of the multi-channel plate (MCP) detector. We will demonstrate that in this whole pressure region no decoherence can be observed. Setup ===== A scheme of our experimental setup is shown in Fig. \[figure1\] and is described in detail elsewhere [@Schuetz2014; @Hasselbach1998a; @Hasselbach2010a; @Maier1997]. In our approach the electron beam is field emitted by an etched tungsten tip that is covered with a monolayer of iridium and annealed to form a protrusion in the nanometer regime [@Kuo2006a; @Kuo2008]. The tip forming procedure is monitored by a MCP-detector that can be moved out of the optical axis. The electrons start with an emission energy of for the experiment with He and N$_2$ background gas and for the one with H$_2$. They coherently illuminate a wide gold-covered biprism fiber that divides and combines the electron matter waves [@Moellenstedt1955; @Schuetz2014]. It is set on a positive potential of for the experiments with He or N$_2$ background gas and for H$_2$. All beam alignment is performed by electrostatic deflection electrodes. Behind the biprism the partial waves overlap and interfere with each other. The interference pattern has a period of several and is oriented parallel to the biprism in the $x$-direction. It is magnified by an electrostatic quadrupole lens to fit the detector's resolution of about . The image rotator, a magnetic coil, allows the interferogram to be rotated to correct possible misalignments. The interference pattern is detected by a MCP-detector with a delay line anode. It is able to operate at background gas pressures up to about . Above that level the risk of destruction of the MCPs due to electric discharges is high.\ The whole interferometer has a length (tip to detector) of . It is constructed rigidly [@Hasselbach1988] to avoid mechanical vibrations and shielded against electromagnetic noise [@Rembold2014] by a copper and mu-metal tube. 
The inlet of different background gases is performed by an UHV gas nozzle. The interferometer is placed in a chamber where a minimum pressure of is achieved by an ion getter pump in combination with a cryopump. Measurements ============ Three experimental runs were performed introducing either He, N$_2$ or H$_2$ gas into the UHV chamber. Before each run, the tip was annealed to form a protrusion serving as the emission center. Therefore, possible variations in the tip apex size influencing the electron emission voltage and the maximal contrast cannot be ruled out. The measurement started at a background gas pressure of and electron interferograms were recorded with a signal acquisition time of for He or N$_2$ and for H$_2$. Then a further small amount of gas was introduced through the nozzle. At equilibrium another interference pattern was recorded. This process was repeated stepwise with increasing pressure for the different gases. Background images for the same integration time were acquired in the experiments with He or N$_2$ by switching off the field emission after each recording. This data was subtracted from the interferograms. For the H$_2$ measurement no background subtraction was necessary since the ion getter pump, as the main source of background, was turned off. The recorded images were analyzed by adding all counts in the pixel-rows of the detector along the $x$-direction of the interference pattern and dividing the sum by the number of pixel-columns. The distribution of the resulting average interference pattern $I(y)$ versus the $y$-direction normal to the interference stripes was fitted by the following expression to determine the mean intensity $I_0$, the pattern periodicity $d_s$ and the contrast $K$ [@Lenz1984] $$I(y) = I_0 \left[1 + K \cos\left(\frac{2 \pi y}{d_s}\right) \right]. \label{eq:fit}$$ In Fig. \[figure2\] the resulting contrast is plotted versus the background gas pressure for He and N$_2$. 
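The contrast extraction via Eq. (\[eq:fit\]) can be sketched in a few lines. This is a minimal illustration of fitting a column-averaged profile $I(y)$; the synthetic data, parameter values and helper names below are our assumptions, not values from the experiment (a phase-offset parameter is added since a real pattern is generally not centered on the detector):

```python
import numpy as np
from scipy.optimize import curve_fit

def fringe(y, I0, K, d_s, phi):
    # Eq. (1) of the text, extended by a phase offset phi.
    return I0 * (1.0 + K * np.cos(2.0 * np.pi * y / d_s + phi))

# Synthetic stand-in for a background-subtracted, column-averaged profile I(y);
# all numbers here are illustrative, not measured values.
rng = np.random.default_rng(0)
y = np.linspace(0.0, 10.0, 500)                # detector y-axis, arbitrary units
data = fringe(y, 100.0, 0.4, 2.5, 0.3) + rng.normal(0.0, 1.0, y.size)

# p0 needs a reasonable periodicity guess for the cosine fit to converge.
popt, _ = curve_fit(fringe, y, data, p0=[data.mean(), 0.5, 2.4, 0.0])
I0_fit, K_fit, ds_fit, _ = popt
print(f"contrast K = {abs(K_fit):.3f}, period d_s = {ds_fit:.3f}")
```

Repeating such a fit for each recorded pressure step yields the contrast-versus-pressure curves shown in Figs. \[figure2\] and \[figure3\].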
The contrast distribution is rather constant for the whole measured pressure range, indicating that the electrons remain coherent. At higher pressures the ion pump produced an increasing noise level of ions on the detector, leading to larger error bars and a higher dispersion of the data points. For hydrogen we were able to work without the ion getter pump and stabilized the pressure only with the cryopump. This significantly reduced the background counts resulting in a more stable signal. The pressure-dependent contrast for hydrogen is shown in Fig. \[figure3\]. It is constant around . The inset illustrates a typical interference pattern recorded at .\ Additionally, the mean intensity $I_0$ of the interference pattern on the detector was determined for hydrogen with increasing pressure. It represents the center line of the cosine-function in Eq. \[eq:fit\]. The data is plotted in Fig. \[figure4\]. As expected, a significant signal drop is determined. This is presumably due to increasing collisions between electrons and H$_2$ molecules that decrease the count rate. At a pressure of only a fraction of of the original electron signal is left.\ Discussion and Conclusion {#conclusion} ========================= We have studied the coherent properties of electron matter waves in a biprism interferometer under low vacuum conditions by introducing helium, nitrogen or hydrogen background gas in the UHV chamber. Unlike in interference experiments with C$_{60}$ fullerenes [@Hornberger2003], the electrons in our instrument do not show decoherence up to a pressure of , which can be observed in a constant interference contrast. In the C$_{60}$ near field interferometer the heavy molecules have a significant probability to be measured in the region between the interference stripes after a collision, leading to a loss of contrast. In our far field interferometer the situation is different. 
After a collision with a significantly heavier background gas atom or molecule, the electron is in most cases scattered into an angle large enough to miss the detector. Due to the quadrupole magnification, minimal deflections of the electron's trajectory lead to a large displacement in the detection plane. This is indicated by the significant signal loss of when comparing the mean electron intensity measured on the detector at a H$_2$ pressure of with the one at . In other words, those electrons that made it to the detector did not scatter on a gas atom and are therefore still coherent. It is an advantage of this setup to be able to select these electrons and retain a high-contrast interference pattern even under rather low vacuum conditions.\ \ Due to possible electric discharges in the MCP-detector, interference at even higher vacuum pressures could not be studied. However, coherent behaviour of electrons at a comparable pressure of was reported in an UED experiment [@Goodson2003] and for significantly higher pressures in a RHEED measurement [@Rijnders1997]. The latter describes electron diffraction on SrTiO$_3$ and YBa$_2$Cu$_3$O$_{7\,-\,\delta}$ surfaces at an oxygen background pressure up to . This was possible due to a short design of the setup, a differential pumping unit for the source and significantly higher electron energies of that allowed the experiment to work without MCP amplification prior to the detection on the fluorescent screen. In accordance with our observations, a strong scattering loss of electrons at high oxygen pressure was also observed in the RHEED experiment [@Rijnders1997].\ We therefore conclude that with different detection methods and shorter configurations, electron diffraction or interference may be observed at even higher background gas pressures. The results of our experiments provide an indication of the vacuum requirements for novel devices applying free coherent electrons. 
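The scattering-induced signal loss discussed above can be estimated with a simple Beer-Lambert attenuation model for the unscattered-beam fraction. The cross section and beam-path length used below are illustrative assumptions, not values from this experiment:

```python
import math

def transmitted_fraction(p_pa, sigma_m2, path_m, T_K=300.0):
    """Beer-Lambert estimate of the unscattered-beam fraction:
    I/I0 = exp(-n * sigma * L), with ideal-gas number density n = p / (kB * T)."""
    kB = 1.380649e-23  # Boltzmann constant, J/K
    n = p_pa / (kB * T_K)              # molecules per m^3
    return math.exp(-n * sigma_m2 * path_m)

# Illustrative numbers only: a ~1e-20 m^2 total scattering cross section and
# a ~0.5 m beam path are assumptions, not parameters of the interferometer.
for p in (1e-7, 1e-4, 1e-1):           # pressure in Pa
    print(p, transmitted_fraction(p, 1e-20, 0.5))
```

In this picture the contrast stays high while only the count rate drops, since scattered electrons simply leave the beam rather than filling in the fringe minima.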
Acknowledgements ================ This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the Emmy Noether program. A.R. acknowledges support from the Evangelisches Studienwerk e.V. Villigst. [00]{} G. Möllenstedt and H. Düker, Naturwissenschaften [**42**]{}, 41 (1955) G. Möllenstedt and W. Bayh, Naturwissenschaften [**49**]{}, 81 (1962) F. Hasselbach, M. Nicklaus, Phys. Rev. A [**48**]{}, 143 (1993) H. Kiesel, A. Renz, and F. Hasselbach, Nature [**418**]{}, 392 (2002) P. Sonnentag, and F. Hasselbach, Phys. Rev. Lett. [**98**]{}, 200402 (2007) P. Hommelhoff, Y. Sortais, A. Aghajani-Talesh, and M. A. Kasevich, Phys. Rev. Lett [**96**]{}, 077401 (2006) P. Hommelhoff, C. Kealhofer, and M. A. Kasevich, Phys. Rev. Lett. [**97**]{}, 247402 (2006) J. Hammer, J. Hoffrogge, S. Heinrich, and P. Hommelhoff, Phys. Rev. Appl. [**2**]{}, 044015 (2014) G. Schütz, A. Rembold, A. Pooch, S. Meier, P. Schneeweiss, A. Rauschenbeutel, A. Günther, W.T. Chang, I.S. Hwang, and A. Stibor, Ultramicroscopy [**141**]{}, 9 (2014) W. P. Putnam, and M. F. Yanik, Phys. Rev. A [**80**]{}, 040902(R) (2009) G.J.H.M. Rijnders, G. Koster, D.H.A. Blank, and H. Rogalla, Appl. Phys. Lett. [**70**]{}, 1888 (1997) B.M Goodson, C.Y. Ruan, V.A. Lobastov, R. Srinivasan and A.H. Zewail, Chem. Phys. Lett. [**374**]{}, 417 (2003) H. Ihee, V.A. Lobastov, U.M. Gomez, B.M. Goodson, R. Srinivasan, C.Y. Ruan and A.H. Zewail, Science [**291**]{}, 458 (2001) K. Hornberger, S. Uttenthaler, B. Brezger, L. Hackermüller, M. Arndt, and A. Zeilinger, Phys. Rev. Lett. [**90**]{}, 160401 (2003) F. Hasselbach and U. Maier, Quantum Coherence and Decoherence - Proc. ISQM-Tokyo 98 ed. by Y.A. Ono and K. Fujikawa (Amsterdam: Elsevier), 299 (1999). F. Hasselbach, Rep. Prog. Phys. [**73**]{}, 016101 (2010). U. Maier, Dissertation, University of Tübingen (1997). H.S. Kuo, I.S. Hwang, T.Y. Fu, Y.C. Lin, C.C. Chang and T.T. Tsong, Japanese J. Appl. Phys. [**45**]{}, 8972 (2006). H.S. 
Kuo, I.S. Hwang, T.Y. Fu, Y.H. Lu, C.Y. Lin and T.T. Tsong, Appl. Phys. Lett. [**92**]{}, 063106 (2008). F. Hasselbach, Z. Phys. B [**71**]{}, 443 (1988) A. Rembold, G. Schütz, W.T. Chang, A. Stefanov, A. Pooch, I.S. Hwang, A. Günther, and A. Stibor, Phys. Rev. A [**89**]{}, 033635 (2014) F. Lenz, and G. Wohland, Optik [**67**]{}, 315 (1984)
--- abstract: 'A simple $3 \times 3$ neutrino Majorana mass matrix is proposed to accommodate both the solar and atmospheric neutrino deficits. This scenario can be realized naturally by a radiative mechanism for the generation of neutrino masses. It also goes together naturally with electroweak baryogenesis and cold dark matter in a specific model.' --- UCRHEP-T136\ November 1994\ [**Simple Radiative Neutrino Mass Matrix\ for Solar and Atmospheric Oscillations\ **]{} There is now a good deal of evidence from different experiments that there exists a solar neutrino deficit[@1; @2; @3; @4] as well as mounting evidence for an atmospheric neutrino deficit.[@5; @6] In terms of neutrino oscillations, the former (latter) is an indication that $\nu_e$ ($\nu_\mu$) is not a mass eigenstate.[@7; @8] A popular approach to the neutrino-mass problem is the seesaw mechanism,[@9] in which case $m_{\nu_l}$ is naively expected to be proportional to $m_l^2$, where $l = e, \mu, \tau$, and the mixing angles are assumed to be small, in analogy with what is observed in the quark sector. However, that is not the only, nor necessarily the most natural, possibility. In this paper, a very different form of the neutrino mass matrix will be proposed. It is simple and can be realized naturally by a radiative mechanism for the generation of Majorana neutrino masses. It also fits very well into the framework of a recently proposed doublet Majoron model[@10; @11] which allows for the generation of baryon number during the electroweak phase transition as well as having $\nu_\tau$ as the late decaying particle for a consistent interpretation that the missing mass of the Universe is all cold dark matter. 
Consider the following $3 \times 3$ Majorana mass matrix for the states $\nu_e, \nu_\mu$, and $\nu_\tau$ (or $\nu_S$, a hypothetical singlet neutrino): $${\cal M}_\nu = \left[ \begin{array}{c@{\quad}c@{\quad}c} \epsilon_1 & \epsilon_4 & m \cos \theta \\ \epsilon_4 & \epsilon_2 & m \sin \theta \\ m \cos \theta & m \sin \theta & \epsilon_3 \end{array} \right],$$ where $$\epsilon_{1,2,3,4} << m.$$ Let the mass eigenstates be denoted by $n_{1,2,3}$, then the corresponding mass eigenvalues are $$\begin{aligned} m_1 &\simeq& m + {1 \over 2} (\epsilon_1 \cos^2 \theta + \epsilon_2 \sin^2 \theta + \epsilon_3 + \epsilon_4 \sin 2 \theta), \\ m_2 &\simeq& - m + {1 \over 2} (\epsilon_1 \cos^2 \theta + \epsilon_2 \sin^2 \theta + \epsilon_3 + \epsilon_4 \sin 2 \theta), \\ m_3 &\simeq& \epsilon_1 \sin^2 \theta + \epsilon_2 \cos^2 \theta - \epsilon_4 \sin 2 \theta,\end{aligned}$$ and $$\begin{aligned} \nu_e &\simeq& {1 \over \sqrt 2} (n_1 - n_2) \cos \theta - n_3 \sin \theta, \\ \nu_\mu &\simeq& n_3 \cos \theta + {1 \over \sqrt 2} (n_1 - n_2) \sin \theta, \\ \nu_\tau (\nu_S) &\simeq& {1 \over \sqrt 2} (n_1 + n_2).\end{aligned}$$ From Eqs. 
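As a quick numerical illustration of Eqs. (3)-(5), one can diagonalize ${\cal M}_\nu$ for representative values ($m = 0.1$ eV and $\sin^2 2\theta = 0.5$ as used later in the text; the specific $\epsilon_i \ll m$ below are illustrative assumptions). The spectrum consists of a nearly degenerate $\pm m$ pair plus one light state, and the pair splitting reproduces the $\Delta m_{12}^2$ of Eq. (9):

```python
import numpy as np

# Representative values: m ~ 0.1 eV, sin^2(2 theta) = 0.5 as in the text;
# the epsilon_i (<< m) are illustrative assumptions.
m = 0.1                                        # eV
theta = 0.5 * np.arcsin(np.sqrt(0.5))          # sin^2(2 theta) = 0.5
eps = 1e-9 * np.array([1.0, 2.0, 3.0, 0.5])    # eps_1 .. eps_4 in eV

M_nu = np.array([
    [eps[0],            eps[3],            m * np.cos(theta)],
    [eps[3],            eps[1],            m * np.sin(theta)],
    [m * np.cos(theta), m * np.sin(theta), eps[2]],
])
vals = np.linalg.eigvalsh(M_nu)                # ascending: ~ -m, eps scale, ~ +m
dm12_sq = abs(vals[2]**2 - vals[0]**2)         # splitting of the pseudo-Dirac pair

print(vals)      # approximately [-0.1, O(1e-9), +0.1]
print(dm12_sq)   # << m^2, in agreement with Eq. (9)
```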
(3) - (5), we see that $$\begin{aligned} \Delta m_{12}^2 &\simeq& 2 m (\epsilon_1 \cos^2 \theta + \epsilon_2 \sin^2 \theta + \epsilon_3 + \epsilon_4 \sin 2 \theta) \nonumber \\ &<<& m^2 \simeq \Delta m_{13}^2 \simeq \Delta m_{23}^2.\end{aligned}$$ This means that $\nu_\mu - \nu_e$ oscillations are governed by $m^2$ and $\sin^2 2 \theta$, which can be chosen to be about $10^{-2} {\rm eV}^2$ and 0.5 respectively[@12] to account for the atmospheric neutrino data.[@5; @6] As for the solar neutrino deficit, the $\nu_e$ flux is first diminished by its rapid oscillation into $\nu_\mu$ to $(1- {1 \over 2} \sin^2 2 \theta)$ of its initial value, then the oscillation into $\nu_\tau$ (or $\nu_S$) with $\sin^2 2 \theta_{12} = 1$ and $\Delta m_{12}^2$ of about $10^{-10} {\rm eV}^2$ for the vacuum oscillation solution[@13] reduces it further[@14] to what is observed.[@1; @2; @3; @4] Matter-enhanced oscillations[@15] are not possible here because the mixing is maximum, [*i.e.*]{} $\theta_{12} = \pi/4$. The above discussion shows that as long as the $\nu_e - \nu_\tau$ (or $\nu_e - \nu_S$) and $\nu_\mu - \nu_\tau$ (or $\nu_\mu - \nu_S$) entries of the $3 \times 3$ Majorana neutrino mass matrix are much greater than all other entries, the resulting mass eigenstates will be such that a linear combination of $\nu_e$ and $\nu_\mu$ pairs up with $\nu_\tau$ (or $\nu_S$) to form a pseudo-Dirac neutrino, [*i.e.*]{} an equal (or almost equal) admixture of two nearly degenerate Majorana neutrinos. With suitable values for the two large entries and a general magnitude for the small ones, both the solar and atmospheric neutrino deficits are explained. The question now is whether such a simple ansatz has a natural realization. 
It may be of interest to note that in the discredited case of the 17-keV neutrino, the most probable theoretical explanation was that a linear combination of $\nu_e$ and $\nu_\tau$ pairs up with $\nu_\mu$ to form a pseudo-Dirac neutrino.[@16] Since $m$ and $\Delta m_{12}^2$ should be of order 0.1 eV and $\rm 10^{-10} eV^2$ respectively, the ratios $\epsilon_{1,2,3,4}/m$ are of order $10^{-8}$. Hence it is natural to assume as a first approximation that $\epsilon_1 = \epsilon_2 = \epsilon_3 = \epsilon_4 = 0$. This can be achieved by the imposition of a discrete symmetry which is then softly broken so that $\epsilon_{1,2,3,4}$ may acquire small nonzero values. Since $m$ itself is already rather small, a natural explanation is that of radiative generation.[@17] In the following it will be shown how everything can be done in the context of the recently proposed doublet Majoron model.[@10; @11] If there is no $\nu_S$ and ${\cal M}_\nu$ refers to the known three light neutrinos, then they have no impact on the question of dark matter in the Universe because the sum of their masses would be much less than 1 eV. After the results of the Cosmic Background Explorer (COBE),[@18] it is popularly assumed that the Universe contains 70% cold dark matter and 30% hot dark matter.[@19] The latter could be neutrinos, but the sum of their masses has to be about 7 eV. Implications of this assumption on the neutrino mass matrix have been explored.[@20] On the other hand, it is also possible that the Universe contains 100% cold dark matter and the COBE results are explained by a late decaying particle,[@11; @21; @22; @23; @24] the prime candidate being $\nu_\tau$, but its mass should be a few MeV. There is actually another good reason for a $\nu_\tau$ of this mass. 
Its Yukawa coupling would then be large enough to allow for the possible generation of the observed baryon-number asymmetry of the Universe during the electroweak phase transition from the spontaneous breaking of lepton-number conservation.[@25] This mechanism requires a detailed understanding of transmission through and reflection off bubble walls, and is under active investigation.[@26] The recently proposed doublet Majoron model[@10; @11] provides a natural framework for both electroweak baryogenesis and cold dark matter. Since $m_{\nu_\tau}$ is a few MeV in this case, the mass matrix ${\cal M}_\nu$ of Eq. (1) should now be interpreted as representing $\nu_e, \nu_\mu$, and $\nu_S$, the last being a singlet neutrino, each having lepton number $L = 1$. Note that in this model,[@10; @11] lepton number corresponds to a conserved global U(1) symmetry above the energy scale of electroweak symmetry breaking. It is broken spontaneously together with the SU(2) $\times$ U(1) gauge symmetry necessarily and a lepton asymmetry of the Universe is created which gets converted into a baryon asymmetry through sphalerons.[@25] The massless Goldstone boson associated with the spontaneous breaking of $L$ is called the Majoron. The massive $\nu_\tau$’s annihilate into Majorons very quickly in this model so that the $\nu_\tau$ contribution to the energy density of the Universe at the time of nucleosynthesis is negligible. On the other hand, $\nu_\tau$ decays rather slowly and as the Universe expands, it eventually becomes dominant, but only until it finally decays away into Majorons and other light neutrinos. This scenario is thus very much suited for the radiative generation of Majorana neutrino masses[@17] because lepton number is already assumed to be spontaneously broken. 
In addition to all the particles of the standard model, let there be one light singlet neutrino $\nu_{SL}$ with $L = 1$, one heavy neutral singlet fermion $N_R$ with $L = 0$, and two scalar doublets $\Phi_{1,2} = (\phi_{1,2}^+, \phi_{1,2}^0)$ with $L = \mp 1$. To obtain $\epsilon_1 = \epsilon_2 = \epsilon_3 = \epsilon_4 = 0$ in ${\cal M}_\nu$, assume a discrete $Z_3$ symmetry such that $(\nu_e, e)_L, (\nu_\mu, \mu)_L, e_R, \mu_R$ transform as $\omega$, whereas $(\nu_\tau, \tau)_L, \tau_R, \nu_{SL}, N_R$ transform as $\omega^2$, with $\omega^3 = 1$. To obtain radiative neutrino masses, assume the existence of three charged scalar singlets $\eta^-_{0,1,2}$ with $L = 0,1,2$ respectively. All scalar particles are assumed to be trivial under $Z_3$. As a result, the $\nu_e - \nu_S$ and $\nu_\mu - \nu_S$ mass terms are generated in one loop as shown in Fig. 1, but all other entries of ${\cal M}_\nu$ remain zero. Specifically, $$\begin{aligned} m \cos \theta &=& {{f_{e\tau} m_\tau f_{\tau S}} \over {16 \pi^2}} {{v_1^2 v_0^2} \over M^4}, \\ m \sin \theta &=& {{f_{\mu\tau} m_\tau f_{\tau S}} \over {16 \pi^2}} {{v_1^2 v_0^2} \over M^4},\end{aligned}$$ where $v_{0,1}$ are the vacuum expectation values of $\phi^0_{0,1}$, and $M$ is an effective mass of the $\eta$’s in the loop. However, ${\cal M}_\nu$ is only a submatrix of a larger $5 \times 5$ matrix containing also $\nu_\tau$ and $N$. Assuming a heavy Majorana mass for $N$ (which breaks $Z_3$ softly), $\nu_\tau$ gets a seesaw mass due to its coupling to $N$ via $\phi_1^0$. 
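The one-loop masses of Eqs. (10) and (11) can be checked arithmetically against the parameter values quoted in the text ($M = 1$ TeV, $v_0 = 245$ GeV, $v_1 = 22$ GeV, $\sqrt{f_{e\tau}^2 + f_{\mu\tau}^2} = 0.01$, $f_{\tau S} = 0.03$); $m_\tau \simeq 1.777$ GeV is the measured tau mass:

```python
import math

# Parameter values quoted in the text; m_tau is the measured tau mass.
m_tau  = 1.777           # GeV
f_eff  = 0.01            # sqrt(f_etau^2 + f_mutau^2)
f_tauS = 0.03
v0, v1 = 245.0, 22.0     # GeV
M_eta  = 1000.0          # GeV (effective eta mass of 1 TeV)

# |m| = sqrt((m cos theta)^2 + (m sin theta)^2) from Eqs. (10)-(11)
m_GeV = f_eff * m_tau * f_tauS / (16.0 * math.pi**2) * (v1**2 * v0**2) / M_eta**4
print(m_GeV * 1e9, "eV")   # ~0.1 eV, as quoted in the text
```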
The effective $4 \times 4$ mass matrix spanning $\nu_e, \nu_\mu, \nu_S$, and $\nu_\tau$ is then given by $${\cal M}'_\nu = \left[ \begin{array} {c@{\quad}c@{\quad}c@{\quad}c} 0 & 0 & m \cos \theta & m' \cos \theta' \\ 0 & 0 & m \sin \theta & m' \sin \theta' \\ m \cos \theta & m \sin \theta & 0 & 0 \\ m' \cos \theta' & m' \sin \theta' & 0 & m_{\nu_\tau} \end{array} \right].$$ The $\nu_e - \nu_\tau$ and $\nu_\mu - \nu_\tau$ mass terms are also radiatively induced in one loop as in Fig. 1, but with $\nu_S$ replaced by $\nu_\tau$ and $\eta_0^-$ by $\phi_0^-$. As a result, $$\begin{aligned} m' \cos \theta' &=& {{f_{e\tau} (m_\tau^2 - m_e^2)} \over {16 \pi^2}} {{\Lambda v_1^2} \over M^4}, \\ m' \sin \theta' &=& {{f_{\mu\tau} (m_\tau^2 - m_\mu^2)} \over {16 \pi^2}} {{\Lambda v_1^2} \over M^4},\end{aligned}$$ where $\Lambda$ is the cubic $\phi_0^+ \phi_1^0 \eta_1^-$ coupling. Comparing Eqs. (13) and (14) to Eqs. (10) and (11), it is clear that $\theta \simeq \theta'$, and $\sin^2 2 \theta = 0.5$ is obtained if $f_{\mu\tau}/f_{e\tau} = 0.4$. Using $M = 1$ TeV, $v_0 = 245$ GeV, and $v_1 = 22$ GeV, a value of 0.1 eV for $m$ is also obtained if $\sqrt {f_{e\tau}^2 + f_{\mu\tau}^2} = 0.01$ and $f_{\tau S} = 0.03$. Because of mixing with $\nu_\tau$, the effective ${\cal M}_\nu$ of Eq. (1) now has $$\begin{aligned} &~& \epsilon_1 \simeq m'^2 \cos^2 \theta / m_{\nu_\tau}, ~~~~ \epsilon_2 \simeq m'^2 \sin^2 \theta / m_{\nu_\tau}, \\ &~& \epsilon_3 = 0, ~~~~ \epsilon_4 \simeq m'^2 \sin \theta \cos \theta / m_{\nu_\tau}.\end{aligned}$$ Therefore, $$\Delta m_{12}^2 \simeq 2 m m'^2 / m_{\nu_\tau}.$$ Using $\Lambda = 400$ GeV, a value of about 0.04 eV for $m'$ is obtained. Hence $\Delta m_{12}^2 \simeq 10^{-10}{\rm eV}^2$ if $m_{\nu_\tau} \simeq 3$ MeV. These numbers clearly demonstrate that a natural radiative realization of ${\cal M}_\nu$ is possible for a successful explanation of the solar and atmospheric neutrino deficits. It should be mentioned that ${\cal M}'_\nu$ of Eq. 
(12) has also been obtained with a Dirac seesaw mechanism in a recently proposed singlet-triplet Majoron model.[@24] Consider now the decay of $\nu_\tau$ in the present model. It proceeds via the mixing of $\nu_e$ and $\nu_\mu$ with $\nu_\tau$ in ${\cal M}'_\nu$, which is of order $m'/m_{\nu_\tau}$. The rate is given by[@11] $$\Gamma = {{m'^2 m_{\nu_\tau}} \over {64 \pi v_1^2}}.$$ For $m_{\nu_\tau} = 3$ MeV, the $\nu_\tau$ lifetime is then about $1.3 \times 10^4$ seconds, which is within the required range for a successful explanation of the COBE data in the case of 100% cold dark matter.[@11] This is a remarkable correlation between the constraint of cosmology and that of solar and atmospheric neutrino data. The singlet neutrino $\nu_S$ is not inert, but because of the discrete $Z_3$ symmetry, its only interaction at tree level with the other leptons is given by $f_{\tau S} \overline \tau_R \nu_S \eta_0^- + H.c.$ Hence its effect on all known leptonic processes is easily shown to be negligible for $f_{\tau S} = 0.03$ and $m_\eta = 1$ TeV. It decouples from other light particles in the early Universe when the $\tau$ does. Hence its contribution to the energy density at the time of nucleosynthesis is also negligible. Since $\Delta m_{12}^2$ is of order $10^{-10} {\rm eV}^2$, the oscillation time between $\nu_e$ and $\nu_S$ is about $10^2$ seconds. This is long enough also for $\nu_S$ not to be a factor in nucleosynthesis. In fact, the contributing light degrees of freedom in this model, not counting the photon, consist of only $\nu_e, \nu_\mu$, and the Majoron. Hence the effective number of neutrinos $N_\nu$ is only 2.6, below the standard upper bound of 3.3[@27] or the more recently proposed 3.04[@28]. Since $\eta_2$ couples to the leptons via the interactions $(\nu_e \tau_L - e_L \nu_\tau) \eta_2^+$ and $(\nu_\mu \tau_L - \mu_L \nu_\tau) \eta_2^+$, there are additional contributions to leptonic processes. 
For example, $\mu \rightarrow e \overline \nu_e \nu_\mu$ decay is accompanied by $\mu \rightarrow e \overline \nu_\tau \nu_\tau$, but the latter is only of order $10^{-6}G_F$ in strength. Similarly, $\mu \rightarrow e \gamma$ and $\nu_\tau \rightarrow \overline \nu_e \gamma + \overline \nu_\mu \gamma$ have branching fractions of order $10^{-14}$, and $\nu_\tau \rightarrow e^- e^+ \overline \nu_e$ is even more negligible. Hence the standard low-energy weak-interaction phenomenology is not affected. A second comment involves $CP$ nonconservation. In the above, since only one $N_R$ is assumed, the $\nu_\tau$ Yukawa coupling to $\phi_1^0$ can be chosen real. Nevertheless, $CP$ nonconserving couplings do exist in the Higgs sector, which may or may not be sufficient for electroweak baryogenesis. If not, an easy remedy is to add one more $N_R$; a $CP$ nonconserving phase will then show up explicitly in the $\nu_\tau$ Yukawa coupling. In conclusion, it has been shown in this paper that a simple ansatz for the neutrino mass matrix, [*i.e.*]{} ${\cal M}_\nu$ of Eq. (1), works very well as an explanation of the presently observed solar and atmospheric neutrino deficits. It is also naturally realized by a radiative mechanism based on the spontaneous breaking of lepton number. This has the advantage of incorporating electroweak baryogenesis and allowing the missing mass of the Universe to be all cold dark matter. The key is for $\nu_\tau$ to be a few MeV in mass and to decay late enough to delay the ultimate time of matter-radiation equality in the early Universe. This has been accomplished in a previously proposed doublet Majoron model,[@10; @11] which is now extended to include a singlet neutrino $\nu_{SL}$ with $L = 1$ and three charged scalar singlets together with a softly broken discrete $Z_3$ symmetry, resulting in an effective ${\cal M}_\nu$ exactly of the right form.
Because of the necessity of maximum mixing, only the vacuum oscillation solution of the solar neutrino deficit is applicable in this scenario. However, the numbers turn out to be just right for the $\nu_\tau$ lifetime. Specifically, $m \simeq 0.1$ eV from the atmospheric data, $m m'^2 / m_{\nu_\tau} \simeq 10^{-10} {\rm eV}^2$ from the solar data, and $m_{\nu_\tau} \sim $ few MeV, $m' / m_{\nu_\tau} \sim 10^{-8}$ from cosmology. [*Note Added.*]{} If there are no neutrinos beyond $\nu_e$, $\nu_\mu$, and $\nu_\tau$, it is still possible to obtain ${\cal M}_\nu$ of Eq. (1) radiatively. Since a Majoron is not required, lepton number will now be assumed to be broken by explicit soft terms. In particular, the cubic term $\eta_2^- \phi_0^+ \phi_1^0$ is allowed. Hence the $\nu_e - \nu_\tau$ and $\nu_\mu - \nu_\tau$ entries are radiatively generated in one loop, but the other entries remain zero. Now let there be a doubly charged singlet scalar $\sigma^{--}$ with lepton number $L = 2$, transforming as $\omega$ under $Z_3$; then the interaction $\sigma^{++} \tau_R \tau_R$ is allowed (but not with $\tau_R$ replaced by $e_R$ or $\mu_R$). Let there also be the cubic term $\sigma^{++} \eta_2^- \eta_2^-$, which breaks both $L$ and $Z_3$; then these other entries also become nonzero in two loops.[@17] Hence the desired form of the $3 \times 3$ ${\cal M}_\nu$ is again realized radiatively. [ACKNOWLEDGEMENT]{} This work was supported in part by the U. S. Department of Energy under Grant No. DE-FG03-94ER40837. [99]{} R. Davis, Jr. [*et al.*]{}, Ann. Rev. Nucl. and Part. Sci. [**39**]{}, 467 (1989). K. S. Hirata [*et al.*]{}, Phys. Rev. Lett. [**63**]{}, 16 (1989); [**65**]{}, 1297 (1990); [**66**]{}, 9 (1991). A. I. Abazov [*et al.*]{}, Phys. Rev. Lett. [**67**]{}, 3332 (1991). P. Anselmann [*et al.*]{}, Phys. Lett. [**B314**]{}, 445 (1993); [**B327**]{}, 377 (1994). R. Becker-Szendy [*et al.*]{}, Phys. Rev. D [**46**]{}, 3720 (1992). K. S. Hirata [*et al.*]{}, Phys. Lett.
[**B280**]{}, 146 (1992); Y. Fukuda [*et al.*]{}, [*ibid.*]{} [**B335**]{}, 237 (1994). For solar neutrinos, see for example D. Harley, T. K. Kuo, and J. Pantaleone, Phys. Rev. [**D47**]{}, 4059 (1993). For atmospheric neutrinos, see for example J. Pantaleone, Phys. Rev. [**D49**]{}, R2152 (1994). M. Gell-Mann, P. Ramond, and R. Slansky, in [*Supergravity*]{}, edited by P. van Nieuwenhuizen and D. Z. Freedman (North-Holland, Amsterdam, 1979), p. 315; T. Yanagida, in [*Proceedings of the Workshop on the Unified Theory and the Baryon Number in the Universe*]{}, Tsukuba, Ibaraki, Japan, 1979, edited by O. Sawada and A. Sugamoto (KEK, Tsukuba, Japan, 1979). H. Kikuchi and E. Ma, Phys. Lett. [**B335**]{}, 444 (1994). H. Kikuchi and E. Ma, Phys. Rev. [**D**]{} (Rapid Communication), to be published. See for example W. Frati [*et al.*]{}, Phys. Rev. [**D48**]{}, 1140 (1993). See for example V. Barger, R. J. N. Phillips, and K. Whisnant, Phys. Rev. Lett. [**69**]{}, 3135 (1992). See for example A. Acker, A. B. Balantekin, and F. Loreti, Phys. Rev. [**D49**]{}, 328 (1994). S. P. Mikheyev and A. Yu. Smirnov, Yad. Fiz. [**42**]{}, 1441 (1985) \[Sov. J. Nucl. Phys. [**42**]{}, 913 (1985)\]; Nuovo Cimento [**9C**]{}, 17 (1986); L. Wolfenstein, Phys. Rev. [**D17**]{}, 2369 (1978). See for example E. Ma, Phys. Rev. Lett. [**68**]{}, 1981 (1992). For a brief review, see for example K. S. Babu and E. Ma, Mod. Phys. Lett. [**A4**]{}, 1975 (1989). G. F. Smoot [*et al.*]{}, Astrophys. J. [**396**]{}, L1 (1992). Q. Shafi and F. W. Stecker, Phys. Rev. Lett. [**53**]{}, 1292 (1984); E. L. Wright [*et al.*]{}, Astrophys. J. [**396**]{}, L13 (1992). D. O. Caldwell and R. N. Mohapatra, Phys. Rev. [**D48**]{}, 3259 (1993). S. Dodelson, G. Gyuk, and M. Turner, Phys. Rev. Lett. [**72**]{}, 3754 (1994). H. B. Kim and J. E. Kim, Seoul National University Report No. SNUTP-94-48 (1994); E. J. Chun, ICTP Trieste Report No. IC/94/306 (1994). R. N. Mohapatra and S. 
Nussinov, University of Maryland Report No. UMD-PP-95-21 (1994). A. S. Joshipura and J. W. F. Valle, University of Valencia Report No. FTUV/94-46 (1994). A. Cohen, D. Kaplan, and A. Nelson, Phys. Lett. [**B245**]{}, 561 (1990); Nucl. Phys. [**B349**]{}, 727 (1991). See for example M. Joyce, T. Prokopec, and N. Turok, Princeton University Reports PUPT-1437, PUPT-1495, PUPT-1496, PUPT-1497. T. Walker [*et al.*]{}, Astrophys. J. [**376**]{}, 51 (1991). P. Kernan and L. Krauss, Phys. Rev. Lett. [**72**]{}, 3309 (1994). [FIGURE CAPTION]{} Fig. 1. One-loop radiative $\nu_e - \nu_S$ mass due to the spontaneous breaking of lepton number.
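The lifetime and mass-squared estimates quoted in the text are easy to reproduce numerically. The sketch below is not part of the original paper; it simply plugs the quoted inputs into $\Gamma = m'^2 m_{\nu_\tau}/64\pi v_1^2$ and $\Delta m_{12}^2 \simeq 2 m m'^2/m_{\nu_\tau}$ and recovers the $1.3\times 10^4$ s lifetime and the $\sim 10^{-10}\,{\rm eV}^2$ splitting:

```python
# Sanity check of the numerical estimates quoted in the text:
# the nu_tau lifetime from Gamma = m'^2 m_nutau / (64 pi v1^2),
# and Delta m_12^2 ~ 2 m m'^2 / m_nutau.  All inputs are the
# values quoted in the paper; hbar converts eV to seconds.
import math

HBAR = 6.582119569e-16  # eV s

m = 0.1            # eV
m_prime = 0.04     # eV
m_nutau = 3.0e6    # eV (3 MeV)
v1 = 22.0e9        # eV (22 GeV)

gamma = m_prime**2 * m_nutau / (64 * math.pi * v1**2)  # eV
lifetime = HBAR / gamma                                 # s
dm12_sq = 2 * m * m_prime**2 / m_nutau                  # eV^2

print(f"nu_tau lifetime ~ {lifetime:.2e} s")   # ~ 1.3e4 s
print(f"Delta m_12^2   ~ {dm12_sq:.2e} eV^2")  # ~ 1.1e-10 eV^2
```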
--- abstract: 'We develop a toy–model for the chemical evolution of the intra–cluster medium, polluted by the galactic winds from elliptical galaxies. The model follows the “galaxy formation history” of cluster galaxies, constrained by the observed luminosity function.' author: - 'L. Portinari' - 'A. Moretti and C. Chiosi' title: A chemical evolution model for galaxy clusters --- Introduction ============ To account for the large amount of metals observed in the intra–cluster medium (ICM), some non-standard stellar Initial Mass Function (IMF) has often been invoked for cluster ellipticals, such as a more top–heavy IMF than the Salpeter one (Matteucci & Gibson 1995; Gibson & Matteucci 1997ab; Loewenstein & Mushotzky 1996), or a bimodal IMF with an early generation of massive stars (Arnaud et al. 1992; Elbaz, Arnaud & Vangioni-Flam 1995); see also the review by Matteucci (this conference). A non–standard IMF in ellipticals has been suggested also on the basis of other arguments: a top–heavy IMF best reproduces their photometric properties (Arimoto & Yoshii 1987), and systematic variations of the IMF in ellipticals of increasing mass might explain the observed trend $M/L \propto L$ (Larson 1986, Renzini & Ciotti 1993, Zepf & Silk 1996). Chiosi et al. (1998) developed chemo-spectrophotometric models for elliptical galaxies with galactic winds, adopting the variable IMF by Padoan, Nordlund & Jones (1997, hereinafter PNJ) which is naturally skewed toward more massive stars in the early galactic phases, in more massive galaxies and for higher redshifts of formation (see also Chiosi 2000). These galactic models were successful at reproducing a number of spectro–photometric features of observed ellipticals; now, an immediate question is: what do these galactic models predict for the pollution of the ICM through galactic winds (GWs)? Galactic wind ejecta: PNJ vs.
Salpeter IMF ========================================== To address this issue, Chiosi (2000) calculated multi–zone chemical models of elliptical galaxies with the PNJ IMF, together with models with the standard Salpeter IMF for the sake of comparison. Before discussing the resulting global chemical evolution of the ICM, let’s inspect the different GW ejecta of the model ellipticals when the two IMFs are adopted in turn. In Fig. 1 (left panels) we compare the mass fraction of gas ejected in the GW, and the complementary mass fraction locked into stars, for galactic models with the variable PNJ IMF and for models with the Salpeter IMF (thick and thin lines, respectively). Mass fractions refer to the total initial baryonic mass of the galaxy. The amount of ejected gas is larger in the case of the PNJ IMF, since in the early galactic phases this IMF is skewed toward more massive stars and less mass remains locked into long–lived, low-mass stars. The difference with the Salpeter case gets sharper for larger galactic masses, and for higher redshifts of formation. Models with the Salpeter IMF evidently bear no dependence on the redshift of formation. The rightmost panel in Fig. 1 shows the iron abundances in the gas ejected as GW, again comparing the Salpeter IMF and the PNJ IMF case. In most cases, the galactic ejecta in the PNJ models are more metal–rich than in the Salpeter case, up to a factor of 5 or more for the more massive galaxies, and for high redshifts of formation. In the PNJ models, in fact, more gas in the galaxy gets recycled through massive stars, effective metal contributors, while less gas gets locked into low–mass stars. From the trends described above, we expect galactic models with the PNJ IMF to predict, for the ICM, a more efficient metal pollution and a higher fraction of the gas originating from GWs, with respect to “standard” models. The first results in this respect are discussed in Chiosi (2000).
The chemical evolution of the ICM: a toy model ============================================== Since the GW ejecta of ellipticals modelled with the PNJ IMF are sensitive to the detailed redshift of formation of the individual galaxies, to predict the chemical enrichment of the ICM we need to model the history of galaxy formation in the cluster. To this aim, we developed a global, self-consistent chemical model for the cluster, which can follow the simultaneous evolution of all its components: the galaxies, the primordial gas, and the gas processed and re-ejected via GWs (Moretti et al. 2001). Our chemical model for clusters is developed in analogy with the usual chemical models for galaxies, as illustrated in the scheme below.

ISM $\Longrightarrow$ [SFR, IMF]{} $\Longrightarrow$ stars $\Longrightarrow$ [stellar yields]{} $\Longrightarrow$ ISM

ICM $\Longrightarrow$ [GFR, GIMF]{} $\Longrightarrow$ galaxies $\Longrightarrow$ [GW yields]{} $\Longrightarrow$ ICM

As the interstellar medium (ISM) is polluted by stars, the ICM is polluted by galaxies. The primordial gas in the ICM gets consumed in time by some prescribed Galactic Formation Rate (GFR); at each time galaxies form distributed in mass according to a Galactic Initial Mass Function (GIMF), derived from the Press-Schechter mass function suited to that redshift. Through GWs, galaxies restitute chemically enriched gas, which mixes with the overall ICM; the latter consists of the primordial gas not yet consumed by galaxy formation (if any) and of the gas re-ejected by galaxies up to the present age. Model equations parallel those of galactic chemical models, with the substitutions SFR $\rightarrow$ GFR, IMF $\rightarrow$ GIMF, stellar yields $\rightarrow$ GW yields. Model parameters are calibrated so that the resulting galaxy formation history matches the observed present–day luminosity function (LF) at the end of the simulation.
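The Press–Schechter ingredient of the GIMF can be illustrated with a toy script. Everything below is a sketch under simplifying assumptions — a pure power-law variance $\sigma(M)$ and arbitrary parameter values, not the prescriptions actually used by Moretti et al. (2001):

```python
# Toy sketch of a Press-Schechter-type mass function, assuming a
# power-law variance sigma(M) = (M / M_star)^(-alpha).  The
# normalization and parameter values are purely illustrative.
import math

DELTA_C = 1.686  # critical linear overdensity

def press_schechter(M, M_star=1.0e12, alpha=0.5, rho_mean=1.0):
    """Comoving number density dn/dM for a power-law sigma(M)."""
    sigma = (M / M_star) ** (-alpha)
    nu = DELTA_C / sigma
    # |d ln sigma / d ln M| = alpha for the power law assumed here
    return (math.sqrt(2 / math.pi) * rho_mean / M**2
            * nu * alpha * math.exp(-nu**2 / 2))

# dn/dM falls steeply with mass, with an exponential high-mass cutoff
for k in range(10, 15):
    print(f"M = 1e{k}  dn/dM = {press_schechter(10.0**k):.3e}")
```

The exponential cutoff at the high-mass end is what makes the GIMF, unlike a stellar IMF, depend strongly on the redshift at which it is evaluated.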
For all details, see Moretti et al. (2001). The “best case” models ====================== In Fig. 2 we show our case of “best match” with the observed LF in the B–band (Trentham 1998, top panels). The left panels refer to the case when galactic models with the PNJ IMF are adopted; the right panels display results for the same cluster parameters (i.e. same galaxy formation history), but adopting ellipticals with the Salpeter IMF. The Salpeter case predicts somewhat more galaxies in the high–luminosity bins, due to the fact that for massive galaxies a larger mass fraction remains locked into stars in the Salpeter case than with the PNJ IMF (cf. Fig. 1). Anyway, the LF is still in agreement with the observed one within errors. Although the predicted LF is virtually the same in the two models, strong differences are found in the predicted gas and metallicity content of the ICM. The mid panels in Fig. 2 show the predicted abundance evolution in the ICM. Adopting galactic models with the PNJ IMF clearly improves predictions about the metallicity of the ICM. The bottom panels in Fig. 2 show the evolution of the mass fraction of the various components of the cluster: the primordial gas, which gets consumed by galaxy formation; the processed gas, namely the gas that has been involved in galaxy formation and then re-ejected as GW; the total gas, sum of the primordial and of the processed gas; the mass in galaxies, that is in the stellar component we see today, “left over” after the GW. While in the Salpeter case (right panel) the overall mass that remains locked into galaxies (long–dashed line) is larger than the mass ejected in the GWs (dotted line), the opposite is true for the cluster model with the PNJ galaxies (left panel), as qualitatively expected from § 2. In the latter case, the mass of the re-ejected gas is $\sim 1.5$ times larger than that locked into galaxies.
Although this is not enough to account for the whole of the observed intra-cluster gas (with a mass 2–5 times larger than that in galaxies, Arnaud et al. 1992), the amount of gas re-ejected by galaxies is expected to make up a remarkable fraction of the overall ICM. Open issues and future perspectives =================================== Once the model is calibrated to reproduce the observed LF in the B–band (Fig. 2, top panels), it turns out that the match with LFs in redder bands is not as good. Fig. 3 (left panel) shows the comparison to the observed LF in the R–band: the “best–case” model calibrated on the B–band seems to underestimate the number of luminous galaxies with a red stellar population. A similar effect is seen for the LF in the K–band. We used the B–band LF for the calibration, as it offered the deepest and most extensive dataset, but the LF in the red bands is probably a better tracer of the old stellar population responsible for the bulk of the metal enrichment, while the B–band might be sensitive to recent minor bursts of star formation. Hence, calibrating the model over the red stellar population should provide a better estimate of the galaxy formation history and of the consequent chemical enrichment of the ICM. In particular, a larger number of old giant galaxies will help to obtain higher values of the Iron Mass to Light Ratio (IMLR, for a definition see Renzini 1997 and references therein), closer to the very high IMLRs measured in real clusters ($\geq 0.01$, Finoguenov, David & Ponman 2000). To illustrate this point, in the right panel of Fig. 3 we plot the present–day IMLR of individual ellipticals modelled with the PNJ IMF, as a function of the initial mass of the galaxy and for different redshifts of formation. Higher values of the IMLR pertain to more massive and older galaxies.
With the PNJ IMF in fact, galaxies which are more massive and/or formed at higher redshifts store less mass in the stellar component, while ejecting more, and more metal–rich, gas in the GW (Fig. 1 and § 2); both effects tend to enhance the corresponding IMLR. Hence, with a galaxy formation history producing more red giant galaxies in the “cluster mixture”, as suggested by the LFs in the red bands, we expect to predict high values for the overall IMLR in the cluster. Acknowledgments {#acknowledgments .unnumbered} =============== LP and AM are grateful to F. Matteucci and to the organizers of this conference for giving them the opportunity to participate and contribute. Arimoto N., Yoshii Y., 1987, A&A 173, 23 Arnaud M., Rothenflug R., Boulade O., et al., 1992, A&A 254, 49 Chiosi C., 2000, A&A 364, 423 Chiosi C., Bressan A., Portinari L., Tantalo R., 1998, A&A 339, 355 Driver S.P., Couch W.J., Phillips S., 1998, MNRAS 301, 369 Elbaz D., Arnaud M., Vangioni–Flam E., 1995, A&A 303, 345 Finoguenov A., David L.P., Ponman T.J., 2000, ApJ 544, 188 Fukazawa Y., Makishima K., Tamura T., et al., 1998, PASJ 50, 187 Gibson B.K., Matteucci F., 1997a, ApJ 475, 47 Gibson B.K., Matteucci F., 1997b, MNRAS 291, L8 Larson R.B., 1986, MNRAS 218, 409 Loewenstein M., Mushotzky R.F., 1996, ApJ 466, 695 Matsumoto H., Tsuru T.G., Fukazawa Y., et al., 2000, PASJ 52, 153 Matteucci F., Gibson B.K., 1995, A&A 304, 11 Moretti A., Portinari L., Chiosi C., 2001, in preparation Mushotzky R., Loewenstein M., 1997, ApJ 481, L63 Padoan P., Nordlund A.P., Jones B.J.T., 1997, MNRAS 288, 145 (PNJ) Renzini A., 1997, ApJ 488, 35 Renzini A., Ciotti L., 1993, ApJ 416, L49 Trentham N., 1998, MNRAS 294, 193 Zepf S.E., Silk J., 1996, ApJ 466, 114
--- abstract: 'We continue the study of the contact homology of subcritical Stein manifolds initiated by Mei-Lin Yau. With the technical assumption that the first Chern class of the Stein domain vanishes, we determine the full contact homology of the boundary of a subcritical Stein domain. Moreover we calculate the genus $0$ correlators and descendants of one marked point for the Stein domain. As an application, we prove that if a Kähler manifold $M^{2n}$ admits a subcritical polarization and $c_{1}(M)$ is proportional to the Kähler form, then $M$ is uniruled.' address: 'University of Southern California, Los Angeles, CA 90089' author: - Jian He date: 'This version: October 18, 2008.' title: Correlators and Descendants of Subcritical Stein Manifolds --- Introduction {#section: introduction} ============ An open complex manifold $(M^{2n},J)$ is *Stein* if it can be realized as a properly embedded complex submanifold of some $\mathbb{C}^{N}$. In this paper we will assume $n\geq 3$. A smooth function $f: M\rightarrow \mathbb{R}$ is *exhausting* if it is proper and bounded from below. Let $d^{J}f$ denote $df\circ J$. The function $f$ is *plurisubharmonic* if the associated 2-form $\omega_{f} = -dd^{J}f$ is a symplectic form taming $J$, i.e., $\omega_{f}(v,Jv)>0$ for every non-zero tangent vector $v$. Plurisubharmonicity is an open condition. Therefore $f$ can be assumed to be Morse. By a theorem of Grauert, an open complex manifold is Stein if and only if it admits a plurisubharmonic function. A Stein manifold $(M^{2n},J)$ with an exhausting plurisubharmonic function $f$ admits the following associated structures: - a symplectic form $\omega_{f} = -dd^{J}f$, $\omega_{f}$ is $J$-invariant, - a primitive $\alpha = -d^{J}f$, - a vector field $Y$ such that $\alpha=\iota_{Y}\omega$, - a metric $g(v,w)=\omega(v,Jw)$. 
Since $L_{Y}\omega = \iota_{Y}d\omega + d(\iota_{Y}\omega) = d\alpha =\omega$, the vector field $Y$ is Liouville, i.e., the flow of $Y$ acts by expanding the symplectic form. In fact $Y$ is the gradient vector field of $f$ with respect to the metric $g$, $$df=-df\circ J \circ J=\alpha \circ J= (\iota_{Y}\omega) \circ J=\iota_{Y}g.$$ By rescaling $f$ we can assume that $Y$ is a complete vector field. If the Morse function $f$ has finitely many critical points, then $M$ is of *finite type*. If the Morse indices of the critical points are all less than $n$, then $M$ is *subcritical*. A Stein manifold $(M,\omega_{f})$ of finite type can be viewed as a symplectic cobordism with one positive end in the sense of [@EliashbergGiventalHofer]. Suppose $V$ is a hypersurface transverse to $Y$ which encloses all the critical points of $f$, then $V$ is of contact type with contact form $\alpha|_{V}$. Using the flow of $Y$, we see that the open end of $M$ is symplectomorphic to $(V\times [0,\infty),d(e^t\alpha|_{V}))$. The framework of symplectic field theory provides symplectic invariants of $M$. The goal of this paper is to compute some of these invariants. The definitions of the invariants will be given in the next section. In [@Yau], Mei-Lin Yau showed that if $M$ is subcritical and $c_{1}(M)=0$, then the *cylindrical contact homology* of $V$, $HC^{\textsf{cyl}}(V)$, is isomorphic as a graded vector space to the direct sum $\bigoplus_{i=1}^{\infty} H_{*}(M,\partial M)[2i-4]$, where $H_{*}(M,\partial M)[2i-4]$ is a copy of the relative homology with a positive degree shift of $2i-4$. Note that for a subcritical Stein manifold, $c_{1}(M)=0$ is equivalent to $c_{1}(\xi)=0$, where $\xi$ is the contact distribution on $V$. The unstable submanifolds of $M$ are of codimension at least $4$, and the complement of the unstable manifolds is homeomorphic to $V\times \mathbb{R}$, so $c_{1}(M)=c_{1}(TM|_{V})$. 
But $TM|_{V}=\xi\oplus \mathbb{C}$ where $\mathbb{C}$ is spanned by the Reeb field and the Liouville field, thus $c_{1}(M)= c_{1}(\xi)$. Our first result determines the *full contact homology algebra* of $M$, $HC(V)$. For a graded vector space $W=\oplus W_{i}$, let $\Lambda \left(W\right)$, the graded exterior algebra of $W$, be the quotient of the tensor algebra of $W$ by the ideal generated by the graded commutativity relations. If $M$ is a subcritical Stein manifold and $c_{1}(M)=0$, then $$HC(V)\cong \Lambda \left(HC^{\emph{\textsf{cyl}}}(V)\right).$$ In [@Yau], if a suitable plurisubharmonic function $f$ and a contact form $\alpha$ are chosen, then $\mathcal{C}^{\alpha}$, the chain complex for $HC^\textsf{cyl}(V,\alpha)$, can be identified with copies of $\text{Crit}(f)$, the Morse complex of a gradient–like vector field on $M$, by a chain map $\Psi(\alpha, f)$. This chain map $\Psi(\alpha, f)$ induces the desired isomorphism on homology. However, the naturality of this isomorphism was unknown. More precisely, if $g$ is another function and $\beta$ another contact form, then there are natural chain maps $$\begin{aligned} \Phi(\alpha, \beta): \mathcal{C}^{\alpha} &\rightarrow& \mathcal{C}^{\beta}\\ \Theta(f,g): \text{Crit}(f) &\rightarrow& \text{Crit}(g)\end{aligned}$$ which induce isomorphisms on $HC(V)$ and $H_{*}(M,\partial M)$ respectively. Does the following diagram commute at homology level? $$\begin{CD} \mathcal{C}^{\alpha} @>\ \ \ \Psi(\alpha, f)\ \ \ >> \text{Crit}(f)\\ @VV\Phi(\alpha, \beta)V @VV\Theta(f,g)V\\ \mathcal{C}^{\beta} @>\ \ \ \Psi(\beta, g)\ \ \ >> \text{Crit}(g) \end{CD}$$ By a computation of one-point descendant correlators of genus $0$, we give an affirmative answer to the above question.
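Additively, the theorem $HC(V)\cong \Lambda\left(HC^{\textsf{cyl}}(V)\right)$ says that the graded dimensions of $HC(V)$ are those of the free graded-commutative algebra on the generators of $HC^{\textsf{cyl}}(V)$. For a finite list of generator degrees these dimensions can be tabulated mechanically; the script below is purely illustrative bookkeeping (the degrees fed to it are arbitrary, not those of any particular contact manifold):

```python
# Graded dimensions (Poincare series coefficients) of the free
# graded-commutative algebra on generators of given degrees:
# an even-degree generator contributes a polynomial factor
# 1/(1 - t^d), an odd-degree generator an exterior factor (1 + t^d).

def graded_dims(degrees, max_deg):
    coeffs = [0] * (max_deg + 1)
    coeffs[0] = 1
    for d in degrees:
        if d % 2 == 0:
            # polynomial generator: multiply by 1/(1 - t^d)
            for i in range(d, max_deg + 1):
                coeffs[i] += coeffs[i - d]
        else:
            # exterior generator: multiply by (1 + t^d)
            new = coeffs[:]
            for i in range(d, max_deg + 1):
                new[i] += coeffs[i - d]
            coeffs = new
    return coeffs

print(graded_dims([2], 6))     # one even generator of degree 2
print(graded_dims([1, 1], 3))  # two odd generators of degree 1
```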
\[main\] There exists a non-degenerate pairing $$GW: HC^{\textsf{\emph{cyl}}}(V)\ \otimes \ \bigoplus_{i=1}^{\infty} H^{*}(M,\partial M)[i] \longrightarrow \mathbb{R}.$$ The map $GW$ induces an isomorphism $HC^{\textsf{\emph{cyl}}}(V) \rightarrow \bigoplus_{i=1}^{\infty} H_{*}(M,\partial M)[i]$ which coincides with $\Psi(\alpha, f)$ for any choice of $f$ and $\alpha$. In [@BiranCieliebak], Biran and Cieliebak studied *subcritical polarizations*: a subcritical polarization is a Kähler manifold $(M,\omega,J)$, where $\omega$ is an integral Kähler form, together with a smooth reduced complex hypersurface $\Sigma$ representing the Poincaré dual of $k[\omega]$, such that the complement $M \setminus \Sigma$ is a subcritical Stein manifold. Biran and Cieliebak asked if manifolds admitting subcritical polarizations are always uniruled. Combining our results on correlators on the subcritical Stein manifold and a standard Morse–Bott computation of correlators on the normal bundle of $\Sigma$, we obtain a partial answer to this question: If $M$ admits a subcritical polarization and $c_{1}(M)$ is proportional to the Kähler form $\omega$, then $M$ is uniruled. The cylindrical contact homology of $V$ can be computed by viewing $V$ as the boundary of a subcritical Stein domain, and also as the unit normal bundle inside the normal bundle of $\Sigma$. Equating the two we obtain a set of relations for the Betti numbers of $\Sigma$ and $M\setminus \Sigma$, described below. Suppose $(M^{2n},J,\Sigma)$ is a subcritical polarization such that $c_{1}(M)$ is proportional to $\omega$. Let $a_{i}$ be the $i$-th Betti number of $M\setminus \Sigma$, $b_{i}$ the $i$-th Betti number of $\Sigma$, and $D$ the homological dimension of $M\setminus \Sigma$, i.e., the grading of the highest non-vanishing homology group.
Then (**a**) : the sequence $\{a_{i}\}$ is symmetric about $\frac{D}{2}$, i.e., $a_{i}=a_{D-i}$; (**b**) : for $2i < n$ or $2i+1 < n$, $$b_{2i}=\sum_{j=0}^{i}a_{2j},\ \ b_{2i+1}=\sum_{j=0}^{i}a_{2j+1}.$$ The computation of correlators and descendants relies on the fact that there is an $S^1$-action on a subcritical Stein manifold. In the spirit of localization theorems of Atiyah–Bott [@AtiyahBott]and Graber–Pandharipande [@GraberPandharipande], the correlators and descendants can be determined by studying the fixed loci of the $S^1$-action. This paper is organized as follows: in section $2$ we review some basic facts about Stein manifolds and symplectic field theory, and define the relevant invariants; section $3$ describes subcritical Stein manifolds and Reeb dynamics in more detail, essentially summarizing the previous work of Yau; in section $4$ we determine the set of $S^1$-invariant holomorphic curves; in section $5$ we prove the necessary transversality results; finally in the last section we apply our results to subcritical polarizations. Stein manifolds and symplectic field theory =========================================== From now on we will restrict ourselves to Stein manifolds of finite type. All plurisubharmonic functions are assumed to be Morse unless otherwise stated. By a theorem of Eliashberg and Gromov, a Stein manifold carries a canonical symplectic structure. A different choice of plurisubharmonic function corresponds to a different choice of complete Liouville vector field $Y$ on the same symplectic manifold. Let $f$ and $g$ be two plurisubharmonic functions which induce complete Liouville vector fields on a Stein manifold $M$. Then the manifolds $(M, \omega_{f})$ and $(M, \omega_{g})$ are symplectomorphic. It is easy to check that the unstable submanifolds of $f$ are isotropic, hence the Morse index of each critical point is less than or equal to $n$. 
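The Betti-number relations (a) and (b) stated in the introduction can be checked on the standard example of a subcritical polarization, $M = \mathbb{CP}^n$ with $\Sigma$ a hyperplane $\mathbb{CP}^{n-1}$ and $M\setminus\Sigma \cong \mathbb{C}^n$. The short script below simply encodes the classical Betti numbers of these spaces and verifies both relations:

```python
# Check of Betti-number relations (a) and (b) for the standard
# example of a subcritical polarization: M = CP^n with Sigma a
# hyperplane CP^{n-1}, so that M \ Sigma = C^n.  The Betti numbers
# used here are the classical ones for these spaces.

def check_relations(n):
    # Betti numbers of C^n: a_0 = 1, all others 0; hom. dimension D = 0
    a = [1] + [0] * (2 * n)
    D = 0
    # Betti numbers of CP^{n-1}: 1 in each even degree 0..2(n-1)
    b = [1 if i % 2 == 0 else 0 for i in range(2 * n - 1)]
    # (a): symmetry of {a_i} about D/2
    assert all(a[i] == a[D - i] for i in range(D + 1))
    # (b): partial sums, in the range of validity 2i < n, 2i+1 < n
    for i in range(n):
        if 2 * i < n:
            assert b[2 * i] == sum(a[2 * j] for j in range(i + 1))
        if 2 * i + 1 < n:
            assert b[2 * i + 1] == sum(a[2 * j + 1] for j in range(i + 1))
    return True

print(all(check_relations(n) for n in range(3, 8)))  # True
```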
A Stein manifold is *split* if it is of the form $(M'\times \mathbb{C},J'\times i)$, where $(M',J')$ is Stein. For a split Stein manifold we will always use a plurisubharmonic function of the form $f=f' + \kappa(x_n^2+y_n^2)$, where $f'$ is a plurisubharmonic function on $M'$, $(x_n,y_n)$ the Euclidean coordinates of $\mathbb{C}$, and $\kappa$ a positive constant. The associated structures of $M$ are related to the associated structures of $M'$ as follows: - symplectic form $\omega = \omega' + dx_ndy_n$, - Liouville field $Y = Y' + \frac{1}{2}(x_n{\partial_{x_n}} + y_n{\partial_{y_n}})$, - primitive $\alpha = \alpha' + \frac{1}{2}(x_ndy_n -y_ndx_n)$. Note that $Y$ and $Y'$ share the same critical points. A hypersurface $V'\subset M'$ is transverse to $Y'$ and encloses the critical points of $Y'$ if it can be realized as a level set $\{\phi = c\}$ of a function $\phi$ such that $Y'$ is gradient–like for $\phi$, and $\phi(p)<c$ for every critical point $p$ of $Y'$. Let $V\subset M$ be the level set $\{\phi + \kappa(x_n^2+y_n^2)=c\}$. Such a $V$ is transverse to $Y$ and encloses the critical points of $Y$. It is said to be a *stabilization* of $V'$. By construction, all structures and the contact type hypersurface $V$ admit an $S^1$-symmetry: rotation in the $\mathbb{C}$ component. Two Stein structures $(M,J_{0})$ and $(M,J_{1})$ are *Stein homotopic* if there is a continuous family of Stein structures $(M,J_{t})$ with exhausting plurisubharmonic functions $f_{t}$ such that the critical points of $f_{t}$ stay in some compact subset during the homotopy. Two Stein manifolds $(M_{0},J_{0})$ and $(M_{1},J_{1})$ are *deformation equivalent* if there exists a diffeomorphism $\phi: M_{0} \rightarrow M_{1}$ such that $(M_{0},J_{0})$ and $(M_{0},\phi^{*}J_{1})$ are Stein homotopic. Every subcritical Stein manifold is deformation equivalent to a split one.
If $(M_{0},J_{0})$ and $(M_{1},J_{1})$ are deformation equivalent, then $(M_{0},\omega_{f_0})$ and $(M_{1},\omega_{f_1})$ are symplectomorphic. Hence we always treat a subcritical Stein manifold as split. A *symplectic cobordism* is an open symplectic manifold $(M,\omega)$ with cylindrical ends, i.e., $(M,\omega)$ can be decomposed into three parts, $M=M^{+}\cup M' \cup M^{-}$, with the following properties: - $M^{+}$ is symplectomorphic to $(V^{+}\times (-\epsilon,\infty), d(e^{t}\alpha^{+}))$, where $(V^{+}, \alpha^{+})$ is a closed contact manifold, - $M'$ is compact, - $M^{-}$ is symplectomorphic to $(V^{-}\times (-\infty, \epsilon), d(e^{t}\alpha^{-}))$, where $(V^{-}, \alpha^{-})$ is a closed contact manifold. A Stein manifold with a plurisubharmonic function $f$ is therefore a symplectic cobordism with one positive end. Symplectic field theory invariants of a cobordism arise from the structure of the moduli spaces of finite energy holomorphic curves. We will give an extremely quick overview and introduce the invariants we will compute. See [@EliashbergGiventalHofer] for a more complete discussion. An almost complex structure $J$ on $M$ is *compatible* with the cobordism if it satisfies the following: - $\omega(v,Jv)>0$ for non-trivial $v\in TM$, - $J$ is invariant under translation in the $t$ direction on the cylindrical ends $M^{\pm}$, - $J{\partial_{t}}$ is the Reeb field on $V^{\pm}$, - $J$ is an anti-involution on the contact distributions $\xi^{\pm}$. Unfortunately the integrable complex structure $J$ of the Stein manifold is not compatible with the end structure $V\times (-\epsilon,\infty)$. If the closed Reeb orbits on $V^{\pm}$ are non-degenerate, i.e., $1$ is not an eigenvalue of the linearized return map of the Reeb flow, then Hofer, Wysocki and Zehnder [@HoferWysockiZehnder] showed that finite energy $J$-holomorphic surfaces are asymptotic to closed Reeb orbits as $t\rightarrow \pm\infty$.
The symplectization of a contact manifold $(V, \alpha)$, $(M,\omega)=(V\times \mathbb{R},d(e^{t}\alpha))$, can be considered as a trivial cobordism. For the trivial cobordism, we require compatible complex structures to be $\mathbb{R}$-invariant. Holomorphic curves then come in one parameter families. Suppose the first Chern class of the contact distribution $\xi$ vanishes. Then there is a consistent way to assign a Conley-Zehnder index $\mu_{\gamma}$ to each contractible Reeb orbit $\gamma$, such that the expected dimension of the moduli space $\mathcal{M}_{g,\{\gamma^+_1\dots, \gamma^+_a\},\{\gamma^-_1\dots, \gamma^-_b\},m}$ is $$\sum_{i} \mu_{\gamma^+_i}- \sum_{j} \mu_{\gamma^-_j} + (n-3)(2-2g-a-b)+2m$$ where $\mathcal{M}_{g,\{\gamma^+_1\dots, \gamma^+_a\},\{\gamma^-_1\dots, \gamma^-_b\},m}$ is the moduli space of $J$-holomorphic maps from a genus $g$ surface with $a+b$ punctures and $m$ interior marked points to $M$, such that the $a$ positive punctures are asymptotic to $\{\gamma^+_i\}$ and the $b$ negative punctures to $\{\gamma^-_j\}$. For coherent orientations of the moduli spaces, we only consider *good* orbits. An orbit is good if it is not an even multiple cover of another orbit whose linearized return map has an odd number of eigenvalues in $(-1,0)$. Invariants of $(V,\xi)$ can be obtained by counting holomorphic curves in the symplectization of $V$. The *full contact homology algebra* of $V$, $HC(V)$, is defined to be the homology of the chain complex of the graded commutative algebra generated by good contractible Reeb orbits. 
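The expected-dimension formula above is pure index bookkeeping, so it can be wrapped in a small helper (the function name and interface are ours, purely for illustration):

```python
# Expected dimension of the moduli space of genus-g curves with
# positive/negative punctures asymptotic to Reeb orbits of given
# Conley-Zehnder indices, and m interior marked points, following
# the index formula in the text.

def expected_dim(mu_plus, mu_minus, n, g=0, m=0):
    a, b = len(mu_plus), len(mu_minus)
    return (sum(mu_plus) - sum(mu_minus)
            + (n - 3) * (2 - 2 * g - a - b) + 2 * m)

# A cylinder (g = 0, one positive and one negative puncture, m = 0)
# has expected dimension mu_+ - mu_-, independent of n:
print(expected_dim([5], [4], n=4))  # 1
```

For a cylinder the formula reduces to $\mu_{\gamma^+} - \mu_{\gamma^-}$, which is why the cylindrical differential counts expected-dimension-$1$ moduli spaces between orbits whose indices differ by one.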
The differential of an orbit is given by the count of genus zero holomorphic curves with one positive puncture and any number of negative punctures: $$\partial \gamma = \sum_{i} c_{i} \sigma_i,$$ where each $\sigma_{i}$ is a monomial $\beta_{1}\cdots \beta_{k_i}$, and $c_{i}$ is the algebraic count of moduli spaces of expected dimension $1$ (to account for the $\mathbb{R}$-invariance) of genus zero curves with a positive puncture asymptotic to $\gamma$ and several negative punctures asymptotic to $\beta_{1},\ldots,\beta_{k_i}$. The differential is then extended to the entire algebra by the Leibniz rule. From the study of the possible boundaries of an expected dimension $2$ moduli space of genus zero curves with one positive end and several negative ends, one can deduce $\partial^2 = 0$. A differential counting only holomorphic cylinders can also be defined on the additive group generated by the good contractible Reeb orbits. Define $$\partial \gamma = \sum_{i} c_{i} \beta_{i},$$ where $c_{i}$ is the algebraic number of expected dimension $1$ moduli spaces of holomorphic cylinders from $\gamma$ to $\beta_{i}$. It was pointed out in [@EliashbergGiventalHofer] that if there are no index $-1$, $0$ or $1$ holomorphic planes, then $\partial^2=0$. The resulting homology is called the *cylindrical contact homology* of $(V,\xi)$. These homologies are invariants of the contact structure $\xi$ on $V$, independent of the choice of contact form or complex structure. With respect to a fixed Liouville vector field $Y$, it is clear that any two contact–type hypersurfaces enclosing the critical points of $Y$ are contactomorphic, since a flow along $Y$ takes one to the other. However, this is no longer obvious for two different Liouville vector fields coming from two different plurisubharmonic functions. Still, the full contact homology and the cylindrical contact homology (if well defined) of the two surfaces are naturally isomorphic.
Let $Y_1$ and $Y_2$ be two complete Liouville vector fields on a symplectic manifold $M$, and $V_1$, $V_2$ be contact type hypersurfaces at infinity with respect to each. Then $HC(V_1)\cong HC(V_2)$, and $HC^{\emph{\textsf{cyl}}}(V_1)\cong HC^{\emph{\textsf{cyl}}}(V_2)$ if defined. Fix $V_1$. Since the negative side of $V_1$ is compact, we can translate $V_2$ along $Y_2$ to $V_{2}'$ such that $V_{2}'$ lies entirely on the positive side of $V_1$. The region between $V_1$ and $V_{2}'$ then becomes a symplectic cobordism with two cylindrical ends. This induces a homomorphism $\Phi: HC(V_2)\rightarrow HC(V_1)$. Now translate $V_1$ along $Y_1$ to $V_{1}''$ such that $V_{1}''$ lies entirely on the positive side of $V_{2}'$. Similarly this induces $\Psi: HC(V_1)\rightarrow HC(V_2)$. Since the total region between $V_1''$ and $V_1$ is just part of a symplectization, by the composition property of symplectic cobordisms, $\Phi\circ \Psi$ is the identity. Similarly $HC^{{\textsf{cyl}}}(V_1)\cong HC^{{\textsf{cyl}}}(V_2)$. Thus the full contact homology and the cylindrical contact homology (if defined) are invariants of the Stein manifold $M$. We can also count holomorphic curves in $M$, the whole cobordism. Let $\overline{M}$ denote the compactification of $M$ where a copy of $V$ is attached “at infinity”. Let $\mathcal{M}_{0,\{\gamma_{1},\dots,\gamma_{n}\},m}$ be the moduli space of genus $0$ holomorphic curves with $n$ positive punctures asymptotic to Reeb orbits $\{\gamma_{i}\}$, and $m$ interior marked points. In general, the moduli spaces are not of the expected dimension. A regularization procedure must be performed to obtain a perturbed moduli space $\mathcal{M}^{virt}$, which is a weighted branched manifold with boundary. Together with the evaluation maps at the marked points, this represents a chain in $\left(\overline{M}^m, \partial \left(\overline{M}^m\right)\right)$.
One may integrate compactly supported forms over such chains, in other words, evaluate the integral $$\int_{\mathcal{M}^{virt}} ev_{1}^*(\theta_{1})\dots ev_{m}^*(\theta_{m})$$ where $\{\theta_{i}\}$ is a set of compactly supported closed forms on $M$. This is called a *correlator*. However, unlike the case of Gromov–Witten invariants for closed manifolds, these chains are not cycles; they can have codimension one strata in the interior of $\overline{M}^m$. Therefore the value of a correlator depends on the actual forms instead of their cohomology classes. To have an invariant correlator we look for moduli spaces whose union is a relative cycle. Suppose $\sigma=[\sum_{i}c_{i}\sigma_{i}]$ is an element in the full contact homology of $V$, where each $\sigma_{i}=\gamma_{i_1}\cdots\gamma_{i_{k_i}}$ is a monomial. For each monomial $\sigma_{i}$, let $\mathcal{M}_i$ be the moduli space of $k_{i}$ holomorphic planes asymptotic to $\gamma_{i_1}, \ldots, \gamma_{i_{k_i}}$ respectively, with $m$ marked points distributed between them in total. Then the union of the moduli spaces $\mathcal{M}_i$ with the evaluation map is a relative cycle in $\left(\overline{M}^{m},\partial \left(\overline{M}^{m}\right)\right)$. This follows from compactness results on holomorphic curves in cobordisms. The boundary of a moduli space $\mathcal{M}_i$ that evaluates to the interior of $\overline{M}^m$ consists of multi-story holomorphic curves. The codimension one stratum consists of 2-story holomorphic curves. The curve in the symplectization level of $V$ has expected dimension one, contains no marked points, and consists of trivial cylinders $\gamma_{i_j}\times \mathbb{R}$ except for one component $i_{0}$, where it is a genus zero curve with one positive puncture asymptotic to $\gamma_{i_{0}}$ and any number of negative punctures. This is precisely what the rational contact homology differential counts.
The fact that $[\sum_{i}\sigma_{i}]$ is a cycle in full contact homology implies that the codimension one strata from all the $\mathcal{M}_i$’s cancel each other. Therefore their union is a relative cycle in $(\overline{M}^{m},\partial (\overline{M}^{m}))$. We will denote this cycle by $\mathcal{M}_{[\sigma]}$. The relative homology class of $\mathcal{M}_{[\sigma]}$ is independent of the contact form $\alpha$, as well as all other choices. Another class of forms we can integrate over the moduli space $\mathcal{M}^{virt}$ are the $\psi$-classes. Suppose $p$ is a marked point; then for each element of the moduli space, the cotangent line to the domain curve at the marked point $p$ is a complex dimension one vector space. These spaces patch together to form a line bundle $L_{p}$, called a tautological line bundle of the moduli space. Define $\psi_{p}=c_1(L_{p})$. A mixed integral of pulled-back forms and powers of $\psi$-classes over $\mathcal{M}^{virt}$ is called a *descendant*. Subcritical Stein manifolds and Reeb dynamics ============================================= This section is essentially a summary of [@Yau]. Given a plurisubharmonic Morse function $f$ on a Stein manifold, or more importantly, the complete gradient–like Liouville vector field $Y$, we can reconstruct $M$ by attaching standard handles as in [@Weinstein]. We will describe the handle attachment procedure and discuss how the shape of the handle can be chosen to control the Reeb dynamics. An index $k$ handle of real dimension $2n$ is modeled on the complex $n$–dimensional space $\mathbb{C}^n$ with the standard symplectic form $\omega_{{{\rm st}}}$ together with a standard complete Liouville vector field $Y_{{{\rm st}}}$. Let $(x_i,y_i)$ be the Euclidean coordinates, $\omega_{{{\rm st}}}=\sum _{i=1}^n dx_i\wedge dy_i$.
Define $$Y_{{{\rm st}}} :=\sum_{i=1}^{k}\left(2x_i\frac{\partial}{\partial x_i}- y_i\frac{\partial}{\partial y_i}\right)+\sum _{j=k+1}^n\frac{1}{2}\left( x_j\frac{\partial}{\partial x_j}+y_j\frac{\partial}{\partial y_j}\right).$$ Then $Y_{{{\rm st}}}$ is the gradient vector field of the function $$f_{{{\rm st}}}=\sum_{i=1}^{k}(x_i^2- \frac{1}{2}y_i^2)+ \sum_{j=k+1}^{n}\frac{1}{4}(x_j^2+y_j^2)$$ with respect to the Euclidean metric, and it is easy to see that $L_{Y_{{{\rm st}}}}\omega_{{{\rm st}}}=\omega_{{{\rm st}}}$. Let the $1$-form $\alpha_{{{\rm st}}}$ be the contraction $\iota_{Y_{{{\rm st}}}}\omega_{{{\rm st}}}$; it restricts to a contact $1$-form on any hypersurface $V$ transverse to $Y_{{{\rm st}}}$. $$\alpha_{{{\rm st}}}=\sum_{i=1}^{k}(2x_idy_i + y_idx_i)+ \sum_{j=k+1}^{n}\frac{1}{2}(x_jdy_j - y_jdx_j).$$ We have an isotropic $k$-disk $$D_{{{\rm st}}}=\left\{\sum_{i=1}^{k}y_i^2 \leq 1, x_i=x_j=y_j=0\right\},$$ and its boundary sphere $$S_{{{\rm st}}}= \left\{\sum_{i=1}^{k}y_i^2 = 1, x_i=x_j=y_j=0\right\}.$$ Note that $S_{{{\rm st}}}$ lies on the hypersurface $V_{-}=\{f_{{{\rm st}}}= -1/2\}$. In fact $S_{{{\rm st}}}$ is an isotropic sphere of the contact manifold $V_{-}$. Suppose we have a symplectic manifold $(M,\omega)$ with convex boundary, i.e., there is a local Liouville vector field $Y$ such that $\partial M$ is transverse to $Y$. If $S$ is an isotropic $(k-1)$-sphere on $\partial M$ together with a trivialization of the symplectic subnormal bundle, then using the following standard neighborhood theorem, we can attach a tubular neighborhood $U$ of $D_{{{\rm st}}}$ to $M$, identifying $S_{{{\rm st}}}$ with $S$: For $i=1,2$ let $(M_i,\omega_i,Y_i,V_i,S_i)$ be symplectic manifolds $(M_i,\omega_i)$ with Liouville vector fields $Y_i$, hypersurfaces $V_i$ transverse to $Y_i$, and isotropic submanifolds $S_i$ of $V_i$.
Given a diffeomorphism from $S_1$ to $S_2$ covered by an isomorphism between their symplectic subnormal bundles, there exist neighborhoods $N_i$ of $S_i$ in $M_i$ and a symplectomorphism between them extending the given mapping on $S_1$, such that $(N_1,\omega_1,Y_1,V_1,S_1)$ is taken to $(N_2,\omega_2,Y_2,V_2,S_2)$. Apply this theorem to the pair $(M,\omega,Y,\partial M,S)$ and $(\mathbb{C}^n,\omega_{{{\rm st}}},Y_{{{\rm st}}},V_{-},S_{{{\rm st}}})$. Note that the symplectic normal bundle of $S_{{{\rm st}}}$ is trivial and has a natural framing $$\left\{{\partial_{x_{k+1}}},{\partial_{y_{k+1}}},\dots, {\partial_{x_n}},{\partial_{y_n}}\right\}.$$ The Liouville vector field $\widetilde{Y}$ on the new manifold $M\cup U$ restricts to $Y$ on $M$ and $Y_{{{\rm st}}}$ on $U$. We can take any hypersurface $V_+$ in $U$ transverse to $Y_{{{\rm st}}}$ and tangent to $V_{-}$, then $(\partial M\setminus S)\cup V_+$ will be a hypersurface in $M\cup U$ transverse to $\widetilde{Y}$. There is a lot of freedom in choosing $V_+$; we will choose one which makes the Reeb dynamics easy to understand. In fact we will make the surface quadratic in shape: $$V_+=\left\{\sum_{1}^{k}(b_i x_{i}^2 - b'_i y_{i}^2) + \sum_{k+1}^n a_{j}(x_{j}^2+y_{j}^2) = c > 0\right\}.$$ Such a $V_+$ is said to be a *standard contact handle*. Let $\phi$ be the function $\sum_{1}^{k}(b_i x_{i}^2 - b'_i y_{i}^2) + \sum_{k+1}^n a_{j}(x_{j}^2+y_{j}^2)$. For positive constants $\{b_i,b'_i,a_j\}$ the level set $V_+$ is everywhere transverse to the Liouville vector field $Y_{{{\rm st}}}$. $$\begin{aligned} L_{Y_{{{\rm st}}}}\left(\sum_{1}^{k}(b_i x_{i}^2 - b'_i y_{i}^2) + \sum_{k+1}^n a_{j}(x_{j}^2+y_{j}^2)\right) &=& \\ \sum_{1}^k (4b_i x_{i}^2+2b'_i y_{i}^2) + \sum_{k+1}^n a_j (x_j^2+y_j^2) &>& 0.\end{aligned}$$ Strictly speaking this level set $V_+$ intersects $V_{-}$ at some angle.
In order to have a smooth surface after handle attachment we round the corners in an arbitrarily small neighborhood of the intersection. As shown in [@Yau], the smoothing will not affect the final analysis of Reeb orbits and their Conley-Zehnder indices. Suppose $\gamma$ is a contractible Reeb orbit. Then a choice of a spanning disk $D$ and a symplectic trivialization of $\xi$ on $D$ induces a trivialization of $\xi$ along $\gamma$. The Reeb flow $\Phi_{R}^t$ preserves the contact structure $\xi$. Hence with respect to the chosen trivialization, the linearized Reeb flow $d(\Phi^t)$ defines a path in the symplectic matrices $Sp(2n-2)$. The Conley-Zehnder index of a path $\Gamma$ in $Sp(2n-2)$, $\mu(\Gamma)$, is defined by the intersection number of the path with the Maslov cycle. If $\Gamma$ is a path in $Sp(2m_1)$ and $\Gamma'$ a path in $Sp(2m_2)$, then $\mu(\Gamma\oplus\Gamma')=\mu(\Gamma)+\mu(\Gamma')$. By continuity the Conley-Zehnder index is the same for two different trivializations over the same disk. For a different spanning disk $D'$, the Conley-Zehnder index with respect to $D'$ differs from that of $D$ by twice the first Chern class of $\xi$ on the sphere bounded by $D$ and $D'$. Since we assume $c_1(\xi)=0$, the Conley-Zehnder index of every contractible Reeb orbit is well defined and independent of choices. Consider the Reeb dynamics on a standard handle $$V=\left\{\sum_{1}^{k}(b_i x_{i}^2 - b'_i y_{i}^2) + \sum_{k+1}^n a_{j}(x_{j}^2+y_{j}^2) = c > 0\right\}.$$ The Hamiltonian vector field of $\phi$, $X_{\phi}$, is defined by $\iota_{X_{\phi}}\omega=-d\phi$.
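The block-by-block index computation below uses two standard facts about Conley–Zehnder indices of elementary paths in $Sp(2)$ starting at the identity (recorded here for orientation; the degenerate case is taken in the Robbin–Salamon convention):

```latex
% a path with real positive eigenvalues never meets the Maslov cycle:
\mu\bigl(t\mapsto D(t)\bigr)=0
   \quad\text{if each } D(t)\text{ has real positive eigenvalues},
\qquad
% a rotation through total angle \theta:
\mu\Bigl(t\mapsto
   \begin{pmatrix}\cos\theta t & -\sin\theta t\\
                  \sin\theta t & \cos\theta t\end{pmatrix}_{t\in[0,1]}\Bigr)
   = 2\Bigl\lfloor \frac{\theta}{2\pi}\Bigr\rfloor + 1
   \quad (\theta\notin 2\pi\mathbb{Z}),
```

while for $\theta\in 2\pi\mathbb{Z}$ the Robbin–Salamon index is $\theta/\pi$. In particular a full revolution contributes $2$, and a rotation through an angle in $(0,2\pi)$ contributes $1$.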
When restricted to the level set $V$ of $\phi$, the Hamiltonian field $X$ is a multiple of the Reeb field, $$X = \sum_{1}^{k}\left(2 b_i x_i{\partial_{y_{i}}} + 2b'_i y_{i} {\partial_{x_i}}\right) + \sum_{k+1}^n a_{j}\left(2x_{j}{\partial_{y_j}} - 2y_{j}{\partial_{x_j}}\right).$$ The Reeb field on $V$ is $$R=\frac {X}{\alpha_{{{\rm st}}}(X)} = \frac{X}{\sum_1^k(4b_ix_i^2+2b_i'y_i^2)+ \sum_{k+1}^n a_j(x_j^2+y_j^2)}=\frac{X}{\sum_1^k (3b_ix_i^2+3b_i'y_i^2) + c}.$$ Since the Hamiltonian field $X$ and the Reeb field $R$ have the same integral curves, we look for closed orbits of $X$. The Hamiltonian flow is hyperbolic in the $(x_i,y_i)$ components, and rotational with constant angular velocity in the $(x_j,y_j)$ components. Therefore any closed Reeb orbit has $(x_i,y_i)=0$ for $i\leq k$ and lies on the ellipsoid $\sum_{k+1}^{n}a_j(x_j^2+y_j^2)=c$. If we choose $\{a_j\}$ to be rationally independent, then the rotations in different $(x_j,y_j)$ factors will always be out of phase. Hence the only closed orbits are those lying entirely on $(x_j,y_j)$-planes, i.e., multiple covers of the circle $a_j(x_j^2+y_j^2)=c$. A simple orbit has period $\pi/a_j$ with action $c\pi/a_j$. We can calculate the Conley-Zehnder index for each one of these Reeb orbits $\gamma$. Along $\gamma$, $R=X/c$ is a constant multiple of the Hamiltonian field; furthermore, near $\gamma$, $R = X/c$ up to second order. Therefore with a fixed trivialization, the linearized Reeb flow represents the same path of symplectic matrices in $Sp(2n-2)$ as the linearized Hamiltonian flow, except reparametrized by a constant factor $c$. The disk $D= \{a_j(x_j^2+y_j^2)\leq c, x_i=y_i=0, i\neq j\}$ can be used as a spanning disk. The standard symplectic trivialization of $\mathbb{C}^n$ by coordinate vectors $\{{\partial_{x_i}},{\partial_{y_i}}\}$ over $\gamma$ extends to the standard trivialization of $T\mathbb{C}^n$ over $D$.
Identify $T\mathbb{C}^n$ as the sum of the contact distribution $\xi$ and a trivial symplectic bundle spanned by $R$ and $Y_{{{\rm st}}}$. The linearized Hamiltonian flow fixes $R|_{\gamma}$ and $Y_{{{\rm st}}}|_{\gamma}$. Suppose we pick a trivialization of $\xi$ on $D$. With respect to this direct sum trivialization of $T\mathbb{C}^n|_{\gamma}=\xi\oplus \mathbb{R}Y \oplus \mathbb{R}R$, the path of matrices has the form $\Gamma=\Gamma_1\oplus \Gamma_2$, where $\Gamma_1$ is a path in $Sp(2n-2)$ which computes the Conley-Zehnder index of the Reeb orbit $\gamma$, and $\Gamma_2$ is the constant path in $Sp(2)$. Therefore $\mu(\gamma,\xi)=\mu(\gamma,T\mathbb{C}^n)$. With respect to the standard trivialization of $T\mathbb{C}^n$, the linearized Hamiltonian flow is a block diagonal matrix $$d(\Phi_X^t)= \left[ \begin{array}{ccc} D_{1} & & \\ & \ddots & \\ & & D_{n} \end{array} \right]$$ with $2\times 2$ blocks $$D_{i}=\left[\begin{array}{cc} \cosh 2\sqrt{b_i b_i'}t & \sqrt{\frac{b'_i}{b_i}}\sinh 2\sqrt{b_i b_i'}t \\ \sqrt{\frac{b_i}{b_i'}}\sinh 2\sqrt{b_i b_i'}t & \cosh 2\sqrt{b_i b_i'}t \end{array} \right],\ i\leq k,$$ $$D_{j}=\left[\begin{array}{cc} \cos (2a_j t) & -\sin (2a_j t) \\ \sin (2a_j t) & \cos (2a_j t) \end{array} \right],\ j > k.$$ The Conley-Zehnder index of this path is then the sum of the Conley-Zehnder indices for the individual blocks. For $i\leq k$, the $2\times 2$ matrices always have real eigenvalues, therefore the Conley-Zehnder index is 0. For $j > k$, suppose $a_n$ is much larger than the others; then for any orbit in the $(x_j,y_j)$-plane, $j<n$, in the time for one revolution, the linearized flow already makes a large number of rotations in the $(x_n,y_n)$ component. Therefore the Conley-Zehnder index would be large.
For the simple orbit in the $(x_n,y_n)$-plane, the linearized flow makes one full revolution in the $(x_n,y_n)$ component, contributing $2$ to the index, and in the other components a small positive rotation of angle less than $\pi$, contributing $1$ to the index. Therefore the Conley-Zehnder index is $2+(n-k-1)=n-k+1$. In fact by choosing $a_n$ sufficiently large so that $m/a_n < 1/a_j$ for all $k<j<n$, we see that an $m$-fold cover of this orbit has index $2m+(n-k-1)$. We summarize the discussion on the Reeb dynamics as follows: \[ind1\] Given any positive integer $N$, in a standard $2n$ dimensional handle of Morse index $k<n$, there exists a standard contact handle $V$ such that the closed Reeb orbits of Conley-Zehnder index less than $N$ on $V$ are multiple covers of a single non-degenerate orbit $\gamma$; they have Conley–Zehnder indices $2m+(n-k-1)$, where $m$ is the multiplicity. This deals with orbits lying entirely in a single handle. Lemma 4.2 of [@Yau] deals with Reeb orbits passing through multiple handles. \[ind2\] If $M$ is subcritical, then given any positive integer $N$, by thinning the handles, i.e., changing the quadratic shape parameter inside each standard subcritical handle, there is a hypersurface $V$ such that each contractible Reeb orbit on $V$ passing through multiple handles has Conley-Zehnder index larger than $N$. Let $M=M'\times \mathbb{C}$, and let $V' = \{\phi' = c\}$ be a level set of a function $\phi'$ for which $Y'$ is gradient–like, with $\phi'(p)<c$ for all critical points $p$ of $Y'$. The stabilization of a standard contact handle is a standard subcritical contact handle. By increasing the constant $\kappa$ in the stabilization $\phi=\phi'+\kappa|z|^2$, we can simultaneously “thin” all subcritical handles in $V$, the stabilization of $V'$.
Therefore Lemmas \[ind1\] and \[ind2\] imply \[index\] Let $M=M'\times \mathbb{C}$ be a subcritical Stein manifold. Then for any $N>0$, there exists $\kappa$ sufficiently large so that each Reeb orbit of Conley–Zehnder index less than $N$ on the stabilization $V\subset M$ is an $m$–fold cover of a distinguished simple Reeb orbit $\{x^2+y^2 = c - \phi(p)\}$ over an index $k$ critical point $p$ of $M'$. This orbit has Conley–Zehnder index $2m+(n-k-1)$. By the dimension formula, we see that there are no index $-1$, $0$ or $1$ holomorphic planes, therefore the cylindrical contact homology of $V$ is well defined. Complex structure and $S^1$–invariant curves ============================================ Let $J'$ be a compatible complex structure on $M'$. As remarked earlier, the product complex structure $J'\oplus i$ will not be compatible with the cylindrical end structure $V\times [0,\infty)$, even though $J'\oplus i$ tames $\omega' + dx_ndy_n$. This is due to the fact that the splittings $TM|_{V}=TM'\oplus T\mathbb{C}$ and $TM|_{V}=\xi \oplus \mathbb{R}Y\oplus \mathbb{R}R $ do not match at a generic point on $V$; thus $J'\oplus i$ does not preserve the contact distribution on $V$, nor does it pair the Liouville field $Y$ with the Reeb field $R$. Let $M'$ be a Stein manifold, $f'$ a plurisubharmonic Morse function, and $Y'$ the associated complete Liouville vector field. Given any hypersurface $V'$ transverse to $Y'$ enclosing all the critical points, $f'$ can be modified to a function $\phi'$ outside a neighbourhood of the critical points such that $V'=\{\phi' = c\}$ is a level set and $Y'$ is gradient–like with respect to $\phi'$. Let $M = M'\times \mathbb{C}$ be subcritical, $\phi=\phi'+\kappa|z|^2$, and $V=\{\phi= c\}$ a stabilization of $V'$. Identify $M'=M'\times \{0\}$, $V'=V'\times \{0\}$, and let $N'=\{\phi'<c\}\subset M'\subset M$. Let $\rho$ be the projection $M\rightarrow M'$. Then $V\setminus V' \stackrel{\rho}{\rightarrow} N'$ is a circle bundle over $N'$.
Let $J$ be a compatible almost complex structure on $M$ with the following properties: 1. $J$ is invariant under the $S^1$–action of rotation in the $\mathbb{C}$ factor. 2. On $M'$, we require $J=J'\oplus i$ where $J'$ is a compatible complex structure for $M'$ and $i$ the complex multiplication in $T\mathbb{C}$. This does not conflict with compatibility since on $V'=M'\cap V$, the splitting of $TM$ satisfies $$\begin{aligned} \xi &=& \xi'\oplus T\mathbb{C} \\ R &=& R' \\ Y &=& Y' \end{aligned}$$ 3. Inside each standard handle there is a single critical point $p$. For a standard handle, we will use the standard $\mathbb{R}^{2n}=\mathbb{C}^{n-1}\times \mathbb{C}$ coordinates, where $\mathbb{C}^{n-1}$ are coordinates of a handle in $M'$, and $(x_n,y_n)$ is the coordinate of the $\mathbb{C}$ factor. Let $$\phi = \sum_{1}^{k}(b_i x_{i}^2 - b'_i y_{i}^2) + \sum_{k+1}^n a_{j}(x_{j}^2+y_{j}^2).$$ Near the center of the handle the contact–type hypersurface $V$ is given by a level set $\{\phi=c\}$. The distinguished Reeb orbit $\gamma$ on $V$ is the circle $\{x_{n}^2+y_{n}^2=c, x_1=y_1=\dots = x_{n-1}=y_{n-1}=0\}$. On $\gamma$, the splitting of $TM=\xi\oplus(\mathbb{R}Y\oplus \mathbb{R}R)$ exactly coincides with $TM=T\mathbb{C}^{n-1}\oplus T\mathbb{C}$. Define $J|_{\xi}$ to be the standard complex structure on $T\mathbb{C}^{n-1}$, and let $J$ pair the Liouville vector $Y$ with the Reeb vector $R$ on $T\mathbb{C}$. The positive translation along $Y$ then extends $J$ to the region $\{x_{n}^2+y_{n}^2\geq c, x_1=y_1=\dots = x_{n-1}=y_{n-1}=0\}$. Then $J$ can be extended to the entire vertical plane $\{x_1=y_1=\dots = x_{n-1}=y_{n-1}=0\}$ such that $J$ preserves the splitting $TM=T\mathbb{C}^{n-1}\oplus T\mathbb{C}$ and acts by standard complex multiplication on the first factor. In this way the plane $\{x_1=y_1=\dots = x_{n-1}=y_{n-1}=0\}$ is $J$-holomorphic. 4. We also specify $J$ in a small neighbourhood of the plane $\{x_1=y_1=\dots = x_{n-1}=y_{n-1}=0\}$.
Let $U\subset V$ be a sufficiently small neighbourhood of the Reeb orbit $\gamma$ such that $d\rho$, the projection of $TM=T\mathbb{C}^{n-1}\oplus T\mathbb{C}$ to the first factor, is an isomorphism from $\xi$ to $T\mathbb{C}^{n-1}$. Define $J|_{\xi}$ to be the lift of the standard complex multiplication on $T\mathbb{C}^{n-1}$ to $\xi$, in other words, $J(v)= d\rho^{-1}(i\cdot d\rho(v))$. Again define $J(Y)=R$ on $U$ and extend $J$ by positive translation along the Liouville field $Y$. To extend $J$ to the interior, foliate the space between $U$ and the hypersurface $\{x_n^2+y_{n}^2 = \epsilon, x_1^2+y_1^2+\dots+x_{n-1}^2+y_{n-1}^2<\delta\}$ by quadratic hypersurfaces such that each leaf is transverse to $Y$, so each leaf is of contact type, and such that the projection $d\rho$ is an isomorphism from the induced contact distribution on each leaf to $T\mathbb{C}^{n-1}$. Define $J$ on each leaf as the lift of complex multiplication on $T\mathbb{C}^{n-1}$. Finally $J$ is the standard complex structure on $\mathbb{C}^{n}$ in the region $\{x_n^2+y_{n}^2 < \epsilon, x_1^2+y_1^2+\dots+x_{n-1}^2+y_{n-1}^2<\delta\}$. Denote ${\partial_{\theta}}=x_n{\partial_{y_n}}-y_n{\partial_{x_n}}\in T\mathbb{C}\subset TM$ to be the vector field generating the $S^1$-rotation. By symmetry, $d\rho(J{\partial_{\theta}}|_{V}) = Z$ for some vector field $Z$ on $N'$. Let ${\partial_{r}} = x_n{\partial_{x_n}} + y_n{\partial_{y_n}}$. Suppose $u(s,t): \mathbb{R}\times S^1 \rightarrow V\times \mathbb{R}$ is an $S^1$–invariant holomorphic cylinder in the symplectization $V\times \mathbb{R}$, asymptotic to an $m$-fold cover of a simple Reeb orbit at each end. Then up to $\mathbb{R}$–translation, $u$ is uniquely determined by the projection $u'(s,t): \mathbb{R}\times S^1 \rightarrow V$. Furthermore $u'(s,t)$ projects to a trajectory of $Z$ on $N'$. The vector field $Z$ is gradient–like for $\phi'$, and $\rho\circ u'$ is a trajectory of $Z$.
Since ${\partial_{\theta}}$ is tangent to $V$, in the decomposition $TM|_{V}=\xi \oplus \mathbb{R}Y \oplus \mathbb{R}R$, we have $$\begin{aligned} {\partial_{\theta}} &=& {\partial_{\theta}}- \alpha\left( {\partial_{\theta}} \right) \cdot R \oplus 0\cdot Y \oplus \alpha \left( {\partial_{\theta}} \right) \cdot R\\ \label{eqn1} &=& {\partial_{\theta}}- \frac{1}{2}(x_n^2+y_n^2)\cdot R \oplus 0\cdot Y \oplus \frac{1}{2}(x_n^2+y_n^2)\cdot R \\ \label{eqn2} J{\partial_{\theta}} &=& J\left({\partial_{\theta}}- \frac{1}{2}(x_n^2+y_n^2)\cdot R\right) \oplus -\frac{1}{2}(x_n^2+y_n^2)\cdot Y \oplus 0\cdot R\end{aligned}$$ The projection to $V\times \{0\}$ ignores the $\mathbb{R} Y$ coordinate, $$\begin{aligned} -\frac{\partial u'}{\partial s} &=& J\left({\partial_{\theta}}- \frac{1}{2}(x_{n}^2+y_{n}^2)R\right) \\ Z &=& d\rho \left(-\frac{\partial u'}{\partial s}\right) \\ &=& d\rho \left( J\left({\partial_{\theta}}- \frac{1}{2}(x_{n}^2+y_{n}^2) R\right) \right) \end{aligned}$$ Since $J\left({\partial_{\theta}}- \frac{1}{2}(x_{n}^2+y_{n}^2) R\right)\in \xi$, $$\begin{aligned} \alpha\left(J\left({\partial_{\theta}}- \frac{1}{2}(x_{n}^2+y_{n}^2) R\right)\right)&=&0 \\ d\phi\left(J\left({\partial_{\theta}}- \frac{1}{2}(x_{n}^2+y_{n}^2) R\right)\right)&=&0 \\ J\left({\partial_{\theta}}- \frac{1}{2}(x_{n}^2+y_{n}^2) R\right)&=& Z - \frac{2\alpha'(Z)}{x_{n}^2+y_{n}^2} {\partial_{\theta}} - \frac{d\phi'(Z)}{2\kappa(x_n^2+y_n^2)} {\partial_{r}} \end{aligned}$$ The Reeb vector field $R$ is given by $$\begin{aligned} R&=& \frac{X_\phi}{\alpha(X_{\phi})} \\ &=& \frac{X_{\phi'} + X_{\kappa(x_n^2+y_n^2)}}{\alpha'(X_{\phi'})+\frac{1}{2}(-y_n dx_n+x_n dy_n)(X_{\kappa(x_n^2+y_n^2)})}\\ &=& \frac{X_{\phi'} + 2\kappa {\partial_{\theta}}}{\alpha'(X_{\phi'})+\kappa (x_n^2+y_n^2)} \end{aligned}$$ Since $\omega = \omega' + dx_ndy_n$ tames $J$, if ${\partial_{\theta}}\neq \frac{1}{2}(x_{n}^2+y_{n}^2) R$, i.e., away from the critical points of $\phi'$, then $$\begin{aligned} 0&<& \omega
\left({\partial_{\theta}}- \frac{1}{2}(x_{n}^2+y_{n}^2) R, \ \ Z - \frac{2\alpha'(Z)}{x_{n}^2+y_{n}^2} {\partial_{\theta}} - \frac{d\phi'(Z)}{2\kappa(x_n^2+y_n^2)} {\partial_{r}} \right)\\ &=& -\frac{x_{n}^2+y_{n}^2}{2\alpha(X_{\phi})}\omega'(X_{\phi'},Z)+ \frac{\alpha'(X_{\phi'})d\phi'(Z)}{2\alpha(X_{\phi})\kappa(x_n^2+y_n^2)}dxdy({\partial_{\theta}},{\partial_{r}})\\ &=& d\phi'(Z)\left(\frac{x_{n}^2+y_{n}^2}{2\alpha(X_{\phi})}+\frac{\alpha'(X_{\phi'})} {2\alpha(X_{\phi})\kappa}\right)\end{aligned}$$ Thus $d\phi'(Z)>0$ for $Z\neq 0$. The zeros of $Z$ coincide with the critical points of $\phi'$. Near a critical point, since the complex structure is defined to be the lift of the standard complex structure on the base, it is not hard to find $Z$ explicitly and see that it is indeed gradient–like for $\phi'$. For an $S^1$-invariant holomorphic cylinder $u$, by $S^1$-invariance, $du ({\partial_{t}}) = m{\partial_{\theta}}$ if $u$ is asymptotic to an $m$-fold cover of a simple Reeb orbit. So $du (-{\partial_{s}})=mJ{\partial_{\theta}}$, and $\rho\circ u'$ is a trajectory of $Z$. This covers invariant cylinders in the symplectization; the invariant planes in the cobordism $M$ are also governed by the vector field $Z$. The vector field $J{\partial_{\theta}}$ is Morse–Bott with zero set $M'$. By the construction of $J$, $J{\partial_{\theta}}=-r{\partial_{r}}$ near $M'$. Also $\omega({\partial_{\theta}},J{\partial_{\theta}})>0$, $(\iota_{{\partial_{\theta}}}\omega)(J{\partial_{\theta}})>0$. But $\iota_{{\partial_{\theta}}}\omega= -(xdx+ydy)=d(-\frac{1}{2}(x^2+y^2))$. Hence $J{\partial_{\theta}}$ is gradient–like with respect to the function $-\frac{1}{2}(x^2+y^2)$. \[prop1\] Each trajectory of the positive flow of $J{\partial_{\theta}}$ is asymptotic to a point on $M'$. We only need to check that a trajectory does not escape to infinity without hitting $M'$.
On the cylindrical end $V\times [0,\infty)$, Equation \[eqn2\] shows that $J{\partial_{\theta}}$ is always pointing towards the compact filling. It follows that any positive trajectory of $J{\partial_{\theta}}$ stays within a compact set of $M$. Since $J{\partial_{\theta}}$ is gradient–like, each trajectory must approach a zero of the vector field. \[prop2\] Each trajectory of the negative flow of $J{\partial_{\theta}}$ eventually enters the cylindrical end $V\times [0, \infty)$. In fact, a negative trajectory must leave any compact region. Since the negative flow stays away from the zero set, if a trajectory stays within a compact region then there is an accumulation point where $J{\partial_{\theta}}\neq 0$, a contradiction since $J{\partial_{\theta}}$ is gradient–like. \[fol\] $M$ is foliated by simple $S^1$-invariant holomorphic planes. Given a point $p = (x,re^{i\theta}) \in M'\times \mathbb{C}$, let $\Phi(\tau)$ denote the flow of $J{\partial_{\theta}}$ by time $\tau$ starting at $p$. Define $$u(s,t)= e^{it}\circ \Phi(-s),$$ where $e^{it}\circ(x,re^{i\theta})= (x, re^{i(\theta+t)})$ is the $S^1$ action. By Lemma \[prop1\], as $s\rightarrow -\infty$, the cylinder is asymptotic to a point on $M'$. Therefore by removal of singularities, $u$ can be extended to a map of a holomorphic plane to $M$. As $s\rightarrow \infty$, by Lemma \[prop2\], the cylinder enters the cylindrical end $V\times [0,\infty)$, where it is governed by the flow of $Z$. In particular, since $-J{\partial_{\theta}}$ has a positive $Y$ component, $u\cap V$ is either empty, a point on $V'$, or a single circle on $V\setminus V'$. The part of $u$ in the cylindrical end $V\times [0,\infty)$ projects to a semi–infinite trajectory of $Z$. If this trajectory lies on the unstable manifold of some critical point $p_i$, then $u$ is asymptotic to the Reeb orbit $\gamma_i$ over the critical point $p_i$.
Hence there is an $S^1$-invariant holomorphic plane through every point of $M\setminus M'$ and therefore all of $M$. If two $S^1$-invariant holomorphic planes intersect, then by $S^1$ symmetry they intersect in at least a circle and therefore coincide. Hence the planes in Theorem \[fol\] and their multiple covers are in fact all the $S^1$-invariant holomorphic planes. Consider a point $q$ on $N'\subset M'\times\{0\}\subset M$. There is a unique simple $S^1$-invariant holomorphic plane $u_{q}$ through $q$, intersecting $V$ in a single circle. By the projection $\rho$, this circle maps onto a point $q'\in N'$. In this way we obtain a diffeomorphism $\pi: N' \rightarrow N'$. Since the vertical plane over a critical point $p$ is holomorphic by construction, $\pi(p)= p$. Let $U_{p}$ and $S_{p}$ denote the unstable and stable manifolds of a critical point $p$ with respect to the vector field $-Z$. If $\pi(q)$ is on the stable manifold of a critical point $p$, then $u_q$ is asymptotic to $\gamma_{p}$, the Reeb orbit over $p$. If $p$ is a critical point of $\phi'$ of Morse index $k$, then the family of $S^1$-invariant planes asymptotic to $\gamma_{p}$ is $2n-k-2$ dimensional, and is parametrized by $S_{p}$. There is exactly one holomorphic plane asymptotic to $\gamma_p$ passing through $\pi^{-1}(U_p)$, namely the vertical plane over $p$. Transversality ============== The proof of the following theorem does not yet exist in print. However the theorem follows from the polyfold theory of Hofer–Wysocki–Zehnder [@polyfolds] [@polyfolds4] together with the work of McDuff [@McDuff] [@McDuffTolman], which proved the corresponding result for moduli spaces of closed curves in the setting of Gromov–Witten theory. In particular, in section $4.2.2$ of [@McDuffTolman], we can replace the regularization process of Liu–Tian [@LT] by the polyfold regularization process of Hofer–Wysocki–Zehnder.
The idea is that if a component of a moduli space does not have an $S^1$-invariant element, then regularity can be achieved by performing an abstract perturbation on a slice transverse to the $S^1$ orbits, and then extending this perturbation by the $S^1$-action. [\[s1\]]{} Let $M$ be a symplectic cobordism together with a compatible complex structure $J$ which is invariant under an $S^1$-action. Let $K>0$ be a real number such that all Reeb orbits with action $\alpha(\gamma)$ less than $K$ are non-degenerate and invariant under the $S^1$-action. If $\sum_i \alpha(\gamma^+_i) < K$ and a component of the moduli space $\mathcal{M}_{g,\{\gamma^+_1,\dots, \gamma^+_a\},\{\gamma^-_1,\dots, \gamma^-_b\},m}$ does not contain an $S^1$-invariant element, then after perturbation $\mathcal{M}^{virt}$ can be realized as a weighted branched manifold with boundary with a free $S^1$-action such that the evaluation map $ev: \mathcal{M}^{virt} \rightarrow \overline{M}^m$ is equivariant. Note that by taking $\kappa$ sufficiently large, all orbits of small action are multiple covers of the distinguished orbits. In the symplectization $V\times \mathbb{R}$, the following is proved in [@Yau]. The simple holomorphic cylinder $u$ corresponding to a trajectory of $Z$ between two critical points of neighboring indices and its multiple covers are regular. It follows that the differential for $HC^{\textsf{cyl}}$ is precisely the Morse differential for the vector field $Z$, and $HC^{\textsf{cyl}}$ is a direct sum of copies of $H^{*}(M')=H^{*}(M)$, where the Reeb orbits of multiplicity $i$ contribute a copy of $H^{*}(M)$ for each $i\geq 1$. Note that the *cohomology* of $M$ is computed because $Z$ needs to be reversed to be a negative gradient. The full contact homology of $V$ is isomorphic to $\Lambda \left(HC^{\textsf{\emph{cyl}}}\right)$.
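As an illustration of these two statements (a sketch only, with, say, $\mathbb{Q}$-coefficients; this merely specializes the claims above and is consistent with the known contact homology of the standard contact sphere, rather than being an additional computation): for $M=\mathbb{C}^{n}=\mathbb{C}^{n-1}\times\mathbb{C}$ one may take $\phi'$ with a single critical point, of index $0$. Then $Z$ has no rigid trajectories, the differential vanishes, and

```latex
HC^{\mathsf{cyl}}(V)\;\cong\;\bigoplus_{m\geq 1}\mathbb{Q}\,\langle\gamma^{m}\rangle,
\qquad
HC(V)\;\cong\;\Lambda\Bigl(\bigoplus_{m\geq 1}\mathbb{Q}\,\langle\gamma^{m}\rangle\Bigr),
```

with one generator for each multiple cover $\gamma^{m}$ of the distinguished orbit over the unique critical point; here $V$ is contactomorphic to the standard sphere $S^{2n-1}$.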
If $\Sigma$ has multiple negative punctures, then any $u:\Sigma\rightarrow M\times \mathbb{R}$ in the compactified moduli space, which includes nodal and multistory curves, does not admit an $S^1$-family of automorphisms. Hence by Theorem \[s1\], the regularized moduli space, if non-empty, has a free $S^1$-action as well as the $\mathbb{R}$-translation. However, the expected dimension of the moduli space is $1$, thus $\mathcal{M}$ is empty. Therefore the differential in the full contact homology complex arises from the count of holomorphic cylinders only. Hence $HC(V)$ is just the graded commutative polynomial algebra with $HC^{\textsf{cyl}}$ as the linear part. [\[tran\]]{} The simple holomorphic plane over a critical point is regular. Since the complex structure near the plane is known, we directly compute $D_{u}$, the linearized $\overline{\partial}$ operator at $u$. In local coordinates, $$D_{u}v = \eta\, ds - J(u)\eta\, dt$$ where $v = (\mu_{1},\nu_1,\dots, \mu_{n},\nu_{n})\in u^*(TM)$ and $$\eta = \frac{1}{2}({\partial_{s}}v+J(u){\partial_{t}}v+({\partial_{v}}J)(u){\partial_{t}}(u))$$ Transversality is equivalent to the surjectivity of the differential operator $D_{u}$ acting on $\mathbb{R}^{2n}$-valued functions.
At $(t,0,\dots,0,x_{n}(t),y_{n}(t))\in V$, we have $$Y(t)= \left\{ \begin{array} [c]{ll} 2t{\partial_{x_1}}+\frac{1}{2}x_{n}(t){\partial_{x_n}}+\frac{1}{2}y_{n}(t){\partial_{y_n}}, & \text{if Morse index is zero} \\ \frac{1}{2}t{\partial_{x_1}}+\frac{1}{2}x_{n}(t){\partial_{x_n}}+\frac{1}{2}y_{n}(t){\partial_{y_n}}, & \text{if Morse index is greater than zero}\\ \end{array} \right.$$ $$R(t)= \left\{ \begin{array} [c]{ll} \frac{1}{3b_1t^2+c}(2b_{1}t{\partial_{y_{1}}}- 2\kappa y_n(t){\partial_{x_n}}+ 2\kappa x_{n}(t){\partial_{y_n}}),& \text{if Morse index is zero} \\ \frac{1}{c}(2a_{1}t{\partial_{y_1}}- 2\kappa y_n(t){\partial_{x_n}}+ 2\kappa x_{n}(t){\partial_{y_n}}), & \text{if Morse index is greater than zero}\\ \end{array} \right.$$ In any case, $$\begin{aligned} Y(t)&=&O(t){\partial_{x_1}}+(\text{constant}+O(t^2)){\partial_{x_{n}}}+(\text{constant}+ O(t^2)){\partial_{y_{n}}}\\ R(t)&=&O(t){\partial_{y_1}}+(\text{constant}+O(t^2)){\partial_{x_n}}+ (\text{constant}+ O(t^2)){\partial_{y_n}}\end{aligned}$$ For any tangent vector $v$, $v$ decomposes into $\xi\oplus \mathbb{R}Y\oplus \mathbb{R}R$ as follows, $$v=v- \frac{{\partial_{v}}\phi}{{\partial_{Y}}\phi}Y -\alpha(v)R\oplus \frac{{\partial_{v}}\phi}{{\partial_{Y}}\phi}Y\oplus \alpha(v)R$$ The contact distribution contains $\{{\partial_{x_{2}}},{\partial_{y_2}},\dots,{\partial_{x_{n-1}}},{\partial_{y_{n-1}}}\}$. By definition, $J$ is the standard complex structure on $\{{\partial_{x_{2}}},{\partial_{y_2}},\dots,{\partial_{x_{n-1}}},{\partial_{y_{n-1}}}\}$. 
Furthermore $$\begin{aligned} {\partial_{x_1}}&=& {\partial_{x_1}}-O(t)Y-O(t)R\oplus O(t)Y\oplus O(t)R\\ {\partial_{y_1}}&=& {\partial_{y_1}}-O(t)Y-O(t)R\oplus O(t)Y\oplus O(t)R\\ {\partial_{x_n}}&=& {\partial_{x_n}}-(\text{constant}+O(t^2))Y- (\text{constant}+O(t^2))R\oplus (\text{constant}+O(t^2)) Y\\ & & \oplus (\text{constant}+O(t^2))R\\ {\partial_{y_n}}&=& {\partial_{y_n}}-(\text{constant}+O(t^2))Y- (\text{constant}+O(t^2))R\oplus (\text{constant}+O(t^2)) Y \\ & & \oplus (\text{constant}+O(t^2))R\\\end{aligned}$$ Hence $${\partial_{(1,0,\dots,0)}}J = \left[ \begin{array}{ccccccc} * & * & 0 & \dots & 0 & * & *\\ * & * & 0 & \dots & 0 & * & *\\ 0 & 0 & 0 & \dots & 0 & 0 & 0\\ \vdots& & & \ddots & & & \vdots \\ 0 & 0 & 0 & \dots & 0 & 0 & 0\\ * & * & 0 & \dots & 0 & 0 & 0\\ * & * & 0 & \dots & 0 & 0 & 0 \end{array} \right]$$ Also ${\partial_{t}}(u) = (0,\dots, 0, {\partial_{t}}x_{n},{\partial_{t}}y_{n})$, hence $$({\partial_{(1,0,\dots,0)}}J){\partial_{t}}(u)= \left[ \begin{array}{c} A_1(s,t) \\ A_{2}(s,t)\\ 0 \\ \vdots \\ 0 \end{array} \right]$$ Differentiating in the other directions yields $$D_{u}\left[ \begin{array}{c} \mu_1 \\ \nu_1 \\ \vdots \\ \mu_n \\ \nu_n \end{array}\right] = \left[ \begin{array}{c} {\partial_{s}} \mu_1 - {\partial_{t}} \nu_1 + A_1(s,t)\mu_1+B_1(s,t)\nu_1\\ {\partial_{s}} \nu_1 + {\partial_{t}} \mu_1 + A_2(s,t)\mu_1+B_2(s,t)\nu_1\\ \vdots \\ {\partial_{s}} \mu_{n} - {\partial_{t}} \nu_{n} + A_{2n-1}(s,t)\mu_{n}+B_{2n-1}(s,t)\nu_{n}\\ {\partial_{s}} \nu_{n} + {\partial_{t}} \mu_{n} + A_{2n}(s,t)\mu_{n}+B_{2n}(s,t)\nu_{n}\\ \end{array}\right]$$ The operator $D_{u}$ splits into a direct sum of Cauchy–Riemann operators on $\mathbb{C}$. Therefore $D_{u}$ is surjective. By an identical argument, we also have: Any multiple cover of the simple holomorphic plane over a critical point is regular.
Suppose $\sigma=[\sum_{i=1}^k\gamma_i]$ is an element of $HC^{\textsf{cyl}}$ where each $\gamma_i$ is a simple Reeb orbit over a critical point $p_i$ of the same index $k$. By Poincaré duality, $\sigma \in H^*(M')\cong H_{*}(M',\partial M')$ is represented by the union of stable manifolds $\bigcup_{i=1}^k S_{p_i}$. If $\theta$ is a compactly supported form on $M=M'\times \mathbb{C}$, then integration along fibers is an isomorphism, and we obtain $\theta'$, a compactly supported form on $M'$. \[l1\] If $\sigma=[\sum_{i=1}^k\gamma _i]\in HC^{\textsf{\emph{cyl}}}$ consists of simple orbits and $\theta\in H^{*}(M,\partial M)$, then $$\int_{\mathcal{M}_{[\sigma]}}ev^*(\theta) = \int_{\bigcup_{i=1}^k S_{p_i}}\theta'$$ The Poincaré dual of $\theta$ can be represented by $\pi(\beta)$, where $\beta$ is a union of unstable manifolds of the negative gradient–like vector field $-Z$, and $\pi$ is the diffeomorphism defined in the previous section. The correlator $\int_{\mathcal{M}_{[\sigma]}}ev^*(\theta)$ counts the discrete number of holomorphic planes asymptotic to some orbit $\gamma_i$ passing through $\pi(\beta)$. Suppose $u=(u',f): \mathbb{C}\rightarrow M=M'\times \mathbb{C}$ is a holomorphic plane with one marked point, which we assume to be $0$ without loss of generality. Let $\mathcal{M}_{\{[\sigma],\pi(\beta)\}}$ be the moduli space $\{u\in \mathcal{M}_{[\sigma]}: u(0)\in \pi(\beta)\}$. Since $\pi(\beta)\subset M'$ lies on the fixed point set, $S^1$ acts on $\mathcal{M}_{\{[\sigma],\pi(\beta)\}}$. By Theorem \[s1\], after regularization, components of $\mathcal{M}_{\{[\sigma],\pi(\beta)\}}$ without any $S^1$-invariant elements vanish for dimensional reasons. By construction, the $S^1$-invariant elements of $\mathcal{M}_{\{[\sigma],\pi(\beta)\}}$ consist of vertical planes of critical points. These planes are already regular by Theorem \[tran\], each contributing $1$ or $-1$ depending on orientation.
Therefore $\int_{\mathcal{M}_{[\sigma]}}ev^*(\theta)$ is precisely the intersection number between the Poincaré dual of $\theta$ (regarded as a submanifold of $M'$) and $\bigcup_{i=1}^k S_{p_i}$ in $M'$. [\[l2\]]{} If $\sigma=[\sum_{i=1}^k\gamma _i]\in HC^{\textsf{\emph{cyl}}}$ consists of orbits of multiplicity $m$ and $\theta\in H^{*}(M,\partial M)$, then $$(m-1)!\int_{\mathcal{M}_{[\sigma]}}ev^*(\theta) \psi^{m-1}= \int_{\bigcup_{i=1}^k S_{p_i}}\theta'$$ Let $u=(u',f)$ be as before. Since the complex structure $J$ is split on $M'$, $f:\mathbb{C} \rightarrow \mathbb{C}$ is holomorphic at $0$. We claim that the descendant $$(m-1)! \int_{\mathcal{M}_{[\sigma]}}ev^*(\theta) \psi^{m-1}$$ is the number of holomorphic planes in $\mathcal{M}_{\{[\sigma],\pi(\beta)\}}$ such that $f$ has ramification order at least $(m-1)$ at $0$. Note that $f$ is not identically zero, because with only one marked point, the domain cannot have any ghost bubbling, and the holomorphic plane does not lie entirely on $M'$. Now $\int_{\mathcal{M}_{[\sigma]}}ev^*(\theta)\psi^{m-1}=\int_{\mathcal{M}_{\{[\sigma],\pi(\beta)\}}}\psi^{m-1}$. Note that $df$ is a section of the tautological bundle $L$ over $\mathcal{M}_{\{[\sigma],\pi(\beta)\}}$, as it takes a tangent vector ${\partial_{z}}$ at the marked point to the number $\frac{df}{dz}(0)$. Hence $\int_{\mathcal{M}_{\{[\sigma],\pi(\beta)\}}}\psi^{1}$ is precisely the number of holomorphic planes where $df=0$. For higher powers of $\psi$, consider the map $$u\rightarrow (f'(0),f''(0),\dots,f^{(m-1)}(0)).$$ This is in fact a section of the bundle $L\oplus L^{\otimes 2} \oplus \cdots \oplus L^{\otimes m-1}$ and can be seen as follows.
Under change of coordinates we have $$\frac{d^i f}{dz^i} = \left(\frac{dz}{dw}\right)^i\frac{d^i f}{d w^i} + \mbox{ terms involving lower order derivatives of $f$.}$$ Therefore the transition matrix is lower triangular $$\left[ \begin{array}{c} \frac{df}{dz} \\ \frac{d^2f}{dz^2} \\ \vdots \\ \frac{d^{m-1}f}{dz^{m-1}} \end{array} \right] = \left[ \begin{array}{ccccc} \frac{dz}{dw}& 0 & 0 &\cdots & 0\\ * & \left(\frac{dz}{dw}\right)^2 & 0 & \cdots & 0 \\ \vdots & & &\ddots & \\ * & * & * & \cdots & \left(\frac{dz}{dw}\right)^{m-1}\\ \end{array} \right] \left[ \begin{array}{c} \frac{df}{dw} \\ \frac{d^2f}{dw^2} \\ \vdots \\ \frac{d^{m-1}f}{dw^{m-1}} \end{array} \right]$$ Hence the bundle is isomorphic to $L\oplus L^{\otimes 2} \oplus \cdots \oplus L^{\otimes m-1}$. The number of holomorphic planes of ramification order at least $m-1$ is the intersection of this section with the zero section. Therefore it equals the integral of the Euler class of $L\oplus L^{\otimes 2} \oplus \cdots \oplus L^{\otimes m-1}$ over $\mathcal{M}$. But $$\begin{aligned} e(L\oplus L^{\otimes 2} \oplus \cdots \oplus L^{\otimes m-1}) &=& c_1(L)c_1(L^{\otimes 2})\cdots c_1(L^{\otimes m-1}) \\ &=& (\psi) (2\psi) \cdots ((m-1) \psi) \\ &=& (m-1)! \psi^{m-1}\end{aligned}$$ As before, components of $\mathcal{M}_{\{[\sigma],\pi(\beta)\}}$ without any $S^1$-invariant elements can be ignored, and there is exactly one $m$-fold cover of each simple plane over a critical point such that $f(0)=0$. Combining Lemmas \[l1\] and \[l2\] yields the desired non-degenerate pairing for Theorem \[main\]. Correlators with multiple marked points can also be considered. Recall that $\theta'$ denotes the form on $M'$ after integrating $\theta$ along the $\mathbb{C}$-fibers.
If $m\geq 3$ then $$\int_{\mathcal{M}_{[\sigma]}}ev^*_{1}(\theta_1) ev^*_{2}(\theta_2)\cdots ev^*_{m}(\theta_m) = 0$$ If $m=2$ then the only non-vanishing correlator is $$\int_{\mathcal{M}_{[\sigma_0]}}ev^*_{1}(\theta_1) ev^*_{2}(\theta_2)= \int_{M'}\theta'_1\theta'_2$$ where $\sigma_0$ is the generator of $HC^{\textsf{\emph{cyl}}}$ corresponding to a simple Reeb orbit over the index $0$ critical point. Let $\beta_i$ be cycles on $M'$ representing the Poincaré dual of $\theta_i$. An $S^1$-invariant plane must have all its marked points on $M'$. The only possible configuration for the domain is a copy of $\mathbb{C}$ with a tree of ghost bubbles attached at the origin, and all marked points lie on the ghost bubble tree. The map on $\mathbb{C}$ is an $S^1$-invariant plane with the origin mapping to $M'$. Therefore the cycles $\beta_{i}$ must have common intersections. However, since $M$ is subcritical, each $\beta_{i}$ has dimension at most $n-1$. Since $M'$ has dimension $2n-2$, the $\beta_{i}$’s can be perturbed within $M'$ to be disjoint unless $m=2$ and both $\beta_{1}$ and $\beta_{2}$ have maximum dimension $n-1$. Thus $$\int_{\mathcal{M}_{[\sigma]}}ev^*_{1}(\theta_1) ev^*_{2}(\theta_2)\cdots ev^*_{m}(\theta_m) = 0$$ When $m=2$, $\beta_1\cap \beta_2$ consists of points, and we can perturb $\beta_1$ on $M'$ so that $\beta_1\cap \beta_2$ lies in a neighborhood of the index $0$ critical point on $M'$ and each intersection is transverse. If $\sigma$ consists of multiplicity $m$ Reeb orbits over critical points of index $k$, then the $S^1$-invariant elements asymptotic to $\sigma$ are mapped to the unstable manifold of the critical points by the evaluation map. Hence unless $k=0$, there will be no $S^1$-invariant holomorphic planes passing through $\beta_1\cap \beta_2$ and asymptotic to $\sigma$.
If $\sigma = \sigma_0$ then because the holomorphic planes near the vertical plane over the index $0$ critical point are also regular, the number of $S^1$-invariant planes is the algebraic number of $\beta_1\cap \beta_2$, $$\int_{\mathcal{M}_{[\sigma_0]}}ev^*_{1}(\theta_1) ev^*_{2}(\theta_2)= \int_{M'}\theta'_1\theta'_2$$ Subcritical polarizations ========================= A *polarized Kähler manifold* $(M,\omega,J,\Sigma)$ is a Kähler manifold $(M,\omega,J)$ with an integral Kähler form $\omega$ and a smooth reduced complex hypersurface $\Sigma$ representing the Poincaré dual of $k[\omega]$. The number $k$ is called the *degree* of the polarization. There is a canonical plurisubharmonic function $f_{\Sigma}: M\setminus \Sigma \rightarrow \mathbb{R}$ associated to a polarization. Let $\mathcal{L}$ be the holomorphic line bundle associated to the divisor $\Sigma$ and $s$ the holomorphic section (unique up to scalar multiplication) of $\mathcal{L}$ whose zero set is $\Sigma$. Choose a hermitian metric $\| \cdot \|$ such that the compatible metric connection $\nabla$ has curvature $R = 2\pi i k\omega$. Then define $$f_{\Sigma}=-\frac{1}{4\pi k}\log\|s(x)\|^{2}.$$ It is not hard to check that $f_{\Sigma}$ is plurisubharmonic and in fact $-dd^{J}f_{\Sigma} = \omega$. All the critical points of $f_{\Sigma}$ lie within a compact subset of $M \setminus \Sigma$. Note that this canonical $f_{\Sigma}$ does *not* give a complete Liouville vector field. However, as we stated before, we can always rescale to $e^{f}$ instead. A polarization is *subcritical* if there is a plurisubharmonic Morse function $f$ such that $(M \setminus \Sigma,J,f)$ is a subcritical Stein manifold, and $f=f_{\Sigma}$ outside a compact subset of $M\setminus \Sigma$. Split $M$ along the level set $V=\{\|s(x)\|=\epsilon\}$. Let $\xi$ be the maximal complex subbundle of $TV$; we will calculate the rational contact homology of $(V,\xi)$ in two ways.
On one hand $V$ is a contact–type hypersurface of the subcritical Stein manifold $(M\setminus \Sigma,J,e^f)$. Therefore by previous computations $HC(V,\xi)$ is the free graded algebra generated by $\bigoplus_{i=1}^{\infty} H^{2n-*}(M)[2i-4]$. In particular the lowest graded piece lies in degree $2n-2-D$ and is isomorphic to $H^{D}(M)$, where $D<n$ is the highest degree of non-vanishing cohomology. The other part of the splitting is biholomorphic to the complex normal bundle of $\Sigma$. Clearly $V$ is everywhere transverse to the radial vector field, since in a local coordinate chart $s(x) = cz+O(z^2)$ where $z$ is the fiber coordinate. Therefore $(V,\xi)$ is contactomorphic to the prequantization of $\Sigma$. The rational symplectic field theory of a prequantization can be computed using the Morse-Bott setup of [@Bourgeois]. We summarize the necessary results. - The Reeb orbits of the prequantization coincide with the $S^1$ fibers of the fibration. The space of orbits consists of copies of $\Sigma$, one for each multiplicity. - The Morse-Bott complex for rational contact homology is generated as a graded algebra by the critical points of a Morse function on each orbit space. In this case the differential is given by the Morse differential on each copy of $\Sigma$. - To work out the grading, without loss of generality normalize the symplectic form so that $(\omega, \beta)=1$ for some $\beta\in H_{2}(M)$. Take a multiplicity $k$ orbit, $\gamma_k$, above a point $p\in \Sigma$. Using the product framing, the Maslov index is zero. By the Lefschetz theorem $H_{2}(\Sigma)=H_{2}(M)$; since $\Sigma$ represents $k[\omega]$, there is a surface $C$ in $\Sigma$ passing through $p$ homologous to $\beta$. This surface then lifts to a section of the normal bundle with a zero of order $k$ at $p$ and no poles. This section realizes a capping surface for $\gamma_{k}$. The Maslov index in this trivialization will be $2(c_{1}(T\Sigma),\beta):=2c$.
Therefore the Maslov index for a degenerate multiplicity $l$ orbit is $\frac{2cl}{k}$. The Conley-Zehnder index for a Morse index $i$ critical point with multiplicity $l$ is $$\frac{2cl}{k}- \frac{1}{2}\dim(\Sigma)+i= i+1-n+\frac{2cl}{k}.$$ The contact homology grading is $$(n-3)+i+1-n+\frac{2cl}{k}=i-2+\frac{2cl}{k}.$$ If $c_{1}(M)$ is proportional to $\omega$, then $M$ is uniruled. Note that $c_1(M)$ being proportional to $\omega$ means $c_1$ is supported on $\Sigma$ and $c_1(M\setminus \Sigma)=0$, hence all our previous computations apply. Consider the lowest graded piece of $HC(V)$. Note that this coincides with the lowest graded piece of $HC^{\textsf{cyl}}(V)$. Grading considerations immediately yield $$\begin{aligned} 2n-2-D &=& -2+\frac{2c}{k}\label{equa} \\ H^{D}(M\setminus \Sigma)&\cong& HC^{2n-2-D}(V) \cong H_{0}(\Sigma)\cong \mathbb{R}\end{aligned}$$ By Theorem \[main\] there is a non-trivial pairing $$GW: H^{D}(M\setminus \Sigma)\otimes H^{2n-D}(M\setminus \Sigma, V) \longrightarrow \mathbb{R}.$$ This implies that the simple orbit on the prequantization actually bounds a holomorphic plane in $M\setminus \Sigma$. On the normal bundle side it bounds a fiber; these two planes glue topologically to a sphere intersecting $\Sigma$ once. Since $\Sigma$ represents $k[\omega]$, it follows that $\omega$ is primitive and $k=1$. In fact the holomorphic plane on the normal bundle side is also generic. By the SFT gluing theorem, we have the non-vanishing of a $2$-point Gromov-Witten invariant of the original Kähler manifold, $$\sum_{\omega (A)=1}\int_{\mathcal{M}_{g=0,m=2,A}}ev_{1}^{*}(\theta_{1})ev_{2}^{*}(\theta_{2})=1,$$ where $\theta_{1}$ is Poincaré dual to the point class and $\theta_{2}$ to the generator of $H_{D}(M\setminus \Sigma)$ (viewed as a class in $H_{D}(M)$ via inclusion). This means that through every generic point there is a holomorphic sphere, and $M$ is uniruled.
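As an illustration (the geometric inputs below are an assumption for the example, not taken from the text), the grading relation \[equa\] can be sanity-checked on the model case $M=\mathbb{CP}^n$ polarized by a hyperplane $\Sigma=\mathbb{CP}^{n-1}$ of degree $k=1$: here $M\setminus\Sigma=\mathbb{C}^n$ forces $D=0$, while $c=(c_{1}(T\Sigma),\beta)=n$ since $c_{1}(\mathbb{CP}^{n-1})$ is $n$ times the hyperplane class.

```python
def grading_relation_holds(n, k, c, D):
    """Check Equation (equa): 2n - 2 - D = -2 + 2c/k, in exact
    integer arithmetic (so k must divide 2c)."""
    assert (2 * c) % k == 0
    return 2 * n - 2 - D == -2 + (2 * c) // k

# Model case (illustration only): M = CP^n, Sigma = CP^{n-1}, k = 1;
# the complement C^n gives D = 0, and c = n.
assert all(grading_relation_holds(n, 1, n, 0) for n in range(2, 20))
```

Consistent with the theorem, $\mathbb{CP}^n$ is of course uniruled.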
Suppose $(M^{2n},J)$ admits a subcritical polarization such that $c_{1}(M)$ is proportional to $\omega$. Let $a_{0},\ldots, a_{D}$ be the Betti numbers of $M\setminus \Sigma$,\ $b_{0},\ldots, b_{n-2}, b_{n-1}, b_{n}, \ldots, b_{2n-2}$ the Betti numbers of $\Sigma$. Then (**a**) : the sequence $\{a_{i}\}$ is symmetric, $a_{i}=a_{D-i}$, (**b**) : the sequences $\{b_{2i}\}_{2i<n}$ and $\{b_{2i+1}\}_{2i+1<n}$ are non-decreasing; if we define $a_{i}=0$ for $D<i<n$, then $$b_{2i}=\sum_{j=0}^{i}a_{2j},\ \ b_{2i+1}=\sum_{j=0}^{i}a_{2j+1}.$$ Consider the linear part of $HC(V)$, i.e., the cylindrical contact homology. From Equation \[equa\] and $k=1$ we have that $D=2(n-c)$ is even. Since $$HC^{\textsf{cyl}}(V)\cong \bigoplus_{i=0}^{\infty} H^{2n-*}(M)[2i-4],$$ the ranks of $HC^{\textsf{cyl},{i}}(V)$, starting from the lowest degree $2n-2-D$, are $$a_{D},\ a_{D-1},\ a_{D-2}+a_{D},\ a_{D-3}+a_{D-1},\ a_{D-4}+a_{D-2}+a_{D}, \ldots.$$ We can also calculate $HC^{\textsf{cyl}}(V)$ by Morse-Bott methods, and it is the direct sum of infinitely many copies of $H_*(\Sigma)$ with a grading shift of $2ci=(2n-D)i$ for the $i$-th copy. We have Poincaré duality on $\Sigma$ so $b_{i}=b_{2n-2-i}$. Equating the ranks yields the equations $a_{D}=b_{0}, \cdots$. Since $2n-D > n > D$, we have $$\begin{aligned} a_{D}+\ldots+a_{2}&=&b_{D-2} \\ a_{D}+\ldots+a_{0}&=&b_{2n-D}+b_{0}=b_{D-2}+b_{0}\end{aligned}$$ Hence $a_{0}=a_{D}$. $$\begin{aligned} a_{D}+\ldots+a_{4}&=&b_{D-4} \\ a_{D}+\ldots+a_{0}&=&b_{2n-D+2}+b_{2}=b_{D-4}+b_{2}\end{aligned}$$ Hence $a_{0}+a_{2}=b_{2}=a_{D}+a_{D-2}$, so $a_{2}=a_{D-2}$. Inductively we have $a_{i}=a_{D-i}$, proving **(a)**. Part **(b)** is just the first $(n-1)$ equations rewritten using **(a)**. It is not hard to check that **(a)** and **(b)** are the only relations. Acknowledgements {#acknowledgements .unnumbered} ================ The author is deeply grateful to Y. Eliashberg for years of guidance, discussions and fresh ideas. He also thanks K.
Honda for many helpful suggestions and valuable comments. M. Atiyah and R. Bott. *The moment map and equivariant cohomology*. Topology [**23**]{} (1984), no. 1, 1–28. P. Biran and K. Cieliebak. *Symplectic topology on subcritical manifolds*. Comment. Math. Helv. [**76**]{} (2001), no. 4, 712–753. F. Bourgeois. *A Morse-Bott approach to contact homology*. Ph.D. Thesis, Stanford University, 2002. K. Cieliebak. *Subcritical manifolds are split*. Preprint, 2002. Y. Eliashberg. *Symplectic geometry of plurisubharmonic functions*. NATO Adv. Sci. Inst. Ser. C Math. Phys. Sci. 488, Gauge theory and symplectic geometry (Montreal, PQ, 1995), 49–67. Y. Eliashberg and M. Gromov. *Convex symplectic manifolds*. Proc. Sympos. Pure Math. [**52**]{} (1991), no. 2, 135–162. Y. Eliashberg and M. Gromov. *Embeddings of Stein manifolds of dimension $n$ into the affine space of dimension $3n/2+1$*. Ann. of Math. (2) [**136**]{} (1992), no. 1, 123–135. Y. Eliashberg, A. Givental and H. Hofer. *Introduction to symplectic field theory*. Geom. Funct. Anal., Special Volume, Part II (2000), 560–673. H. Hofer, K. Wysocki and E. Zehnder. *Properties of pseudo-holomorphic curves in symplectization. I. Asymptotics*. Ann. Inst. H. Poincaré Anal. Non Linéaire [**13**]{} (1996), no. 3, 337–379. H. Hofer, K. Wysocki and E. Zehnder. *Fredholm Theory in Polyfolds I: Functional Analytic Methods*. In preparation. H. Hofer, K. Wysocki and E. Zehnder. *Fredholm Theory in Polyfolds IV: Applications to Symplectic Field Theory*. In preparation. E. Ionel and T. Parker. *The symplectic sum formula for Gromov-Witten invariants*. Ann. of Math. (2) [**159**]{} (2004), no. 3, 935–1025. T. Graber and R. Pandharipande. *Localization of virtual classes*. Invent. Math. [**135**]{} (1999), no. 2, 487–518. G. Liu and G. Tian. *Floer homology and Arnold conjecture*. J. Differential Geom. [**49**]{} (1998), 1–74. D. McDuff. *Groupoids, branched manifolds and multisections*. J. Symplectic Geom. [**4**]{} (2006), no. 3, 259–315. D. McDuff and S. Tolman.
*Topological properties of Hamiltonian circle actions*. IMRP Int. Math. Res. Pap. (2006), 72826, 1–77. M. Yau. *Cylindrical Contact Homology of Subcritical Stein-fillable Contact Manifolds*. Geom. Topol. [**8**]{} (2004), 1243–1280. A. Weinstein. *Contact surgery and symplectic handlebodies*. Hokkaido Math. J. [**20**]{} (1991), 241–251.
--- abstract: 'Dual risk models are popular for modeling a venture capital or high tech company, for which the running cost is deterministic and the profits arrive stochastically over time. Most of the existing literature on dual risk models has concentrated on optimal dividend strategies. In this paper, we propose to study the optimal investment strategy on research and development for the dual risk models to minimize the ruin probability of the underlying company. We will also study the optimization problem when in addition the investment in a risky asset is allowed.' address: 'Department of Mathematics Florida State University 1017 Academic Way Tallahassee, FL-32306 United States of America' author: - Arash Fahim - Lingjiong Zhu date: '16 October 2015. *Revised:* 16 October 2015' title: Optimal Investment in a Dual Risk Model --- Introduction ============ The classical Cramér-Lundberg model, or the classical compound Poisson risk model, says that the surplus process of an insurance company follows the dynamics: $$\label{classical} dX_{t}=\rho dt-dJ_{t},\qquad X_{0}=x>0,$$ where $\rho>0$ is the premium rate and $J_{t}=\sum_{i=1}^{N_{t}}Y_{i}$ is a compound Poisson process, where $N_{t}$ is a Poisson process with intensity $\lambda>0$ and claim sizes $Y_{i}$ are i.i.d. positive random variables independent of the Poisson process with $\mathbb{E}[Y_{1}]<\infty$. One central question in ruin theory is to study the ruin probability $\mathbb{P}(\tau<\infty)$, where $\tau:=\inf\{t>0:X_{t}<0\}$. In recent years, there have been many studies in the insurance and finance literature on the so-called dual risk model, see e.g. [@Afonso; @Avanzi; @AvanziII; @BE; @CheungI; @CheungII; @Ng; @NgII; @RCE; @YS; @Zhu], with wealth process following the dynamics: $$\label{dual} dX_{t}=-\rho dt+dJ_{t},\qquad X_{0}=x>0,$$ where $\rho>0$ is the cost of running the company and $J_{t}=\sum_{i=1}^{N_{t}}Y_{i}$, is the stream of profits, where $Y_{i}$ are i.i.d.
$\mathbb{R}^{+}$ valued random variables with common probability density function $p(y)$, $y>0$, and $N_{t}$ is a Poisson process with intensity $\lambda>0$. The dual risk model is used to model the wealth of a venture capital company, whose profits depend on research and development. The classical risk model is most often interpreted as the surplus of an insurance company. On the other hand, the dual risk model can be understood as the wealth of a venture capital or high tech company. The analogue of the premium in the classical model is the running cost in the dual model, and the claims become the future profits of the company. One of the most fundamental questions in the dual risk model is that of optimal dividend strategies. Avanzi et al. [@Avanzi] worked on optimal dividends in the dual risk model where the optimal strategy is a barrier strategy. Avanzi et al. [@AvanziII] studied a dividend barrier strategy for the dual risk model whereby dividend decisions are made only periodically, but ruin is still allowed to occur at any time. A dual model with a threshold dividend strategy, with exponential interclaim times, was studied in Ng [@Ng]. Afonso et al. [@Afonso] also worked on the dividend problem in the dual risk model, assuming exponential interclaim times. A new approach for the calculation of expected discounted dividends was presented, and ruin and dividend probabilities, the number of dividends, the time to a dividend, and the distribution of the amount of single dividends were studied. Dividend moments in the dual risk model were considered in Cheung and Drekic [@CheungII]. They derived integro-differential equations for the moments of the total discounted dividends, which can be solved explicitly assuming the jump size distribution has a rational Laplace transform. The expected discounted dividends assuming the profits follow a Phase Type distribution were studied in Rodríguez et al. [@RCE].
The Laplace transform of the ruin time and expected discounted dividends for the Sparre-Andersen dual model were derived in Yang and Sendova [@YS]. So far, the optimization problems studied in the literature on dual risk models are almost exclusively devoted to the optimal dividend strategy. In this paper, we consider a different type of optimization problem. For a venture capital or high tech company, the investment strategy on research and development (R&D) is crucial. A decision to increase the investment on research and development will increase the running cost of the company, but it will also boost the possibility of future profits. Therefore, we believe that it is of fundamental interest to understand the optimal investment strategy to strengthen the position of the company. It is well known that research and development is a basic engine of economic and social growth. It accounts for a considerable amount of spending among many leading corporations in the world. A 2014 FORTUNE article listed the top ten biggest R&D spenders worldwide in the year 2013, including Volkswagen, Samsung, Intel, Microsoft, Roche, Novartis, Toyota, Johnson & Johnson, Google and Merck, with Intel spending as much as 20.1% of its revenue on R&D, see [@FORTUNE]. Many technology giants increase their R&D spending consistently, year over year, see e.g. Table \[Google\] for the R&D spending and percentage of revenues of Google in the years 2011-2014[^1]. Notice that in the case of Google, even though the R&D expenditure increases year by year, it increases in line with the increase of the total revenues, so that as a percentage of revenues, the number does not change much. For some companies, both the absolute R&D expenditure amount and the percentage of revenues remain reasonably stable, see e.g. Table \[Merck\] for Merck in the years 2011-2014 [^2]. For some companies, both the absolute R&D expenditure amount and the percentage of revenues can change dramatically, see e.g.
Table \[Tesla\] for Tesla in the years 2011-2014 [^3]. The case of Tesla is exceptional but not unusual for a new high-tech company, in the sense that in the fiscal year 2011 the R&D expenditure exceeded the total revenues. Another company that has enjoyed phenomenal growth similar to Tesla’s is Facebook, see Table \[Facebook\]. But Facebook’s spending on R&D is not as aggressive as Tesla’s. Since research and development is expensed rather than capitalized, cutting it increases profit in the short term, but it can hurt the strength of a company in the long run, even if the detrimental impact of the cuts may not be felt for a few years. In the most recent recession, firms with revenues greater than 100 million USD reduced their research and development intensity (R&D spending divided by revenue) by 5.6%, even though the advertising intensity actually increased 3.4%, see [@HBS]. In the long run, research and development does help the company grow and increases the value of a company. Using a measure of the so-called research quotient, a study over all publicly traded US companies from 1981 through 2006 suggested that a 10% increase in research quotient results in an increase in market value of 1.1%, see [@HBS]. Indeed, the US government also encourages research and development activities. The Research & Experimentation Tax Credit is a general business tax credit passed by Congress in 1981, as a response to concerns that research spending declines had adversely affected the country’s economic growth, productivity gains, and competitiveness within the global marketplace. According to a study by Ernst & Young, in the year 2005, 17,700 US corporations claimed 6.6 billion USD in R&D tax credits on their tax returns[^4].
  Full Year             2011       2012       2013       2014
  --------------------- ---------- ---------- ---------- ----------
  R&D (millions)        \$5,162    \$6,083    \$7,137    \$9,832
  Revenues (millions)   \$37,905   \$46,039   \$55,519   \$66,001
  As % of Revenues      14%        13%        13%        15%

  : R&D spending by Google during 2011-2014.[]{data-label="Google"}

  Full Year             2011       2012       2013       2014
  --------------------- ---------- ---------- ---------- ----------
  R&D (millions)        \$7,742    \$7,911    \$7,123    \$6,897
  Revenues (millions)   \$48,047   \$47,267   \$44,033   \$42,237
  As % of Revenues      16%        17%        16%        16%

  : R&D spending by Merck during 2011-2014.[]{data-label="Merck"}

  Full Year             2011    2012    2013      2014
  --------------------- ------- ------- --------- ---------
  R&D (millions)        \$209   \$274   \$232     \$465
  Revenues (millions)   \$204   \$413   \$2,014   \$3,198
  As % of Revenues      102%    66%     12%       15%

  : R&D spending by Tesla during 2011-2014.[]{data-label="Tesla"}

  Full Year             2011      2012      2013      2014
  --------------------- -------- --------- --------- ---------
  R&D (millions)        \$388    \$1,399   \$1,415   \$2,666
  Revenues (millions)   \$3,711  \$5,089   \$7,872   \$12,466
  As % of Revenues      10%      27%       18%       21%

  : R&D spending by Facebook during 2011-2014.[]{data-label="Facebook"}

To the best of our knowledge, the optimal investment in research and development for the dual risk model has never been studied. There are only a limited number of works on the optimal venture capital investments, see e.g. [@BE]. In addition to the investment in research and development, we will also allow the investment in a risky asset, e.g. a market index. The possibility that an insurer can invest part of the surplus into a risky asset to minimize the ruin probability has been studied by Browne [@Browne] for the case that the insurance business is modelled by a Brownian motion with constant drift and the risky asset is modelled as a geometric Brownian motion. Later, Hipp and Plum [@Hipp] studied the optimal investment in a market index for insurers in the classical compound Poisson risk model.
We will study the optimal investment problem when both investment in research and development and investment in a risky asset are allowed. Unlike the problem of minimizing the ruin probability for an insurer in the classical risk model [@Hipp], we will obtain closed-form formulas in the dual risk model. Since the works of Browne [@Browne] and Hipp and Plum [@Hipp], the optimal investment in the market for the classical risk model and related models has been extensively studied. In Liu and Yang [@Liu], they generalized the work of Hipp and Plum [@Hipp] by including a risk-free asset. In Schmidli [@Schmidli], the optimization problem of minimizing the ruin probability for the classical risk model is studied when investment in a risky asset and proportional reinsurance are both allowed. The asymptotic ruin probability for the classical risk model under the optimal investment in a risky asset was obtained by Gaier et al. [@Gaier] for large initial wealth. The asymptotics for small claim sizes were obtained in Hipp [@Hipp2004]. In Yang and Zhang [@YZ], they studied the optimal investment for an insurer when the risk process is a compound Poisson process perturbed by a standard Brownian motion and the insurer can invest in the money market and in a risky asset. In Gaier and Grandits [@GG], the case when the claim sizes have regularly varying tails was studied. The results were then extended to include interest rates in [@GG2]. The case of subexponential claims was investigated in Schmidli [@Schmidli2]. In Promislow and Young [@Promislow], they studied the problem of minimizing the probability of ruin of an insurer when the claim process is modeled by a Brownian motion with drift, optimizing over the investment in a risky asset and the purchase of quota-share reinsurance. In Wang et al.
[@Wang], they adopted the martingale approach to study the optimal investment problem for an insurer whose risk process is modeled by a Lévy process, with possible investment in a security market described by the standard Black-Scholes model. When the underlying investor is an individual rather than an insurance company, the optimal investment problem of minimizing the ruin probability was studied in, e.g., Bayraktar and Young [@BY]. Azcue and Muler [@Azcue] studied the minimization of the ruin probability for the classical risk model with possible investment in a risky asset that follows a geometric Brownian motion under borrowing constraints. There have been many other works in this area; for a survey, we refer to Paulsen [@Paulsen] and the references therein. This paper is organized as follows. We first study the optimal investment strategy on research and development to minimize the ruin probability of the company. Then, we generalize our results to a state-dependent dual risk model that was first introduced in Zhu [@Zhu]. When the size of a company increases, its costs usually also increase, while its sources of income generally increase as well, which makes it natural to study a state-dependent dual risk model. Next, we study an optimal investment strategy in which, in addition to the investment in research and development, investment in a risky asset, e.g. a capital market index, is allowed. Finally, we carry out some numerical studies to better understand how the minimized ruin probability and the optimal strategy depend on the parameters of the model.

Minimizing the Ruin Probability
===============================

The management of the underlying company can decide whether or not to increase the capital spending on research and development to boost future profits. Our goal is to find the optimal expenditure on research and development to minimize the probability that the company is eventually ruined.
Let $\tau:=\inf\{t>0: X_{t}\leq 0\}$ be the ruin time. The eventual ruin probability is defined as a function of the initial wealth $x$: $\psi(x):=\mathbb{P}(\tau<\infty|X_{0}=x)$. Note that under the assumption $\lambda\mathbb{E}[Y_{1}]>\rho$, the ruin probability $\psi(x)$ is less than $1$. Indeed, $\psi(x)=e^{-\alpha x}$, where $\alpha>0$ is the unique solution to the equation: $$\label{alphaEqn} \rho\alpha+\lambda\int_{0}^{\infty}[e^{-\alpha y}-1]p(y)dy=0.$$ Now, let us introduce the idiosyncratic cost $C_{t}>0$ associated with the investment in research and development. Let $\mathcal{C}$ be the set of all admissible strategies, defined as $$\begin{aligned} \mathcal{C}&:=\bigg\{C:[0,\infty)\times\Omega\rightarrow\mathbb{R}_{\geq 0}:\text{$C$ is progressively measurable}, \\ &\qquad\qquad\qquad\qquad\qquad \text{bounded and predictable}\bigg\}. \nonumber\end{aligned}$$ Given $C_{t}\in\mathcal{C}$, the intensity of the arrival process of the profits is assumed to be $$\lambda_{t}^{C}=\lambda+\delta C_{t}^{\gamma},$$ where $\delta>0$ and $\gamma>0$. Given $C_{t}\in\mathcal{C}$, the wealth process $X_{t}=:X_{t}^{C}$ satisfies the dynamics: $$dX_{t}^{C}=-(\rho+C_{t})dt+dJ_{t}^{C}, \qquad X_{0}^{C}=x,$$ where $J_{t}^{C}=\sum_{i=1}^{N_{t}^{C}}Y_{i}$ and $N_{t}^{C}$ is a simple point process with intensity $\lambda_{t}^{C}$ at time $t$. Notice that when $\gamma>1$, for any constant strategy $C_{t}\equiv C$, where $C>0$ is sufficiently large, the ruin probability is given by $e^{-\alpha_{C}x}$, where $\alpha_{C}$ is the unique positive solution to the equation: $$(\rho+C)\alpha_{C}+(\lambda+\delta C^{\gamma})\int_{0}^{\infty}[e^{-\alpha_{C}y}-1]p(y)dy=0.$$ We can rewrite this equation as: $$\frac{\rho+C}{\lambda+\delta C^{\gamma}}\alpha_{C}=\int_{0}^{\infty}[1-e^{-\alpha_{C}y}]p(y)dy.$$ The right hand side of the above equation is bounded between $0$ and $1$.
On the left hand side of the above equation, $\lim_{C\rightarrow\infty}\frac{\rho+C}{\delta C^{\gamma}}=0$, which implies that $\alpha_{C}\rightarrow\infty$ as $C\rightarrow\infty$. Hence, $V(x)\leq\inf_{C>0}e^{-\alpha_{C}x}=0$ and the minimized ruin probability is trivially zero. Therefore, for the rest of the paper, we only consider two cases: (i) $0<\gamma<1$; (ii) $\gamma=1$.

The $0<\gamma<1$ Case
---------------------

Let us first assume that $0<\gamma<1$. Then $\lambda_{t}$ is a concave function of $C_{t}$. This says that investment in research and development can boost the prospect of future profits, but the marginal benefit decreases as the investment increases. We are interested in studying the following stochastic optimal control problem: $$V(x):=\inf_{C\in\mathcal{C}}\mathbb{P}(\tau<\infty|X_{0}=x),$$ where $\mathcal{C}$ is the set of all non-negative $\mathcal{F}_{t}$-measurable functions $C_{t}$ satisfying $\sup_{t}C_{t}<\infty$. We know that if we choose $C(\cdot)\equiv 0$, then $\psi(x)=e^{-\alpha x}<1$ under the condition $\lambda\mathbb{E}[Y_{1}]>\rho$, and thus $V(x)<1$. Notice that our assumptions that $\rho$ and $\lambda$ are constant might be too simplistic to model the running costs and profits of a real company, especially when the underlying company is a relatively young high-tech company for which the revenues and R&D expenditure can change considerably over time; see e.g. Table \[Google\] and Table \[Tesla\]. For a more mature company, the revenues and R&D are usually more stable; see e.g. Table \[Merck\]. Even so, our simplified assumptions can still provide some insight into this optimization problem, especially since they lead to analytical tractability. Later, we will also consider the case when $\rho$, $\lambda$, etc. depend on the state of the process $X_{t}$.
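As a baseline, the exponent $\alpha$ in the no-investment ruin probability $\psi(x)=e^{-\alpha x}$ can be computed numerically from its defining equation for any profit density $p(y)$. For exponential profit sizes $p(y)=\nu e^{-\nu y}$, the equation reduces to $\alpha=\lambda/\rho-\nu$, which gives a convenient check. A minimal Python sketch with illustrative parameter values (not taken from the paper):

```python
import math

rho, lam, nu = 1.0, 2.0, 1.0   # running cost, profit intensity, exponential rate (E[Y_1] = 1)
# lambda * E[Y_1] = 2 > rho = 1, so psi(x) = exp(-alpha*x) with alpha > 0.

def integral(alpha, n=20000, ymax=40.0):
    # trapezoidal approximation of int_0^inf (e^{-alpha y} - 1) p(y) dy, p(y) = nu e^{-nu y}
    h = ymax / n
    total = 0.0
    for i in range(n + 1):
        y = i * h
        f = (math.exp(-alpha * y) - 1.0) * nu * math.exp(-nu * y)
        total += f if 0 < i < n else 0.5 * f
    return total * h

def lhs(alpha):
    return rho * alpha + lam * integral(alpha)

# bisection: lhs < 0 for small alpha (since lambda*E[Y_1] > rho), lhs > 0 for large alpha
lo, hi = 1e-6, 50.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if lhs(mid) < 0:
        lo = mid
    else:
        hi = mid
alpha = 0.5 * (lo + hi)
print(alpha, lam / rho - nu)   # numerical root vs. closed form for exponential profits
```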
Indeed, when investment in research and development is allowed, we will see later that the condition $$\label{ConditionOne} (\rho-\lambda\mathbb{E}[Y_{1}])-(\delta\gamma)^{\frac{1}{1-\gamma}}\left(\frac{1}{\gamma}-1\right)(\mathbb{E}[Y_{1}])^{\frac{1}{1-\gamma}}<0$$ is sufficient to guarantee that $V(x)<1$. Note that this is weaker than the usual condition $\rho-\lambda\mathbb{E}[Y_{1}]<0$ for the dual risk model. It is easy to see that $V(x)$ satisfies the Hamilton-Jacobi-Bellman equation: $$\label{HOne} \inf_{C>0}\left\{-(\rho+C)V'(x) +(\lambda+\delta C^{\gamma})\int_{0}^{\infty}[V(x+y)-V(x)]p(y)dy\right\}=0,$$ with boundary condition $V(0)=1$. Under assumption \eqref{ConditionOne}, $V(x)=e^{-\beta x}$ is a solution of the Hamilton-Jacobi-Bellman equation \eqref{HOne}, where $\beta$ is the unique positive value that satisfies the equation: $$\begin{aligned} &\beta\left[\rho+\left(\frac{1}{\delta\gamma}\right)^{\frac{1}{\gamma-1}} \left(\frac{\beta}{1-\int_{0}^{\infty}e^{-\beta y}p(y)dy}\right)^{\frac{1}{\gamma-1}}\right] \\ &\qquad -\left[\lambda+\delta\left(\frac{1}{\delta\gamma}\right)^{\frac{\gamma}{\gamma-1}} \left(\frac{\beta}{1-\int_{0}^{\infty}e^{-\beta y}p(y)dy}\right)^{\frac{\gamma}{\gamma-1}}\right] \left(1-\int_{0}^{\infty}e^{-\beta y}p(y)dy\right)=0. \nonumber\end{aligned}$$ Given $V(x)=e^{-\beta x}$, the minimizer $$C^{\ast}:={\rm argmin}_{C>0}\left\{-(\rho+C)V'(x)+(\lambda+\delta C^{\gamma})\int_{0}^{\infty}[V(x+y)-V(x)]p(y)dy\right\}$$ is given by $$C^{\ast}=\left(\frac{1}{\delta\gamma}\right)^{\frac{1}{\gamma-1}} \left(\frac{\beta}{1-\int_{0}^{\infty}e^{-\beta y}p(y)dy}\right)^{\frac{1}{\gamma-1}},$$ and it also satisfies the equation: $$\lambda+(1-\gamma)\delta(C^{\ast})^{\gamma}=\rho\delta\gamma(C^{\ast})^{\gamma-1}.$$ Minimizing over $C$ in equation \eqref{HOne}, we get $$-V'(x)+\delta\gamma C^{\gamma-1}\int_{0}^{\infty}[V(x+y)-V(x)]p(y)dy=0.$$ Note that $C$ is positive and $V(x)$ is decreasing in $x$.
Hence, $$C=\left(\frac{1}{\delta\gamma}\right)^{\frac{1}{\gamma-1}} \left(\frac{V'(x)}{\int_{0}^{\infty}[V(x+y)-V(x)]p(y)dy}\right)^{\frac{1}{\gamma-1}},$$ and the Hamilton-Jacobi-Bellman equation becomes $$\begin{aligned} &-\left[\rho+\left(\frac{1}{\delta\gamma}\right)^{\frac{1}{\gamma-1}} \left(\frac{V'(x)}{\int_{0}^{\infty}[V(x+y)-V(x)]p(y)dy}\right)^{\frac{1}{\gamma-1}}\right] V'(x) \\ &\qquad +\left[\lambda+\delta\left(\frac{1}{\delta\gamma}\right)^{\frac{\gamma}{\gamma-1}} \left(\frac{V'(x)}{\int_{0}^{\infty}[V(x+y)-V(x)]p(y)dy}\right)^{\frac{\gamma}{\gamma-1}}\right] \nonumber \\ &\qquad\qquad\qquad\qquad\cdot \int_{0}^{\infty}[V(x+y)-V(x)]p(y)dy=0. \nonumber\end{aligned}$$ We can see that $V(x)=e^{-\beta x}$, where $\beta>0$ is the unique solution to the equation: $$\begin{aligned} \label{BetaEqnI} &\beta\left[\rho+\left(\frac{1}{\delta\gamma}\right)^{\frac{1}{\gamma-1}} \left(\frac{\beta}{1-\int_{0}^{\infty}e^{-\beta y}p(y)dy}\right)^{\frac{1}{\gamma-1}}\right] \\ &\qquad -\left[\lambda+\delta\left(\frac{1}{\delta\gamma}\right)^{\frac{\gamma}{\gamma-1}} \left(\frac{\beta}{1-\int_{0}^{\infty}e^{-\beta y}p(y)dy}\right)^{\frac{\gamma}{\gamma-1}}\right] \left(1-\int_{0}^{\infty}e^{-\beta y}p(y)dy\right)=0. \nonumber\end{aligned}$$ Let us define $$\begin{aligned} F(\beta)&:=\beta\left[\rho+\left(\frac{1}{\delta\gamma}\right)^{\frac{1}{\gamma-1}} \left(\frac{\beta}{1-\int_{0}^{\infty}e^{-\beta y}p(y)dy}\right)^{\frac{1}{\gamma-1}}\right] \\ &\qquad -\left[\lambda+\delta\left(\frac{1}{\delta\gamma}\right)^{\frac{\gamma}{\gamma-1}} \left(\frac{\beta}{1-\int_{0}^{\infty}e^{-\beta y}p(y)dy}\right)^{\frac{\gamma}{\gamma-1}}\right] \left(1-\int_{0}^{\infty}e^{-\beta y}p(y)dy\right). \nonumber\end{aligned}$$ We want to show that there exists a unique positive value $\beta$ such that $F(\beta)=0$. 
For convenience, let us also introduce the notation: $$g(\beta):=\frac{1-\int_{0}^{\infty}e^{-\beta y}p(y)dy}{\beta}.$$ It is easy to compute that for $\beta>0$, $$g'(\beta)=\frac{1}{\beta^{2}}\int_{0}^{\infty}\left[\beta ye^{-\beta y}-1+e^{-\beta y}\right]p(y)dy.$$ Let $h(x)=xe^{-x}-1+e^{-x}$, $x\geq 0$. Then $h(0)=0$ and $h(x)\rightarrow-1$ as $x\rightarrow\infty$. Moreover, $h'(x)=-xe^{-x}<0$ for $x>0$. Thus $h(x)\leq 0$ for any $x\geq 0$, and therefore $g(\beta)$ is a decreasing function of $\beta$. Note that we can rewrite $F(\beta)$ as $$F(\beta)=\beta\left[\rho-(\delta\gamma)^{\frac{1}{1-\gamma}}\left(\frac{1}{\gamma}-1\right)[g(\beta)]^{\frac{1}{1-\gamma}} -\lambda g(\beta)\right].$$ Therefore, $F(\beta)=0$ for $\beta>0$ if and only if $G(\beta)=0$ for $\beta>0$, where $$\label{Gbeta} G(\beta):=\rho-(\delta\gamma)^{\frac{1}{1-\gamma}}\left(\frac{1}{\gamma}-1\right)[g(\beta)]^{\frac{1}{1-\gamma}} -\lambda g(\beta).$$ Note that by L’Hôpital’s rule, $$\lim_{\beta\rightarrow 0^{+}}g(\beta)=\mathbb{E}[Y_{1}].$$ Therefore, $$\lim_{\beta\rightarrow 0^{+}}G(\beta) =(\rho-\lambda\mathbb{E}[Y_{1}])-(\delta\gamma)^{\frac{1}{1-\gamma}}\left(\frac{1}{\gamma}-1\right)(\mathbb{E}[Y_{1}])^{\frac{1}{1-\gamma}}<0.$$ On the other hand, $g(\beta)\rightarrow 0$ as $\beta\rightarrow\infty$; therefore $G(\beta)\rightarrow\rho>0$ as $\beta\rightarrow\infty$. Since $g(\beta)$ is a decreasing function of $\beta$ and $0<\gamma<1$, it follows that $G(\beta)$ is increasing in $\beta$. Hence, we conclude that $G(\beta)=0$ has a unique positive solution. Given $V(x)=e^{-\beta x}$, we then have: $$\label{OptimalC} C^{\ast}=\left(\frac{1}{\delta\gamma}\right)^{\frac{1}{\gamma-1}} \left(\frac{\beta}{1-\int_{0}^{\infty}e^{-\beta y}p(y)dy}\right)^{\frac{1}{\gamma-1}}.$$ Recall the definition of $G(\beta)$ in \eqref{Gbeta} and that $\beta$ satisfies $G(\beta)=0$.
Therefore, the optimal $C^{\ast}$ in \eqref{OptimalC} must satisfy the equation: $$\lambda+(1-\gamma)\delta(C^{\ast})^{\gamma}=\rho\delta\gamma(C^{\ast})^{\gamma-1}.$$ \[ExpExample\] When $p(y)=\nu e^{-\nu y}$, $\nu>0$, $\beta$ satisfies $$\beta\left[\rho+\left(\frac{1}{\delta\gamma}\right)^{\frac{1}{\gamma-1}}(\beta+\nu)^{\frac{1}{\gamma-1}}\right] =\left[\lambda+\delta\left(\frac{1}{\delta\gamma}\right)^{\frac{\gamma}{\gamma-1}}(\beta+\nu)^{\frac{\gamma}{\gamma-1}}\right] \frac{\beta}{\beta+\nu},$$ which implies that $$\rho(\beta+\nu)=\lambda+\left(\frac{1}{\gamma}-1\right) \left(\frac{1}{\delta\gamma}\right)^{\frac{1}{\gamma-1}}(\beta+\nu)^{\frac{\gamma}{\gamma-1}}.$$ In particular, when $\gamma=\frac{1}{2}$, we get $$\rho(\beta+\nu)^{2}=\lambda(\beta+\nu)+ \frac{\delta^{2}}{4},$$ and therefore $$\beta=\frac{\lambda+\sqrt{\lambda^{2}+\rho\delta^{2}}}{2\rho}-\nu,$$ and the optimal $C^{\ast}$ is given by $$C^{\ast}=\frac{\delta^{2}\rho^{2}}{(\lambda+\sqrt{\lambda^{2}+\rho\delta^{2}})^{2}}.$$

### A Verification Theorem

Let us recall that $$\label{eqn:HJB_RandD_gamma<1} \inf_{C\ge0}\left\{-(\rho+C)V'(x)+(\lambda+\delta C^\gamma)\int_0^\infty[V(x+y)-V(x)]p(y)dy\right\}=0,$$ with $V(0)=1$. Given $C\in\mathcal{C}$, the wealth process satisfies the dynamics: $$dX_t^C=-(\rho+C_t)dt+dJ_t^C\;\text{ \rm and }X_0^C=x.$$ \[VeriThm\] Let $w\in{\rm C}^1_{\rm b}$ be a solution of \eqref{eqn:HJB_RandD_gamma<1} such that for any $C\in\mathcal{C}$ $$\label{eqn:w(X^C)to0} \lim_{t\to\infty}w(X_t^C)1_{\{t\le \tau\}}=0\;\text{ \rm a.s.}$$ Then, $w\le V$. In addition, if there exists a bounded function $C^{\ast}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}$ such that $$C^*(x)\in{\rm argmin}_{C\ge0}\left\{-(\rho+C)w'(x)+(\lambda+\delta C^\gamma)\int_0^\infty[w(x+y)-w(x)]p(y)dy\right\},$$ and $dX_{t}^{\ast}=-(\rho+C^{\ast}(X_{t}^{\ast}))dt+dJ_{t}^{C^{\ast}(X_{t-}^{\ast})}$ has a solution, then $w=V$.
Since $w$ is bounded and continuously differentiable with bounded derivative, by Itô's lemma for jump processes we have $$\begin{aligned} \mathbb{E}[w(X^C_{t\wedge \tau})]&=w(x)+\mathbb{E}\bigg[\int_0^{t\wedge \tau}\bigg(-(\rho+C_s)w'(X^C_s) \\ &\qquad\qquad +(\lambda+\delta C_s^\gamma)\int_0^\infty[w(X^C_s+y)-w(X^C_s)]p(y)dy\bigg)ds\bigg] \nonumber \\ &\ge w(x), \nonumber\end{aligned}$$ for any $C\in\mathcal{C}$. Therefore, $$w(x)\le \mathbb{E}[w(X^C_{t\wedge \tau})]=\mathbb{E}[w(X^C_{t})1_{\{t<\tau\}}+1_{\{t\ge\tau\}}].$$ It follows from \eqref{eqn:w(X^C)to0} and the Lebesgue dominated convergence theorem that the right hand side above converges to $\mathbb{P}(\tau<\infty)$ and $$w(x)\le \mathbb{P}(\tau<\infty).$$ By taking the infimum over $C\in\mathcal{C}$, we obtain $w\le V$. To obtain equality, notice that in the above argument we have equality when $C_{t}=C^{\ast}(X_{t-}^{\ast})$. It remains to verify condition \eqref{eqn:w(X^C)to0} for $V(x)=e^{-\beta x}$, where $\beta$ is the unique solution to equation \eqref{BetaEqnI}. It is sufficient to show that for any $C\in\mathcal{C}$ $$\lim_{t\to\infty}\exp(-\beta X_t^C)1_{\{t\le \tau\}}=0\;\text{ \rm a.s.}$$ Notice that the event that the above limit is not zero is included in $\bigcup_{L\in\mathbb{N}}\{\omega\;:\;\liminf_{t\to\infty}X_t^C<L\text{ \rm and }\tau=\infty\}$. Since $C$ is bounded, given $X_t^C=x$ for $x\in[0,L]$, the probability that $X^C$ has no jumps during a bounded time interval $[t,t+h_0]$ is bounded below by a positive constant. More specifically, the probability that $X_{t+h}^C=x-\rho h-\int_t^{t+h} C_sds$ for all $h\in[0,x/\rho]$ is bounded below by a positive number; on this event, ruin occurs by time $t+x/\rho$. In other words, if $\liminf_{t\to\infty}X_t^C<L$, then ruin eventually occurs almost surely. This implies that $\mathbb{P}(\liminf_{t\to\infty}X_t^C<L,\; \tau=\infty)=0$, which completes the proof.
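The closed forms in Example \[ExpExample\] give a convenient numerical consistency check: solve $G(\beta)=0$ by bisection for $\gamma=\tfrac{1}{2}$ and exponential profits, and compare $\beta$ and $C^{\ast}$ with the explicit expressions. A Python sketch with illustrative parameter values (not taken from the paper):

```python
import math

# Illustrative parameters: lambda*E[Y_1] = 2 > rho, exponential profits p(y) = nu*exp(-nu*y).
rho, lam, delta, nu, gamma = 1.0, 2.0, 1.0, 1.0, 0.5

def g(beta):
    # g(beta) = (1 - int e^{-beta y} p(y) dy)/beta, which equals 1/(beta + nu) here
    return 1.0 / (beta + nu)

def G(beta):
    a = 1.0 / (1.0 - gamma)
    return rho - (delta * gamma) ** a * (1.0 / gamma - 1.0) * g(beta) ** a - lam * g(beta)

# bisection: G < 0 near 0 (under the sufficient condition) and G -> rho > 0 at infinity
lo, hi = 1e-9, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if G(mid) < 0:
        lo = mid
    else:
        hi = mid
beta = 0.5 * (lo + hi)

root = lam + math.sqrt(lam ** 2 + rho * delta ** 2)
beta_cf = root / (2.0 * rho) - nu                            # closed form for beta
C_cf = delta ** 2 * rho ** 2 / root ** 2                     # closed form for C*
C_num = (delta * gamma * g(beta)) ** (1.0 / (1.0 - gamma))   # C* from the general formula

print(beta, beta_cf)   # both ~1.1180
print(C_num, C_cf)     # both ~0.0557
```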
### Asymptotic Analysis

\[ParametersI\] We have already shown that $V(x)=e^{-\beta x}$, where $\beta$ is the unique positive solution to equation \eqref{BetaEqnI}, which is equivalent to $G(\beta)=0$, i.e., $$\label{Gbeta0} \rho-(\delta\gamma)^{\frac{1}{1-\gamma}}\left(\frac{1}{\gamma}-1\right)[g(\beta)]^{\frac{1}{1-\gamma}} -\lambda g(\beta)=0.$$ Now, let us discuss how the value $\beta$ (and hence the value function $V(x)=e^{-\beta x}$) and the optimal investment rate $C^{\ast}$ depend on the parameters $\rho$, $\lambda$ and $\delta$. By \eqref{Gbeta0}, we have the following observations: \(i) As $\rho$ increases, $g(\beta)$ increases. Since $g(\beta)$ is decreasing in $\beta$, we conclude that $\beta$ decreases as $\rho$ increases. Intuitively, this says that as the fixed running cost of the company increases, the ruin probability increases. Asymptotically, as $\rho\rightarrow 0$, $g(\beta)\rightarrow 0$. When $g(\beta)\rightarrow 0$, since $0<\gamma<1$, we must have $[g(\beta)]^{\frac{1}{1-\gamma}}\ll g(\beta)$. Therefore, by \eqref{Gbeta0}, as $\rho\rightarrow 0$, we have $g(\beta)\sim\frac{\rho}{\lambda}$. From the definition of $g(\beta)$, we have $g(\beta)\sim\frac{1}{\beta}$ as $\beta\rightarrow\infty$. Hence, we conclude that $$\beta\sim\frac{\lambda}{\rho}, \qquad\text{as $\rho\rightarrow 0$}.$$ Therefore, the optimal $C^{\ast}$ satisfies $$C^{\ast}\sim(\delta\gamma)^{\frac{1}{1-\gamma}}\left(\frac{\rho}{\lambda}\right)^{\frac{1}{1-\gamma}}, \qquad\text{as $\rho\rightarrow 0$}.$$ \(ii) As $\delta$ increases, $g(\beta)$ decreases. Since $g(\beta)$ is decreasing in $\beta$, we conclude that $\beta$ increases as $\delta$ increases. Intuitively, this says that if the boost to the prospect of future profits from investment in research and development increases, then the ruin probability decreases.
Asymptotically, as $\delta\rightarrow\infty$, we have $g(\beta)\rightarrow 0$, and therefore as $\delta\rightarrow\infty$, $$(\delta\gamma)^{\frac{1}{1-\gamma}}\left(\frac{1}{\gamma}-1\right)[g(\beta)]^{\frac{1}{1-\gamma}} \rightarrow\rho,$$ which implies that as $\delta\rightarrow\infty$, we have $$g(\beta)\sim\frac{\rho^{1-\gamma}}{\gamma\left(\frac{1}{\gamma}-1\right)^{1-\gamma}}\frac{1}{\delta}.$$ Since $g(\beta)\sim\frac{1}{\beta}$ as $\beta\rightarrow\infty$, we conclude that $$\beta\sim\frac{\gamma\left(\frac{1}{\gamma}-1\right)^{1-\gamma}}{\rho^{1-\gamma}}\delta, \qquad \text{as $\delta\rightarrow\infty$}.$$ Moreover, the optimal $C^{\ast}$ satisfies: $$C^{\ast}\rightarrow\frac{\rho}{\frac{1}{\gamma}-1}, \qquad \text{as $\delta\rightarrow\infty$}.$$ Now, if $\delta\rightarrow 0$, then $g(\beta)\rightarrow\frac{\rho}{\lambda}$. Therefore, as $\delta\rightarrow 0$, $\beta\rightarrow\alpha$, where we recall that $\alpha$ is the unique positive value such that $$1-\int_{0}^{\infty}e^{-\alpha y}p(y)dy=\alpha\frac{\rho}{\lambda},$$ which is the same $\alpha$ as defined in \eqref{alphaEqn}. Moreover, the optimal $C^{\ast}$ satisfies $$C^{\ast}\sim(\delta\gamma)^{\frac{1}{1-\gamma}}\left(\frac{\rho}{\lambda}\right)^{\frac{1}{1-\gamma}}, \qquad \text{as $\delta\rightarrow 0$}.$$ Intuitively, this says that as $\delta\rightarrow 0$, there is no value in investing in research and development. \(iii) Similarly, as $\lambda$ increases, $\beta$ increases, and the ruin probability decreases. As $\lambda\rightarrow\infty$, we have $g(\beta)\rightarrow 0$. Thus, $\lambda g(\beta)\rightarrow\rho$, and $g(\beta)\sim\frac{\rho}{\lambda}$.
Since $g(\beta)\sim\frac{1}{\beta}$ as $\beta\rightarrow\infty$, we conclude that $$\beta\sim\frac{\lambda}{\rho}, \qquad\text{as $\lambda\rightarrow\infty$}.$$ Moreover, the optimal $C^{\ast}$ satisfies: $$C^{\ast}\sim(\delta\gamma)^{\frac{1}{1-\gamma}}\left(\frac{\rho}{\lambda}\right)^{\frac{1}{1-\gamma}}, \qquad \text{as $\lambda\rightarrow\infty$}.$$ \(iv) Assume that the parameters are chosen so that $$(\rho-\lambda\mathbb{E}[Y_{1}])-(\delta\gamma)^{\frac{1}{1-\gamma}}\left(\frac{1}{\gamma}-1\right)(\mathbb{E}[Y_{1}])^{\frac{1}{1-\gamma}} \rightarrow 0.$$ Then, it follows that $g(\beta)\rightarrow\mathbb{E}[Y_{1}]$ and $\beta\rightarrow 0$. More precisely, as $\beta\rightarrow 0$, $g(\beta)\sim\mathbb{E}[Y_{1}]-\frac{\beta}{2}\mathbb{E}[Y_{1}^{2}]$ if $\mathbb{E}[Y_{1}^{2}]<\infty$, and \eqref{Gbeta0} becomes $$\rho-(\delta\gamma)^{\frac{1}{1-\gamma}}\left(\frac{1}{\gamma}-1\right) \left(\mathbb{E}[Y_{1}]-\frac{\beta}{2}\mathbb{E}[Y_{1}^{2}]\right)^{\frac{1}{1-\gamma}} -\lambda\left(\mathbb{E}[Y_{1}]-\frac{\beta}{2}\mathbb{E}[Y_{1}^{2}]\right)=O(\beta^{2})$$ as $\beta\rightarrow 0$. Expanding, it follows that $$\begin{aligned} &\rho-(\delta\gamma)^{\frac{1}{1-\gamma}}\left(\frac{1}{\gamma}-1\right) \left(\mathbb{E}[Y_{1}]^{\frac{1}{1-\gamma}}-\frac{1}{2(1-\gamma)} (\mathbb{E}[Y_{1}])^{\frac{\gamma}{1-\gamma}}\mathbb{E}[Y_{1}^{2}]\beta\right) \\ &\qquad\qquad\qquad -\lambda\left(\mathbb{E}[Y_{1}]-\frac{\beta}{2}\mathbb{E}[Y_{1}^{2}]\right)=O(\beta^{2}).
\nonumber\end{aligned}$$ Hence, we conclude that $$\beta\sim\frac{-(\rho-\lambda\mathbb{E}[Y_{1}])+(\delta\gamma)^{\frac{1}{1-\gamma}}\left(\frac{1}{\gamma}-1\right)(\mathbb{E}[Y_{1}])^{\frac{1}{1-\gamma}}}{(\delta\gamma)^{\frac{1}{1-\gamma}}\frac{1}{2\gamma}(\mathbb{E}[Y_{1}])^{\frac{\gamma}{1-\gamma}}\mathbb{E}[Y_{1}^{2}] +\frac{\lambda}{2}\mathbb{E}[Y_{1}^{2}]}.$$ Moreover, the optimal $C^{\ast}$ satisfies: $$C^{\ast}\sim(\delta\gamma)^{\frac{1}{1-\gamma}}(\mathbb{E}[Y_{1}])^{\frac{1}{1-\gamma}}.$$ \[ParametersII\] The value function $V(x)=e^{-\beta x}$ and the optimal investment rate $C^{\ast}$ also depend on the parameter $\gamma$. We will study the $\gamma=1$ case in detail later. For the moment, let us try to understand the asymptotic behavior of the value function and the optimal investment rate as $\gamma\rightarrow 1^{-}$. We will also obtain the asymptotics as $\gamma\rightarrow 0^{+}$. Let us recall that the optimal $C^{\ast}$ satisfies the equation: $$\label{Cequation} \lambda+(1-\gamma)\delta(C^{\ast})^{\gamma}=\rho\delta\gamma(C^{\ast})^{\gamma-1}.$$ Thus, we have $(1-\gamma)\delta(C^{\ast})^{\gamma}\leq\rho\delta\gamma(C^{\ast})^{\gamma-1}$, which implies that $$C^{\ast}\leq\frac{\rho\gamma}{1-\gamma}.$$ Thus, $C^{\ast}\rightarrow 0$ as $\gamma\rightarrow 0$. Note that $\lim_{\gamma\rightarrow 0^{+}}\gamma^{\gamma}=1$. Therefore, we can check that $$C^{\ast}\sim\frac{\rho\delta}{\lambda+\delta}\gamma, \qquad \text{as $\gamma\rightarrow 0^{+}$}.$$ Now, let us consider the $\gamma\rightarrow 1^{-}$ limit. Let us rewrite equation \eqref{Cequation} as $$\label{Dequation} \frac{\lambda}{(1-\gamma)^{1-\gamma}}+\delta D^{\gamma}=\frac{\rho\delta\gamma}{D^{1-\gamma}},$$ where $D=(1-\gamma)C^{\ast}$. Let us first consider the case $\rho\delta>\lambda$. Notice first that $\lim_{\gamma\rightarrow 1^{-}}(1-\gamma)^{1-\gamma}=1$.
First, $D$ cannot go to $0$ as $\gamma\rightarrow 1^{-}$, because otherwise the left hand side of \eqref{Dequation} goes to $\lambda$ while, as $D$ goes to $0$, $D<1$ and $D^{1-\gamma}\leq 1$, so the right hand side of \eqref{Dequation} is greater than $\rho\delta\gamma$. Then, in the limit as $\gamma\rightarrow 1^{-}$, we get $\lambda\geq\rho\delta$, which is a contradiction. Second, $D$ cannot go to $\infty$ as $\gamma\rightarrow 1^{-}$. To see this, notice that as $D\rightarrow\infty$, the left hand side of \eqref{Dequation} goes to $\infty$, while in the right hand side of \eqref{Dequation}, for large $D$, $D>1$ and $D^{1-\gamma}\geq 1$, and hence the right hand side is less than $\rho\delta$, which is a contradiction. Therefore, if $\rho\delta>\lambda$, $D$ converges to a positive constant, which from \eqref{Dequation} we can see is $\frac{\rho\delta-\lambda}{\delta}$, and we have $$C^{\ast}\sim\frac{\rho\delta-\lambda}{\delta}\frac{1}{1-\gamma}, \qquad \text{as $\gamma\rightarrow 1^{-}$}.$$ If $\rho\delta<\lambda$, then the optimal $C^{\ast}\rightarrow 0$ as $\gamma\rightarrow 1^{-}$. To see this, notice that if $\limsup_{\gamma\rightarrow 1^{-}}C^{\ast}\in(0,\infty)$, then in \eqref{Cequation}, we have $\limsup_{\gamma\rightarrow 1^{-}}\rho\delta\gamma(C^{\ast})^{\gamma-1}=\rho\delta$ and $\limsup_{\gamma\rightarrow 1^{-}}[\lambda+(1-\gamma)\delta(C^{\ast})^{\gamma}]=\lambda$, which is a contradiction since $\rho\delta<\lambda$. If $\limsup_{\gamma\rightarrow 1^{-}}C^{\ast}=\infty$, then for $C^{\ast}>1$, we have from \eqref{Cequation} that $\lambda<\lambda+(1-\gamma)\delta(C^{\ast})^{\gamma}=\rho\delta\gamma(C^{\ast})^{\gamma-1}<\rho\delta$, which is again a contradiction. Hence, we must have $C^{\ast}\rightarrow 0$.
Since $C^{\ast}\rightarrow 0$, $(1-\gamma)\delta(C^{\ast})^{\gamma}\ll\rho\delta\gamma(C^{\ast})^{\gamma-1}$, and thus $$C^{\ast}\sim\left(\frac{\lambda}{\rho\delta\gamma}\right)^{\frac{1}{\gamma-1}} \sim \frac{1}{e}\left(\frac{\rho\delta}{\lambda}\right)^{\frac{1}{1-\gamma}}, \qquad \text{as $\gamma\rightarrow 1^{-}$}.$$ If $\rho\delta=\lambda$, the optimal $C^{\ast}$ satisfies the equation: $$\lambda=\frac{(1-\gamma)\delta(C^{\ast})^{\gamma}}{\gamma(C^{\ast})^{\gamma-1}-1}.$$ Assume that $C^{\ast}>0$ is fixed, then by L’Hôpital’s rule, $$\lim_{\gamma\rightarrow 1^{-}} \frac{(1-\gamma)\delta(C^{\ast})^{\gamma}}{\gamma(C^{\ast})^{\gamma-1}-1} =\lim_{\gamma\rightarrow 1^{-}} \frac{-\delta(C^{\ast})^{\gamma}+(1-\gamma)\delta(C^{\ast})^{\gamma}\log C^{\ast}} {(C^{\ast})^{\gamma-1}+\gamma(C^{\ast})^{\gamma-1}\log C^{\ast}} =\frac{-\delta C^{\ast}}{1+\log C^{\ast}}.$$ Therefore as $\gamma\rightarrow 1^{-}$, $C^{\ast}$ converges to the unique positive solution to the equation: $$\delta x+\lambda(1+\log x)=0.$$ The $\gamma=1$ Case ------------------- When $\gamma=1$, this is a singular control problem and $V(x)$ satisfies the Hamilton-Jacobi-Bellman equation, see e.g. Chapter 8 in [@Fleming-Soner-book-06]: $$\begin{aligned} &\min\bigg\{-\rho V'(x)+\lambda\int_{0}^{\infty}[V(x+y)-V(x)]p(y)dy, \\ &\qquad\qquad -V'(x) +\delta\int_{0}^{\infty}[V(x+y)-V(x)]p(y)dy\bigg\}=0, \nonumber\end{aligned}$$ with boundary condition $V(0)=1$. Intuitively, we can argue as follows. When $\frac{1}{\mathbb{E}[Y_{1}]}\geq\frac{\lambda}{\rho}$, the ruin probability is $1$ without any investment in research and development. 
When $\frac{1}{\mathbb{E}[Y_{1}]}<\frac{\lambda}{\rho}$, the ruin probability is $e^{-\alpha x}$, which is less than $1$, where $\alpha$ is the unique positive value satisfying the equation: $$\rho\alpha+\lambda\int_{0}^{\infty}[e^{-\alpha y}-1]p(y)dy=0.$$ If we invest at a constant rate $C$ and if $\frac{1}{\mathbb{E}[Y_{1}]}<\frac{\lambda+\delta C}{\rho+C}$, the ruin probability is $e^{-\alpha_{C}x}$, where $\alpha_{C}$ is the unique positive value satisfying the equation: $$(\rho+C)\alpha_{C}+(\lambda+\delta C)\int_{0}^{\infty}[e^{-\alpha_{C} y}-1]p(y)dy=0.$$ When $\frac{1}{\mathbb{E}[Y_{1}]}-\delta<0$, it is always possible to invest so as to achieve a ruin probability less than $1$; otherwise, investment would not help at all. Therefore, we have the following conclusions:

- $\frac{1}{\mathbb{E}[Y_{1}]}-\delta<0$, $\lambda\mathbb{E}[Y_{1}]>\rho$. In this case, the ruin probability is already less than $1$ without any investment in research and development, and investing in research and development may help to lower it further. The critical threshold is $\delta=\frac{\lambda}{\rho}$. If $\delta>\frac{\lambda}{\rho}$, we can see that the value function is given by $V(x)=e^{-\alpha_{\infty}x}$, where $\alpha_{\infty}$ is the unique positive value satisfying the equation: $$\alpha_{\infty}+\delta\int_{0}^{\infty}[e^{-\alpha_{\infty} y}-1]p(y)dy=0.$$ This is achieved in the limit of investing at rate $C\rightarrow+\infty$. If $\delta<\frac{\lambda}{\rho}$, we can see that the optimal strategy is not to invest, and $V(x)=e^{-\alpha x}$.

- $\frac{1}{\mathbb{E}[Y_{1}]}-\delta<0$, $\lambda\mathbb{E}[Y_{1}]<\rho$. In this case, ruin occurs with probability $1$ without any investment in research and development, but with sufficiently aggressive investment in research and development, the ruin probability falls below $1$. We can see that $V(x)=e^{-\alpha_{\infty}x}$.

Next, we prove the claims above rigorously.
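Before turning to the rigorous argument, the dichotomy above can be illustrated numerically for exponential profit sizes $p(y)=\nu e^{-\nu y}$, for which the two candidate exponents have closed forms ($\alpha=\lambda/\rho-\nu$ and $\alpha_{\infty}=\delta-\nu$; see the special case worked out at the end of the next subsection). A minimal Python sketch with illustrative parameter values (not taken from the paper):

```python
import math

nu = 1.0  # exponential profit density p(y) = nu*exp(-nu*y), so E[Y_1] = 1/nu = 1

def root(f, lo=1e-9, hi=100.0):
    """Bisection for a function that is negative at lo and positive at hi."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Regime rho/lambda < 1/delta: no investment is optimal; the exponent alpha solves
# rho*a + lam*(nu/(a+nu) - 1) = 0, whose positive root is lam/rho - nu.
rho, lam, delta = 1.0, 2.0, 0.4
alpha = root(lambda a: rho * a + lam * (nu / (a + nu) - 1.0))
print(alpha, lam / rho - nu)          # both ~1.0

# Regime rho/lambda > 1/delta with delta > nu: invest as aggressively as possible;
# the exponent alpha_inf solves a + delta*(nu/(a+nu) - 1) = 0, positive root delta - nu.
rho, lam, delta = 2.0, 1.0, 3.0
alpha_inf = root(lambda a: a + delta * (nu / (a + nu) - 1.0))
print(alpha_inf, delta - nu)          # both ~2.0
```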
We rely on the random time change technique, which is often used in stochastic analysis.

### Random Time Change {#RandomSection}

Let us now show that the value function $V(x)$ and the optimal strategy are indeed what we described above for the $\gamma=1$ case. For any $C\in\mathcal{C}$, we have $$dX^{C}_{t}=-(\rho+C_{t})dt+dJ^{C}_{t},$$ where $J^{C}_{t}=\sum_{i=1}^{N^{C}_{t}}Y_{i}$, $N^{C}_{t}$ is a simple point process with intensity $\lambda+\delta C_{t-}$ at time $t$, and the $Y_{i}$ are i.i.d. with probability density function $p(y)$ as before. Let us introduce a random time change and define $T(t)$ via: $$\int_{0}^{T(t)}(\lambda+\delta C_{s})ds=t.$$ Then, it is easy to see that $T(0)=0$ and $T(t)\rightarrow\infty$ as $t\rightarrow\infty$, since $C\in\mathcal{C}$ is bounded. Then, $$dX_{T(t)}=-(\rho+C_{T(t)})dT(t)+dJ^{C}_{T(t)}.$$ Under the random time change, $\frac{dT(t)}{dt}=\frac{1}{\lambda+\delta C_{T(t)}}$ and $J^{C}_{T(t)}$ is distributed as $\overline{J}_{t}:=\sum_{i=1}^{\overline{N}_{t}}Y_{i}$, where $\overline{N}_{t}$ is a standard Poisson process with intensity $1$. See e.g. Meyer [@Meyer] for the random time change for simple point processes. Therefore, $$dX_{T(t)}=-\frac{\rho+C_{T(t)}}{\lambda+\delta C_{T(t)}}dt+d\overline{J}_{t}.$$ Let us also notice that $$\mathbb{P}(\text{$X_{t}$ ever gets ruined}) =\mathbb{P}(\text{$X_{T(t)}$ ever gets ruined}).$$ When $\frac{\rho}{\lambda}<\frac{1}{\delta}$, then $\inf_{C\geq 0}\frac{\rho+C}{\lambda+\delta C}=\frac{\rho}{\lambda}$ and the optimal strategy is $C_{t}\equiv 0$. In this case, the value function is $V(x)=e^{-\beta x}$, where $$\rho\beta+\lambda\int_{0}^{\infty}[e^{-\beta y}-1]p(y)dy=0.$$ When $\frac{\rho}{\lambda}>\frac{1}{\delta}$, then $\inf_{C\geq 0}\frac{\rho+C}{\lambda+\delta C}=\frac{1}{\delta}$, which is approached only as $C\rightarrow\infty$. Moreover, for any $C\in\mathcal{C}$, the constant strategy $\overline{C}:=\Vert C\Vert_{\infty}$ performs at least as well as $C$. The “optimal strategy” is thus $C_{t}\equiv\infty$. Let us also assume that $\delta\mathbb{E}[Y_{1}]>1$.
In this case, the value function is $V(x)=e^{-\beta x}$, where $$\label{betasat} \beta+\delta\int_{0}^{\infty}[e^{-\beta y}-1]p(y)dy=0.$$ When $\frac{\rho}{\lambda}=\frac{1}{\delta}$, in terms of the ruin probability, it does not make a difference whether the company decides to invest in research and development or not. When $\frac{\rho}{\lambda}\geq\frac{1}{\delta}$, $V(x)=e^{-\beta x}$, where $\beta$ satisfies \eqref{betasat}, which is independent of $\rho$ and $\lambda$. Asymptotically, when $\frac{\rho}{\lambda}\rightarrow 0$, it is easy to see that $$\beta\sim\frac{\lambda}{\rho}.$$ In the special case that $p(y)=\nu e^{-\nu y}$: when $\frac{\rho}{\lambda}<\frac{1}{\delta}$, the optimal strategy is $C\equiv 0$ and $V(x)=e^{-(\frac{\lambda}{\rho}-\nu)x}$; and when $\frac{\rho}{\lambda}>\frac{1}{\delta}$ and $\frac{\delta}{\nu}>1$, the optimal strategy is $C\equiv\infty$ and $V(x)=e^{-(\delta-\nu)x}$.

A State-Dependent Dual Risk Model
=================================

Indeed, the method of random time change used in Section \[RandomSection\] also works for $0<\gamma<1$. This gives an alternative approach to solving the optimal control problem, other than using the usual Hamilton-Jacobi-Bellman equations. In this section, we want to study the more general state-dependent dual risk model in which $\lambda(x),\rho(x),\delta(x)$ all depend on the wealth process. A state-dependent dual risk model was first introduced in Zhu [@Zhu]: $$dX_{t}=-\rho(X_{t})dt+dJ_{t},$$ where $J_{t}=\sum_{i=1}^{N_{t}}Y_{i}$, the $Y_{i}$ are defined as before, and $N_{t}$ is a simple point process with intensity $\lambda(X_{t-})$ at time $t$. Now, adding controls on the investment in research and development, for $C\in\mathcal{C}$, we have $$dX_{t}^{C}=-(\rho(X_{t})+C_{t})dt+dJ_{t}^{C},$$ where $J_{t}^{C}=\sum_{i=1}^{N_{t}^{C}}Y_{i}$, the $Y_{i}$ are defined as before, and $N_{t}^{C}$ is a simple point process with intensity $\lambda(X_{t-})+\delta(X_{t-})C_{t}^{\gamma}$ at time $t$.
The motivation for introducing state-dependence into the dual risk model is the following. Firstly, the costs of a company usually increase as the size of the company increases. For example, the running costs of a small business and of a Fortune 500 company are vastly different. Secondly, as the size of a company increases, the arrival intensity of future profits might increase. This may be due to the fact that the larger a company gets, the more sources of income it has. It is also well known in the finance literature that as a company gets larger and stronger, it can enjoy more benefits, e.g. a higher net present value (NPV), which for example might be due to the opportunities brought by franchising. As we can see from Table \[Google\], Table \[Merck\] and Table \[Tesla\], the R&D expenditure may be far from constant as the size of the company and the revenue of the company change. More realistically, the R&D expenditure and other costs of running the company should be state-dependent. From the optimal control point of view, it is also interesting to study the state-dependent case. We have seen that in the state-independent case, the optimal strategy is always a constant, independent of the state. We expect that the optimal strategy might be state-dependent when the underlying dual risk model is state-dependent. Let us assume that $\lambda(\cdot)\geq\lambda_{0}>0$ for some $\lambda_{0}\in(0,\infty)$.
Under the random time change, $$\int_{0}^{T(t)}(\lambda(X_{s})+\delta(X_{s})C_{s}^{\gamma})ds=t,$$ we get $$dX_{T(t)}=-\frac{\rho(X_{T(t)})+C_{T(t)}}{\lambda(X_{T(t)})+\delta(X_{T(t)})C_{T(t)}^{\gamma}}dt+d\overline{J}_{t}.$$ The $0<\gamma<1$ Case --------------------- Under the assumption that $0<\gamma<1$, it is easy to see that the optimal strategy $C_{T(t)}$ is the strategy that minimizes the drift: $$\frac{\rho(X_{T(t)})+C_{T(t)}}{\lambda(X_{T(t)})+\delta(X_{T(t)})C_{T(t)}^{\gamma}}.$$ It is easy to compute that the optimal strategy satisfies $$\lambda(X_{T(t)})+\delta(X_{T(t)})(1-\gamma)C_{T(t)}^{\gamma}=\rho(X_{T(t)})\delta(X_{T(t)})\gamma C_{T(t)}^{\gamma-1}.$$ Therefore, for any $t>0$, the optimal strategy $C_{t}$ satisfies $$\lambda(X_{t})+\delta(X_{t})(1-\gamma)C_{t}^{\gamma}=\rho(X_{t})\delta(X_{t})\gamma C_{t}^{\gamma-1}.$$ It is clear that the optimal strategy $C_{t}$ is a function of $X_{t}$, say $C^{\ast}(X_{t})$. Then under the optimal strategy, $$dX_{t}=-(\rho(X_{t})+C^{\ast}(X_{t}))dt+dJ_{t},$$ where $J_{t}=\sum_{i=1}^{N_{t}}Y_{i}$ and $N_{t}$ has intensity $\lambda(X_{t-})+\delta(X_{t-})C^{\ast}(X_{t-})^{\gamma}$ at time $t$. When $p(y)=\nu e^{-\nu y}$ is exponential, Zhu [@Zhu] computed $\mathbb{P}(\tau<\infty)$ in closed form by differentiating with respect to $x$ and turning the integro-differential equation into an ordinary differential equation. Here, we omit the derivations and instead refer directly to the results in Zhu [@Zhu]. We have the following result: Assume $p(y)=\nu e^{-\nu y}$, where $\nu>0$. Also assume that the integral $\int_{0}^{\infty}\frac{\lambda(y)+\delta(y)C^{\ast}(y)^{\gamma}}{\rho(y)+C^{\ast}(y)} e^{\nu y-\int_{0}^{y}\frac{\lambda(w)+\delta(w)C^{\ast}(w)^{\gamma}}{\rho(w)+C^{\ast}(w)}dw}dy$ exists and is finite.
Then, $$V(x)=\frac{\int_{x}^{\infty}\frac{\lambda(y)+\delta(y)C^{\ast}(y)^{\gamma}}{\rho(y)+C^{\ast}(y)} e^{\nu y-\int_{0}^{y}\frac{\lambda(w)+\delta(w)C^{\ast}(w)^{\gamma}}{\rho(w)+C^{\ast}(w)}dw}dy} {\int_{0}^{\infty}\frac{\lambda(y)+\delta(y)C^{\ast}(y)^{\gamma}}{\rho(y)+C^{\ast}(y)} e^{\nu y-\int_{0}^{y}\frac{\lambda(w)+\delta(w)C^{\ast}(w)^{\gamma}}{\rho(w)+C^{\ast}(w)}dw}dy}.$$ In Zhu [@Zhu], many examples are given for the state-dependent risk model where the ruin probability without investment has explicit formulas. Let us use an example from [@Zhu] to illustrate that in the presence of investment in research and development, it is still possible to get closed-form formulas. \[StateExI\] Let $\rho(x)=\rho_{0}$, $\lambda(x)=\lambda_{0}(c_{1}x+c_{2})$, and $\delta(x)=\delta_{0}(c_{1}x+c_{2})$, where $\rho_{0},\lambda_{0},\delta_{0},c_{1},c_{2}$ are positive constants. Then, the optimal investment rate $C^{\ast}(x)$ is a constant $C^{\ast}(x)\equiv C_{0}$, where $C_{0}$ is the unique positive solution to the equation: $$\lambda_{0}+\delta_{0}(1-\gamma)C_{0}^{\gamma}=\rho_{0}\delta_{0}\gamma C_{0}^{\gamma-1}.$$ Hence, we have $$\begin{aligned} V(x)&=\frac{\int_{x}^{\infty}\frac{\lambda_{0}+\delta_{0}C_{0}^{\gamma}}{\rho_{0}+C_{0}}(c_{1}y+c_{2}) e^{\nu y-\int_{0}^{y}\frac{\lambda_{0}+\delta_{0}C_{0}^{\gamma}}{\rho_{0}+C_{0}}(c_{1}w+c_{2})dw}dy} {\int_{0}^{\infty}\frac{\lambda_{0}+\delta_{0}C_{0}^{\gamma}}{\rho_{0}+C_{0}}(c_{1}y+c_{2}) e^{\nu y-\int_{0}^{y}\frac{\lambda_{0}+\delta_{0}C_{0}^{\gamma}}{\rho_{0}+C_{0}}(c_{1}w+c_{2})dw}dy} \\ &=\frac{\int_{x}^{\infty}(c_{1}y+c_{2}) e^{\left(\nu-\frac{\lambda_{0}+\delta_{0}C_{0}^{\gamma}}{\rho_{0}+C_{0}}c_{2}\right)y -\frac{\lambda_{0}+\delta_{0}C_{0}^{\gamma}}{\rho_{0}+C_{0}}\frac{c_{1}}{2}y^{2}}dy} {\int_{0}^{\infty}(c_{1}y+c_{2}) e^{\left(\nu-\frac{\lambda_{0}+\delta_{0}C_{0}^{\gamma}}{\rho_{0}+C_{0}}c_{2}\right)y -\frac{\lambda_{0}+\delta_{0}C_{0}^{\gamma}}{\rho_{0}+C_{0}}\frac{c_{1}}{2}y^{2}}dy} \nonumber \\ &=
\frac{\frac{1}{4d^{3/2}}e^{-dy^{2}}\left[\sqrt{\pi}e^{\frac{c^{2}}{4d}+dy^{2}}(ac+2bd) \text{erf}(\frac{2dy-c}{2\sqrt{d}})-2a\sqrt{d}e^{cy}\right]\bigg|_{y=x}^{\infty}} {\frac{1}{4d^{3/2}}e^{-dy^{2}}\left[\sqrt{\pi}e^{\frac{c^{2}}{4d}+dy^{2}}(ac+2bd) \text{erf}(\frac{2dy-c}{2\sqrt{d}})-2a\sqrt{d}e^{cy}\right]\bigg|_{y=0}^{\infty}} \nonumber \\ &=\frac{2a\sqrt{d}e^{cx-dx^{2}}+\sqrt{\pi}e^{\frac{c^{2}}{4d}}(ac+2bd) \text{erfc}(\frac{2dx-c}{2\sqrt{d}})}{2a\sqrt{d}+\sqrt{\pi}e^{\frac{c^{2}}{4d}}(ac+2bd)\text{erfc}(\frac{-c}{2\sqrt{d}})}, \nonumber\end{aligned}$$ where $\text{erf}(x):=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-t^{2}}dt$ is the error function, $\text{erfc}(x):=1-\text{erf}(x)$ is the complementary error function, $a:=c_{1}$, $b:=c_{2}$, and $$c:=\nu-\frac{\lambda_{0}+\delta_{0}C_{0}^{\gamma}}{\rho_{0}+C_{0}}c_{2}, \qquad d:=\frac{\lambda_{0}+\delta_{0}C_{0}^{\gamma}}{\rho_{0}+C_{0}}\frac{c_{1}}{2}.$$ The $\gamma=1$ Case ------------------- When $\gamma=1$, by using the random time change argument, the optimal $C^{\ast}(x)$ satisfies $C^{\ast}(x)=0$ in the region where $\delta(x)\leq\frac{\lambda(x)}{\rho(x)}$ and the “optimal” $C^{\ast}(x)=\infty$ in the region where $\delta(x)>\frac{\lambda(x)}{\rho(x)}$. If we impose a research and development budget constraint $M\in(0,\infty)$, the maximum capacity, then the admissible set of controls is given by $\mathcal{C}_{M}:=\{C\in\mathcal{C}: \sup_{t\geq 0}C_{t}\leq M\}$. The above analysis then implies that $C^{\ast}(x)=0$ in the region $\delta(x)\leq\frac{\lambda(x)}{\rho(x)}$ and $C^{\ast}(x)=M$ in the region $\delta(x)>\frac{\lambda(x)}{\rho(x)}$. Zhu [@Zhu] found many examples of state-dependent dual risk models that have closed-form expressions for the ruin probability without any investment in research and development. Let us consider a simple example from [@Zhu] as an illustration that retains analytical tractability even with investment in research and development.
\[StateExII\] Let $\rho(x)=\rho_{0}(c_{1}x+c_{2})$, $\lambda(x)=\left(\nu+\frac{\lambda_{0}}{1+x}\right)\rho(x)$, and $\delta(x)=\delta_{0}$, where $\rho_{0},c_{1},c_{2},\lambda_{0},\delta_{0}$ are positive constants. We further assume that $$\nu<\delta_{0}<\nu+\lambda_{0}.$$ Then, the optimal $C^{\ast}$ is given by: $$C^{\ast}(x)= \begin{cases} 0 &\text{if $x\leq\frac{\lambda_{0}-\delta_{0}+\nu}{\delta_{0}-\nu}$}, \\ +\infty &\text{if $x>\frac{\lambda_{0}-\delta_{0}+\nu}{\delta_{0}-\nu}$}. \end{cases}$$ Let us define: $$x^{\ast}:=\frac{\lambda_{0}-\delta_{0}+\nu}{\delta_{0}-\nu}.$$ Then, we can compute that for any $y\leq x^{\ast}$, $$\int_{0}^{y}\frac{\lambda(w)+\delta(w)C^{\ast}(w)}{\rho(w)+C^{\ast}(w)}dw =\int_{0}^{y}\left(\nu+\frac{\lambda_{0}}{1+w}\right)dw =\nu y+\lambda_{0}\log(1+y),$$ and for any $y>x^{\ast}$, $$\int_{0}^{y}\frac{\lambda(w)+\delta(w)C^{\ast}(w)}{\rho(w)+C^{\ast}(w)}dw =\nu x^{\ast}+\lambda_{0}\log(1+x^{\ast}) +\delta_{0}(y-x^{\ast}).$$ Therefore, for $x>x^{\ast}$, we have $$\begin{aligned} &\int_{x}^{\infty}\frac{\lambda(y)+\delta(y)C^{\ast}(y)}{\rho(y)+C^{\ast}(y)} e^{\nu y-\int_{0}^{y}\frac{\lambda(w)+\delta(w)C^{\ast}(w)}{\rho(w)+C^{\ast}(w)}dw}dy \\ &=\int_{x}^{\infty}\delta_{0}e^{\nu y-\nu x^{\ast}-\lambda_{0}\log(1+x^{\ast}) -\delta_{0}(y-x^{\ast})}dy \nonumber \\ &=\frac{e^{-\nu x^{\ast}+\delta_{0}x^{\ast}}}{(1+x^{\ast})^{\lambda_{0}}} \frac{\delta_{0}}{\delta_{0}-\nu}e^{-(\delta_{0}-\nu)x}, \nonumber\end{aligned}$$ and for $x\leq x^{\ast}$, we have $$\begin{aligned} &\int_{x}^{\infty}\frac{\lambda(y)+\delta(y)C^{\ast}(y)}{\rho(y)+C^{\ast}(y)} e^{\nu y-\int_{0}^{y}\frac{\lambda(w)+\delta(w)C^{\ast}(w)}{\rho(w)+C^{\ast}(w)}dw}dy \\ &=\int_{x}^{x^{\ast}}\left(\nu+\frac{\lambda_{0}}{1+y}\right)e^{\nu y-\nu y-\lambda_{0}\log(1+y)}dy +\frac{1}{(1+x^{\ast})^{\lambda_{0}}} \frac{\delta_{0}}{\delta_{0}-\nu} \nonumber \\ &=\frac{\nu}{1-\lambda_{0}}\left[(1+x^{\ast})^{-\lambda_{0}+1}-(1+x)^{-\lambda_{0}+1}\right] 
+(1+x)^{-\lambda_{0}}-(1+x^{\ast})^{-\lambda_{0}}+\frac{1}{(1+x^{\ast})^{\lambda_{0}}} \frac{\delta_{0}}{\delta_{0}-\nu}. \nonumber\end{aligned}$$ Hence, we conclude that for $x>x^{\ast}$, we have $$\label{greaterxast} V(x)=\frac{\frac{e^{-\nu x^{\ast}+\delta_{0}x^{\ast}}}{(1+x^{\ast})^{\lambda_{0}}} \frac{\delta_{0}}{\delta_{0}-\nu}e^{-(\delta_{0}-\nu)x}} {\frac{\nu}{1-\lambda_{0}}\left[(1+x^{\ast})^{-\lambda_{0}+1}-1\right] +1-(1+x^{\ast})^{-\lambda_{0}}+\frac{1}{(1+x^{\ast})^{\lambda_{0}}} \frac{\delta_{0}}{\delta_{0}-\nu}},$$ and for $x\leq x^{\ast}$, we have $$\label{lessxast} V(x)=\frac{\frac{\nu}{1-\lambda_{0}}\left[(1+x^{\ast})^{-\lambda_{0}+1}-(1+x)^{-\lambda_{0}+1}\right] +(1+x)^{-\lambda_{0}}-(1+x^{\ast})^{-\lambda_{0}}+\frac{1}{(1+x^{\ast})^{\lambda_{0}}} \frac{\delta_{0}}{\delta_{0}-\nu}} {\frac{\nu}{1-\lambda_{0}}\left[(1+x^{\ast})^{-\lambda_{0}+1}-1\right] +1-(1+x^{\ast})^{-\lambda_{0}}+\frac{1}{(1+x^{\ast})^{\lambda_{0}}} \frac{\delta_{0}}{\delta_{0}-\nu}}.$$ Investing in a Market Index =========================== We have already studied the optimal investment in research and development for a venture capital or high tech company in the dual risk model, and now, let us also add the possibility of the alternative investment in a risky asset in the market, which is a capital market index modelled by a geometric Brownian motion. Let us assume that the market index $S_{t}$ follows a geometric Brownian motion: $$dS_{t}=\mu S_{t}dt+\sigma S_{t}dW_{t},$$ where $\mu,\sigma>0$ and $W_{t}$ is a standard Brownian motion. Assume that at time $t$, the company can invest $\theta_{t}$ shares of the market index $S_{t}$ and $C_{t}$ in research and development. Thus, the wealth process of the company satisfies the dynamics: $$dX_{t}=-(\rho+C_{t})dt+dJ^{C}_{t}+\theta_{t}dS_{t}, \qquad X_{0}=x>0$$ The invested amount in the market index is $A_{t}=\theta_{t}S_{t}$ at time $t$. 
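As an illustrative numerical sketch of the wealth dynamics just introduced (helper names are ours; constant controls $C$ and $A$ for simplicity, so that $\theta_{t}dS_{t}=A(\mu\,dt+\sigma\,dW_{t})$), an Euler scheme with exponential jump sizes reads:

```python
import math, random

def simulate_wealth(x0, rho, C, A, lam, delta, gamma, nu, mu, sigma,
                    horizon=2.0, dt=1e-3, seed=0):
    """Euler sketch of dX = -(rho+C)dt + dJ^C + A*(mu dt + sigma dW), where
    J^C jumps by Exp(nu) amounts with intensity lam + delta*C**gamma and the
    invested amount A = theta*S is held constant.  Returns the terminal
    wealth; a value <= 0 means ruin occurred before the horizon."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    intensity = lam + delta*C**gamma
    while t < horizon and x > 0.0:
        x += -(rho + C)*dt + A*(mu*dt + sigma*rng.gauss(0.0, math.sqrt(dt)))
        if rng.random() < intensity*dt:
            x += rng.expovariate(nu)
        t += dt
    return x

# Sanity check: with A = C = lam = 0 the path is deterministic, x(t) = x0 - rho*t,
# so starting from x0 = 1 with rho = 1 ruin occurs at t = 1.
final = simulate_wealth(1.0, rho=1.0, C=0.0, A=0.0, lam=0.0,
                        delta=1.0, gamma=0.5, nu=1.0, mu=0.1, sigma=0.2)
```

The step size and horizon are arbitrary illustrative choices; the first-order thinning of the jump intensity is only accurate for small `dt`.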
We are interested in finding optimal investment strategies to minimize the probability of ruin: $$V(x):=\inf_{C\in\mathcal{C},A\in\mathcal{A}}\mathbb{P}(\tau<\infty|X_{0}=x),$$ where $\mathcal{C}$ is the same as defined before and $\mathcal{A}$ is the set of admissible strategies for investment in the market index, defined as: $$\begin{aligned} \mathcal{A}&:= \bigg\{A:[0,\infty)\times\Omega\rightarrow\mathbb{R}: \text{$A$ is progressively measurable} \\ & \qquad\qquad \text{and for any $t>0$, $\mathbb{P}\left(\int_{0}^{t}A_{s}^{2}ds<\infty\right)=1$}\bigg\}. \nonumber\end{aligned}$$ The Hamilton-Jacobi-Bellman equation is given by $$\begin{aligned} \label{HTwo} &\inf_{C\geq 0,A\in\mathbb{R}} \bigg\{-(\rho+C)V'(x)+(\lambda+\delta C^{\gamma})\int_{0}^{\infty}[V(x+y)-V(x)]p(y)dy \\ &\qquad\qquad\qquad +A\mu V'(x)+\frac{1}{2}A^{2}\sigma^{2}V''(x)\bigg\}=0, \nonumber\end{aligned}$$ with boundary condition $V(0)=1$. The $0<\gamma<1$ Case --------------------- $V(x)=e^{-\beta x}$ is a solution to the Hamilton-Jacobi-Bellman equation , where $\beta>0$ is the unique solution to the equation: $$\begin{aligned} \label{betaEquation} &\beta\left[\rho+\left(\frac{1}{\delta\gamma}\right)^{\frac{1}{\gamma-1}} \left(\frac{\beta}{1-\int_{0}^{\infty}e^{-\beta y}p(y)dy}\right)^{\frac{1}{\gamma-1}}\right] \\ &\qquad -\left[\lambda+\delta\left(\frac{1}{\delta\gamma}\right)^{\frac{\gamma}{\gamma-1}} \left(\frac{\beta}{1-\int_{0}^{\infty}e^{-\beta y}p(y)dy}\right)^{\frac{\gamma}{\gamma-1}}\right] \left(1-\int_{0}^{\infty}e^{-\beta y}p(y)dy\right) \nonumber \\ &\qquad\qquad\qquad\qquad -\frac{1}{2}\frac{\mu^{2}}{\sigma^{2}}=0. \nonumber\end{aligned}$$ Given $V(x)=e^{-\beta x}$, let $$\begin{aligned} (C^{\ast},A^{\ast})&\in{\rm argmin}\bigg\{-(\rho+C)V'(x)+(\lambda+\delta C^{\gamma})\int_{0}^{\infty}[V(x+y)-V(x)]p(y)dy \\ &\qquad\qquad\qquad\qquad\qquad +A\mu V'(x)+\frac{1}{2}A^{2}\sigma^{2}V''(x)\bigg\}.
\nonumber\end{aligned}$$ Then, we have $$C^{\ast}=\left(\frac{1}{\delta\gamma}\right)^{\frac{1}{\gamma-1}} \left(\frac{\beta}{1-\int_{0}^{\infty}e^{-\beta y}p(y)dy}\right)^{\frac{1}{\gamma-1}},$$ and $$A^{\ast}=\frac{\mu}{\sigma^{2}\beta}.$$ Assume that $V'(x)<0$ and $V''(x)>0$, then, the optimal $C$ and $A$ are given respectively by $$\begin{aligned} &C=\left(\frac{1}{\delta\gamma}\right)^{\frac{1}{\gamma-1}} \left(\frac{V'(x)}{\int_{0}^{\infty}[V(x+y)-V(x)]p(y)dy}\right)^{\frac{1}{\gamma-1}}, \\ &A=-\frac{\mu V'(x)}{\sigma^{2}V''(x)},\end{aligned}$$ and the Hamilton-Jacobi-Bellman equation becomes $$\begin{aligned} &-\left[\rho+\left(\frac{1}{\delta\gamma}\right)^{\frac{1}{\gamma-1}} \left(\frac{V'(x)}{\int_{0}^{\infty}[V(x+y)-V(x)]p(y)dy}\right)^{\frac{1}{\gamma-1}}\right] V'(x) \\ &\qquad +\left[\lambda+\delta\left(\frac{1}{\delta\gamma}\right)^{\frac{\gamma}{\gamma-1}} \left(\frac{V'(x)}{\int_{0}^{\infty}[V(x+y)-V(x)]p(y)dy}\right)^{\frac{\gamma}{\gamma-1}}\right] \nonumber \\ &\qquad\qquad\qquad\qquad\cdot \int_{0}^{\infty}[V(x+y)-V(x)]p(y)dy \nonumber \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad -\frac{1}{2}\frac{\mu^{2}}{\sigma^{2}}\frac{(V'(x))^{2}}{V''(x)}=0. \nonumber\end{aligned}$$ We can see that $V(x)=e^{-\beta x}$, where $\beta>0$ is the unique solution to the equation: $$\begin{aligned} &\beta\left[\rho+\left(\frac{1}{\delta\gamma}\right)^{\frac{1}{\gamma-1}} \left(\frac{\beta}{1-\int_{0}^{\infty}e^{-\beta y}p(y)dy}\right)^{\frac{1}{\gamma-1}}\right] \\ &\qquad -\left[\lambda+\delta\left(\frac{1}{\delta\gamma}\right)^{\frac{\gamma}{\gamma-1}} \left(\frac{\beta}{1-\int_{0}^{\infty}e^{-\beta y}p(y)dy}\right)^{\frac{\gamma}{\gamma-1}}\right] \left(1-\int_{0}^{\infty}e^{-\beta y}p(y)dy\right) \nonumber \\ &\qquad\qquad\qquad\qquad -\frac{1}{2}\frac{\mu^{2}}{\sigma^{2}}=0. 
\nonumber\end{aligned}$$ Recall the definition $g(\beta)=\frac{1}{\beta}\left[1-\int_{0}^{\infty}e^{-\beta y}p(y)dy\right]$. We want to show that the equation $$H(\beta):=\rho-(\delta\gamma)^{\frac{1}{1-\gamma}}\left(\frac{1}{\gamma}-1\right)[g(\beta)]^{\frac{1}{1-\gamma}} -\lambda g(\beta)-\frac{1}{2}\frac{\mu^{2}}{\sigma^{2}}\frac{1}{\beta}=0$$ has a unique positive solution. It is easy to see that $\lim_{\beta\rightarrow 0^{+}}g(\beta)=\mathbb{E}[Y_{1}]$ and $\lim_{\beta\rightarrow\infty}g(\beta)=0$. Thus, $H(\beta)\sim-\frac{1}{2}\frac{\mu^{2}}{\sigma^{2}\beta}<0$ as $\beta\rightarrow 0^{+}$ and $H(\beta)\rightarrow\rho$ as $\beta\rightarrow\infty$. We have already proved that $g(\beta)$ is decreasing in $\beta$. Moreover, $\frac{1}{\beta}$ is also decreasing in $\beta$. Therefore $H(\beta)$ is increasing in $\beta$ and hence there exists a unique positive value $\beta$ so that $H(\beta)=0$. Finally, we can compute that $$C^{\ast}=\left(\frac{1}{\delta\gamma}\right)^{\frac{1}{\gamma-1}} \left(\frac{\beta}{1-\int_{0}^{\infty}e^{-\beta y}p(y)dy}\right)^{\frac{1}{\gamma-1}}, \qquad A^{\ast}=\frac{\mu}{\sigma^{2}\beta}.$$ ### A Verification Theorem Let us recall that the Hamilton-Jacobi-Bellman equation is given by $$\label{eqn:(5.2)} \begin{split} 0&=\mathop{\inf}\limits_{C>0,A\in\mathbb{R}}\bigg\{-(\rho+C)V'(x)+(\lambda+\delta C^{\gamma})\int_{0}^{\infty}[V(x+y)-V(x)]p(y)dy\\ &\hspace{7cm}+A\mu V'(x)+\frac12A^2\sigma^2 V''(x) \bigg\}, \end{split}$$ with boundary condition $V(0)=1$. \[eqn:investment\_verification\_gamma<1\] If $w\in{\rm C}^2_{\rm b}$ is a solution of with $w(0)=1$, such that for any $C\in\mathcal{C}$ and $A\in\mathcal{A}$ $$\label{eqn:w(X)1{tau<infty}=0} \lim_{t\to\infty}w(X_t^{C,A})1_{\{t\le\tau\}}=0\;\text{ \rm a.s.},$$ then $w\le V$.
In addition, if $$C^*(x):=\left(\frac{1}{\delta\gamma}\right)^{\frac{1}{\gamma-1}} \left(\frac{w'(x)}{\int_{0}^{\infty}[w(x+y)-w(x)]p(y)dy}\right)^{\frac{1}{\gamma-1}}\;\text{ \rm and }\;A^*(x)=-\frac{\mu w'(x)}{\sigma^{2}w''(x)},$$ are such that $$dX^*_t=-(\rho+C^*(X^*_t))dt+dJ_t^{C^*(X^*_{t-})}+A^*(X^*_t)dS_t$$ has a solution and $C^*_\cdot:=C^*(X^*_\cdot)\in\mathcal{C}$ and $A^*_\cdot:=A^*(X^*_\cdot)\in\mathcal{A}$, then $w=V$. The inequality follows from the same lines of argument as in Theorem \[VeriThm\]. To show the equality, first notice that since $$\begin{split} (C^*,A^*)&\in\mathop{{\rm argmin}}\limits_{C,A}\bigg\{-(\rho+C)w'(x)+(\lambda+\delta C^{\gamma})\int_{0}^{\infty}[w(x+y)-w(x)]p(y)dy\\ &\hspace{6cm}+A\mu w'(x)+\frac12A^2\sigma^2 w''(x) \bigg\}, \end{split}$$ one can repeat the proof of the second part of Theorem \[VeriThm\] to show $w=V$. \[cor:w=V\_gamma<1\] $w(x)=e^{-\beta x}$ with $\beta$ defined in satisfies and thus $w=V$. We already showed that $w$ is a classical solution of the boundary value problem . The fact that $w$ satisfies (5.8) follows from the same lines of argument as in Theorem \[VeriThm\]. Moreover, since $C^*$ and $A^*$ defined by $$C^*=\left(\frac{1}{\delta\gamma}\right)^{\frac{1}{\gamma-1}} \left(-\frac{\beta}{\int_{0}^{\infty}[e^{-\beta y}-1]p(y)dy}\right)^{\frac{1}{\gamma-1}}\;\text{ \rm and }\;A^*=\frac{\mu}{\sigma^{2}\beta},$$ are admissible controls (constants), by Theorem \[eqn:investment\_verification\_gamma<1\] we have $w=V$. ### Asymptotic Analysis As in Remark \[ParametersI\], let us discuss the dependence of $C^{\ast}$, $\beta$ and hence $V(x)=e^{-\beta x}$ on the parameters $\rho$, $\lambda$ and $\delta$. Since the results are similar to Remark \[ParametersI\], we omit the details and only summarize the results here.
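For concreteness, with exponential jumps $p(y)=\nu e^{-\nu y}$ one has $g(\beta)=\frac{1}{\nu+\beta}$, and the root of $H(\beta)=0$, together with the induced optimal controls, can be computed by bisection on the increasing function $H$ (an illustrative sketch; the function name is ours):

```python
def solve_market_beta(rho, lam, delta, gamma, nu, mu, sigma):
    """Bisection for the unique positive root of the increasing function
    H(b) = rho - (delta*gamma)^{1/(1-gamma)} * (1/gamma - 1) * g(b)^{1/(1-gamma)}
           - lam*g(b) - mu^2/(2 sigma^2 b),
    with g(b) = 1/(nu+b) for exponential jumps; returns (beta, C*, A*)."""
    g = lambda b: 1.0/(nu + b)
    H = lambda b: (rho
                   - (delta*gamma)**(1.0/(1.0 - gamma))*(1.0/gamma - 1.0)
                     * g(b)**(1.0/(1.0 - gamma))
                   - lam*g(b) - 0.5*mu**2/(sigma**2*b))
    lo, hi = 1e-12, 1.0
    while H(hi) < 0.0:            # H -> rho > 0, so this terminates
        hi *= 2.0
    for _ in range(200):          # H is increasing in b
        mid = 0.5*(lo + hi)
        lo, hi = (lo, mid) if H(mid) > 0.0 else (mid, hi)
    beta = 0.5*(lo + hi)
    C = (delta*gamma*g(beta))**(1.0/(1.0 - gamma))
    A = mu/(sigma**2*beta)
    return beta, C, A
```

For the parameters of Figure \[Comparison\] ($\gamma=\frac12$, $\rho=\lambda=\nu=\mu=0.1$, $\delta=1$, $\sigma=0.2$) this gives $\beta\approx 3$.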
Note that $\beta$ satisfies $$\rho-(\delta\gamma)^{\frac{1}{1-\gamma}}\left(\frac{1}{\gamma}-1\right)[g(\beta)]^{\frac{1}{1-\gamma}} -\lambda g(\beta)-\frac{1}{2}\frac{\mu^{2}}{\sigma^{2}}\frac{1}{\beta}=0.$$ \(i) As $\rho\rightarrow 0^{+}$, we have $$\beta\sim\frac{\lambda+\frac{1}{2}\frac{\mu^{2}}{\sigma^{2}}}{\rho}, \qquad\text{and} \qquad C^{\ast}\sim(\delta\gamma)^{\frac{1}{1-\gamma}}\left(\frac{\rho}{\lambda+\frac{1}{2}\frac{\mu^{2}}{\sigma^{2}}}\right)^{\frac{1}{1-\gamma}}.$$ \(ii) As $\delta\rightarrow\infty$, we have $$\beta\sim\frac{\gamma\left(\frac{1}{\gamma}-1\right)^{1-\gamma}}{\rho^{1-\gamma}}\delta, \qquad \text{and} \qquad C^{\ast}\rightarrow\frac{\rho}{\frac{1}{\gamma}-1}.$$ As $\delta\rightarrow 0$, we have $\beta\rightarrow\alpha$, where $\alpha$ is the unique positive value so that $$\rho\alpha+\lambda\int_{0}^{\infty}[e^{-\alpha y}-1]p(y)dy-\frac{1}{2}\frac{\mu^{2}}{\sigma^{2}}=0.$$ Moreover, as $\delta\rightarrow 0$, we have $$C^{\ast}\sim(\delta\gamma)^{\frac{1}{1-\gamma}}\left(\frac{\rho-\frac{1}{2\alpha}\frac{\mu^{2}}{\sigma^{2}}} {\lambda}\right)^{\frac{1}{1-\gamma}}.$$ \(iii) As $\lambda\rightarrow\infty$, we have $$\beta\sim\frac{\lambda}{\rho}, \qquad\text{and}\qquad C^{\ast}\sim(\delta\gamma)^{\frac{1}{1-\gamma}}\left(\frac{\rho}{\lambda}\right)^{\frac{1}{1-\gamma}}.$$ Let us try to understand the asymptotic behavior of the value function and the optimal investment rate as $\gamma\rightarrow 1^{-}$ and $\gamma\rightarrow 0^{+}$. Note that the optimal $C^{\ast}$ and $\beta$ satisfy: $$\rho-\left(\frac{1}{\gamma}-1\right)C^{\ast}-\frac{\lambda}{\delta\gamma}(C^{\ast})^{1-\gamma}-\frac{1}{2}\frac{\mu^{2}}{\sigma^{2}}\frac{1}{\beta}=0,$$ and $$C^{\ast}=\left(\frac{1}{\delta\gamma}\right)^{\frac{1}{\gamma-1}} \left(\frac{\beta}{1-\int_{0}^{\infty}e^{-\beta y}p(y)dy}\right)^{\frac{1}{\gamma-1}}.$$ \(i) As $\gamma\rightarrow 0^{+}$, $C^{\ast}\sim\eta\gamma$ for some $\eta>0$ and $\beta\rightarrow\iota$ for some $\iota>0$.
It is easy to check that $\eta,\iota>0$ satisfy: $$\eta=\frac{1-\int_{0}^{\infty}e^{-\iota y}p(y)dy}{\iota},$$ and $\rho-\eta-\frac{\lambda}{\delta}\eta-\frac{1}{2}\frac{\mu^{2}}{\sigma^{2}}\frac{1}{\iota}=0$, thus $$\rho-\left(1+\frac{\lambda}{\delta}\right)\frac{1-\int_{0}^{\infty}e^{-\iota y}p(y)dy}{\iota} -\frac{1}{2}\frac{\mu^{2}}{\sigma^{2}}\frac{1}{\iota}=0.$$ \(ii) Next, let us consider $\gamma\rightarrow 1^{-}$. If $\delta\mathbb{E}[Y_{1}]>1$, then there exists a unique value $\iota>0$ such that $$\delta=\frac{\iota}{1-\int_{0}^{\infty}e^{-\iota y}p(y)dy}.$$ Assume further that $$\rho-\frac{\lambda}{\delta}-\frac{1}{2}\frac{\mu^{2}}{\sigma^{2}\iota}>0.$$ Then, we have $C^{\ast}\sim\frac{\eta}{1-\gamma}$ and $\beta\rightarrow\iota$ as $\gamma\rightarrow 1^{-}$, where $\eta$ is given by $$\eta=\rho-\frac{\lambda}{\delta}-\frac{1}{2}\frac{\mu^{2}}{\sigma^{2}\iota}.$$ If $\rho-\frac{\lambda}{\delta}-\frac{1}{2}\frac{\mu^{2}}{\sigma^{2}\iota}<0$, the optimal $C^{\ast}\rightarrow 0$ as $\gamma\rightarrow 1^{-}$ and $C^{\ast}\sim\left(\frac{\rho-\frac{1}{2}\frac{\mu^{2}}{\sigma^{2}}\frac{1}{\beta}}{\frac{\lambda}{\delta\gamma}}\right)^{\frac{1}{1-\gamma}}$ and $\beta\rightarrow\iota$ as $\gamma\rightarrow 1^{-}$. We can check that $\eta,\iota$ satisfy the equations: $$\begin{aligned} &\eta=\frac{\lambda}{\delta\left(\rho-\frac{1}{2}\frac{\mu^{2}}{\sigma^{2}\iota}\right)}, \\ &\delta=\frac{\iota}{1-\int_{0}^{\infty}e^{-\iota y}p(y)dy}.\end{aligned}$$ As $\gamma\rightarrow 1^{-}$, we have $$C^{\ast}\sim\frac{1}{e}\left(\frac{\delta\left(\rho-\frac{1}{2}\frac{\mu^{2}}{\sigma^{2}\iota}\right)}{\lambda}\right)^{\frac{1}{1-\gamma}}.$$ If $\rho-\frac{\lambda}{\delta}-\frac{1}{2}\frac{\mu^{2}}{\sigma^{2}\iota}=0$, then, as $\gamma\rightarrow 1^{-}$, we have that $C^{\ast}$ converges to the unique positive solution to the equation: $$\delta x+\lambda(1+\log x)=0.$$ The $\gamma=1$ Case ------------------- Consider now the case where $\gamma=1$, i.e. the profit arrival intensity is $\lambda+\delta C_{t}$, for $x>0$.
Then this is a singular control problem on $C\in\mathcal{C}$ and the value function $V(x)$ satisfies the Hamilton-Jacobi-Bellman equation: $$\begin{split} 0&=\min\Bigg\{-\rho V'(x)+\lambda\int_{0}^{\infty}[V(x+y)-V(x)]p(y)dy \\ &\hspace{2cm}+\inf_{A\in\mathbb{R}}\left\{A\mu V'(x)+\frac{1}{2}A^{2}\sigma^{2}V''(x)\right\},\\ &\hspace{5cm}\delta\int_{0}^{\infty}[V(x+y)-V(x)]p(y)dy-V'(x)\Bigg\} \end{split}$$ with boundary condition $V(0)=1$. Optimizing over $A$, this reduces to the following equation: $$\label{eqn:(5.12)} \begin{split} 0&=\min\Bigg\{-\rho V'(x)+\lambda\int_{0}^{\infty}[V(x+y)-V(x)]p(y)dy-\frac{\mu^2(V')^2}{2\sigma^2V''},\\ &\hspace{5cm}\delta\int_{0}^{\infty}[V(x+y)-V(x)]p(y)dy-V'(x)\Bigg\} \end{split}$$ with boundary condition $V(0)=1$. For $w\in{\rm C}^2_{\rm b}$, we define $$\mathcal{P}:=\bigg\{x\in\mathbb{R}_+\;:\; \delta\int_{0}^{\infty}[w(x+y)-w(x)]p(y)dy-w'(x)>0\bigg\}.$$ According to Fleming–Soner [@Fleming-Soner-book-06 Chapter 8], $w$ is a classical solution of if \(i) On $\mathcal{P}$, $w$ satisfies $$0=-\rho w'(x)+\lambda\int_{0}^{\infty}[w(x+y)-w(x)]p(y)dy-\frac{\mu^2(w')^2}{2\sigma^2w''}$$ \(ii) On $\mathbb{R}_+$, $w$ satisfies $$\label{eqn:variational_inequality} \begin{split} 0&\le-\rho w'(x)+\lambda\int_{0}^{\infty}[w(x+y)-w(x)]p(y)dy-\frac{\mu^2(w')^2}{2\sigma^2w''}\\ 0&\le \delta\int_{0}^{\infty}[w(x+y)-w(x)]p(y)dy-w'(x) \end{split}$$ \(iii) $w(0)=1$. \[lem:classical\_solution\_investment\] $w(x)=e^{-(\beta_1\vee\beta_2)x}$ is a classical solution of where $\beta_1$ is the unique positive solution of $F(\beta)=0$ and $\beta_2$ is the unique positive solution of $G(\beta)=0$ if it exists, or zero otherwise. Here $F$ and $G$ are given by $$\begin{split} F(\beta)&:=\rho\beta+\lambda\int_0^\infty[e^{-\beta y}-1]p(y)dy-\frac12\frac{\mu^2}{\sigma^2}\\ G(\beta)&:=\beta+\delta\int_0^\infty[e^{-\beta y}-1]p(y)dy \end{split}$$ If $G'(0)=1-\delta\mathbb{E}[Y_1]\ge0$, then $\beta_2=0$ and $G(\beta_1)>0$.
This implies that $\mathcal{P}=\mathbb{R}_+$. By straightforward calculations, $$\begin{split} -\rho w'(x)+\lambda\int_{0}^{\infty}[w(x+y)-w(x)]p(y)dy-\frac{\mu^2(w')^2}{2\sigma^2w''}=wF(\beta_1)&=0\\ \delta\int_{0}^{\infty}[w(x+y)-w(x)]p(y)dy-w'(x)=wG(\beta_1)&>0 \end{split}$$ If $G'(0)=1-\delta\mathbb{E}[Y_1]<0$ and $\beta_1> \beta_2$, then $G(\beta_1)>0$ and we have $\mathcal{P}=\mathbb{R}_+$. Similar to the previous paragraph we obtain that $w$ is a classical solution. If $G'(0)=1-\delta\mathbb{E}[Y_1]<0$ and $\beta_1\le \beta_2$, then $F(\beta_2)\ge0$ and we have $\mathcal{P}=\emptyset$. Thus, $$\begin{split} -\rho w'(x)+\lambda\int_{0}^{\infty}[w(x+y)-w(x)]p(y)dy-\frac{\mu^2(w')^2}{2\sigma^2w''}=wF(\beta_2)&\ge0\\ \delta\int_{0}^{\infty}[w(x+y)-w(x)]p(y)dy-w'(x)=wG(\beta_2)&=0. \end{split}$$ ### A Verification Theorem \[thm:verification\_investment\_gamma=1\] Let $w\in{\rm C}^2_{\rm b}$ be a decreasing classical solution of problem such that condition holds. Then, $w(x)\le V(x)$, where $V(x)$ is the value function of the ruin probability minimization problem with investment.\ In addition, if $\mathcal{P}=\mathbb R_+$, then $w(x)=V(x)$. Let $A=\{A_s\}_{s\ge0}$ be an admissible strategy and $C:=\{C_t\}_{t\ge0}$ be a non-decreasing singular function, i.e. $C_t:=\int_0^tdc_s$ where $c_s$ is a non-negative measure. Then, $$X^{C,A}_t=x-\rho t-C_t+J_t^C+\int_0^tA_sdS_s$$ where $J_t^C=\sum_{i=1}^{N_t^C}Y_i$ where $N_t^C$ is a simple point process with compensator $\lambda t+\delta C_t$. Then, by Itô formula for ${\rm C}^2_{\rm b}$ functions, we have $$\begin{split} \mathbb{E}[w(X^{C,A}_{t\wedge \tau})]&=w(x)+\mathbb{E}\Bigg[\int_0^{t\wedge \tau}\bigg(-\rho w'+\lambda\int_{0}^{\infty}[w(x+y)-w(x)]p(y)dy\\ &+A_s\mu w'+\frac12A_s^2\sigma^2 w''\bigg)(X^{C,A}_{s})ds\\ &+\int_0^{t\wedge \tau}\bigg(-w'+\delta\int_{0}^{\infty}[w(x+y)-w(x)]p(y)dy\bigg)(X^{C,A}_{s})dC^0_s\\ &+\sum_{s\le t\wedge \tau}\big(w(X^{C,A}_{s}-\Delta C_s)-w(X^{C,A}_{s})\big)\Bigg]. 
\end{split}$$ Here $C_s=C^0_s+\Delta C_s$ where $C^0_s$ is the continuous part of $C$ and $\Delta C_s$ is the pure jump part of $C_s$. Notice that by the definition of classical solution, holds and therefore, the first two terms inside the expectation above are non-negative. In addition, since $w$ is non-increasing, we have $w(X^{C,A}_{s}-\Delta C_s)-w(X^{C,A}_{s})\ge0$. Thus, $\mathbb{E}[w(X^{C,A}_{t\wedge \tau})]\ge w(x)$. Similar to Theorem \[VeriThm\], sending $t\to\infty$ implies that $w(x)\le \mathbb{P}(\tau<\infty)$. By taking the infimum over $(C,A)$, we obtain $w\le V$.\ Now assume that $\mathcal{P}=\mathbb{R}_+$ and set $C\equiv0$. It follows from the definition of $A^*$ and Itô formula that $$\begin{split} \mathbb{E}[w(X^*_{t\wedge \tau})]&=w(x)+\mathbb{E}\Bigg[\int_0^{t\wedge \tau}\bigg(-\rho w'+\lambda\int_{0}^{\infty}[w(x+y)-w(x)]p(y)dy\\ &+A^*\mu w'+\frac12(A^*)^2\sigma^2 w''\bigg)(X^*_{s})ds\Bigg]=w(x). \end{split}$$ In the above, $X^*$ satisfies $X^*_t=x-\rho t+J_t^\lambda+\int_0^tA^*(X^*_s)dW_s$. If we let $t\to\infty$, we obtain $w(x)=\mathbb{P}(\tau^*<\infty)\ge V(x)$ where $\tau^*$ is the ruin time for process $X^*$. \[cor:w=V\_gamma=1\] The classical solution $w(x)=e^{-(\beta_1\vee\beta_2)x}$ of boundary value problem satisfies the assumption of the verification and thus $w=V$. By the same lines of argument as in Theorem \[VeriThm\], one can show that condition holds true. Therefore, if $\beta_1>\beta_2$, then $\mathcal{P}=\mathbb{R}_+$ and $w=V$ follows from Theorem \[thm:verification\_investment\_gamma=1\]. It remains to show the result for the case when $\beta_1\le\beta_2$, i.e. $\mathcal{P}=\emptyset$. For $c>0$ let $w_c(x)=\mathbb{P}(\tau_c<\infty)$ with $X_t=x-(\rho+c) t+J_t^c+\int_0^tA^*dW_s$ with $A^*=\frac{\mu}{\sigma^2\beta_2}$. Then, immediately we obtain $w_c\ge V$. We want to show that $w_{c}(x)\to w(x)=e^{-\beta_2 x}$ as $c\to\infty$.
Notice that $w_c$ satisfies the equation $$0=-(\rho+c) w_c'(x)+(\lambda+\delta c)\int_{0}^{\infty}[w_c(x+y)-w_c(x)]p(y)dy-\frac{\mu^2(w_c')^2}{2\sigma^2w_c''},$$ with the boundary condition $w_c(0)=1$. The unique bounded solution of the above equation is given by $w_c(x)=e^{-\beta(c)x}$ where $\beta(c)$ satisfies $$\label{eqn:beta(c)} (\rho+c)\beta(c)+(\lambda+\delta c)\int_0^\infty[e^{-\beta(c)y}-1]p(y)dy-\frac{\mu^2}{2\sigma^2}=0.$$ Notice that for any $c>0$, $\beta(c)$ is uniquely determined and is continuous in $c$. In addition, implicit differentiation shows that $\beta(c)$ is increasing: $$\beta'(c)=\frac{F(\beta(c))}{c\left[\rho+c-(\lambda+\delta c)\int_0^\infty ye^{-\beta(c)y}p(y)dy\right]}>0,$$ since $\beta(0)=\beta_1$ forces $F(\beta(c))>0$ for $c>0$, and the bracketed term, being the derivative in $\beta$ of the left-hand side of the defining equation at its unique positive root, is positive. Thus, $\bar\beta:=\lim_{c\to\infty}\beta(c)$ exists and $\bar{\beta}>0$, and after dividing by $c$ and taking the limit as $c\to\infty$, we obtain $$G(\bar\beta)=\bar\beta+\delta\int_0^\infty[e^{-\bar\beta y}-1]p(y)dy=0.$$ Since $G$ has a unique positive root, we must have $\bar\beta=\beta_2$ and therefore, we obtain $V(x)\le \lim_{c\to\infty}w_c(x)=e^{-\beta_2 x}$. Numerical Studies ================= In this section, we carry out numerical studies to illustrate and better understand how the minimized ruin probability and the optimal investment rate depend on the parameters in the dual risk model. In Figure \[Comparison\], we assume that $Y_{i}$ are exponentially distributed so that $p(y)=\nu e^{-\nu y}$ for some $\nu>0$. We also assume that $\lambda\mathbb{E}[Y_{1}]=\frac{\lambda}{\nu}>\rho$ so that the ruin probability is less than $1$ without any investment in research and development. Indeed, the ruin probability is given by $e^{-\alpha x}$, where $$\rho\alpha+\lambda\int_{0}^{\infty}[e^{-\alpha y}-1]\nu e^{-\nu y}dy =\rho\alpha-\lambda\frac{\alpha}{\nu+\alpha}=0,$$ which implies that $\alpha=\frac{\lambda}{\rho}-\nu$.
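Before turning to the figures, the approximation used in the proof of Corollary \[cor:w=V\_gamma=1\] is also easy to check numerically in this exponential setting (an illustrative sketch; the helper name is ours). For exponential jumps, $\beta(c)$ solves $(\rho+c)b-(\lambda+\delta c)\frac{b}{\nu+b}-\frac{\mu^{2}}{2\sigma^{2}}=0$, and $\beta(c)$ increases toward $\beta_{2}=\delta-\nu$ when $\delta>\nu$:

```python
def beta_c(c, rho, lam, delta, nu, mu, sigma):
    """Solve (rho+c)*b - (lam+delta*c)*b/(nu+b) - mu^2/(2 sigma^2) = 0 for
    b > 0 (exponential jumps) by bisection between a sign change: the
    left-hand side is negative near 0 and tends to +infinity."""
    phi = lambda b: (rho + c)*b - (lam + delta*c)*b/(nu + b) - 0.5*mu**2/sigma**2
    lo, hi = 1e-12, 1.0
    while phi(hi) < 0.0:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5*(lo + hi)
        lo, hi = (lo, mid) if phi(mid) > 0.0 else (mid, hi)
    return 0.5*(lo + hi)

# With rho=0.5, lam=0.1, delta=2, nu=0.5, mu=0.1, sigma=1 (so beta_2 = 1.5),
# beta(c) increases in c and approaches beta_2 from below.
rates = [beta_c(c, 0.5, 0.1, 2.0, 0.5, 0.1, 1.0) for c in (1.0, 10.0, 1e4)]
```

The parameter values are an arbitrary choice satisfying $\delta\mathbb{E}[Y_{1}]>1$, i.e. the case where $\beta_{2}>0$.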
For simplicity, we assume that $\gamma=\frac{1}{2}$ so that, as in Example \[ExpExample\], the minimized ruin probability is $V(x)=e^{-\beta x}$, where $$\beta=\frac{\lambda+\sqrt{\lambda^{2}+\rho\delta^{2}}}{2\rho}-\nu,$$ so investing in research and development reduces the ruin probability. Now, if additional investment in a risky asset, e.g. a market index, is allowed, then the ruin probability can be further reduced and the minimized ruin probability becomes $V(x)=e^{-\beta x}$, where, by letting $p(y)=\nu e^{-\nu y}$ and $\gamma=\frac{1}{2}$ in , we deduce that $\beta>0$ is the unique solution to the equation: $$\beta\rho-\frac{\beta\delta^{2}}{4}\frac{1}{(\nu+\beta)^{2}} -\frac{\lambda\beta}{\nu+\beta}-\frac{1}{2}\frac{\mu^{2}}{\sigma^{2}}=0.$$ ![Illustration of the ruin probability without any investment (blue curve with circle markers), the minimized ruin probability with investment in research and development (black curve with triangle markers), and the minimized ruin probability when investment in both research and development and a market index are allowed (red dashed curve). The $x$-axis denotes the initial wealth of the underlying company and the $y$-axis denotes the (minimized) ruin probability. Here, we take $\gamma=\frac{1}{2}$, $\rho=0.1$, $\nu=0.1$, $\lambda=0.1$, $\delta=1$, $\mu=0.1$ and $\sigma=0.2$.[]{data-label="Comparison"}](3plots.pdf) In Figure \[GammaDelta\], we investigate the dependence of the optimal $C^{\ast}$ on the parameters $\gamma$ and $\delta$ given $\rho=2$, $\nu=2$, and $\lambda=0.1$. Let us recall that when investment in research and development is allowed, the optimal investment rate $C^{\ast}$ is the unique positive solution to the following equation: $$\label{CstarEqn} \lambda+(1-\gamma)\delta(C^{\ast})^{\gamma}=\rho\delta\gamma(C^{\ast})^{\gamma-1}.$$ When additional investment in a market index is allowed, the optimal investment rate $C^{\ast}$ for research and development remains the same.
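Since the left-hand side minus the right-hand side of the equation for $C^{\ast}$ is increasing in $C^{\ast}$, tending to $-\infty$ as $C^{\ast}\to0^{+}$ and $+\infty$ as $C^{\ast}\to\infty$, the root is easy to compute by bisection for any $0<\gamma<1$ (an illustrative sketch, with the helper name ours; for $\gamma=\frac12$ it can be cross-checked against the closed form $C^{\ast}=\delta^{2}\rho^{2}/(\lambda+\sqrt{\lambda^{2}+\rho\delta^{2}})^{2}$ of Example \[ExpExample\]):

```python
def optimal_rate(lam, delta, rho, gamma):
    """Bisection for the unique positive root of
    f(C) = lam + (1-gamma)*delta*C**gamma - rho*delta*gamma*C**(gamma-1),
    which is increasing in C, with f(0+) = -inf and f(inf) = +inf."""
    f = lambda C: (lam + (1.0 - gamma)*delta*C**gamma
                   - rho*delta*gamma*C**(gamma - 1.0))
    lo, hi = 1e-12, 1.0
    while f(hi) < 0.0:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5*(lo + hi)
        lo, hi = (lo, mid) if f(mid) > 0.0 else (mid, hi)
    return 0.5*(lo + hi)
```

For instance, `optimal_rate(0.1, 1.0, 2.0, 0.5)` (the parameters of Figure \[GammaDelta\] with $\delta=1$) agrees with the closed form above.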
Notice that from , the optimal $C^{\ast}$ is independent of the distribution of the $Y_{i}$, and therefore the definition of $C^{\ast}$ is independent of the condition under which the minimized ruin probability is less than $1$. Intuitively, this is because $C^{\ast}$ optimizes the drift term via the random time change technique; when the condition is violated, even the optimal $C^{\ast}$ still gives a ruin probability equal to $1$. In Figure \[GammaDelta\], we give the heat map plot of the optimal $C^{\ast}$ as a function of $\gamma$ and $\delta$. Note that the condition is equivalent to $$\label{boundary} \rho-\frac{\lambda}{\nu}-(\delta\gamma)^{\frac{1}{1-\gamma}}\left(\frac{1}{\gamma}-1\right)\frac{1}{\nu^{\frac{1}{1-\gamma}}}<0,$$ for $p(y)=\nu e^{-\nu y}$. When this condition is violated, it corresponds to the darker region in the bottom half of the plot in Figure \[GammaDelta\]. The boundary is achieved when the left-hand side of is zero. In this region, the ruin probability is always $1$ regardless of the investment in research and development. When the condition is satisfied, it corresponds to the upper half of the plot in Figure \[GammaDelta\]. In this region, it is easy to observe that as $\delta$ increases, $C^{\ast}$ increases. In the plot in Figure \[GammaDelta\], the optimal $C^{\ast}$ is less sensitive to changes in the parameter $\gamma$. ![This is the heat map plot of $C^{\ast}$ as a function of $\gamma$ and $\delta$. The darker region in the bottom half of the plot is where the ruin probability is always $1$ regardless of the investment. In the upper half of the plot, the minimized ruin probability is less than $1$ and the heat map shows the optimal $C^{\ast}$. Here, we take $\rho=2$, $\nu=2$, and $\lambda=0.1$.[]{data-label="GammaDelta"}](C_gamma_delta.pdf) In Figure \[RhoLambda\], we investigate the dependence of the optimal $C^{\ast}$ on the parameters $\rho$ and $\lambda$ given $\delta=1$, $\nu=0.1$ and $\gamma=\frac{1}{2}$.
For $\gamma=\frac{1}{2}$, we showed in Example \[ExpExample\] that the optimal $C^{\ast}$ is given by $$C^{\ast}=\frac{\delta^{2}\rho^{2}}{(\lambda+\sqrt{\lambda^{2}+\rho\delta^{2}})^{2}}.$$ When $p(y)=\nu e^{-\nu y}$ and $\gamma=\frac{1}{2}$, the condition reduces to $$\rho-\frac{\lambda}{\nu}-\frac{\delta^{2}}{4\nu^{2}}<0.$$ When this condition is violated, the ruin probability is always $1$ regardless of the investment and it corresponds to the dark region in the right bottom corner of the plot in Figure \[RhoLambda\]. When this condition is satisfied, the heat map plot of the optimal $C^{\ast}$ as a function of $\rho$ and $\lambda$ is illustrated in Figure \[RhoLambda\]. We can see that as $\rho$ increases, the optimal $C^{\ast}$ increases, and as $\lambda$ increases, the optimal $C^{\ast}$ decreases. ![ This is the heat map plot of $C^{\ast}$ as a function of $\rho$ and $\lambda$. In the darker region in the right bottom corner of the plot, this is where ruin probability is always $1$ regardless of the investment. In the rest of the plot, the minimized ruin probability is less than $1$ and it shows the heat map. Here, we take $\nu=0.1$, $\gamma=0.5$ and $\delta=1$.[]{data-label="RhoLambda"}](C_rho_lambda.pdf) Finally, let us do some numerical studies for the state-dependent dual risk model. First, let us consider an example for $0<\gamma<1$. Let us consider the model in Example \[StateExI\]. For simplicity, let us assume that $\gamma=\frac{1}{2}$. Recall that in Example \[StateExI\], $\rho(x)=\rho_{0}$, $\lambda(x)=\lambda_{0}(c_{1}x+c_{2})$, and $\delta(x)=\delta_{0}(c_{1}x+c_{2})$. 
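The closed form for $C^{\ast}$ at $\gamma=\frac12$ makes the trends described for Figure \[RhoLambda\] easy to verify directly; a minimal sketch with the figure's parameters ($\delta=1$, $\nu=0.1$), where the sample values of $\rho$ and $\lambda$ are illustrative:

```python
import math

delta, nu = 1.0, 0.1   # parameters of Figure [RhoLambda]; gamma = 1/2 throughout

def c_star(rho, lam):
    """Closed-form optimal R&D investment rate for gamma = 1/2."""
    return delta**2 * rho**2 / (lam + math.sqrt(lam**2 + rho * delta**2))**2

def ruin_certain(rho, lam):
    """For gamma = 1/2, ruin is certain iff rho - lam/nu - delta^2/(4 nu^2) >= 0."""
    return rho - lam / nu - delta**2 / (4 * nu**2) >= 0

# C* increases in rho and decreases in lam, as in the heat map
c_base = c_star(1.0, 0.1)
c_bigger_rho = c_star(2.0, 0.1)
c_bigger_lam = c_star(1.0, 0.2)
```

With these parameters the "ruin certain" region only opens up for fairly large $\rho$ (here $\rho \geq \lambda/\nu + 25$), consistent with the dark corner of the plot.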
The optimal investment rate $C^{\ast}(x)\equiv C_{0}$ is a constant, given by: $$C_{0}=\frac{\delta_{0}^{2}\rho_{0}^{2}}{(\lambda_{0}+\sqrt{\lambda_{0}^{2}+\rho_{0}\delta_{0}^{2}})^{2}}.$$ The minimized ruin probability is given by $$\label{minRuinState} \frac{2a\sqrt{d}e^{cx-dx^{2}}+\sqrt{\pi}e^{\frac{c^{2}}{4d}}(ac+2bd) \text{erfc}(\frac{2dx-c}{2\sqrt{d}})}{2a\sqrt{d}+\sqrt{\pi}e^{\frac{c^{2}}{4d}}(ac+2bd)\text{erfc}(\frac{-c}{2\sqrt{d}})},$$ where $x$ is the initial wealth, $a:=c_{1}$, $b:=c_{2}$, and $$c:=\nu-\frac{\lambda_{0}+\delta_{0}C_{0}^{1/2}}{\rho_{0}+C_{0}}c_{2}, \qquad d:=\frac{\lambda_{0}+\delta_{0}C_{0}^{1/2}}{\rho_{0}+C_{0}}\frac{c_{1}}{2}.$$ By setting $C_{0}=0$ in , we get the ruin probability without any investment in research and development. In Figure \[StateComparison\], the blue curve with circle markers stands for the ruin probability without investment and the red dashed curve stands for the minimized ruin probability with investment. Neither curve is a pure exponential decay, which reflects the flexibility of the state-dependent model. As observed in [@Zhu], for the state-dependent dual risk model the ruin probability can have subexponential, exponential and superexponential decay in terms of the initial wealth. Moreover, for the state-dependent dual risk model the ruin probability may not be convex in the initial wealth (as the blue curve with circle markers in Figure \[StateComparison\] shows). ![Illustration of the ruin probability without any investment (blue curve with circle markers) and the minimized ruin probability with investment in research and development (red dashed curve). The $x$-axis denotes the initial wealth of the underlying company and the $y$-axis denotes the (minimized) ruin probability.
Here, we take $\gamma=0.5$, $\rho_{0}=1$, $\nu=0.1$, $\lambda_{0}=0.1$, $\delta=1$, $c_{1}=1$, and $c_{2}=1$.[]{data-label="StateComparison"}](2plots_state_dependent.pdf) Next, let us consider an example for $\gamma=1$ for the state-dependent dual risk model. Let us recall that in Example \[StateExII\], $\rho(x)=\rho_{0}(c_{1}x+c_{2})$, $\lambda(x)=\left(\nu+\frac{\lambda_{0}}{1+x}\right)\rho(x)$, and $\delta(x)=\delta_{0}$, and under the assumption that $\nu<\delta_{0}<\nu+\lambda_{0}$, the optimal $C^{\ast}$ is given by $C^{\ast}=0$ if $x\leq x^{\ast}$ and $C^{\ast}=\infty$ if $x>x^{\ast}$, where $$x^{\ast}:=\frac{\lambda_{0}-\delta_{0}+\nu}{\delta_{0}-\nu}.$$ From Example \[StateExII\], with optimal investment, the minimized ruin probability is given by $V(x)$ in if $x>x^{\ast}$ and the minimized ruin probability is given by $V(x)$ in if $x\leq x^{\ast}$, where $x$ is the initial wealth. Without any investment, as in [@Zhu], under the assumption that $\lambda_{0}>1$, we can compute that the ruin probability is given by $$\begin{aligned} V(x)&=\frac{\int_{x}^{\infty}\frac{\lambda(y)}{\rho(y)}e^{\nu y-\int_{0}^{y}\frac{\lambda(w)}{\rho(w)}dw}dy} {\int_{0}^{\infty}\frac{\lambda(y)}{\rho(y)}e^{\nu y-\int_{0}^{y}\frac{\lambda(w)}{\rho(w)}dw}dy} \\ &=\frac{\int_{x}^{\infty}\left(\nu+\frac{\lambda_{0}}{1+y}\right)\frac{1}{(1+y)^{\lambda_{0}}}dy} {\int_{0}^{\infty}\left(\nu+\frac{\lambda_{0}}{1+y}\right)\frac{1}{(1+y)^{\lambda_{0}}}dy} \nonumber \\ &=\frac{\nu(1+x)^{-\lambda_{0}+1}+(\lambda_{0}-1)(1+x)^{-\lambda_{0}}}{\lambda_{0}+\nu-1}, \nonumber\end{aligned}$$ which is strictly between $0$ and $1$. ![Illustration of the ruin probability without any investment (blue curve with circle markers), the minimized ruin probability with investment in research and development (red dashed curve). The $x$-axis denotes the initial wealth of the underlying company and the $y$-axis denotes the (minimized) ruin probability. 
$x^{\ast}$ on the $x$-axis is the critical threshold above which the optimal strategy is to invest as much as possible in R&D, and below which the optimal strategy is not to invest at all in R&D. Here, we take $\nu=0.1$, $\lambda_{0}=1.2$, $\delta_{0}=0.4$, and $\gamma=1$; the values $\rho_{0}=1$ and $c_{1}=c_{2}=1$ do not affect the curves.[]{data-label="StateSingularComparison"}](2plots_state_dependent_singular.pdf) In Figure \[StateSingularComparison\], we plot the ruin probability as a function of the initial wealth without investment (blue curve with circle markers) and the minimized ruin probability as a function of the initial wealth with the optimal investment in research and development (red dashed curve), as in the example of the state-dependent dual risk model described above. In Figure \[StateSingularComparison\], the critical threshold for the optimal investment strategy is $x^{\ast}=3$. When the wealth process is below this threshold $x^{\ast}$, the optimal strategy for investment in R&D is not to invest, and when the wealth process is above this threshold $x^{\ast}$, the optimal strategy for investment in R&D is to invest as aggressively as possible. When $x<x^{\ast}$, from , we can see that $V(x)$ decays polynomially in $x$, and when $x>x^{\ast}$, from , we can see that $V(x)$ decays exponentially in $x$. [99]{} Afonso, L. B., Cardoso, R. M. R. and A. D. Egídio dos Reis. (2013). Dividend problems in the dual risk model. *Insurance: Mathematics and Economics*. **53**, 906-918. Albrecher, H., Badescu, A. and D. Landriault. (2008). On the dual risk model with tax payments. *Insurance: Mathematics and Economics*. **42**, 1086-1094. Avanzi, B., Cheung, E. C. K., Wong, B. and J. K. Woo. (2013). On a periodic dividend barrier strategy in the dual model with continuous monitoring of solvency. *Insurance: Mathematics and Economics*. **52**, 98-113. Avanzi, B., Gerber, H. U. and E. S. W. Shiu. (2007). Optimal dividends in the dual model.
*Insurance: Mathematics and Economics*. **41**, 111-123. Azcue, P. and N. Muler. (2009). Optimal investment strategy to minimize the ruin probability of an insurance company under borrowing constraints. *Insurance: Mathematics and Economics*. **44**, 26-34. Bayraktar, E. and M. Egami. (2008). Optimizing venture capital investment in a jump diffusion model. *Mathematical Methods of Operations Research*. **67**, 21-42. Bayraktar, E. and V. R. Young. (2007). Minimizing the probability of lifetime ruin under borrowing constraints. *Insurance: Mathematics and Economics*. **41**, 196-221. Browne, S. (1995). Optimal investment policies for a firm with a random risk process: exponential utility and minimizing the probability of ruin. *Mathematics of Operations Research*. **20**, 937-958. Casey, M. and R. Hackett. The 10 biggest R&D spenders worldwide. *FORTUNE*. November 17, 2014. Cheung, E. C. K. (2012). A unifying approach to the analysis of business with random gains. *Scandinavian Actuarial Journal*. **2012**, 153-182. Cheung, E. C. K. and S. Drekic. (2008). Dividend moments in the dual risk model: exact and approximate approaches. *ASTIN Bulletin*. **38**, 399-422. Fleming, W. H. and H. M. Soner. *Controlled Markov Processes and Viscosity Solutions*. Springer-Verlag, New York, 1993. Gaier, J. and P. Grandits. (2002). Ruin probabilities in the presence of regularly varying tails and optimal investment. *Insurance: Mathematics and Economics*. **30**, 211-217. Gaier, J. and P. Grandits. (2004). Ruin probabilities and investment under interest force in the presence of regularly varying tails. *Scandinavian Actuarial Journal*. **2004**, 256-278. Gaier, J., Grandits, P. and W. Schachermayer. (2003). Asymptotic ruin probabilities and optimal investment. *Annals of Applied Probability*. **13**, 1054-1076. Gerber, H. U. (1979). *An Introduction to Mathematical Risk Theory*. S. S. Huebner Foundation Monograph, Series No. 8. Hipp, C. (2004).
Asymptotics of ruin probabilities for controlled risk processes in the small claims case. *Scandinavian Actuarial Journal*. **2004**, 321-335. Hipp, C. and M. Plum. (2000). Optimal investment for insurers. *Insurance: Mathematics and Economics*. **27**, 215-228. Knott, A. M. The Trillion-Dollar R&D Fix. *Harvard Business Review*. May 2012 Issue. Liu, C. S. and H. Yang. (2004). Optimal investment for an insurer to minimize its probability of ruin. *North American Actuarial Journal*. **8**, 11-31. Meyer, P. A. (1971). Démonstration simplifiée d'un théorème de Knight. *Séminaire de probabilités de Strasbourg*. **5**, 191-195. Ng, A. C. Y. (2009). On a dual model with a dividend threshold. *Insurance: Mathematics and Economics*. **44**, 315-324. Ng, A. C. Y. (2010). On the upcrossing and downcrossing probabilities of a dual risk model with phase-type gains. *ASTIN Bulletin* **40**, 281-306. Paulsen, J. (2008). Ruin models with investment income. *Probability Surveys*. **5**, 416-434. Promislow, S. D. and V. R. Young. (2005). Minimizing the probability of ruin when claims follow Brownian motion with drift. *North American Actuarial Journal*. **9**, 110-128. Rodríguez, E., Cardoso, R. M. R. and A. D. Egídio dos Reis. (2015). Some advances on the Erlang(n) dual risk model. *ASTIN Bulletin*. **45**, 127-150. Schmidli, H. (2002). On minimizing the ruin probability by investment and reinsurance. *Annals of Applied Probability*. **12**, 890-907. Schmidli, H. (2005). On optimal investment and subexponential claims. *Insurance: Mathematics and Economics*. **36**, 25-35. Yang, C. and K. P. Sendova. (2014). The ruin time under the Sparre-Andersen dual model. *Insurance: Mathematics and Economics*. **54**, 28-40. Yang, H. and L. Zhang. (2005). Optimal investment for insurer with jump-diffusion risk process. *Insurance: Mathematics and Economics*. **37**, 615-634. Wang, Z., Xia, J. and L. Zhang. (2007). Optimal investment for an insurer: The martingale approach.
*Insurance: Mathematics and Economics* **40**, 322-334. Zhu, L. (2015). A state-dependent dual risk model. *arXiv:1510.03920*. [^1]: Available on `https://investor.google.com/financial/tables.html` [^2]: Available on Google Finance at `https://www.google.com/finance` [^3]: Available on Google Finance at `https://www.google.com/finance` [^4]: See Supporting innovation and economic growth: The broad impact of the R&D credit in 2005. Prepared by Ernst & Young LLP for the R&D Coalition. April 2008. Available at `http://investinamericasfuture.org/PDFs/R&DTaxCreditStudy2008final.pdf`
--- abstract: | The energy spectrum of the Coulomb potential with minimal length commutation relations $[X_i, P_j] = i\hbar\{\delta_{ij}(1+\beta P^2) + \beta'P_iP_j\}$ is determined both numerically and perturbatively for arbitrary values of $\beta'/\beta$ and angular momenta $\ell$. The constraint on the minimal length scale from precision hydrogen spectroscopy data is of order of a few GeV$\null^{-1}$, weaker than previously claimed. author: - Sándor Benczik - Lay Nam Chang - Djordje Minic - Tatsu Takeuchi title: 'Hydrogen-atom spectrum under a minimal-length hypothesis' --- Quantum gravity incorporates Newton’s constant as a dimensional parameter that could manifest itself as a minimal length in the system. Recent string theoretic considerations suggest that this length scale might imply an ultraviolet-infrared (UV-IR) correspondence, contrary to the normal perceptions on momentum and spatial separations. Large momenta are now directly tied to large spatial dimensions, which then implies the existence of a minimal length. Earlier studies have focused upon its amelioration of ultraviolet divergences [@earlyQM], but did not take into full account the UV-IR correspondence. There are various ways of implementing such an idea, but the simplest is to suppose that coordinates no longer commute in $D$-dimensional space. This, in turn, leads to a deformation of the canonical commutation relations. In our previous works, we adopted the equivalent hypothesis that the fundamental commutation relations between position and momentum are no longer constant multiples of the identity. In this paper, we report on constraints on the minimal length hypothesis from precision measurements on hydrogenic atoms. This system has a potential that is singular at the origin, and is therefore particularly sensitive to whether there is a fundamental minimal length. Considerations based upon higher-dimensional theories suggest that such lengths may be large [@LXD]. 
To set the context, we note that if in one dimension we have $$\label{MLCR1D} [\hat X, \hat P] = i\hbar(1+\beta \hat P^2) ,$$ where $\beta$ is a small parameter, then the resulting uncertainty relation $ \label{MLUR} (\Delta X)(\Delta P) \ge \frac{\hbar}{2}\{1+\beta (\Delta P)^2\} $ exhibits a form of the UV-IR correspondence, and gives a minimal length $ \Delta X \ge \hbar\sqrt{\beta} $ [@Kempf]. We had examined the harmonic oscillator system under this hypothesis in [@HO], but no real constraint can be obtained on the minimal length, presumably because of the softness of the potential at the origin. An interesting approach is to take the classical limit $\hbar \to 0$ of the commutation relations; it yields an unbelievably strong bound, but its robustness might be questioned [@ClassLim]. We will work in arbitrary $D>1$ dimensions, where takes the tensorial form $$\label{MLCR} [\hat X_i, \hat P_j] = i\hbar\{\delta_{ij}(1+\beta \hat P^2) + \beta' \hat P_i \hat P_j\} ,$$ which, assuming that the momenta commute $ [\hat P_i, \hat P_j] = 0 , $ leads via the Jacobi identity to the nontrivial position commutation relations $$[\hat X_i, \hat X_j] = i\hbar\, \frac{(2\beta-\beta')+ (2\beta+\beta')\beta \hat P^2 } { (1+\beta \hat P^2) } ( \hat P_i \hat X_j - \hat P_j \hat X_i ).$$ The position and momentum operators can be represented by $$\label{repr} \hat X_i = (1+\beta \hat p^2) \hat x_i + \beta' \hat p_i \hat p_j \hat x_j, \qquad \hat P_i = \hat p_i,$$ where the operators $\hat x_i$ and $\hat p_j$ satisfy the canonical commutation relations $ [\hat x_i, \hat p_j] = i\hbar\delta_{ij}. $ The simplest representation is momentum diagonal, $$\hat x_i = i\hbar\frac{\partial}{\partial p_i}, \qquad \hat p_i = p_i .$$ In this representation the eigenvalue equation for the distance squared operator $\hat R^2=\hat X_i \hat X_i$ can be solved exactly.
With $$\label{fromHere} z= \frac{(\beta+\beta')p^2 -1}{(\beta+\beta')p^2 + 1},$$ the eigenvalues $ {r_{n\ell}^2} = \hbar^2(\beta+\beta')\rho_{n\ell}^2$ and eigenfunctions $R_{n\ell}$ are given by (see [@HO] for details) $$\begin{gathered} \rho_{n\ell}^2 = (2n+a+b+1)^2 - (1-\eta)^2 \left( L^2 + \dfrac{ (D-1)^2 }{ 4 } \right) , \notag\\ R_{n\ell}(z) \propto (\beta+\beta')^{D/4} \left.\left(\frac{1-z}2\right)\!\!\right.^{\lambda/2} \left.\left(\frac{1+z}2\right)\!\!\right.^{\ell/2} P_n^{(a,b)}(z) , \notag\end{gathered}$$ where $P_n^{(a,b)}(z)$ are the Jacobi polynomials and $$\begin{aligned} \eta&=\frac{\beta}{\beta+\beta'}, \qquad &a&=\sqrt{ \frac{ [1 + (D-1)\eta]^2 }{ 4 } + \eta^2 L^2 }, \notag\\ b&=\frac{D}{2} + \ell - 1, \qquad &\lambda &=\frac{1 + (D-1)\eta}{2} + a .\end{aligned}$$ Having diagonalized $\hat R^2$, one can express the action of the $\hat R{^{-1}}$ operator on any function of definite angular momentum $\Psi(z) = \sum_{n=0}^\infty f_n R_{n\ell}(z).$ In particular, the Schrödinger equation for the Coulomb problem, $\left(\hat{P}^2/2m - k/\hat{R} \right) \Psi(p) = E\,\Psi(p),$ can be rewritten in the variable $z$ as $$\sum_{n=0}^\infty f_n \left[ \left(\frac{1+z}{1-z}\right) + \epsilon - \frac{2\xi}{\rho_{n\ell}} \right] R_{n\ell}(z) = 0 ,$$ $\xi =\Delta x_{\min}/a_0$ being the ratio of the minimal length $\Delta x_\mathrm{min}=\hbar\sqrt{\beta+\beta'}$ to the Bohr radius $a_0=1/km$, and $\epsilon=\xi^2 (E/E_0)$ the energy in units of the usual ground-state energy $E_0 = -1/2a_0^2m$ times $\xi^2$. 
Using the recursion relation for Jacobi polynomials as well as the orthogonality of the distance eigenfunctions $R_{n\ell}$, the Schrödinger equation is equivalent to a recursion relation for the expansion coefficients, $$f_{n+1} s_{n+1}\hat{a}_{n} + f_n t_n + f_{n-1} s_{n-1}\hat{a}_{n-1} = 0, \label{recur}$$ with $f_{-1}=0$, $s_n = 1-\epsilon + 2\xi/\rho_{n\ell}$, $$\begin{aligned} t_n &= (2-s_n) - s_n \frac{a^2-b^2}{(2n+a+b)(2n+a+b+2)}, \notag\\ \intertext{and} \hat{a}_n & = -\frac{2}{(2n+a+b+2)}\notag\\ &\quad\times\sqrt{ \frac{ (n+1)(n+a+1)(n+b+1)(n+a+b+1) } { (2n+a+b+1)(2n+a+b+3) } }.\notag\end{aligned}$$ For a normalizable solution we must have $ \langle \Psi | \Psi \rangle = \sum_{n=0}^\infty f_n^2(\epsilon) $ finite, thus $f_n$ should converge to zero. A closed-form expression for this sequence cannot be determined. One can observe though that for large $n$ it asymptotically approaches $$f_n \sim C_+\lambda_+^n + C_-\lambda_-^n \qquad\text{with } \lambda_\pm= \frac{1\pm\sqrt{\epsilon}}{1\mp\sqrt{\epsilon}},$$ $C_\pm$ being constants that depend on the minimal length through $\xi$ and the energy eigenvalues through $\epsilon$. This allows one to determine numerically the Coulomb spectrum, by imposing $C_+=0$. As an independent check, we used two different algorithms. First, for fixed minimal length $\xi$, we imposed $f_{n+1}/f_n=\lambda_-$ for sufficiently large $n$ and scanned for the values of $\epsilon$ for which the recursion gives $f_{-1} = 0$. The contents following , and concluding with the first algorithm just described, represent results in unpublished work of Joseph Slawny. We thank him for making these available to us prior to publication. The second algorithm is more direct: for a given minimal length $\xi$, we determined the values $\epsilon$ for which $f_n$ converges to 0. The subtlety is that $C_+$ will never be represented internally as exactly zero, and the term corresponding to it will eventually dominate our sequence. 
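The large-$n$ behavior just described is easy to see numerically. The sketch below iterates the three-term recursion for the illustrative special case $D=3$, $\beta'=0$ (so $\eta=1$) and $\ell=0$ (so $L^2=0$), using the definitions of $s_n$, $t_n$, and $\hat a_n$ given above; for a generic, non-eigenvalue $\epsilon$ the $C_+\lambda_+^n$ term dominates and the ratio $f_{n+1}/f_n$ approaches $\lambda_+$:

```python
import math

# Special case D = 3, beta' = 0 (eta = 1), l = 0, L^2 = 0
D, eta, L2, ell = 3, 1.0, 0.0, 0
a = math.sqrt((1 + (D - 1) * eta)**2 / 4 + eta**2 * L2)   # Jacobi parameter a (= 3/2 here)
b = D / 2 + ell - 1                                        # Jacobi parameter b (= 1/2 here)

def rho_n(n):
    return math.sqrt((2 * n + a + b + 1)**2
                     - (1 - eta)**2 * (L2 + (D - 1)**2 / 4))

def s(n, eps, xi):
    return 1 - eps + 2 * xi / rho_n(n)

def t(n, eps, xi):
    sn = s(n, eps, xi)
    return (2 - sn) - sn * (a**2 - b**2) / ((2 * n + a + b) * (2 * n + a + b + 2))

def ahat(n):
    return -2 / (2 * n + a + b + 2) * math.sqrt(
        (n + 1) * (n + a + 1) * (n + b + 1) * (n + a + b + 1)
        / ((2 * n + a + b + 1) * (2 * n + a + b + 3)))

def tail_ratio(eps, xi, nmax=200):
    """Iterate f_{n+1} s_{n+1} ahat_n + f_n t_n + f_{n-1} s_{n-1} ahat_{n-1} = 0."""
    f_prev, f = 0.0, 1.0        # f_{-1} = 0; the overall scale of f_0 is arbitrary
    for n in range(nmax):
        back = f_prev * s(n - 1, eps, xi) * ahat(n - 1) if n > 0 else 0.0
        f_next = -(f * t(n, eps, xi) + back) / (s(n + 1, eps, xi) * ahat(n))
        f_prev, f = f, f_next
    return f / f_prev

eps, xi = 0.25, 0.1             # illustrative non-eigenvalue choice
lam_plus = (1 + math.sqrt(eps)) / (1 - math.sqrt(eps))
r = tail_ratio(eps, xi)         # close to lam_plus for generic eps
```

The energy scan then amounts to locating the values of $\epsilon$ at which this dominant behavior switches sign, as described next.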
One can still identify the energy eigenvalues from the sign switch which occurs in $C_+$ and correspondingly in the large-$n$ behavior of $f_n$. The algorithms yield consistent results, sampled in Fig. \[plots\]. ![Plot of energy eigenvalues of the Coulomb potential in units of the regular ground-state energy as a function of the minimal length in units of the Bohr radius $\xi = \hbar\sqrt{\beta+\beta'}/a_0$. For principal quantum numbers $n=1,2,3$, two extreme cases are represented: $\beta'=0$ (thick lines) and $\beta=0$ (thin lines). Continuous line, $\ell=0$; dotted line, $\ell=1$; dash-dotted line, $\ell=2$. \[plots\] ](prl1col.eps) We can see that the degeneracy among different angular momentum states is lifted: higher-$\ell$ states get smaller corrections. The only exceptions are the $S$ states for $\beta'>2\beta$ and $D=3$. These states start out with the lowest, negative correction for small $\xi$, but cross the higher-angular-momentum levels as $\xi$ increases. Another important remark, expected but not readily transparent in the representation used, is that the energy values converge to the usual result as the minimal length is taken to zero. In order to get better insight into the observed behavior, one can take another approach. This is also needed because for the very small values of the minimal length that interest us, i.e., several orders of magnitude below the Bohr radius, the numerical convergence becomes very slow and prone to rounding errors. As mentioned before, the choice of momentum representation, while convenient, is not necessary. In relation one can use the “pseudoposition” representation $$\label{FakePos} \hat x_i = x_i, \qquad \hat p_i = \frac{\hbar}{i}\frac{\partial}{\partial x_i},$$ in which the position operators $\hat X_i$ are not diagonal, except for the limit $\beta=\beta'=0$.
In this representation one can treat the $\beta$- and $\beta'$-dependent terms as small, and use perturbation theory to deduce the corrections to the energy spectrum. The operator $\hat R^2$ can be written as $$\label{R2-def} \hat R^2 = \hat r^2 + \xi^2 (\hat R^2)_{\xi^2} + \xi^4 (\hat R^2)_{\xi^4} ,$$ where the first-order correction acts on the radial part of the wave-function through $$\begin{aligned} \label{R2-1} \frac{(\hat R^2)_{\xi^2}}{a_0^2} &= - 2\rho^2{\ensuremath{\frac{\partial^{2}}{\partial\rho^{2}}}} +[\eta(D-1)-3(D+1)]\rho{\ensuremath{\frac{\partial^{}}{\partial\rho^{}}}} \notag\\&\quad + \eta (2L^2+ D^2-D) -D(D+1), \end{aligned}$$ with $\rho = r/a_0$ and $L^2= \ell(\ell+D-1)$. In the expansion of the inverse distance $ \hat R{^{-1}}= \hat r{^{-1}}+ \xi^2 (\hat R{^{-1}})_{\xi^2} + O(\xi^4), $ we expect on dimensional grounds that the first-order correction is a linear combination of terms of form $(1/\rho)\partial_{\rho\rho}, (1/\rho^2)\partial_\rho$, and $1/\rho^3$. Indeed, substituting this form into $\hat R{^{-1}}\hat R^2 \hat R{^{-1}}=\hat 1$ determines uniquely $$\begin{aligned} \label{Omega} \frac{(\hat R{^{-1}})_{\xi^2}}{a_0{^{-1}}} &= \frac{[(2D-5)-\eta(2D-3)](D-1)-4\eta L^2}{4\rho^3} \notag \\ &\quad +\frac{(3-\eta)(D-1)}{2\rho^2}{\ensuremath{\frac{\partial^{}}{\partial\rho^{}}}} + \frac1\rho{\ensuremath{\frac{\partial^{2}}{\partial\rho^{2}}}}.\end{aligned}$$ Thus, expressing the expectation value ${\left<}(1/\rho)\partial_{\rho\rho}{\right>}$ through the Schrödinger equation and ${\left<}(1/\rho^2)\partial_\rho{\right>}$ in terms of ${\left<}1/\rho^3{\right>}$, the first-order corrections to the energy eigenvalues can be written as $$\begin{gathered} \frac{\xi^2}{a_0^2 m} \left\{ \left[\frac{(D-1)(3\eta-1)}4 - \bar\ell(\bar\ell+1)(1-\eta)\right] \!\Bigl<\frac1{\rho^3}\Bigr> \right.\\ \left. 
+ 2 \Bigl<\frac1{\rho^2}\Bigr> - \frac1{\bar n^2}\Bigl<\frac1{\rho}\Bigr> - \left.\frac{(D-1)(1-\eta)\rho^{D-3}\!}{2} [\Pi_{n\ell}(\rho)]^2\right|_0^\infty \right\}\!,\\ \label{ExpVal}\end{gathered}$$ where $\bar n = n + \frac{D-3}2$, $\bar\ell = \ell + \frac{D-3}2$, and $\Pi_{n\ell}(\rho)$ is the unperturbed Coulomb radial wave-function. A note of caution is needed here. While the expansion is apparently in $\xi^2$, a quick calculation of higher-order terms confirms what is expected on dimensional grounds, namely, that the expansion parameter is $\beta/r^2 = \xi^2/\rho^2$. The $\xi$-quartic part in the expansion of $\hat R^2$ contains terms of the type $\xi^4/\rho^2$ and $(\xi^4/\rho)\partial_\rho$. Therefore, the approximation for $\hat R{^{-1}}$ is no longer good for $\rho \lesssim \xi$. In particular, in the actual operator $\hat R{^{-1}}$ there is no singularity at the origin.[^1] Let us estimate the error. The largest discrepancy between and the actual value comes from the expectation value of $1/\rho^3$ calculated over the interval $[0,\rho_c\xi]$ on which the approximation breaks down, where $\rho_c \equiv r_c/a_0\sim 1$. For an angular momentum state $\ell$, this is of the order $$\begin{gathered} \label{error} \xi^2\int_0^{r_c\xi}\frac{R_{n\ell}^2(r)}{r^3} r^{D-1}dr \sim \xi^2\int _0^{r_c\xi} r^{2\ell+D-4} dr \\ \sim \xi^{2\ell+D-1}, \qquad\text{for $\ell>0$ or $D>3$.}\end{gathered}$$ For $D>3$ or $\ell \ne 0$, this contributes only a higher-order term; thus it is safe to use , and we finally arrive at $$\frac{\Delta E_{n\ell}}{E_0} = \frac{2 \xi^2} {\bar n^3} \left[\frac{(D-1)\left(3\eta-1\right)} {4\bar\ell(\bar\ell+1)(\bar\ell+\frac12)} +\frac{\eta+1}{\bar\ell+\frac12}-\frac1{\bar n}\right]. \label{DeltaE}$$ This expression generalizes the result of Ref. [@Brau] for arbitrary $\eta$ and $D$. For the particular case $D=3$ and $\eta = 1/3$ (i.e., $\beta'=2\beta$) it reduces to the one obtained there. 
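The closed-form shift is simple enough to tabulate directly. The sketch below evaluates $\Delta E_{n\ell}/E_0$ from the formula just derived (valid for $\ell>0$ or $D>3$) and illustrates two statements made in the text: the coefficient of the $1/\bar\ell(\bar\ell+1)(\bar\ell+\frac12)$ term vanishes for $\eta=1/3$ in $D=3$, and higher-$\ell$ states receive smaller corrections (the values of $n$, $\ell$, and $\xi$ below are illustrative):

```python
# Perturbative shift Delta E_{n l}/E_0, valid for l > 0 or D > 3
def delta_E_over_E0(n, l, D, eta, xi):
    nbar = n + (D - 3) / 2
    lbar = l + (D - 3) / 2
    bracket = ((D - 1) * (3 * eta - 1) / (4 * lbar * (lbar + 1) * (lbar + 0.5))
               + (eta + 1) / (lbar + 0.5)
               - 1 / nbar)
    return 2 * xi**2 / nbar**3 * bracket

xi = 1e-3                                      # minimal length well below the Bohr radius
shift_l1 = delta_E_over_E0(3, 1, 3, 1.0, xi)   # n=3, l=1, beta'=0 (eta=1)
shift_l2 = delta_E_over_E0(3, 2, 3, 1.0, xi)   # higher l: smaller correction
```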
Moreover, it is in excellent agreement with our numerical results. ![ Comparison of different results for (a) 2$P$ states with $\eta=0,1$ and (b), (c) 1$S$ states with $\eta = 0, 1/3, 1$. Solid lines, numeric result; dotted lines, perturbative result; dash-dotted lines, Ref. [@Brau]; double-dashed lines, Ref. [@AY]. The perturbative expression is given for (a) by and for (b), (c) by formula , with the coefficient $C$ chosen such that it agrees with the numerics at $\xi^2=10^{-6}$. \[compare\] ](prl2col.eps) \[See Fig. \[compare\](a) for the case of the 2$P$ level.\] When $\ell=0$ and $D=3$, the integral is infinite. We can use only the part of that can be trusted, i.e., we have to cut off the expectation value integral ${\left<}1/\rho^3{\right>}$ at $\rho=\rho_c\xi$. The leading-order terms are $$\label{DeltaEs} \frac{\Delta E_{n\ell}}{E_0} = \frac{4(3\eta-1)}{n^3} \xi^2E_1(\rho_c\xi) + C\xi^2 + O(\xi^3),$$ where $E_1(\rho_c\xi)= -\ln \xi - (\gamma + \ln \rho_c) + O(\xi)$ is the exponential integral function. The coefficient of the $\xi^2$ term gets contributions from several parts. First, there are the remaining terms in . Second, the actual value of $\hat R{^{-1}}$ is bounded on the interval $[0, \rho_c\xi]$, so by cutting off the integral at $\rho_c\xi$ we are neglecting another term of order $\xi^2$. Lastly, the exact choice of the cutoff value $\rho_c$ contributes another $\xi^2$ term. Because we do not know the exact form of $\hat R{^{-1}}$, we cannot calculate analytically the second of these contributions. When needed, $C$ can be determined numerically, by fitting relation to the numerical results at a sufficiently low value of $\xi^2$.[^2] When compared to the numerical results, the behavior of the energy as a function of minimal length is nicely reproduced in Figs. \[compare\](b) and \[compare\](c). Our results disagree with Ref. [@Brau]. The difference is well explained by the neglect there of all but linear terms in $\beta$. 
These terms critically affect the small-$r$ behavior of $\hat R{^{-1}}$, and cannot be neglected. Reference [@AY] arrives at a different expression, which is independent of $\eta$. However, we could neither account for the discrepancy nor reproduce those results. We can finally set out to determine the constraint on the minimal length $\Delta x_{\min}$ from precision hydrogen spectroscopy. A naive estimate, obtained by imposing that the corrections are smaller than the experimental error on the value of the hydrogen 1$S$-2$S$ splitting, gives $(\Delta x_{\min})^{-1} \gtrsim 300$ GeV (cf. [@Brau; @AY]). Unfortunately, this estimate would be correct only if the measured value of the physical observable agreed with the theoretical prediction and the main source of error were the experimental one. This is certainly not the case for the 1$S$-2$S$ splitting in hydrogen: known to 1.8 parts in $10^{14}$, it is one of the most precisely measured quantities today and is considered a *de facto* standard [@1S2S]. The value of the Rydberg constant is determined using this measurement as an input, and thus the theoretical uncertainty is orders of magnitude above the experimental one. A better estimate is obtained by including contributions of the (hypothetical) minimal length in the Lamb shifts. The strongest constraint is expected from the 1$S$ Lamb shift, being the one determined most precisely and getting the largest correction. The measured 1$S$ state hydrogen Lamb shift of $ L_{1S}^\text{expt} = 8172.837(22)$ MHz [@1Sexp] is larger than today’s best theoretical prediction $L_{1S}^\text{theor} = 8172.731(40)$ MHz [@1Stheo] by about $5\sigma$ in terms of the experimental uncertainty.
If we attribute the discrepancy entirely to the minimal length correction to the 1$S$ state, the bound as a function of $\eta$, obtained using the first two terms in , is shown in Fig. \[constraint\]. ![Constraint on the minimal length obtained as a function of $\eta$, including the two highest-order terms (continuous line) and just the leading-order term (dash-dotted line). \[constraint\] ](prl3col.eps) In terms of $(\Delta x_{\min})^{-1}$, it is 1.75 GeV for $\eta=1/3$ and increases to 6.87 GeV for $\eta = 1$. Below $\eta = 1/3$, the constraint relaxes rapidly. Indeed, in this case the leading-order term in is negative, and only the contribution from the next term can account for the observed difference. As a comparison, including only the leading term, we can obtain a bound only for $\eta>1/3$, with consistent results for $\eta\gtrsim 0.5$. We should point out that the theoretical Lamb shift predictions are somewhat frail because of the uncertainties in the proton charge radius [@2Ptheo]. These are of the same order of magnitude as the effects discussed here; thus one should consider the values in Fig. \[constraint\] rather as upper limits for the minimal length. There is also the possibility of using muonium spectroscopy, but the current limits are still weak for our purposes. Details and other implications for QED are under investigation. The authors would like to thank F. Brau, M. Koike, J. Slawny, and Y. P. Yao for insightful discussions. This research is supported in part by the US Department of Energy grant DE–FG05–92ER40709. [99]{} W. Heisenberg, Z. Phys.  [**110**]{}, 251 (1938); H. S. Snyder, Phys. Rev. [**71**]{}, 38 (1947); [**72**]{}, 68 (1947); C. N. Yang, *ibid.* [**72**]{}, 874 (1947). N. Arkani-Hamed, S. Dimopoulos, and G. R. Dvali, Phys. Lett. B [**429**]{}, 263 (1998). A. Kempf, G. Mangano, and R. B. Mann, Phys. Rev. D [**52**]{}, 1108 (1995). L. N. Chang, D. Minic, N. Okamura, and T. Takeuchi, Phys. Rev. D [**65**]{}, 125027 (2002); [**65**]{}, 125028 (2002). S. Benczik, L. N. Chang, D.
Minic, N. Okamura, S. Rayyan, and T. Takeuchi, Phys. Rev. D [**66**]{}, 026003 (2002); e-print arXiv:hep-th/0209119. F. Brau, J. Phys. A [**32**]{}, 7691 (1999). R. Akhoury and Y. P. Yao, Phys. Lett. B [**572**]{}, 37 (2003). M. Niering *et al.*, Phys. Rev. Lett. [**84**]{}, 5496 (2000). C. Schwob *et al.*, Phys. Rev. Lett. [**82**]{}, 4960 (1999). S. Mallampalli and J. Sapirstein, Phys. Rev. Lett. [**80**]{}, 5297 (1998). See M. I. Eides, H. Grotch, and V. A. Shelyuto, Phys. Rep.  [**342**]{}, 63 (2001) for a review. [^1]: This is quite general. Any $\hat P$-dependent commutation relation is expected to expand like and to exhibit this behavior. [^2]: For $D>3$, ${\left<}1/\rho^3 {\right>}$ integrals are convergent, and the approximate solutions then agree nicely with the corresponding numerical results, even for $\ell = 0$.
--- abstract: 'We study the dynamics of entropy in a time dependent potential and explore how disorder influences this entropy flow. We show that disorder can trap entropy at the edge of the atomic cloud enabling a novel cooling method. We demonstrate the feasibility of our cooling technique by analyzing the evolution of entropy in a one-dimensional Fermi lattice gas with a time dependent superlattice potential.' author: - 'F. Nur Ünal' - 'Erich J. Mueller' bibliography: - 'EntropyNJP.bib' title: Cooling Quantum Gases with Entropy Localization --- Introduction ============ Disorder, often treated as a nuisance to be avoided, can be a great resource. For example, the quantum Hall effect is widely believed to only be observable because of disorder [@IQHDisorder]. More recently, there have been proposals to use disorder to stabilize topological orders against temperature [@DisorderTopology; @DisorderTopology2]. Here, we propose a disorder-enabled cooling technique for cold atoms, which takes advantage of the theoretical [@LocOrso; @LocAbanin; @DeMarcoTheory] and experimental [@DeMarcoExperiment; @BlochMBL] developments involving many-body localization in ultracold atoms. In discussing “cooling" of cold atomic systems, the relevant quantity is often entropy rather than temperature [@HoSqueezeEntropy; @KohlDimple; @EntropyScalettar; @EntropyShenoy; @EntropyWalraven; @HoMarch; @DeMarcoCooling; @HuletCooling; @CoolingWithDisorder; @MuellerDimple]. Temperature can be radically reduced by adiabatically changing system parameters [@HoHeating; @IntCooling; @FeshbachCooling; @AdiabaticCoolFermiLatt] (for example the depth of an optical lattice), but, there is no utility in lowering the temperature if the other energy scales in the system are commensurably reduced. One prevalent idea in the field involves cooling by spatially segregating the entropy [@ZwergerUnitaryFermi]. 
This approach is most thoroughly worked out in the context of dimple traps [@KohlDimple], where a deep potential well yields a low-entropy region in the midst of a shallow trap. Here, we pursue the idea of using disorder to control the spatial distribution of entropy in a trapped atomic cloud. It is straightforward to create atomic clouds with a central low-entropy region. For example, a Fermi lattice gas with a band insulating core will have most of its entropy at the edge, which is metallic. The low-entropy region, however, is boring. It has a gap to excitations. One needs a way to adiabatically transform the insulating state into something more interesting without allowing the entropy to flow into that region. One set of proposals involves removing the high-entropy atoms while simultaneously changing the confining potential [@HoSqueezeEntropy; @KohlDimple]. Here, we propose an alternative, namely using disorder to prevent the diffusion of entropy from the edge of the cloud. Indeed, Anderson showed that, in the absence of interactions, sufficiently strong disorder prevents transport, and would freeze the spatial distribution of entropy [@Anderson; @Anderson2]. Half a century later, Basko et al. coined the phrase ‘many-body localization’ showing that this insulating behavior survives weak interactions at finite temperature [@Basko]. Further experimental and theoretical studies confirmed these results, and showed they persist under very general conditions [@LocInguscio; @LocBouyer; @LocDeMarco; @LocAspect; @DeMarcoExperiment; @DeMarcoTheory; @BlochMBL; @MattMuller]. One expects that generically disorder can be used to prevent entropy flow, even in the presence of interactions. To demonstrate our idea, we investigate the dynamics of a simple model of harmonically trapped one-dimensional spin-polarized fermions. A superlattice of period two results in insulating behavior near the middle of the trap and metallic behavior at the edges. 
Due to the location of the low-energy excitations, most of the entropy in the system resides at the edges. We subsequently eliminate the gap in the bulk by ramping down the superlattice potential. This potentially results in a low-entropy metallic state in which interactions can lead to novel quantum phenomena. We show that, in the absence of disorder, ramping down the superlattice affects the entropy mainly in two ways. First, due to the harmonic confinement, entropy flows into the center. Second, for finite sweep rates, removing the superlattice potential generates some entropy. We find that sufficiently strong disorder prevents the entropy flow, effectively cooling the central region. We study the entropy dynamics for different sweep rates and compare the degree of entropy localization for different disorder strengths. We also analyze the entanglement entropy in the system to characterize the entropy generation. Finally, we comment on the effect of interactions and experimental considerations. The Model {#Sec model} ========= The Hamiltonian of our 1D noninteracting system of spinless fermions can be written as $$\begin{gathered} \label{hamiltonian} \frac{\mathcal{H}(t)}{J}=\sum_{i=-N/2}^{N/2} -(a_i^{\dag}a_{i+1}+a_{i+1}^{\dag}a_i)+\frac{1}{2}\omega^2 i^2 a_i^{\dag}a_i \\ + \Delta(t) (-1)^i a_i^{\dag}a_i + \zeta_i a_i^{\dag}a_i ,\end{gathered}$$ with nearest-neighbor tunneling rate $J$ and dimensionless harmonic trap frequency $\omega$. The operator $a_i^{\dag}\,(a_i)$ creates (annihilates) a particle at site $i$. The superlattice strength is parameterized by the dimensionless $\Delta$, which we take to be time dependent. For $\Delta\gg 1$, one finds two bands separated by a gap of order $2\Delta$. We introduce uncorrelated disorder $\zeta_i$, uniformly distributed with $|\zeta_i|\leq\zeta$, where $\zeta$ determines the disorder strength. Initially, we assume the system is in thermal equilibrium with chemical potential $\mu$ and temperature $T$.
This Hamiltonian can be represented as a matrix. We diagonalize $\mathcal{H}$, finding single-particle eigenstates $\Psi^{(n)}$ and eigenvalues $\varepsilon_n$. The entropy of the system is $S=-\sum_n \left[f_n\ln(f_n)+(1-f_n)\ln(1-f_n)\right]$, where $f_n=(1+e^{(\varepsilon_n-\mu)/kT})^{-1}$ is the Fermi-Dirac distribution. We find it convenient to not include Boltzmann’s constant. It is then natural to introduce a local entropy density $$\label{Eq entropy} S_i=-\sum_{n} |\Psi_i^{(n)}|^2 \left(f_n\ln(f_n)+(1-f_n)\ln(1-f_n)\right),$$ so that $S=\sum_i S_i$. As we discuss later, this von Neumann definition does not capture entropy associated with quantum entanglement. For thermal ensembles, however, it is a good definition. In our simulations, we take $N=200$ sites, and tune the gap $\Delta$, trap frequency $\omega$ and chemical potential $\mu$ so that the system supports metallic excitations at the edges with a bulk insulator in between. We study how the entropy density evolves with time. In any isolated quantum system (interacting or non-interacting) the total entropy cannot change: a pure state cannot evolve into a mixed state. Regardless of how adiabatic the evolution is, no information is lost in quantum dynamics. Hence, no unitary evolution can change the von Neumann entropy in an isolated system. The spatial distribution of the entropy can, however, evolve. We will largely be considering a non-interacting gas, where the occupation factors $f_n$ in Eq. (\[Eq entropy\]) will be constant, but the wave functions $\Psi_i^{(n)}$ may evolve with time. This time-dependent Hartree-Fock approximation, which was first proposed by Dirac [@dirac1930], is exact for a non-interacting gas. However, even in the case of interactions, it is accurate for describing modes which have frequencies large compared to the inverse collision time.
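As a concrete illustration of Eqs. (\[hamiltonian\]) and (\[Eq entropy\]), the spectrum and local entropy density can be computed in a few lines. The sketch below is ours, not the authors' code; the default parameter values are the ones quoted in the text ($N=200$, $\Delta=3$, $\omega=0.035$, $\mu=0.75$, $T=0.1$, energies in units of $J$, $k_B=1$), and the function name is a placeholder.

```python
import numpy as np

def entropy_density(N=200, omega=0.035, delta=3.0, zeta=0.0, mu=0.75, T=0.1, seed=0):
    """Local entropy density S_i of Eq. (2) for the superlattice Hamiltonian, Eq. (1).

    Energies are in units of the tunneling rate J and k_B = 1, matching
    the conventions in the text.  Returns an array of length N + 1.
    """
    rng = np.random.default_rng(seed)
    sites = np.arange(N + 1) - N // 2                     # i = -N/2 ... N/2
    diag = 0.5 * omega**2 * sites**2                      # harmonic trap
    diag += delta * np.where(sites % 2 == 0, 1.0, -1.0)   # period-2 superlattice, Delta (-1)^i
    diag += rng.uniform(-zeta, zeta, size=sites.size)     # uncorrelated disorder, |zeta_i| <= zeta
    H = np.diag(diag) - np.diag(np.ones(sites.size - 1), 1) \
                      - np.diag(np.ones(sites.size - 1), -1)
    eps, psi = np.linalg.eigh(H)                          # columns psi[:, n] are eigenstates
    f = 1.0 / (1.0 + np.exp((eps - mu) / T))              # Fermi-Dirac occupations
    with np.errstate(divide="ignore", invalid="ignore"):  # 0 ln 0 handled below
        s_n = -(f * np.log(f) + (1 - f) * np.log(1 - f))
    s_n = np.nan_to_num(s_n)
    return np.abs(psi)**2 @ s_n                           # S_i = sum_n |psi_i^(n)|^2 s_n
```

For zero disorder this reproduces the qualitative picture described here: $S_i$ is negligible in the gapped central region and peaks in the metallic regions toward the edges of the cloud.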
Physically, we expect that, given enough degrees of freedom, an isolated quantum system should be capable of thermalizing [@ThermalGoldstein; @ThermalPolkovnikov; @ThermalTasaki; @ThermalDeutsch; @ThermalSred1; @ThermalSred2]. Thermalization requires entropy growth, so this physical expectation is at odds with the mathematical statement that the entropy is constant. One solution to this puzzle is to consider the [*entanglement entropy*]{} of a subregion (see Section \[Sec entglmt entropy\] and Ref. [@DeutschNJP]). For generic quantum states the entanglement entropy of a small subregion is proportional to the volume of that region, allowing one to define a quantum entropy density. This quantum entropy density generically increases with time. The total entropy, as conventionally defined, is not equal to the volume integral of this quantum entropy density. There are alternative procedures which allow one to define entropy densities which increase with time in isolated systems [@Ueda; @EntQuench; @Polkovnikov]. In Section \[Sec entropy density\], we explore the entropy redistribution, as captured by Eq. (\[Eq entropy\]). In Section \[Sec entglmt entropy\], we calculate the evolution of the entanglement entropy of the central region. These are both valid ways of defining entropy density, and reveal different aspects of the dynamics. We show that regardless of the definition of entropy, the disorder reduces the entropy growth in the center of the cloud. Results {#Sec Results} ======= Entropy Density {#Sec entropy density} --------------- The dark blue lines in Fig. \[Sx figs\] show the initial entropy density with and without disorder. Clearly, the entropy is initially concentrated at the metallic edges. One hopes that the low entropy density at the center of the trap can be used as a resource. As previously explained, in order to make use of this resource we need to eliminate the gap by reducing $\Delta$ to zero.
Thus, we wish to calculate how the entropy evolves as we change the superlattice strength. In the absence of scattering, we can use the single-particle Schrödinger equation to evolve the wave functions, keeping the occupation factors fixed. We assume a linear ramp, $$\Delta(t)= \begin{cases} \Delta_0-\frac{\Delta_0}{\tau}t, & \quad 0\leq t\leq\tau,\\ 0, & \quad t>\tau. \end{cases}$$ where larger $\tau$ corresponds to a slower sweep. In the disorder-free case, entropy defined by Eq. (\[Eq entropy\]) flows in from the edges as we close the gap. This behavior is reasonable as we know a fully adiabatic ramp would result in a thermal state, whose entropy density is peaked at the center of the cloud. We caution, however, that true adiabaticity requires extremely slow sweeps. The flow of entropy towards the center is nonetheless robust, occurring even in relatively fast sweeps. Fig. \[Sx figs\] shows that, as anticipated, strong disorder ($\zeta=1.5$) localizes the entropy at the edge of the cloud during the evolution. Although the local entropy density is low, the state is nominally non-thermal. The states $\Psi^{(n)}$ at the final time are not energy eigenstates. Nonetheless, in the central region, the system will behave in many ways similar to a low temperature state. The fluctuations will be small. ![ The fraction of the entropy in the central region of the trap ($-60<i<60$ for $N=200$ sites). Here, the superlattice strength is $\Delta=3$, trap frequency is $\omega=0.035$, chemical potential is $\mu=0.75$ and temperature is $T=0.1$. The parameters are given in dimensions of the tunneling rate $J$. The dots and the diamonds correspond to entropy immediately after the sweep $t=\tau$ and the solid lines correspond to $t=11\tau$ where we allow the system to evolve further after the sweep is complete. We show two different disorder strengths, $\zeta=1$ (dark) and $\zeta=2$ (light). 
For weaker disorder, there is significant entropy flow following an abrupt ramp, so to achieve adiabaticity the ramp must be slower.[]{data-label="S Central fig"}](Fig2){width="48.00000%"} We find that the entropy evolution is sensitive to sweep rate ($1/\tau$). In a fast sweep (small $\tau$), where the wave functions do not have enough time to adjust themselves to the new Hamiltonian, the entropy distribution immediately after the sweep would be similar to the initial configuration, i.e. trapped at the edges. Fig. \[S Central fig\] demonstrates these dynamics at time $t=\tau$ for two different disorder strengths, $\zeta=1$ (dots) and $\zeta=2$ (diamonds), and the entropy is initially concentrated at the edges. We consider the relative percentage of the entropy that resides in the center of the trap (i.e. between $-60<i<60$ for $N=200$ sites). This central region holds $75\%$ of the particles. Strong disorder ($\zeta=2$) enhances the adiabaticity of the process and the central entropy percentage becomes largely independent of sweep rate. However, for weaker randomness ($\zeta=1$), the central entropy seems to increase initially as we make the sweep slower and then saturates to a finite value. ![ The fraction of the final entropy in the central region of the trap ($-60<i<60$ for $N=200$ sites) vs. disorder strength. The superlattice strength is $\Delta=3$, trap frequency is $\omega=0.035$, temperature is $T=0.1$, and chemical potential is fixed at $\mu=0.75$. We take $\tau=100$ and let the system evolve for another $10\tau$ after ramping down the superlattice. Initially for a clean system, $56\%$ of the total entropy lies in the central region. Increasing disorder quickly freezes the entropy at the edges. The inset displays the corresponding localization lengths. When the localization length is around 2 lattice sites, the central entropy percentage is already reduced to a third of the disorder-free case. []{data-label="Scentral vs Dis. fig"}](Fig3){width="48.00000%"} One important concern is that the system continues to evolve following the sweep, with entropy continuing to spread towards the center. In order to study this effect, we let the system evolve for another $10\tau$ after the sweep is completed, i.e. the total time of the evolution is $11\tau$. For weaker disorder strength, the entropy evolves significantly after the sweep. After a long time, the central entropy density is nearly independent of sweep rate, saturating near $18\%$ for $\zeta=1$. A considerable percentage of the entropy still remains frozen at the edges of the cloud. For strong disorder, the entropy, as defined by Eq. (\[Eq entropy\]), fails to evolve following the sweep. Moreover, the amount of entropy which flows in during the removal of the superlattice potential decreases as the disorder increases. For $\zeta=2$, only $10\%$ of the total entropy flows into the middle of the trap. We consider this dependence of the final central entropy on the disorder strength in Fig. \[Scentral vs Dis. fig\]. In order to analyze the strength of the disorder, we also display the corresponding localization length in the inset of Fig. \[Scentral vs Dis. fig\], which is calculated by analyzing the exponential tails of the wave functions [@AspectLocLength; @LocAspect]. In the disorder-free case, almost $60\%$ of the total entropy resides in the center following the sweep, which is compatible with the length of this region. When the localization length is around two lattice sites, the central entropy percentage is already reduced to a third of the disorder-free case. In fact, for the parameters given in Fig. \[S Central fig\] and Fig. \[Scentral vs Dis. fig\], the entropy per particle is reduced by a factor of 3 to 10 in the center. These results demonstrate that when the system is pre-cooled with conventional techniques, our disorder-induced cooling mechanism can be employed to reach temperatures much lower in the center than in the rest of the cloud.
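The sweep protocol and post-sweep evolution described above amount to propagating each occupied single-particle state through the time-dependent Hamiltonian while holding the occupations $f_n$ fixed. Below is a minimal sketch of this procedure; it is ours, with illustrative step sizes, and it uses exact diagonalization of the instantaneous Hamiltonian as the propagator, which is affordable at these system sizes.

```python
import numpy as np

def build_H(N, omega, delta, disorder):
    """Single-particle matrix for Eq. (1): N + 1 sites, i = -N/2 ... N/2."""
    sites = np.arange(N + 1) - N // 2
    diag = (0.5 * omega**2 * sites**2
            + delta * np.where(sites % 2 == 0, 1.0, -1.0)
            + disorder)
    return np.diag(diag) - np.diag(np.ones(N), 1) - np.diag(np.ones(N), -1)

def sweep(psi0, omega, disorder, delta0=3.0, tau=100.0, dt=0.5):
    """Propagate the columns of psi0 (occupied eigenstates at t = 0) through
    the linear ramp Delta(t) = Delta0 (1 - t/tau), applying the exact
    mid-step propagator exp(-i H dt) at each step (hbar = 1, energies in J)."""
    N = psi0.shape[0] - 1
    psi = psi0.astype(complex)
    nsteps = int(round(tau / dt))
    for k in range(nsteps):
        t_mid = (k + 0.5) * dt
        H = build_H(N, omega, delta0 * (1 - t_mid / tau), disorder)
        eps, U = np.linalg.eigh(H)
        psi = U @ (np.exp(-1j * eps * dt)[:, None] * (U.conj().T @ psi))
    return psi
```

After the sweep, the entropy density of the evolved state follows from Eq. (\[Eq entropy\]) with $\Psi^{(n)}$ replaced by the propagated wave functions; each step is unitary, so the occupied states remain orthonormal and the total entropy is unchanged, while its spatial distribution evolves.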
Such disorder-induced cooling is particularly useful in obtaining low temperatures in optical lattice systems or in the presence of speckle disorder. The fine-structure noise in Fig. \[S Central fig\] and Fig. \[Scentral vs Dis. fig\] has two sources. First, there are rapid oscillations associated with particular disorder realizations. We partially suppress these by averaging over thirty realizations. Second, there are longer-wavelength wiggles in Fig. \[S Central fig\] which are associated with the trap. Entanglement Entropy {#Sec entglmt entropy} -------------------- The definition of entropy in Eq. (\[Eq entropy\]) does not capture any entropy generation during the ramping down of the superlattice potential. One convenient way to characterize any entropy generation is to look at the entanglement entropy between the central region and the rest of the cloud [@Sent1; @Sent2]. For our state, this entanglement entropy can be calculated from the single-particle density matrix, $$G_{ij}=\langle\hat{\Psi}_i^{\dag}\hat{\Psi}_j\rangle=\sum_n \Psi_i^{(n)*}\Psi_j^{(n)}f_n,$$ where $i$ and $j$ label sites. Cheong and Henley showed that if one truncates this matrix, restricting $i$ and $j$ to lie in a subregion, then the entanglement entropy is related to the eigenvalues $\lambda_m$ of the truncated density matrix [@ChrisHenley]. In particular, $$\label{Eq S entanglement} S_{entanglement}=-\sum_m \left[\lambda_m \ln(\lambda_m)+(1-\lambda_m)\ln(1-\lambda_m)\right].$$ $S_{entanglement}$ measures how much the central region becomes correlated with the rest of the system while the superlattice is being ramped down. For our calculation, we consider the entanglement entropy of the center of the cloud, taking $-60<i,j<60$ for $N=200$ sites. In Fig. \[S\_ent fig\], we show the central entanglement entropy per site ($s_{entanglement}=S_{entanglement}/120$) for the disorder-free case and for strong disorder. Initially, the central entanglement entropy density is almost zero (not displayed in Fig. \[S\_ent fig\]) for both cases.
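The entanglement entropy follows directly from the two expressions above: build $G_{ij}$, truncate it to the region of interest, and insert the eigenvalues of the truncated matrix into $S_{entanglement}$. A short sketch (our own helper, assuming numpy):

```python
import numpy as np

def entanglement_entropy(psi, f, region):
    """Entanglement entropy of a spatial region for a free-fermion state
    given by single-particle states psi[:, n] and occupations f[n], via the
    eigenvalues of the truncated correlation matrix G_ij (Cheong & Henley)."""
    G = (psi.conj() * f) @ psi.T              # G_ij = sum_n psi_i^(n)* psi_j^(n) f_n
    Gsub = G[np.ix_(region, region)]          # restrict i, j to the subregion
    lam = np.linalg.eigvalsh(Gsub).clip(1e-15, 1 - 1e-15)  # guard 0 ln 0
    return float(-np.sum(lam * np.log(lam) + (1 - lam) * np.log(1 - lam)))
```

As a sanity check, for a pure state ($f_n\in\{0,1\}$) the full-system $G$ is a projector, so the entropy of the whole chain vanishes, while a subregion carries nonzero entanglement entropy.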
In the absence of disorder, $s_{entanglement}$ immediately after the sweep increases for increasing $\tau$ and then saturates to a finite value. This increase again reflects continuing evolution of the entropy after an abrupt ramp. In principle, for infinitely slow sweeps no entropy will be generated. For practical sweep rates, however, we find that more entropy is generated for slower sweeps. This is in part because longer sweeps provide more time for the entropy to evolve. As one expects, adding disorder suppresses entropy generation for slower sweeps. Figs. \[Sx figs\]–\[S\_ent fig\] demonstrate that both the entropy flow and the entropy generation can be suppressed by using disorder. ![ Dimensionless entanglement entropy per site between the central region and the rest of the cloud for superlattice strength $\Delta=3$, trap frequency $\omega=0.035$ and temperature $T=0.1$ immediately following a sweep of duration $\tau$. The parameters are given in dimensions of the tunneling rate $J$. The entanglement entropy becomes small as $\tau$ goes to zero because there is less time for information to propagate. In principle, the entropy should again be small at very large $\tau$ when the dynamics are truly adiabatic. In the disordered case (dashed line), localization limits the amount of entanglement possible. []{data-label="S_ent fig"}](Fig4){width="48.00000%"} Conclusion and Outlook ====================== Cooling atomic gases down to temperatures low enough to observe novel quantum phenomena is an ever-present challenge. Current cooling techniques mostly rely on removing the high-entropy particles, which usually lie at the edges of the system [@StamperKurnCooling]. Instead, we propose a cooling technique where disorder is used to control the spatial distribution of entropy. In particular, we demonstrate our disorder-induced cooling mechanism by applying it to one-dimensional non-interacting fermions in a harmonic trap.
By employing a period-two superlattice, we create a gap in the spectrum and a low-entropy region in the center of the cloud. Introducing disorder to the system localizes the entropy at the edges. We then adiabatically remove the superlattice potential to obtain a metallic low-entropy state at the center and analyze the dynamics of the entropy during the evolution. We show that only a small percentage of the total entropy lies in the central region. Since the system has already been cooled with conventional means before ramping down the spectral gap, the central low-entropy region can then provide access to temperatures much lower than the rest of the cloud [@KohlDimple]. Our ideas are particularly valuable for producing very cold disordered gases. Typically, it is extremely hard to cool in lattices or speckle disorder [@DeMarcoCooling]. Our approach, where a superlattice potential is ramped down in the presence of disorder, overcomes these difficulties, providing a promising way to create a disordered low-entropy gas. Although we model the case of a superlattice potential here, our approach should work in much more general settings. The only requirement is that there is a spectral gap in the center of the cloud, with gapless excitations on the edge. One adds disorder to the system and cools as far as possible with conventional means. One then slowly changes the Hamiltonian to turn off the central gap. One could also imagine interesting variants, where the disorder is only applied to the edge of the cloud so that one would have a homogeneous system in the center. Our disorder-induced cooling mechanism can be combined with existing cooling techniques to further lower the temperatures in these systems. For example, after using disorder to trap the entropy at the edges, one can use the techniques from Refs. [@HoSqueezeEntropy; @KohlDimple] to remove these high-entropy particles from the system.
Once the atoms at the edges are separated from the center, one can think about other modifications depending on the particular system at hand. For example, Ref. [@CoolingWithDisorder] introduced another cooling technique by adiabatically ramping down the disorder with the aim of reaching the Néel temperature; however, the technique was not sufficient on its own and required an additional scheme to reduce the entropy initially. Our cooling mechanism is a promising candidate for this pre-cooling. For the parameters given in Fig. \[S Central fig\], we find roughly a factor of 10 reduction in temperature, which can be sufficient to reach the Néel transition. However, more work is needed to understand the interaction between the motional degrees of freedom studied here and spin. Ramping down the disorder is also appealing in that it provides a clean homogeneous system. Our calculations neglect interparticle interactions. We expect, however, that our results are robust. Interactions profoundly change the behavior of the clean system: collisionless ballistic motion is replaced by diffusion. In the disordered system, however, the role of interactions is much more subtle. Extensive theoretical work shows that even when pushed far from equilibrium, the disordered interacting system displays localization [@HuseReview; @MooreLocArxiv]. Thus, even in the presence of interactions, we expect disorder will trap entropy at the edge of the cloud. Modeling the dynamics of the interacting system is much more involved, and will be reserved for future studies. Acknowledgements ================ F.N.Ü. is supported by The Scientific and Technological Research Council of Turkey (TÜBİTAK). This material is based upon work supported by the National Science Foundation under Grant No. PHY1508300.
--- abstract: 'Planning rearrangements of objects in clutter requires knowing the goal configuration of the objects. However, in real-life scenarios, this information is not available most of the time. We introduce a novel method that computes a collision-free placement of objects on a cluttered surface, while minimizing the total number and amount of displacements of the existing moveable objects. Our method applies nested local searches that perform multi-objective optimizations guided by heuristics. Experimental evaluations demonstrate the high computational efficiency and success rate of our method, as well as the good quality of its solutions.' author: - Abdul Rahman Dabbour - Esra Erdem - 'Volkan Patoglu [^1] [^2]' bibliography: - 'references.bib' title: | **Object Placement on Cluttered Surfaces:\ A Nested Local Search Approach** --- INTRODUCTION ============ For robotic systems to be usefully integrated into everyday life, they must be capable of performing high-complexity real-life tasks efficiently. For instance, typical human environments, such as table tops, kitchen shelves, or office desks, are usually cluttered; and manipulating the environment to deal with such clutter is integral to performing everyday chores in social environments, whether that means rearranging objects upon a surface or across multiple surfaces. Geometric rearrangement of multiple movable objects on a surface is a difficult problem, because it requires the manipulation of existing objects on the surface, as well as the placement of new objects to be put on the surface. To solve such a problem, in general, task planning is required to decide the order of manipulation actions (e.g., when to pick, place, move objects), and feasibility checks are required to check the execution of each manipulation action against geometric/kinematic constraints (e.g., to avoid collisions).
These requirements usually lead to hybrid planning solutions that combine high-level task planning and low-level feasibility checks. To solve such a planning problem for the rearrangement of objects in clutter, one needs to know the goal configuration (i.e., how the objects are arranged on the surface at the end of the plan). However, in real-life scenarios, this information is not available most of the time. For that reason, it is essential to determine a geometrically feasible goal configuration of objects on the surface before planning for rearrangements. With this motivation, we study the object placement problem: given a surface cluttered with (unmoveable) obstacles and a set of existing moveable objects on it, and a set of new objects to be placed on the surface, the goal is to find a collision-free placement of all objects on the surface while minimizing the total number and amount of displacements of the existing moveable objects. We introduce an efficient algorithm based on nested local searches. The innermost search aims to minimize the total penetration depth of objects utilizing a potential field method over a physics-based engine; the intermediate local search aims to minimize the number of collisions by allowing re-placements of objects; and the outermost local search further tries to minimize the number of displacements of the movable objects on the surface with respect to their initial configurations. The intermediate local search relies on a grid-based heuristic to find more plausible object configurations, while the outermost local search relies on a heuristic that gradually relaxes the constraints imposed on object movements. RELATED WORK ============ [*Related work in robotics*]{} Rearrangement of multiple movable objects, a challenging problem that involves planning, manipulation and geometric reasoning, has received much attention in robotics.
In particular, planning for geometric rearrangement with multiple movable objects and its variations, such as navigation among movable obstacles [@stilman2005navigation; @stilman2008planning], has been studied using various approaches. Since even a simplified variant of the rearrangement problem with only one movable obstacle has been proved to be NP-hard [@wilfong1991motion; @demaine2003pushing], most studies introduce several important restrictions to the problem, like monotonicity of plans [@okada2004environment; @stilman2007manipulation; @dogar2012planning; @barry2013manipulation; @cosgun2011push], where each object can be moved at most once. Recent work has focused on generating non-monotonic plans [@havur2014geometric; @krontiris2014rearranging; @krontiris2015dealing; @krontiris2016efficiently; @han2017high; @kang2018automated]. However, in most of these studies [@okada2004environment; @stilman2007manipulation; @dogar2012planning; @barry2013manipulation; @krontiris2014rearranging; @krontiris2015dealing; @krontiris2016efficiently; @han2017high], it is assumed that the goal configuration is known. Finding suitable arrangements for objects on a cluttered surface has received relatively less attention. Cosgun et al. [@cosgun2011push] propose an algorithm that searches for a suitable placement for a single object on a cluttered surface by discretizing the possible orientations of the object, convolving object pixels with the ones on the table, and identifying candidate regions for the object placement that result in minimal penetration with other objects. A placement is then produced by sampling these regions; however, this placement may not be collision-free. Then, they plan for a sequence of linear push actions to rearrange the clutter and clear space for the new object such that this placement becomes collision-free.
Note that there are several limitations in this approach: multiple new objects are not considered, the surface and possible object orientations are discretized, and the final configuration is not necessarily collision-free. Yu et al. [@yu2011make] aim to find sensible placements for furniture by initially generating a random arrangement, then rearranging it to minimize a cost function that measures the difference between the current arrangement and several positive examples provided by the user. Kang et al. [@kang2018automated] also follow up on this idea and modify the algorithm so it becomes more suitable for robotic applications. Neither study considers heavily cluttered scenes or utilizes high-resolution collision checks. In [@kang2018automated], the task is to rearrange objects currently available in the scene to achieve a tidier arrangement; no new objects are added and there exist no constraints that force a certain set of objects to be on certain surfaces. Furthermore, in these studies, even the initial state is a feasible (collision-free) configuration, and the goal is to improve it in terms of a measure of tidiness. Jiang et al. [@jiang2012learning; @jiang2012humanlearning; @jiang2013hallucinating] extract object-to-object and object-to-human features from databases of 3D environments and learn semantic/geometric preferences for object-surface pairs. Then, they discretize the surfaces’ point cloud into placing areas by random sampling and solve a maximum matching problem to assign each object’s pose to a suitable placing area. This approach only considers placements to a predetermined set of discrete configurations and does not address the more challenging continuous version of the problem. In our previous work [@havur2014geometric], we proposed an object placement algorithm based on a local search guided with heuristics and random restarts.
This work significantly extends our earlier study by introducing an innermost potential field, as well as two nested local searches wrapped around this basic search algorithm, to improve upon the efficiency and quality of solutions, as well as the success rate. Our results indicate an orders-of-magnitude difference in terms of CPU time and success rate in cluttered scenarios. [*Related work in other areas*]{} A closely related problem to object placement, studied in computer graphics and operations research, is the packing problem (also known as the knapsack problem), where the goal is to place as many objects as possible in a non-overlapping configuration within a given empty container. The packing problem is NP-hard [@Chazelle1989]. It has been widely studied in 2D (cf. the survey [@DYCKHOFF1990]). It has also been studied in 3D under various conditions/restrictions [@Liu2015; @ROMANOVA2018; @Ma2018] (e.g., packing a set of polyhedra into a fixed-size polyhedron without considering rotations [@EGEBLAD2009], orthogonal packing of tetris-like items into rectangular bins [@martello2000; @Fasano2013]). However, the object placement problem is quite different from these 3D packing problems. First of all, since the object placement problem is motivated by the geometric rearrangement of objects on a cluttered surface, the surface does not have to be empty and contains movable objects. The packing problem, on the other hand, assumes that the fixed-size container is empty. Also, the objective function for the object placement problem is different: the goal is to find a collision-free configuration of all objects, so as to minimize the total number and amount of displacements of the existing objects on the surface. The packing problem, on the other hand, aims to find a collision-free configuration of some objects, so as to maximize the coverage rate (i.e., the total volume of the objects packed in the container).
Along these lines, the packing problem and the placement problem are different computational problems with different optimization goals. The methods to attack these problems are significantly different from each other and do not allow for direct comparisons. \[fig:potential\_field\] PROBLEM DEFINITION ================== The *object placement problem* is defined by - a flat surface $S$ and its geometric model $g(S)$ that details its size and shape, - the set $O_O$ of non-movable objects (obstacles) on the surface $S$ and the set $g(O_O)$ of their geometric models, - the set $O_C$ of movable objects (clutter) on the surface $S$ and the set $g(O_C)$ of their geometric models, - the set $O_N$ of new objects to be placed on the surface $S$ and the set $g(O_N)$ of their geometric models, - a set $W$ of continuous placement constraints on objects (e.g., a monitor may be forced to be in the corner of the table), and - an initial collision-free configuration $C_I$ of all objects in $O_C\cup O_O$ on the surface $S$ relative to $g(S)$. All objects are assumed to be rigid bodies. Before we define a solution for an object placement problem, let us introduce the following definitions and notations for a configuration $C_x$ of all objects $O_N \cup O_C$ on the surface $S$ relative to $g(S)$. - The number of displacements $\ii{\#move}(C_x)$ in $C_x$ is the number of objects in $O_C$ whose configurations are different in $C_x$ from their original configurations in $C_I$. - The amount of displacement $\ii{dist}_o(C_x)$ of an object $o\in O_C$ in $C_x$ is the distance between the centroid of $o$ in $C_{I}$ and the centroid of $o$ in $C_x$. The total amount of displacements of objects in $O_C$ is $\sum_{o\in O_C} \ii{dist}_o(C_x)$. - The amount of change $\ii{arc}_o(C_x)$ in the orientation of an object $o\in O_C$ in $C_x$ is the arc length traced by the most distal point on the object, due to the change of configuration of $o$ from $C_{I}$ to $C_x$. 
The total amount of change in orientations of objects in $O_C$ is $\sum_{o\in O_C} \ii{arc}_o(C_x)$. - The total amount of change $\ii{cost}_d(C_x)$ in configurations of objects in $O_C$ with respect to $C_I$ is the sum of the total amount of displacements of objects in $O_C$ (i.e., $\sum_{o\in O_C} \ii{dist}_o(C_x)$) and the total amount of change in orientations of objects in $O_C$ (i.e., $\sum_{o\in O_C} \ii{arc}_o(C_x)$). A solution for an object placement problem $\langle g(S)$, $O_O$, $g(O_O)$, $O_C$, $g(O_C)$, $O_N$, $g(O_N)$, $C_I$, $W\rangle$ is a collision-free final configuration ${C}_F$ of all objects $O_N \cup O_C$ on the surface $S$ relative to $g(S)$, with a minimal value of $\langle \ii{\#move},\ii{cost}_d\rangle$ relative to lexicographic ordering (i.e., there does not exist a collision-free configuration $C_x$ such that $\ii{\#move}(C_x){<} \ii{\#move}(C_F)$, or $\ii{\#move}(C_x){=}\ii{\#move}(C_F)$ and $\ii{cost}_d(C_x){<} \ii{cost}_d(C_F)$). Figure \[fig:potential\_field\] presents a sample problem instance: initially, in Figure \[fig:potential\_field\](a), a cylindrical obstacle (yellow) and four geometrically different movable objects (red) are placed on a surface (green); the goal is to find a collision-free configuration of these objects together with several new objects (blue), as in Figure \[fig:potential\_field\](c). METHOD ====== We propose a nested local search algorithm (Algorithm \[alg:outermost\]) to compute a solution for an object placement problem. Intuitively, the innermost search applies the potential field method over a physics-based engine with the goal of minimizing the total penetration depth of objects; the intermediate local search further aims to minimize the number of collisions by allowing re-placements of objects; and the outermost local search further tries to minimize the number and amount of displacements of movable objects with respect to their initial configurations.
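To make the solution criterion concrete, the following sketch (our own illustration, not the paper's implementation) represents a planar pose as an $(x, y, \theta)$ tuple and compares two candidate configurations by the lexicographically ordered pair $\langle \ii{\#move}, \ii{cost}_d\rangle$:

```python
import math

def num_moved(C_I, C_x, movable):
    """#move: number of movable objects whose pose in C_x differs from C_I."""
    return sum(1 for o in movable if C_x[o] != C_I[o])

def cost_d(C_I, C_x, movable, radius):
    """Total pose change: centroid displacement dist_o plus the arc length
    arc_o traced by the most distal point (at distance radius[o] from the
    centroid) under the orientation change."""
    total = 0.0
    for o in movable:
        (x0, y0, th0), (x1, y1, th1) = C_I[o], C_x[o]
        total += math.hypot(x1 - x0, y1 - y0)      # dist_o
        total += radius[o] * abs(th1 - th0)        # arc_o
    return total

def better(C_a, C_b, C_I, movable, radius):
    """Lexicographic comparison: fewer moved objects wins; ties are broken
    by the total amount of pose change cost_d."""
    key = lambda C: (num_moved(C_I, C, movable),
                     cost_d(C_I, C, movable, radius))
    return key(C_a) < key(C_b)
```

Here $\ii{arc}_o$ is approximated as the most distal radius times the rotation angle, following the definition above.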
\[alg:outermost\] **Input:** Object placement problem $P=\langle g(S)$, $O_O$, $g(O_O)$, $O_C$, $g(O_C)$, $O_N$, $g(O_N)$, $C_I$, $W\rangle$.\ **Output:** A configuration of all objects $O_N \cup O_C$ on the surface $S$. // $V$: a set of placement constraints for every object $o\in O_C$, specifying $\ii{radius}_o$ of the balls where $o$ can be placed in // $\ii{cost}_r$: cost function characterizes the total amount of changes of object poses with respect to $C_I$ $C_{\is{current}} \gets$ a configuration of all objects $O_O\cup O_C \cup O_N$ on the surface $S$ obtained from $C_I$ by randomly placing the new objects $O_N$ on the surface $S$; $V\gets$ for every $o\in O_C$, $\ii{radius}_o{=}0$ // Call the intermediate local search to reduce number of collisions and check if it also decreases the total pose changes $C_{\is{next}} \gets \textsc{intermediateSearch}(P,V,C_{\is{current}})$; $C_{\is{current}} {=} C_{\is{next}}$; // Relax the placement constraints and call the intermediate local search until no better configuration can be found $C_{\is{best}} {=} C_{\is{current}}$; $\ii{radius}_o \gets$ increase the radius slightly; $V_o\gets$ update $V$ with $\ii{radius}_o$; $C_{o} \gets\textsc{intermediateSearch}(P,V_o,C_{\is{current}})$; $C_{\is{best}}{=}C_{o}$; return $C_{\is{current}}$; $C_{\is{current}} {=} C_{\is{best}}$; \[alg:intermediate\] **Input:** Object placement problem $P=\langle g(S)$, $O_O$, $g(O_O)$, $O_C$, $g(O_C)$, $O_N$, $g(O_N)$, $C_I$, $W\rangle$; placement constraints $V$; and, if provided, a configuration $C_{\is{current}}$ of all objects $O_O\cup O_C \cup O_N$ on the surface $S$ ($C_{\is{current}}$ is obtained from $C_I$ in <span style="font-variant:small-caps;">outermostSearch</span>)\ **Output:** A configuration of all objects $O_N \cup O_C$ on the surface $S$. 
// $H$: a set of free cells, suggested re-placements of objects $o\in O_C$ in collision // $\ii{cost}_c$: cost function that characterizes the total number of collisions ($\ii{\#col}$) and the total amount of penetration depth of pairs of objects in collisions ($\ii{cost}_p$) considering the placement constraints $V$ $C_{\is{current}} \gets$ if not provided, generate a configuration of all objects $O_O\cup O_C \cup O_N$ on the surface $S$ obtained from $C_I$ by randomly placing the new objects $O_N$ on the surface $S$ // Call the dynamic simulator to reduce the total penetration depth, and check if it also decreases the number of collisions $C_{\is{next}} \gets \textsc{innermostSearch}(P, V, C_{\is{current}})$; $C_{\is{current}} {=} C_{\is{next}}$; // Re-place objects in collisions and call the dynamic simulator until no better configuration can be found $C_{\is{best}} {=} C_{\is{current}}$; $H\gets$ refine discretization, identify free cells $C_o \gets$ re-place $o$ in $C_{\is{current}}$ in $e$ $C_x \gets\textsc{innermostSearch}(P,C_o)$; $C_{\is{best}}{=}C_x$; return $C_{\is{current}}$; $C_{\is{current}} {=} C_{\is{best}}$; Penetration Minimization ------------------------ The innermost search relies on artificial potential fields [@Khatib1086] defined for each object (including the obstacles and boundaries of the table) and a dynamic simulator to calculate a configuration in static equilibrium. In particular, trajectories of objects under the action of interaction forces due to potential fields are simulated according to the governing equations of motion, which are, effectively, the solution to a dynamic optimization problem over the system’s Lagrangian. The algorithm starts with a configuration $C_{\is{current}}$ obtained from $C_I$ by randomly placing the new objects $O_N$ on the surface $S$, with zero velocities.
It searches over configurations of all objects $O_O\cup O_C \cup O_N$ on the surface $S$, with the goal of minimizing the objective optimization function $\ii{cost}_p$ defined as the sum of the maximum penetration depths [@Patoglu2005a] of pairs of objects in collisions. The physics engine returns a configuration $C_{\is{next}}$ of all objects, minimizing $\ii{cost}_p(C_{\is{next}})$. For instance, consider the initial configuration $C_I$ of obstacles (yellow) and movable objects (red) shown in Figure \[fig:potential\_field\](a). A configuration $C_{\is{current}}$ of objects obtained by randomly placing all the new objects (blue) on the surface can be seen in Figure \[fig:potential\_field\](b). Note that there are many collisions in $C_{\is{current}}$, and objects penetrate each other. By applying this innermost search, the configuration shown in Figure \[fig:potential\_field\](c) may be generated, where the total penetration is zero, since there are no collisions. The only action that the physics engine can perform (due to the use of the potential field method) is to push objects in collision away from each other; so, for instance, it cannot swap the locations of two objects. This may lead to local minima where the objects can no longer be pushed, and there may still be some collisions in $C_{\is{next}}$. This motivates us towards the intermediate local search algorithm that utilizes some heuristics to re-place all the objects in collision. \[fig:outer\_search\] Collision Minimization ---------------------- The intermediate local search algorithm (Algorithm \[alg:intermediate\]) utilizes heuristically-guided re-placements to avoid local minima. Intuitively, for each object $o$ in collision, the heuristic suggests (i) dividing the surface $S$ into cells and (ii) re-placing the object $o$ at the center of one of the unoccupied cells (i.e., cells containing no other object centroids); the heuristic is used to guide the generation of new configurations.
For (i), the heuristic imposes a grid on the surface $S$. If a grid cell has no object centroids in it, it is marked as free. If no free cell exists, a new refined grid is imposed on $S$ with smaller but more numerous cells. This refinement process continues until there is at least one free cell in the imposed grid. The intermediate local search algorithm also starts with a configuration $C_{\is{current}}$ of all objects $O_O\cup O_C \cup O_N$ on the surface $S$ obtained from $C_I$ by randomly placing the new objects $O_N$ on the surface $S$. It calls the innermost search algorithm described above to find a better configuration, but starting with the configurations obtained from $C_{\is{current}}$ as suggested by the heuristic. The goal is to minimize the objective optimization function $\ii{cost}_c$ defined as a tuple $\langle \ii{\#col}, \ii{cost}_p\rangle$, where $\ii{\#col}$ is the total number of collisions and $\ii{cost}_p$ is the total amount of penetration depth of pairs of objects in collisions. Here, lexicographic ordering is used to find the minimum of two tuples: $\langle a,b\rangle \prec \langle a',b'\rangle$ if either $(a \prec a')$ or ($a=a'$ and $b\prec b'$). In this way, priority is given to $\ii{\#col}$ and then to $\ii{cost}_p$: among multiple configurations of all objects with the same minimum number of collisions, the configuration $C_{\is{next}}$ with the least cumulative penetration depth is returned. An example is presented in Figure \[fig:relocation\_example\] to illustrate the usefulness of the heuristic in the intermediate local search algorithm. The search starts with a configuration $C_{\is{current}}$, where ${\ii{\#col}(C_{\is{current}})=24}$ and ${\ii{cost}_p(C_{\is{current}})=1.75}$. Note that the innermost search alone cannot find a better configuration with a lower value of $\ii{cost}_p$: the physics engine gets stuck, as it can no longer push objects on the right half of the surface.
First, the heuristic is utilized to re-place the cyan-colored object, which is in collision with some other object, in $C_{\is{current}}$. For that, a grid is imposed over the surface with two free cells, labelled A and B, that do not contain any object centroids. Then, two new configurations, $C_{\is{A}}$ and $C_{\is{B}}$, are obtained from $C_{\is{current}}$ by randomly re-placing the cyan-colored object in A and in B, respectively. Here, ${\ii{cost}_c(C_{\is{A}}) = \langle 19, 1.43 \rangle}$ and ${\ii{cost}_c(C_{\is{B}}) = \langle 20, 1.37 \rangle}$. At this point, the intermediate local search algorithm calls the innermost search algorithm for each of these two configurations. The intermediate local search algorithm with heuristically-guided re-placements is useful for minimizing the number of collisions on a surface, but the objects in $O_C$ may end up being displaced and rotated too much with respect to their original configurations in $C_I$. This is undesirable from the perspective of rearrangement planning, because it will require more manipulation actions to rearrange such objects. This motivates us towards the outermost local search algorithm, which utilizes some constraints on the placements of objects to limit their movements. Rearrangement Minimization -------------------------- The outermost local search algorithm (Algorithm \[alg:outermost\]) utilizes placement constraints to minimize displacements. Intuitively, the amount of displacement of a movable object $o\in O_C$ is constrained to a ball whose centroid is the object’s centroid in the given initial configuration in $C_I$. Initially, $\ii{radius}_o$ is set to $0$; if the outermost local search algorithm cannot find a better configuration under these constraints, then $\ii{radius}_o$ is increased slightly.
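The grid-refinement heuristic of the intermediate search (impose a grid, mark cells without object centroids as free, refine if none exist) can be sketched as follows; this is a simplified illustration assuming a rectangular surface and point centroids, with function names of our own choosing:

```python
def free_cells(width, height, centroids, n):
    """Cells of an n-by-n grid over the surface containing no object centroid."""
    occupied = set()
    for (x, y) in centroids:
        i = min(n - 1, int(x * n / width))    # clamp boundary centroids
        j = min(n - 1, int(y * n / height))
        occupied.add((i, j))
    return [(i, j) for i in range(n) for j in range(n) if (i, j) not in occupied]

def candidate_replacements(width, height, centroids, n=2):
    """Refine the grid (smaller but more numerous cells) until at least one
    free cell exists; return the centers of the free cells as candidate
    re-placement positions for an object in collision."""
    while True:
        cells = free_cells(width, height, centroids, n)
        if cells:
            cw, ch = width / n, height / n
            return [((i + 0.5) * cw, (j + 0.5) * ch) for (i, j) in cells]
        n *= 2
```

Each returned cell center is one candidate re-placement, from which the innermost search is restarted.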
The outermost local search algorithm starts with a configuration $C_{\is{current}}$ of all objects $O_O\cup O_C \cup O_N$ on the surface $S$ obtained from $C_I$ by randomly placing the new objects $O_N$ on the surface $S$. It calls the intermediate local search algorithm described above to find a better configuration, but with respect to the given set $V$ of placement constraints. The goal is to minimize the objective optimization function $\ii{cost}_r$ defined as a triple $\langle \ii{\#col}, \ii{\#move}, \ii{cost}_d\rangle$, where $\ii{\#col}$ is the total number of collisions, $\ii{\#move}$ is the total number of moves of objects in $O_C$ from their original places, and $\ii{cost}_d$ is the total amount of change in configurations of objects in $O_C$ with respect to $C_I$. Here, $\ii{cost}_d$ for a configuration $C_x$ is computed as the sum of the total amount of displacements of objects in $O_C$ (i.e., $\sum_{o\in O_C} \ii{dist}_o(C_x)$, where $\ii{dist}_o(C_x)$ is the distance between the centroid of $o$ in $C_{I}$ and the centroid of $o$ in $C_x$) and the total amount of change in orientations of objects in $O_C$ (i.e., $\sum_{o\in O_C} \ii{arc}_o(C_x)$, where $\ii{arc}_o(C_x)$ is the arc length due to the change of configuration of $o$ from $C_{I}$ to $C_x$). The outermost local search algorithm also uses lexicographic ordering to find the minimum of two triples. In this way, priority is given first to $\ii{\#col}$, then to $\ii{\#move}$, and then to $\ii{cost}_d$. An example is presented in Figure \[fig:outer\_search\] to illustrate the usefulness of the placement constraints in the outermost local search algorithm. The search starts with a configuration $C_{\is{current}}$, where ${\ii{\#col}(C_{\is{current}})=4}$ and $\ii{\#move}(C_{\is{current}})=\ii{cost}_d(C_{\is{current}})=0$. Initially, the radius of the placement balls for every object that is initially on the table is zero.
With these placement constraints, the intermediate local search cannot find a better configuration to optimize $\ii{cost}_r$. Then, for each object, the outermost local search algorithm relaxes its placement constraints by increasing the radius of its placement ball by a certain percentage of $max_d(o)$, where $max_d(o)$ is the distance from the centroid of the object to the furthest point of the surface, and then calls the intermediate local search again. With such relaxed constraints, the intermediate local search algorithm returns better configurations with lower costs. For example, when it is allowed to place the cuboid object within a small circle around its initial configuration, the intermediate local search algorithm returns a configuration $C_{\is{cuboid}}$ with ${\ii{cost}_r(C_{\is{cuboid}}) = \langle 2, 1, 2.82 \rangle}$; whereas, when it is allowed to displace the cylindrical object by a small amount, it returns a configuration $C_{\is{cyl}}$ with ${\ii{cost}_r(C_{\is{cyl}}) = \langle 0, 1, 1.76 \rangle}$ (Figure \[fig:outer\_search\](a)). The outermost local search algorithm continues the search from $C_{\is{cyl}}$, with even more relaxed placement constraints, but cannot find a better configuration with a lower cost; so the outermost search stops (Figure \[fig:outer\_search\](b)). EXPERIMENTAL EVALUATION {#sec:exp} ======================= We have conducted three sets of experiments to evaluate and compare the performance of each search level. All simulations were executed on a workstation with an Intel Xeon W-2155 CPU running at 3.30 GHz using a single thread and 32 GB RAM. All algorithms were implemented in Python, with Bullet [@coumans2010bullet] as the back-end physics engine. Each instance of each experiment was run 60 times to allow for averaging of the results. A timeout of 300 s per trial was imposed. Experiment 1 started with the initial configuration depicted in Figure 1(a), where 4 objects and an obstacle were on a surface.
The number of new objects to be added to the table was the control variable in this condition. The number of new objects was increased from 4 to 36, such that the total footprint area covered by all objects and the obstacle was gradually increased up to 95% of the total surface area. Experiment 2 involved an obstacle and 4 new objects to be placed on the table. The number of movable objects that were initially on the table was the control variable in this condition. The number of movable objects on the table was increased from 4 to 36, such that the total footprint area covered by all objects and the obstacle was gradually increased up to 95% of the total surface area. Experiment 3 involved 4 initial objects randomly placed on the surface in collision-free configurations and 4 new objects to be added to the surface. The number of obstacles on the table was the control variable in this condition. The number of obstacles was increased from 4 to 32, such that the total footprint area covered by all objects and the obstacles was gradually increased up to 95% of the total surface area. Five algorithms were compared, three of which correspond to the different levels of our nested local search: - innermost search (*Inner*) based on potential fields, - intermediate local search (*Intermediate*) wrapped around the potential field, and - the outermost local search (*Outer*) wrapped around the intermediate local search. The remaining two algorithms serve as baselines to demonstrate the effectiveness of our approach with respect to naive implementations. In particular, - to highlight the benefits of using the innermost search, we have tested the *Random Sample* approach, which randomly samples new configurations for objects from a uniform distribution until a collision-free placement is found. - To highlight the benefits of using our intermediate local search, we have tested *Random Restart*, which is implemented as the innermost search *Inner* with random restarts.
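The *Random Sample* baseline admits a very short sketch (our own, with a caller-supplied collision predicate standing in for the geometric checks that the physics engine performs in the actual implementation):

```python
import math
import random

def random_sample_baseline(new_objects, surface_w, surface_h, collision_free,
                           max_iters=10000, seed=0):
    """Random Sample baseline: uniformly sample a pose (x, y, theta) for every
    new object until the whole configuration passes the caller-supplied
    collision_free predicate, or the iteration budget runs out."""
    rng = random.Random(seed)
    for _ in range(max_iters):
        C = {o: (rng.uniform(0.0, surface_w), rng.uniform(0.0, surface_h),
                 rng.uniform(0.0, 2.0 * math.pi)) for o in new_objects}
        if collision_free(C):
            return C
    return None  # unsuccessful trial: counts against the success-rate metric
```

Because the samples are independent, the expected number of iterations grows rapidly with clutter, which is consistent with this baseline failing beyond roughly 40% surface coverage in the experiments below.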
The performance of the algorithms was compared based on three metrics: - efficiency, measured by the average CPU time spent to calculate a solution, - quality, measured by the average number of moved objects that were initially on the table, and - success rate, measured by the percentage of trials that converge to a collision-free final configuration before a time-out is reached. Efficiency and quality metrics are computed for partial solutions of unsuccessful trials that return configurations with collision(s). Figure \[fig:exp\_plots\] graphically summarizes the data collected from Experiments 1–3. In Figure \[fig:exp\_plots\], each row corresponds to an experiment, while the columns present the CPU time, quality, and success rate metrics, respectively. We can observe the effect of increasing the number of new objects to be put on the table from the first row of Figure \[fig:exp\_plots\]. - In terms of average CPU time, *Inner* outperforms *Random Sample*, while *Intermediate* consistently outperforms all other algorithms, including *Random Restart*. While *Outer* is in general slower than *Random Restart* for problems with surface coverage less than 75%, *Outer* outperforms *Random Restart* in highly cluttered environments, since it relies on *Intermediate* when stuck at local minima. - In terms of solution quality, *Inner* and *Intermediate* perform quite similarly and move more objects as the table gets more cluttered, while *Outer* performs significantly better than both. In particular, up to 30% surface coverage, *Outer* could find solutions that do not require any movements of the objects initially on the table. As the clutter increases, it becomes necessary to move some of these objects, but the number of moved objects stays significantly smaller than in the solutions computed by *Inner* and *Intermediate*.
It is important to note that the baseline algorithms, *Random Sample* and *Random Restart*, consistently result in low solution quality, even when compared to algorithms that do not distinguish between original and new objects, such as *Intermediate*. - In terms of success rate, we note that *Intermediate* and *Outer* can solve almost all problems up to 85% surface coverage, and are the only algorithms capable of solving problems more cluttered than 85%, although the success rate drops to about 40%. While *Random Sample* outperforms *Inner* for simple problems where the surface coverage is less than 30%, *Inner* can solve some problems of up to 75% surface coverage, while *Random Sample* cannot solve any problems that have more than 40% surface coverage. We can observe the effect of increasing the number of original objects on the table from the second row of Figure \[fig:exp\_plots\]. Note that these objects can be moved but should preferably not be relocated; hence, the quality of solutions becomes more emphasized in these experiments. - In terms of average CPU time, once again, *Inner* outperforms *Random Sample*, and *Intermediate* outperforms *Random Restart*. From these experiments, we can observe how increasing the number of original objects affects *Outer*, causing a sharp increase in its computation time at about 50% surface coverage. - The trend observed from the first set of experiments regarding solution quality can be seen again here. As *Outer* is the only algorithm that attempts to improve solution quality, it consistently moves fewer original objects across all surface coverage levels. - The success rate of *Outer* varies over surface coverage levels, but is always worse than that of *Intermediate*. In particular, given that *Outer* aims to solve a more constrained version of the problem solved by *Intermediate*, it is not surprising that it fails more often.
In this experiment set, *Intermediate* is the only algorithm that is capable of solving a significant portion of problems with 95% surface coverage. We can observe the effect of increasing the number of obstacles on the table from the third row of Figure \[fig:exp\_plots\]. Note that since obstacles cannot be moved, these instances prove to be much harder than the previous experiments as the percentage of surface coverage increases. - In terms of average CPU time, *Random Restart*, *Intermediate*, and *Outer* perform similarly up to 65% clutter, after which the *Random Restart* baseline falls behind in computation time. The computation times of *Outer* and *Intermediate* remain very similar until about 85% clutter, after which no algorithm can solve even a single instance. Note that as the number of obstacles increases, the problem becomes highly constrained and objects initially located on the table are trapped; hence, far fewer objects are moved in this experiment. - In terms of solution quality, the *Random Sample* and *Random Restart* baselines perform quite similarly and move almost all objects initially on the table, while *Outer* and *Intermediate* perform slightly better than both baselines. *Inner* performs the best in terms of solution quality, but this is only due to the constrained nature of the problem mentioned earlier. - *Inner* and *Intermediate* display significantly different trends in terms of success rate. In particular, *Inner* starts failing quite early and can only solve about 30% of instances at 55% clutter, while *Intermediate* and *Outer* consistently find over 80% of the solutions up to 85% clutter, after which all approaches fail. The results of these experiments indicate the usefulness of all three nested local searches proposed by our method.
*Outer* significantly improves solution quality by minimizing movements of objects on the table, *Intermediate* significantly improves the success rate by allowing re-placements when stuck in local optima, and *Inner* significantly improves CPU time over *Random Sample*, especially in cluttered scenarios. Solutions to Benchmark Instances ================================= To demonstrate the ability of our local search approach to solve a large variety of placement problems, we have also tested it with several difficult benchmark scenarios. These benchmarks have been engineered to result in difficult instances, by the introduction of non-convex objects, confined surfaces, and very specific configurations that result in feasible placements. Confined Placement Benchmark ---------------------------- In the confined placement benchmark, 4 objects need to be placed on a surface highly cluttered with randomly placed obstacles and 4 movable objects, as shown in Figure \[fig:many\_obs\](a). All objects and obstacles are selected as clones of the same 4 basic convex shapes. This benchmark is challenging as it requires the placement algorithms to be able to handle highly restricted spaces. Figures \[fig:many\_obs\](b) and \[fig:many\_obs\](c) present solutions computed by *Intermediate* and *Outer*, respectively. The solution computed by *Outer* requires only 1 of the 4 movable objects initially on the surface to be relocated, while the solution computed by *Intermediate* relocates all of these objects. \[fig:many\_obs\] Tight Placement Benchmark ------------------------- The tight placement benchmark shown in Figure \[fig:tight\_placement\](a), where the goal is to find a placement for the blue box upon the green surface, has been introduced in [@havur2014geometric]. This benchmark is challenging as it requires the objects on the surface to be rearranged to very specific configurations such that a solution can be computed.
Furthermore, this benchmark includes non-convex objects with holes, which need to be utilized to compute a solution. Figure \[fig:tight\_placement\](b) depicts a random placement for this problem, demonstrating the need for rearrangement of objects on the surface. Figure \[fig:tight\_placement\](c) presents the solution computed by our algorithm. \[fig:tight\_placement\] Elongated Object Benchmark -------------------------- In the elongated object benchmark, the lengths of the slender objects are designed to pose a challenge, as shown in Figure \[fig:elongated\](a). In particular, one of the objects is selected to have a length greater than the width of the square surface, while 11 other slender objects are set to have a length equal to half the width of the square surface. This benchmark is difficult, as the placement of the longest object introduces unusable space on the square surface, rendering convergence to a solution significantly more difficult. Figure \[fig:elongated\](b) depicts a random placement for this problem, demonstrating the unusable area introduced by the long object. Figure \[fig:elongated\](c) presents the solution computed by our algorithm. L-Shape Placement Benchmarks ---------------------------- L-shape benchmarks are commonly utilized to demonstrate the capabilities of packing algorithms [@jain1998two]. L-shape placement benchmarks adapt these to placement problems with 2D and 3D variations, as shown in Figures \[fig:lshapes\] and \[fig:lshapes-z\], respectively. In L-shape placement benchmarks, the objects are required to be placed on a surface with no overhangs, and the total contact area of the L-shapes to be placed is equal to the area of the surface.
In the 2D version, the L-shapes are configured such that the objects have uniform height along the z-axis and are non-convex in the plane parallel to the surface, while in the 3D version, the L-shapes are configured such that the objects are uniform in the plane parallel to the surface and non-convex along the z-axis. Figures \[fig:lshapes\](b) and \[fig:lshapes-z\](b) present random placements for the 2D and 3D benchmarks, demonstrating the difficulty of finding a collision-free solution with no overhangs from the table. Figures \[fig:lshapes\](c) and \[fig:lshapes-z\](c) present the solutions computed by our algorithm to the 2D and 3D benchmarks, respectively. CONCLUSION ========== Given a surface cluttered with (unmovable) obstacles and movable objects, and a new set of objects, the object placement problem asks for a collision-free placement of all the objects on the surface. We have introduced a novel algorithm to solve this problem, utilizing nested local search algorithms with multi-objective optimizations: the innermost search tries to minimize the total penetration depths of objects, the intermediate local search further tries to minimize the number of collisions, and the outermost local search further tries to minimize the changes in object poses. Each level of the search is guided by heuristics: the innermost search utilizes a potential field method over a physics-based engine, the intermediate local search gradually utilizes re-placements of objects to avoid local minima, and the outermost local search gradually relaxes the constraints that specify how far objects can be displaced. In that sense, our method introduces novel mathematical search models with nested multi-objective optimizations, and algorithms that further utilize heuristics to avoid local minima. On the other hand, due to its local searches, our algorithm performs local optimization and is unlikely to reach the global optimum.
Comprehensive experimental evaluations demonstrate the high computational efficiency and success rate of our method, as well as the good quality of its solutions. Furthermore, difficult benchmark scenarios that include non-convex objects, confined surfaces, and very specific configurations that result in feasible placements can also be solved with our method. [^1]: This work was partially supported by Sabanc[i]{} University. [^2]: A. R. Dabbour, E. Erdem and V. Patoglu are with the Faculty of Engineering and Natural Sciences, Sabanc[i]{} University, İstanbul, Turkey.
--- abstract: 'Two-dimensional (2D) transition-metal dichalcogenides (TMDs) MX$_2$ (M = Mo, W; X = S, Se, Te) possess unique properties and enable novel applications. In this work, we perform first-principles calculations on van der Waals (vdW) stacked MX$_2$ heterostructures to systematically investigate their electronic, optical and transport properties. We apply Anderson’s rule to classify the heterostructures, providing a scheme for constructing the energy band diagram of a heterostructure consisting of two semiconductor materials. For most of the MX$_2$ heterostructures, the conduction band minimum (CBM) and valence band maximum (VBM) reside in two separate semiconductors, forming a type-II band alignment, so that electron-hole pairs are spatially separated. We also find strong interlayer coupling at the $\Gamma$ point after forming MX$_2$ heterostructures, even leading to an indirect band gap, while the band structures near the $K$ point remain those of the independent monolayers. The carrier mobilities of MX$_2$ heterostructures depend on three decisive factors: the elastic modulus, the effective mass, and the deformation potential constant, which are discussed and contrasted with those of monolayer MX$_2$.' author: - 'Ke Xu$^1$, Yuanfeng Xu$^1$, Hao Zhang$^{1\dag}$, Bo Peng$^1$, Hezhu Shao$^2\ddag$, Gang Ni$^1$, Jing Li$^1$, Mingyuan Yao$^1$, Hongliang Lu$^3$, Heyuan Zhu$^{1\P}$ and Costas M. Soukoulis$^{4,5}$' title: 'Electronic, optical and transport properties of van der Waals Transition-metal Dichalcogenide Heterostructures: A First-principles Study' --- INTRODUCTION ============ The family of two-dimensional (2D) materials has grown rapidly owing to their unique properties, which differ from those of their 3D counterparts. A wide range of 2D materials, e.g.
graphene[@Novo2005; @zhang2005], BN[@Dean2010; @Yankowitz2012], transition metal dichalcogenides (TMDs)[@Radisavljevic2011; @Splendiani2010], and black phosphorus[@Xiang2015; @Li2014; @Tran2014], have been proposed and are under intense investigation. Among these, transition metal dichalcogenides, with the formula MX$_2$ (where M is a transition metal and X is a chalcogen), are prominent due to their finite direct band gaps, with strong optoelectronic responses[@Bernardi2013], large on-off ratios and high carrier mobilities[@Hennig2012; @Zhang2014]. Furthermore, a spin-orbit-driven splitting of the valence band was found in the 2H monolayer TMDs due to the lack of inversion symmetry, which ultimately allows for valley-selective excitation of carriers[@Cao2012; @Zeng2012; @Rasmussen2015]. In addition, the electronic properties of TMDs can be tuned by strain[@Conley2013a], layer number[@Mak2010], nanostructuring[@Pedersen2008], and electrostatic gating[@Liu2012a], or by combining individual 2D materials into van der Waals (vdW) stacked heterostructures[@Novoselov2016]. The vdW heterostructures can be obtained by transfer or by direct epitaxial growth[@Haigh2012; @Hsu2014]. The interface of a heterostructure can be atomically sharp, with a junction region only two atoms thick[@Haigh2012], and the interlayer coupling strength can even be tuned. Thus, vdW heterostructures open up many possibilities for creating new TMD material systems with rich functionalities and novel physical properties[@ZhangWangChenEtAl2016]. When two different atomically thin layers are stacked and bound by van der Waals forces to form an MX$_2$ heterostructure, its electronic properties are affected significantly by the band alignment of the constituent monolayers, yielding a variety of band structures different from the monolayer counterparts, which can be direct- or indirect-bandgap semiconductors, or metallic[@Terrones2013].
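Anderson's rule, used below to classify band alignments, aligns the vacuum levels of the two semiconductors, so each conduction edge sits at minus the electron affinity and each valence edge at minus the affinity minus the gap. A minimal sketch of the resulting type-I/II/III classification (our own illustration, not the paper's code; inputs in eV) is:

```python
def anderson_alignment(chi_1, gap_1, chi_2, gap_2):
    """Classify a two-semiconductor junction by Anderson's rule (vacuum-level
    alignment).  chi_*: electron affinities (eV); gap_*: band gaps (eV).
    Band edges relative to vacuum: E_c = -chi, E_v = -chi - gap."""
    ec1, ev1 = -chi_1, -chi_1 - gap_1
    ec2, ev2 = -chi_2, -chi_2 - gap_2
    ec, ev = min(ec1, ec2), max(ev1, ev2)   # CBM and VBM of the junction
    if ec <= ev:
        return "type III (broken gap)"
    # type I: a single material hosts both the CBM and the VBM (nested edges)
    if (ec1 <= ec2 and ev1 >= ev2) or (ec2 <= ec1 and ev2 >= ev1):
        return "type I (straddling)"
    return "type II (staggered)"
```

In a type-II (staggered) alignment the CBM and VBM reside in different layers, which is the situation described for most MX$_2$ pairs in this work.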
For example, the MoS$_{2}$-WSe$_{2}$ hetero-bilayer possesses a type II band alignment in which the conduction band maximum (CBM) and valence band minimum (VBM) reside in different monolayers. Due to the separate spatial locations of the CBM and VBM, the photon-generated electron-hole pairs are spatially separated, resulting in a much longer exciton lifetime and interlayer exciton condensation, which could enable two-dimensional lasers, light-emitting diodes and photovoltaic devices[@Rivera2015; @Chiu2015]. Strong electronic coupling between the two individual monolayers in the MoS$_{2}$-WSe$_{2}$ hetero-bilayer has been demonstrated, leading to a new photoluminescence (PL) mode in this heterostructure[@Fang2014]. Hong *et al.* investigated the ultrafast charge transfer in the MoS$_{2}$-WS$_{2}$ heterostructure[@Hong2014] and found that the charge-transfer time is on the femtosecond scale, much shorter than in monolayer MoS$_{2}$ or WS$_{2}$. Furthermore, the recombination time of the interlayer charge transition depends on the stacking order of the MoS$_{2}$-WS$_{2}$ heterostructure (one sample was obtained by vertical epitaxial growth while the other was a randomly stacked bilayer), with about 39 ps and 1.5 ns, respectively[@Heo2015]. To date, most research on MX$_2$ heterostructures has focused on the S and Se systems. In this paper, by using first-principles calculations, we systematically investigate the electronic, mechanical, transport and optical properties of the vdW MX$_2$ (M = Mo, W; X = S, Se, Te) heterostructures. The bandgaps of the hetero-bilayer MX$_2$ are smaller than those of the corresponding monolayer MX$_2$. The band alignment under Anderson's rule and the interlayer coupling of the heterostructures can result in a direct-to-indirect bandgap transition. The excellent mechanical properties indicate the structural stability of the vdW MX$_2$ heterostructures.
The transport properties exhibit encouraging results, with electron mobilities mostly higher than those of the monolayer MX$_2$. Furthermore, we also investigate the optical properties of the vdW MX$_2$ heterostructures. METHODOLOGY =========== All the calculations are performed using the Vienna *ab-initio* simulation package (VASP) based on density functional theory (DFT)[@Kresse1996]. The exchange-correlation energy is described by the generalized gradient approximation (GGA) in the Perdew-Burke-Ernzerhof (PBE) parametrization. We choose the DFT-D2 semiempirical dispersion-correction approach to include the long-range van der Waals (vdW) interactions[@Perdew1996; @grimme2006]. The calculations are carried out using the projector-augmented-wave (PAW) pseudopotential method with a plane-wave basis set and a kinetic energy cutoff of 600 eV. A 15$\times$15$\times$1 $\Gamma$-centered **k**-mesh is used during structural relaxation for the unit cell until the energy differences are converged within 10$^{-6}$ eV, with a Hellman-Feynman force convergence threshold of 10$^{-4}$ eV/Å. The vacuum size is larger than 25 Å between two adjacent atomic layers to eliminate artificial interactions between them. The electronic band structures of the vdW layered heterostructures are further verified by calculations using the hybrid Heyd-Scuseria-Ernzerhof (HSE06) functional[@HSE03; @HSE06], which improves the precision of the band structures by reducing the localization and delocalization errors of the PBE and Hartree-Fock (HF) functionals. Here the mixing ratio is 25% for the short-range HF exchange, and the screening parameter is 0.2 Å$^{-1}$. Electron-phonon scattering plays an important role in determining the intrinsic carrier mobility $\mu$ of 2D vdW MX$_2$ heterostructures, and in two-dimensional materials the scattering by acoustic phonons is much stronger than that by optical phonons[@tang2009role].
Therefore, the deformation potential theory for semiconductors, which considers only the longitudinal acoustic phonon scattering process in the long-wavelength limit[@Cai2014; @Shuai2011; @Shuai2013; @Wang2015b], and was originally proposed by Bardeen and Shockley[@deformation1950], can be used to calculate the intrinsic carrier mobility of 2D materials. In the long-wavelength limit, the carrier mobility of 2D semiconductors can be written as[@Walukiewicz1984; @Takagi1996; @Wang2015b]: $$\label{mobility1} \mu = \frac{2e\hbar^3C}{3k_BT|m^*|^2D_l^2},$$ where $e$ is the electron charge, $\hbar$ is the reduced Planck's constant, and $T$ is the temperature, set to 300 K throughout this paper. $C$ is the elastic modulus of a crystal uniformly deformed by strain, derived from ${C}=[{\partial^2{E}/\partial^2(\Delta{l}/l_0)}]/{S_0}$, in which $E$ is the total energy, $\Delta{l}$ represents the change of the lattice constant $l_0$ along the strain direction, and $S_0$ is the lattice area at equilibrium for a 2D system. $m^*$ is the effective mass given by $m^*=\hbar^2({\partial^2{E(k)}/{\partial{k^2}})}^{-1}$ ($k$ is the wave-vector, and $E(k)$ is the energy). In addition, $D_l$ is the deformation potential (DP) constant defined by $D_l^{e(h)}=\Delta{E_{CBM(VBM)}}/(\Delta{l}/l_0)$, where $\Delta{E_{CBM(VBM)}}$ is the energy shift of the band edge with respect to the vacuum level under a small dilation $\Delta{l}$ of the lattice constant $l_0$. Results and discussion ====================== Geometric structures of hetero-bilayer MX$_{2}$ ----------------------------------------------- Generally, the MX$_2$ crystals have four stable lattice structures, i.e., 2H, 1T, 1T' and 3R[@Wilson1969], with the first being the dominant one in nature. Most MX$_2$ crystals, like MoS$_{2}$ and WSe$_{2}$ with a stable 2H phase (1H for monolayer), have been studied widely[@Bhimanapati2015].
For 2H-phase MX$_2$ crystals, the M atoms and X atoms are located in different layers, and the structure can be described by the point group $D_{3h}$. For the 3R-phase unit cell, shown in Fig. \[POSCAR\](b,d), one M atom is eclipsed by the X atoms above it and the other is located at the hexagonal center, leading to $AB$ Bernal stacking. Here, we focus only on the AA and AB Bernal stackings. One stacking type can be transformed into the other by horizontal sliding or by rotation around the vertical axis. For MX$_2$ heterostructures with two different constituent monolayer MX$_2$ crystals, both AA and AB Bernal stacking possess the lower symmetry of the $C_{3v}$ point group due to the lack of the mirror reflection $\sigma_{h}$ in the horizontal plane; the symmetry operations include $C_{3}$ and the vertical mirror reflection $\sigma_{v}$[@BurnsG1977]. When the two constituent monolayer MX$_2$ crystals are identical, the AA stacking still possesses $D_{3h}$ symmetry. ![image](Figure1.eps){width="0.7\linewidth"} To determine the energetically stable structure before geometry optimization, an interlayer-distance optimization step is implemented to determine an optimized $d$ (defined in Fig. \[POSCAR\](a)) using the so-called Universal Binding Energy Relation (UBER) method[@Rose1981; @Zhao2016]. The optimized interlayer distance is predicted from a series of unrelaxed models with different $d$ (from 5 to 8 Å), for which we calculate the surface adhesion energy W$_{ad}$ for all 30 types of 2D vdW MX$_2$ heterostructures under investigation here (taking the MoS$_{2}$/WSe$_{2}$ hetero-bilayer as an example), $$\label{energy} W_{ad}=\frac{E_{MoS_{2}}+E_{WSe_{2}}-E_{MoS_{2}/WSe_{2}}}{A},$$ where $A$ is the interface area and $E_{MoS_{2}}$, $E_{WSe_{2}}$, $E_{MoS_{2}/WSe_{2}}$ are the total energies of the monolayer MoS$_{2}$, monolayer WSe$_{2}$ and the MoS$_{2}$/WSe$_{2}$ heterostructure, respectively.
The optimal interlayer distance $d$ is obtained by maximizing the value of W$_{ad}$. The resulting structure is then further optimized without any external constraints. The calculated lattice constant $a$ and interlayer distance $d$ for the above-mentioned 30 types of 2D MX$_2$ heterostructures are summarized in TABLE \[table-structure\]; they are in good agreement with previous theoretical and experimental results for the monolayer MX$_2$[@Schutte1987; @Coehoorn1987; @Bronsema1986] and are not sensitive to the interlayer distance. As shown in TABLE \[table-structure\], the optimized interlayer distances of the AA stacking structures are larger than those of the corresponding AB stacking structures, owing to the fact that, in the AB structures, the X atoms are not aligned along the vertical axis, so a shorter interlayer distance leads to a lower total energy. Since the M atoms in different layers have almost no interaction, a change of stacking type mainly affects the interlayer interactions of the X atoms.
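As a concrete illustration of this procedure, the sketch below scans a set of interlayer distances and selects the one that maximizes $W_{ad}$ of Eq. (\[energy\]); the energies and interface area are made-up placeholders standing in for VASP total energies, not values from this work.

```python
# Sketch of the UBER-style interlayer-distance scan: pick the d that
# maximizes W_ad = (E_mono1 + E_mono2 - E_hetero(d)) / A, Eq. (2).

def adhesion_energy(e_mono1, e_mono2, e_hetero, area):
    """W_ad in eV/Angstrom^2 for a bilayer with interface area `area` (A^2)."""
    return (e_mono1 + e_mono2 - e_hetero) / area

# Hypothetical single-point total energies (eV) for a d-scan, 5 to 8 Angstrom:
e_mos2, e_wse2, area = -22.10, -20.45, 9.15
scan = {5.0: -42.45, 5.5: -42.61, 6.0: -42.73, 6.5: -42.70, 7.0: -42.66,
        7.5: -42.62, 8.0: -42.59}

# W_ad is maximal where the unrelaxed heterostructure energy is lowest:
best_d = max(scan, key=lambda d: adhesion_energy(e_mos2, e_wse2, scan[d], area))
print(f"optimal d = {best_d} Angstrom")
```

The selected structure would then be relaxed without constraints, as described above.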
System (Anderson) Stacking type $a ( \AA ) $ $d_{1} ( \AA ) $ $d_{2} ( \AA ) $ Band type $E_{g}^{PBE}/E_{g}^{HSE}/E_{g}^{SOC}$ (eV) ---------------------------- --------------- ------------------------------ ---------------------- ------------------ ----------- -------------------------------------------- -- -- -- -- -- -- -- -- -- MoS$_{2}$-WSe$_{2}$ (II) AA 3.251 (3.26[@Lu2014b]) 6.919 4.896 Direct 0.46(0.57[@Lu2014b])/1.01/0.23 AB 3.256 6.270 3.580 Direct 0.57/1.12/0.34 MoS$_{2}$-WS$_{2}$ (II) AA 3.183 (3.19[@Lu2014b]) 6.758 (6.8[@Ji2017]) 4.826 Indirect 1.29(1.16[@Lu2014b])/1.93/1.22 AB 3.187 6.137 (6.3[@Ji2017]) 3.535 Indirect 1.08/1.70/1.06 WS$_{2}$-WSe$_{2}$ (II) AA 3.250 (3.204[@Terrones2013]) 6.852 4.846 Direct 0.77(1.007[@Terrones2013])/1.24/0.51 AB 3.253 6.232 3.547 Indirect 0.80/1.31/0.61 MoSe$_{2}$-WS$_{2}$ (II) AA 3.249 (3.210[@Terrones2013]) 6.913 4.893 Direct 1.23 (1.154[@Terrones2013])/1.34/0.85 AB 3.251 6.303 3.613 Indirect 0.86 /1.27/0.80 MoSe$_{2}$-WSe$_{2}$ (II) AA 3.320 (3.277[@Terrones2013]) 7.078 3.745 Direct 1.23 (1.330[@Terrones2013])/1.79/0.93 AB 3.307 6.485 3.680 Indirect 1.21/1.83/1.09 MoS$_{2}$-MoSe$_{2}$ (II) AA 3.250 (3.26[@Lu2014b]) 6.972 4.940 Direct 0.98(0.74[@Lu2014b])/1.10/0.56 AB 3.254 6.350 3.655 Direct 0.65/1.09/0.56 MoTe$_{2}$-MoS$_{2}$ (II) AA 3.328 7.267 5.058 – –/0.45/– AB 3.347 6.575 3.736 – –/0.47/– MoTe$_{2}$-MoSe$_{2}$ (II) AA 3.413 7.421 5.177 Indirect 0.49/0.95/0.19 AB 3.413 6.784 3.853 Indirect 0.51/0.95/0.21 MoTe$_{2}$-WS$_{2}$ (II) AA 3.347 7.170 4.984 – –/0.43/– AB 3.350 6.576 3.757 – –/0.42/– MoTe$_{2}$-WSe$_{2}$ (I) AA 3.425 7.354 5.136 Indirect 0.69/1.05/0.60 AB 3.423 6.725 3.811 Indirect 0.64/1.00/0.53 MoTe$_{2}$-WTe$_{2}$ (II) AA 3.538 7.646 5.348 Direct 0.95/1.44/0.67 AB 3.543 6.954 3.923 Indirect 0.93/1.46/0.74 WTe$_{2}$-MoS$_{2}$ (III) AA 3.354 7.204 5.018 – –/0.46/– AB 3.358 6.584 3.751 – –/0.37/– WTe$_{2}$-MoSe$_{2}$ (II) AA 3.423 7.358 5.128 Direct 0.33/0.85/0.10 AB 3.429 6.740 3.833 Direct 
0.35/0.84/0.11 WTe$_{2}$-WS$_{2}$ (III) AA 3.360 7.114 4.963 – –/0.41/– AB 3.365 6.516 3.717 – –/0.40/– WTe$_{2}$-WSe$_{2}$ (I) AA 3.422 7.288 5.092 Direct 0.51/0.93/0.24 AB 3.447 6.679 3.781 Direct 0.45/0.86/0.17 \[table-structure\] Electronic band structure of hetero-bilayer MX$_{2}$ ---------------------------------------------------- Previous studies on TMDs have revealed that monolayer MX$_{2}$ possesses a direct band gap, with the conduction band maximum (CBM) and valence band minimum (VBM) located at the K point[@Jiang2012; @Kang2013; @Mak2010; @Ding2011]. Owing to the lack of inversion symmetry and the strong SOC effect, the valence bands possess a significant spin-orbit splitting at the K valleys[@Kosmider2013]. The band alignment for MX$_{2}$ shows the following trends (see Fig. \[alignment\](b)): 1) For common-X systems, the band gaps of MoX$_{2}$ are larger than those of WX$_{2}$, and the CBM and VBM of WX$_{2}$ are higher than those of MoX$_{2}$; 2) For common-M systems, an increase of the atomic number of X results in a shallower anion $p$ orbital and thus a shift of the VBM to higher energies, finally leading to decreased band gaps[@Zeier2016]. To understand these two trends in band alignment, the atomic orbital composition of the states should be taken into consideration. Taking MoS$_{2}$ as an example, the CBM of MoS$_{2}$ is mainly composed of the $d_{z^{2}}$ orbital of Mo and the $p_{x}$ and $p_{y}$ orbitals of S, whereas the VBM mostly consists of the $d_{x^{2}-y^{2}}$ and $d_{xy}$ orbitals of Mo. ![image](Figure2.eps){width="0.9\linewidth"} For the hetero-bilayer MX$_2$ crystals constructed from two monolayer MX$_2$, the band structures can be understood by the so-called Anderson's rule, which provides the scheme for constructing the energy band diagram of a heterostructure consisting of two semiconductor materials[@Anderson1960].
According to Anderson's rule, the vacuum energy levels of the two constituent semiconductors on either side of the heterostructure should be aligned at the same energy[@Vol.2004], and there are three possible types of band-edge lineups: straddling, staggered and broken gap, as shown in Fig. \[alignment\](a). For a type I heterostructure, the conduction band maximum (CBM) and valence band minimum (VBM) mainly consist of the orbitals of semiconductor B, which possesses a smaller band gap than semiconductor A; thus, the band type of the heterostructure is consistent with that of the smaller-gap material. For a type II heterostructure, the VBM and the CBM around the Fermi level reside in two separate semiconductors, and the formed heterostructure still possesses a small direct or indirect band gap. For a type III heterostructure, the locations of the CBM and VBM are similar to those of a type II heterostructure, but no band gap exists, and the formed heterostructure is a semimetal. It should be noted that, for type II and type III heterostructures, since the CBM and VBM may be located on different semiconductors, the photon-generated excitons are spatially separated, which will suppress the recombination of electron-hole pairs and extend the exciton lifetimes compared with the corresponding individual semiconductors[@Kang2013; @Chiu2015; @Fang2014a; @Rivera2015; @Zhang2016; @Chiu2015a]. ![image](Figure3.eps){width="0.9\linewidth"} The band types and bandgaps of the vdW MX$_2$ heterostructures are calculated by the PBE and HSE06 methods, and the results are shown in TABLE \[table-structure\]. The direct band gap at the K point of monolayer MX$_2$ is transformed into one of three types of band gaps when a hetero-bilayer MX$_2$ crystal is formed, i.e., direct, indirect ($\Gamma$-K, M-K), or zero bandgap/overlapping bands, according to the calculated results shown in TABLE \[table-structure\] and the above-mentioned analysis based on Anderson's rule.
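The classification just described can be sketched programmatically; in the snippet below the band-edge energies (in eV, relative to a common vacuum level) are hypothetical placeholders, not the computed values of this work.

```python
# Sketch of Anderson's-rule classification: align both semiconductors to a
# common vacuum level and compare their band edges.

def anderson_type(cbm_a, vbm_a, cbm_b, vbm_b):
    """Classify a heterojunction as type I (straddling), II (staggered),
    or III (broken gap) from vacuum-aligned band edges (eV)."""
    # Relabel so that (cbm_a, vbm_a) belongs to the larger-gap material A.
    if (cbm_a - vbm_a) < (cbm_b - vbm_b):
        cbm_a, vbm_a, cbm_b, vbm_b = cbm_b, vbm_b, cbm_a, vbm_a
    if cbm_b <= cbm_a and vbm_b >= vbm_a:      # B's gap inside A's gap
        return "I"
    if min(cbm_a, cbm_b) > max(vbm_a, vbm_b):  # staggered, but a gap remains
        return "II"
    return "III"                               # edges overlap: broken gap

print(anderson_type(-3.7, -5.9, -4.0, -5.3))   # straddling lineup
print(anderson_type(-4.3, -6.2, -3.8, -5.4))   # staggered lineup
```

A broken-gap (type III) lineup is returned whenever the CBM of one material drops below the VBM of the other.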
The band-gap formation type of the vdW MX$_2$ heterostructures categorized according to Anderson's rule is also shown in TABLE \[table-structure\]; this classification of the band types is referred to as the Anderson band type hereafter. TABLE \[table-structure\] shows that the Anderson band types of the vdW MX$_2$ heterostructures are determined by the constituent monolayer MX$_2$ irrespective of the stacking manner, which is probably due to the fact that the VBM/CBM of the hetero-bilayer structure is attributed to the $d$/$p$-orbitals of the M/X atoms, and the weak vdW interactions do not significantly change the charge distribution of the constituent monolayer MX$_2$ of the hetero-bilayer structure. For simplicity, we first consider the Anderson band type I heterostructures, e.g. the WTe$_{2}$-WSe$_{2}$ and MoTe$_{2}$-WSe$_{2}$ hetero-bilayer structures whose band structures are shown in Fig. \[band\](a,b). Generally, as mentioned above, two monolayer MX$_2$ crystals with identical M atoms but different X atoms possess different CBM/VBM energy levels, and the crystal whose X atom has the larger atomic number has higher CBM and VBM energy levels. However, as shown in Fig. \[alignment\](b), the CBM energy level of WTe$_{2}$ is lower than that of WSe$_{2}$, although the atomic number of Te is larger than that of Se. Such a deviation can be understood by the fact that the bond length $d_{W-Te}$ of WTe$_{2}$ is the largest among the monolayer MX$_2$ crystals, which leads to a small overlap integral $V$ between the $d$ orbitals of the M atoms and the $p$ orbitals of the X atoms for the formation of the CBM, due to $V\propto1/d_{W-Te}^2$[@Peng2018; @Froyen1979], and thus counteracts the increase of the CBM energy level expected from the shallower $p$ orbitals of Te compared to Se[@Kang2013].
The smaller CBM energy level of WTe$_{2}$ ultimately results in the Anderson type-I alignment of band edges in the WTe$_{2}$-WSe$_{2}$ hetero-bilayer, which possesses a direct bandgap at the $K$ point for both the AA and AB stacking manners, as shown in Fig. \[band\](a). As shown in Fig. \[band\], the valence band at the $M$ point is attributed to the $p_x$ and $p_y$ orbitals of the X atoms, and the corresponding energy level for hetero-bilayer MX$_2$ crystals containing Te atoms is higher than for those containing Se or S atoms, since Te has the largest atomic number. Therefore, for hetero-bilayer MTe$_2$-MX$_2$, the valence band energies at the $M$ point increase significantly compared with hetero-bilayer MSe$_{2}$-MX$_2$ (X$\neq$Te) or MS$_{2}$-MX$_2$ (X$\neq$Te), which subsequently leads to the formation of an $M-K$ indirect band gap for the Anderson band type I heterostructure, e.g. the hetero-bilayer MoTe$_{2}$-WSe$_{2}$, as shown in Fig. \[band\](b). ![image](Figure4.eps){width="0.9\linewidth"} As shown in TABLE \[table-structure\], most of the hetero-bilayer MX$_2$ crystals are Anderson band type II heterostructures, e.g., the hetero-bilayers MoS$_{2}$-WSe$_{2}$ and MoSe$_{2}$-WSe$_{2}$. Fig. \[band\](c,d) shows the energy band structures of the AA and AB stacked MoS$_{2}$-WSe$_{2}$ hetero-bilayers, exhibiting direct bandgaps of 0.46 eV and 0.57 eV for the AA and AB stacking types, respectively, consistent with previous results[@Lu2014]. The CBM is located on the MoS$_{2}$ layer and the VBM on the WSe$_{2}$ layer, resulting in the formation of spatially separated electron-hole pairs. Experiments on the hetero-bilayer MoS$_{2}$-WSe$_{2}$ revealed dramatic quenching of the photoluminescence (PL) intensity[@Fang2014] and an extended exciton lifetime[@Chiu2015]. The valence band at the $\Gamma$ point can be attributed to the interlayer overlap integral of the $p_z$ orbitals of X atoms belonging to different monolayers, as shown in Fig. \[band\].
For the hetero-bilayer MX$_2$ considered here, the distance between X atoms belonging to different monolayers in the AB stacking hetero-bilayer, i.e. $d_2$ shown in Fig. \[POSCAR\](a,b), is smaller than in the corresponding AA stacking hetero-bilayer, as shown in TABLE \[table-structure\]; thus the energy level of the valence band at the $\Gamma$ point for the AB stacking hetero-bilayer is higher than that for the AA stacking hetero-bilayer, due to $V_{p_z-p_z}\propto1/d_2^2$. The increase of the energy level of the valence band at the $\Gamma$ point sometimes leads to the formation of a $\Gamma-K$ indirect band gap, e.g. in MoSe$_2$-WSe$_2$ as shown in Fig. \[band\](d). The extreme case of staggering is the formation of a broken gap, also called the Anderson band type III alignment, as shown in Fig. \[alignment\](a). For example, the CBMs of MoS$_{2}$ and WS$_{2}$ are much lower than those of the other monolayer MX$_2$ and WTe$_{2}$ possesses the highest VBM, as shown in Fig. \[alignment\](b); the band alignments in the hetero-bilayers WTe$_{2}$-MoS$_{2}$ and WTe$_{2}$-WS$_{2}$ can thus be approximately considered as the Anderson band type III alignment, as shown in Fig. \[band\](e,f). The bands overlap at the $K$ point, rendering these heterostructures metallic. The bandgaps of the hetero-bilayer MX$_2$ crystals based on the HSE and SOC calculations are also provided in TABLE \[table-structure\]. Including SOC decreases the band gaps, while the HSE calculations increase the band gaps by 0.4-0.6 eV compared to the PBE values. It should be noted that the metallic phases of the hetero-bilayer MX$_2$ crystals, i.e. the Anderson band type III heterostructures, e.g. the hetero-bilayer WTe$_{2}$-MoS$_{2}$ and WTe$_{2}$-WS$_{2}$ crystals as shown in Fig.
\[band\](e,f), are replaced by direct bandgap phases based on the more precise HSE calculations, which means that the hetero-bilayer MX$_2$ crystals considered here does not possess the Anderson band type III alignment. Mechanical properties and transport properties of hetero-bilayer MX$_{2}$ ------------------------------------------------------------------------- System (Anderson) Stacking type $Y (N/m)$ $v$ $m_{e}^{*}(m_{0})$ $m_{h}^{*}(m_{0})$ $D_l^{e}$ $D_l^{h}$ C ( N/m ) $\mu_e$(cm$^2$/(V$\cdot$s)) $\mu_h$ (cm$^2$/(V$\cdot$s)) ---------------------------- --------------- ----------- ------ -------------------- -------------------- ----------- ----------- ----------- ----------------------------- ------------------------------ -- -- MoS$_{2}$-WSe$_{2}$ (II) AA 209.95 0.29 0.47 0.46 3.03 2.88 118.58 896.07 873.17 AB 225.30 0.23 0.47 0.46 2.96 3.52 111.47 565.41 873.1 MoS$_{2}$-WS$_{2}$ (II) AA 241.46 0.25 0.46 1.70 6.01 5.70 127.81 256.46 18.04 AB 242.03 0.24 0.46 0.92 6.28 5.03 121.19 318.08 76.7 WS$_{2}$-WSe$_{2}$ (II) AA 206.89 0.25 0.28 0.46 3.33 3.27 114.74 1939.55 709.71 AB 218.60 0.20 0.28 0.85 5.65 4.88 118.01 895.83 75.47 MoSe$_{2}$-WS$_{2}$ (II) AA 272.60 0.31 0.28 0.71 3.10 3.25 119.6 2005.27 360.99 AB 263.53 0.30 0.28 0.97 5.28 4.61 112.98 940.06 63.53 MoSe$_{2}$-WSe$_{2}$ (II) AA 206.94 0.25 0.54 0.44 2.14 2.66 109.98 758.95 1871.24 AB 215.79 0.22 0.56 1.29 4.01 3.24 111.32 477.54 61.56 MoS$_{2}$-MoSe$_{2}$ (II) AA 232.78 0.26 0.42 0.71 2.87 2.78 125.83 1321.55 454.69 AB 230.26 0.27 0.42 0.71 3.07 4.50 114.86 758.03 359.04 MoTe$_{2}$-MoS$_{2}$ (II) AA 196.82 0.36 AB 196.87 0.34 MoTe$_{2}$-MoSe$_{2}$ (II) AA 184.77 0.31 0.46 1.37 4.40 3.74 113.18 532.75 45.79 AB 200.46 0.25 0.46 1.37 4.07 3.75 110.81 532.75 45.79 MoTe$_{2}$-WS$_{2}$ (II) AA 206.17 0.28 AB 195.86 0.31 MoTe$_{2}$-WSe$_{2}$ (I) AA 183.70 0.28 0.30 1.33 3.95 3.83 109.1 515.87 52.52 AB 194.71 0.24 0.30 1.25 4.41 4.14 114.79 1191.02 58.76 MoTe$_{2}$-WTe$_{2}$ (II) AA 136.33 0.39 0.57 0.42 
1.61 1.38 101.62 1023.61 55.76 AB 171.83 0.22 0.58 3.46 4.32 3.30 99.43 2315.94 3285.72 WTe$_{2}$-MoS$_{2}$ (III) AA 169.33 0.20 AB 189.09 0.28 WTe$_{2}$-MoSe$_{2}$ (II) AA 183.83 0.27 0.45 0.48 2.65 2.85 109.47 382.87 6.58 AB 196.41 0.22 0.45 0.48 2.70 2.85 102.26 912.5 987.31 WTe$_{2}$-WS$_{2}$ (III) AA 189.00 0.20 AB 233.27 0.29 WTe$_{2}$-WSe$_{2}$ (I) AA 168.36 0.33 0.30 0.46 2.95 2.97 113.4 912.5 987.31 AB 197.77 0.22 0.30 0.45 2.79 3.08 115.65 875.3 918.66 \[table-properties\] The MX$_2$ heterostructures under consideration here possess $C_{3v}$ symmetry, which means that the number of independent second-order elastic coefficients $c_{ij}$ is five, with $c_{11}=c_{22}$[@Mouhat2014]. The calculated elastic coefficients of all MX$_2$ heterostructures are shown in TABLE S2, and all the vdW MX$_2$ heterostructures are mechanically stable according to the Born criteria[@Peng2017a], $$\label{born} C_{11}-C_{12}>0, C_{11}+2C_{12}>0,C_{44}>0$$ The 2D Young's moduli of all MX$_2$ heterostructures, given by $Y^{2D}=\frac{c_{11}c_{22}-c_{12}^{2}}{c_{11}}$[@Andrew2012a], are listed in TABLE \[table-properties\]. The 2D Young's modulus of monolayer MX$_2$ crystals decreases from MS$_2$ to MSe$_2$ to MTe$_2$[@Zeng2015], owing to the fact that the strength of the $d_{xy,yz,zx}-p$ orbital coupling, which forms the M-X bonding, becomes weaker with an increase of the atomic number of the chalcogen[@Yu2017a]. The calculated 2D Young's moduli of the monolayer MX$_2$ crystals are shown in TABLE S1. The mechanical properties of the MX$_2$ heterostructures can be roughly considered as contributions from the constituent monolayer MX$_2$ crystals and from the interlayer bonding. The Young's moduli of the MTe$_{2}$-MX$_{2}$ heterostructures are lower than the others owing to the weakest $Y^{2D}$ of monolayer MTe$_{2}$ among the monolayer MX$_2$ crystals considered here.
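The Born criteria of Eq. (\[born\]) and the $Y^{2D}$ and $v^{2D}$ expressions can be checked with a few lines of code; the elastic coefficients below are placeholders, not the entries of TABLE S2.

```python
# Sketch: Born stability check and 2D Young's modulus / Poisson ratio for a
# C_3v sheet with c22 = c11 (coefficients in N/m are hypothetical).

def is_born_stable(c11, c12, c44):
    """Born criteria used in the text: C11-C12>0, C11+2C12>0, C44>0."""
    return c11 - c12 > 0 and c11 + 2 * c12 > 0 and c44 > 0

def young_2d(c11, c12, c22=None):
    """Y^2D = (c11*c22 - c12^2) / c11, with c22 = c11 for this symmetry."""
    c22 = c11 if c22 is None else c22
    return (c11 * c22 - c12**2) / c11

def poisson_2d(c12, c22):
    """v^2D = c12 / c22."""
    return c12 / c22

c11, c12, c44 = 130.0, 32.0, 49.0   # placeholder coefficients (N/m)
assert is_born_stable(c11, c12, c44)
print(f"Y2D = {young_2d(c11, c12):.1f} N/m, v = {poisson_2d(c12, c11):.2f}")
```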
Meanwhile, the Young's moduli of the MX$_2$ heterostructures are a little lower than the sum of those of the corresponding monolayer MX$_2$ crystals, which means that the contribution from the interlayer bonding to the total Young's modulus is negative. The Poisson's ratios, given by $v^{2D}=\frac{c_{12}}{c_{22}}$[@Andrew2012a], which describe the lateral deformation when applying uniaxial strain, are calculated and shown in TABLE \[table-properties\]. Generally, materials with a high Poisson's ratio possess good plasticity. The Poisson's ratios of the MX$_2$ heterostructures are numerically close to each other, except for WTe$_{2}$-MX$_{2}$, due to the lowest Poisson's ratio, 0.20, of the monolayer WTe$_{2}$ crystal among the monolayer MX$_2$ crystals (see TABLE S1). ![image](Figure6.eps){width="0.7\linewidth"} The calculated effective masses of electrons $m^*_e$ and holes $m^*_h$ for the vdW MX$_2$ heterostructures are shown in TABLE \[table-properties\]. The values of $m^*_e$ for the AA-stacking MX$_2$ heterostructures are close to those of the corresponding AB-stacking ones; however, the values of $m^*_h$ for the AA-stacking heterostructures deviate noticeably from those of the AB-stacking ones, e.g. for the MoS$_2$-WS$_2$ and MoTe$_2$-WTe$_2$ heterostructures, especially when the band types for the AA and AB stackings differ (direct vs indirect), as shown in TABLES \[table-structure\] and \[table-properties\]. Such behavior can be understood by the stable location of the CBM (electrons) at the $K$ point for all the MX$_2$ heterostructures, and the transition of the VBM (holes) from the $K$ point to the $M$ or $\Gamma$ point for MX$_2$ heterostructures with an indirect band gap. As mentioned above, the band structures of the MX$_2$ heterostructures can be roughly decomposed into those of the constituent monolayer MX$_2$ crystals, according to Anderson's rule, which also determines the effective masses of electrons and holes of the MX$_2$ heterostructures. Fig.
\[mass\] shows the effective masses of electrons and holes for MX$_2$ heterostructures and the corresponding constituent monolayer MX$_2$ crystals along all directions, taking the WTe$_{2}$-WSe$_{2}$ and MoS$_{2}$-WSe$_{2}$ hetero-bilayers as examples without loss of generality. The WTe$_{2}$-WSe$_{2}$ hetero-bilayer belongs to the Anderson band type I, and its CBM and VBM are attributed to those of the monolayer WTe$_{2}$ crystal. Fig. \[mass\](a,b) shows that the effective masses of electrons and holes for the WTe$_{2}$-WSe$_{2}$ hetero-bilayer are close to those of the monolayer WTe$_{2}$ crystal. However, for the MoS$_{2}$-WSe$_{2}$ hetero-bilayer (Anderson band type II), the CBM is attributed to that of the monolayer MoS$_{2}$ crystal and the VBM to that of the monolayer WSe$_{2}$ crystal; therefore, $m^*_e$ of the MoS$_{2}$-WSe$_{2}$ hetero-bilayer is similar to that of monolayer MoS$_{2}$ and $m^*_h$ is similar to that of monolayer WSe$_{2}$, as shown in Fig. \[mass\](c,d). According to Eq. (\[mobility1\]), the third factor determining the carrier mobilities $\mu$ is the deformation potential constant, $D_l^{e,h}$, which describes the scattering of electrons/holes by longitudinal acoustic phonons. The calculated $D_l^{e,h}$ for the MX$_2$ heterostructures and monolayer MX$_2$ crystals are shown in TABLE \[table-properties\] and TABLE S1, respectively. By comparison, it is found that the deformation potential constants of the MX$_2$ heterostructures are overall larger than those of the constituent monolayer MX$_2$, which means that the formation of the vdW MX$_2$ heterostructures increases the electron-acoustic phonon coupling, leading to an increase of the deformation potential constant $D_{l}$, especially for the MoS$_2$-WS$_2$ heterostructures.
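As a sketch of how one of the quantities entering Eq. (\[mobility1\]) is extracted, the snippet below fits a parabola to band energies $E(k)$ near a band edge to obtain $m^*=\hbar^2(\partial^2E/\partial k^2)^{-1}$; the $E(k)$ samples here are generated analytically rather than taken from a DFT run.

```python
import numpy as np

HBAR2_OVER_M0 = 7.6200   # hbar^2/m0 in eV*Angstrom^2 (approximate)

def effective_mass(k, E):
    """Parabolic-fit effective mass in units of m0.
    k in 1/Angstrom, E in eV, sampled near a band edge."""
    curvature = 2.0 * np.polyfit(k, E, 2)[0]   # d^2E/dk^2 in eV*Angstrom^2
    return HBAR2_OVER_M0 / curvature

# Synthetic parabolic band with m* = 0.47 m0, standing in for DFT eigenvalues:
k = np.linspace(-0.05, 0.05, 11)               # 1/Angstrom, around the edge
E = k**2 * HBAR2_OVER_M0 / (2.0 * 0.47)        # eV
print(f"m* = {effective_mass(k, E):.2f} m0")
```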
Since the CBM and VBM of the MX$_2$ heterostructures can be attributed to the respective band structures of the constituent monolayer MX$_2$, according to Anderson's rule, the shift of the VBM from the $K$ point to the $\Gamma/M$ point results in a dramatic change of the deformation potential constants and effective hole masses for MX$_2$ heterostructures with indirect bandgaps, e.g. MoSe$_2$-WSe$_2$. ![image](Figure5.eps){width="0.8\linewidth"} To quantify the contributions of the three factors, i.e. the effective masses $m_{e,h}^{*}$, the deformation potential constants $D_l^{e,h}$ and the elastic modulus $C$, to the carrier mobilities $\mu$ relative to the constituent monolayer MX$_2$ crystals, we plot the values of the three factors for the constituent monolayer crystals and the hetero-bilayer structures in Fig. S4. It is clear that the elastic modulus of the hetero-bilayer structures is nearly twice that of the constituent monolayer MX$_2$ crystals; the deformation potential constants of the hetero-bilayer structures are overall larger than or close to those of the constituent monolayer MX$_2$ crystals, except for MoTe$_2$-WTe$_2$; and the effective masses of the hetero-bilayer structures, mostly determined by the constituent monolayer crystals, are close to those of the constituent monolayer crystals, except for some hetero-bilayer structures with the VBM shifted from K to $\Gamma/M$, e.g. MoTe$_2$-WTe$_2$. Finally, the carrier mobilities of electrons and holes along the armchair and zigzag directions for the MX$_2$ hetero-bilayers are calculated according to Eq. (\[mobility1\]), as shown in Fig. \[mobility\]. The electron mobilities of the hetero-bilayer structures are overall larger than those of the constituent monolayer MX$_2$ crystals, and the same holds for the hole mobilities of hetero-bilayer structures with the VBM located at the K point. However, the hole mobilities of hetero-bilayer structures with the VBM located at the $\Gamma/M$ point are smaller than those of the constituent monolayer MX$_2$ crystals.
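As a unit-bookkeeping check on Eq. (\[mobility1\]), the sketch below evaluates $\mu$ in SI units and converts the result to cm$^2$/(V$\cdot$s); the input values are illustrative placeholders of the same order as the tabulated quantities, not results of this work.

```python
# Illustrative numerical evaluation of Eq. (1): mu = 2 e hbar^3 C / (3 kB T m*^2 Dl^2).
E_CHARGE = 1.602176634e-19   # elementary charge, C
HBAR = 1.054571817e-34       # reduced Planck constant, J*s
K_B = 1.380649e-23           # Boltzmann constant, J/K
M0 = 9.1093837015e-31        # free-electron mass, kg
EV = 1.602176634e-19         # eV -> J

def mobility_2d(C, m_eff, D_l, T=300.0):
    """Carrier mobility in cm^2/(V s).
    C in N/m, m_eff in units of m0, D_l in eV."""
    m = m_eff * M0
    D = D_l * EV
    mu_si = 2.0 * E_CHARGE * HBAR**3 * C / (3.0 * K_B * T * m**2 * D**2)
    return mu_si * 1e4       # m^2/(V s) -> cm^2/(V s)

# Placeholder inputs (order of magnitude of typical 2D TMD values):
mu = mobility_2d(C=118.58, m_eff=0.47, D_l=3.03)
print(f"mu ~ {mu:.0f} cm^2/(V s)")
```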
The AA stacked MoTe$_2$-MoSe$_2$ heterostructure possesses the highest electron mobility along the zigzag direction, i.e. 3658 cm$^2$/(V$\cdot$s), and the AB stacked MoTe$_2$-WTe$_2$ heterostructure possesses the highest hole mobility along the armchair direction, i.e. 3285 cm$^2$/(V$\cdot$s). Optical properties of hetero-bilayer MX$_{2}$ --------------------------------------------- The optical properties of the vdW MX$_2$ heterostructures are described by the complex dielectric function, $i.e.$ $\epsilon(\omega)=\epsilon_1(\omega)+i\epsilon_2(\omega)$. The imaginary part of the dielectric tensor, $\epsilon_2(\omega)$, is determined by a summation over transitions to empty band states as follows[@Gajdos2006], $$\epsilon_2(\omega) = \frac{2\pi e^2}{\Omega \epsilon_0} \sum_{k,v,c} \delta(E_k^c-E_k^v-\hbar \omega) \Bigg\vert\langle \Psi_k^c \big\vert \textbf{u}\cdot\textbf{r} \big\vert \Psi_k^v \rangle \Bigg\vert ^2, \label{eps2}$$ where $\Omega$ is the crystal volume, $\epsilon_0$ is the vacuum dielectric constant, $\hbar\omega$ represents the photon energy, $v$ and $c$ denote the valence and conduction bands respectively, **u** is the polarization vector of the incident electric field, $\textbf{u}\cdot\textbf{r}$ is the projection of the position (dipole) operator along **u**, and $\Psi_k$ is the wave function at the $k$ point. The real part of the dielectric tensor, $\epsilon_1(\omega)$, is obtained by the well-known Kramers-Kronig relation[@dresselhaus1999solid], $$\epsilon_1(\omega)=1+\frac{2}{\pi}P\int_0^{\infty} \frac{\epsilon_2(\omega ')\omega '}{\omega '^2-\omega^2+i\eta}d\omega ',$$ where $P$ denotes the principal value.
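The principal-value integral above can be evaluated numerically; the sketch below recovers $\epsilon_1(\omega)$ from a toy Lorentz-oscillator $\epsilon_2(\omega)$ (a model spectrum, not a computed one) and, anticipating the absorption formula given next, also evaluates $\alpha(\omega)$.

```python
import numpy as np

def kk_eps1(omega, eps2):
    """eps1 from eps2 via the Kramers-Kronig relation; the principal value
    is approximated by dropping the singular grid point from a trapezoidal
    integration (adequate for a smooth, well-resolved eps2)."""
    eps1 = np.ones_like(omega)
    for i, w in enumerate(omega):
        m = np.arange(omega.size) != i
        f = eps2[m] * omega[m] / (omega[m]**2 - w**2)
        eps1[i] += (2.0 / np.pi) * np.sum((f[:-1] + f[1:]) * np.diff(omega[m])) / 2.0
    return eps1

def absorption(omega_eV, eps1, eps2):
    """alpha(w) in 1/m from the dielectric function; omega_eV in eV."""
    w = omega_eV * 1.602176634e-19 / 1.054571817e-34          # rad/s
    c = 2.99792458e8
    return np.sqrt(2.0) * w / c * np.sqrt(np.sqrt(eps1**2 + eps2**2) - eps1)

w = np.linspace(0.01, 20.0, 2000)                             # photon energy (eV)
eps2 = 8.0 * 0.3 * w / ((w**2 - 9.0)**2 + (0.3 * w)**2)       # toy Lorentz oscillator
eps1 = kk_eps1(w, eps2)
alpha = absorption(w, eps1, eps2)
print(f"eps1(0) ~ {eps1[0]:.2f}")
```

For the Lorentz oscillator the static limit should approach $1+f/\omega_0^2\approx1.89$, which the numerical transform reproduces.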
Based on the complex dielectric function, the absorption coefficient $\alpha(\omega)$ is given by[@Saha2000; @Luo2015] $$\alpha(\omega)=\frac{\sqrt{2}\omega}{c} \Big\lbrace \big[\epsilon_1^2(\omega)+\epsilon_2^2(\omega)\big]^{1/2}-\epsilon_1(\omega) \Big\rbrace ^{\frac{1}{2}},$$ ![image](Figure7.eps){width="1\linewidth"} \[optics\] In 2D semiconductor materials, the band gap obtained with HSE06 is usually close to the real optical band gap, because the neglect of excitonic effects compensates for the underestimation of the quasiparticle band gap[@yang2016two]. Thus, we only performed HSE06 calculations to obtain the optical properties of the hetero-bilayer MX$_2$ under consideration here, all of which are semiconductors with a finite band gap, as shown in TABLE \[table-structure\]. All the optical constants are calculated for incident radiation with the electric field vector **E** polarized along the $a$ and $b$ directions[@Xu2017b] shown in Fig. \[POSCAR\](c). Due to the C$_3$ symmetry of the hexagonal structure of the hetero-bilayer MX$_{2}$, the dielectric function $\epsilon(\omega)$ is identical along the $a$ and $b$ directions. The $\epsilon(\omega)$ results for the AA and AB stacking types are also close to each other, as shown in Fig. \[optics\](a,b) and Fig. S4, irrespective of the corresponding Anderson band type. The similarity of the $\epsilon(\omega)$ results between the AA and AB stacking hetero-bilayer MX$_{2}$ can be understood by the fact that the band structure of the hetero-bilayer MX$_2$ can be roughly decomposed into the respective band structures of the constituent monolayer MX$_2$ according to Anderson's rule; thus the contribution to the total optical response, i.e. $\epsilon_2(\omega)$, from the absorption of an incident photon $\hbar\omega$ and the accompanying transition from $\Psi_k^v$ to $\Psi_k^c$ can be traced back to the behavior of electrons located within the constituent monolayer MX$_2$.
Therefore, the $\epsilon_2(\omega)$ results for the AA and AB stacked hetero-bilayer MX$_2$ are expected to be similar, since they contain identical constituent monolayer MX$_2$, according to Eq. \[eps2\]. The optical properties of the hetero-bilayer MX$_2$, e.g. WTe$_{2}$-WSe$_{2}$, MoS$_{2}$-WSe$_{2}$ and WTe$_{2}$-MoS$_{2}$, are shown in Fig. \[optics\]. The main absorption peaks of these three hetero-bilayer MX$_2$ lie in the range of 3.0 to 5.0 eV, i.e. the ultraviolet region, with refractive indices ranging from 2.80 to 4.27 in this region. Conclusion ========== In this work, we have investigated the structural, electronic, mechanical, transport and optical properties of the vdW MX$_2$ heterostructures using first-principles calculations. The AA and AB stacked hetero-bilayer MX$_2$ exhibit three types of band alignment according to Anderson’s rule, with a wide band gap range between 0 and 2 eV. The main differences between the AA and AB stacked hetero-bilayer MX$_2$ lie in the band structures and mechanical properties arising from the interlayer coupling, such as the indirect $\Gamma-K$ band gap. The band structure of the MTe$_2$-MX$_2$ possesses a higher valence band at the $M$ point due to the high energy of the Te $5p_{x,y}$ orbitals. The type II band alignment of the vdW hetero-bilayer MX$_2$ makes interlayer transitions possible, leading to spatially separated excitons. The transport properties of the vdW MX$_2$ heterostructures are consistent with the symmetry of the geometric structures. It should be noted that the carrier mobilities of the hetero-bilayer MX$_2$ are often higher than those of monolayer MX$_2$, attributed to the higher elastic moduli of the hetero-bilayer MX$_2$, while the hetero-bilayer MX$_2$ with indirect band gaps possess much lower hole mobilities due to the increased effective masses and deformation potential constants.
Furthermore, the calculated optical properties show strong optical absorption for the vdW MX$_2$ heterostructures, enabling novel applications in optoelectronics from the visible to the ultraviolet region, such as photodetectors, light-emitting diodes, and photovoltaics. Acknowledgement {#acknowledgement .unnumbered} =============== This work is supported by the National Natural Science Foundation of China under Grants No. 11374063 and 11404348, and the National Basic Research Program of China (973 Program) under Grant No. 2013CBA01505. Work at Ames Laboratory is partially supported by the U.S. Department of Energy, Office of Basic Energy Science, Division of Materials Science and Engineering (Ames Laboratory is operated for the U.S. Department of Energy by Iowa State University under Contract No. DE-AC02-07CH11358). The European Research Council under ERC Advanced Grant No. 320081 (PHOTOMETA) supports work at FORTH. References {#reference .unnumbered} ========= [10]{} K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnelson, I. V. Grigorieva, S. V. Dubonos, and A. A. Firsov. Two-dimensional gas of massless dirac fermions in graphene. , 438:197–200, 2005. Yuanbo Zhang, Yan-Wen Tan, Horst L. Stormer, and Philip Kim. Experimental observation of the quantum hall effect and berry’s phase in graphene. , 438, 2005. C. R. Dean, A. F. Young, I. Meric, C. Lee, L. Wang, S. Sorgenfrei, K. Watanabe, T. Taniguchi, P. Kim, K. L. Shepard, and J. Hone. Boron nitride substrates for high-quality graphene electronics. , 5(10):722–726, aug 2010. Matthew Yankowitz, Jiamin Xue, Daniel Cormode, Javier D. Sanchez-Yamagishi, K. Watanabe, T. Taniguchi, Pablo Jarillo-Herrero, Philippe Jacquod, and Brian J. LeRoy. Emergence of superlattice dirac points in graphene on hexagonal boron nitride. , 8(5):382–386, mar 2012. B. Radisavljevic, A. Radenovic, J. Brivio, V. Giacometti, and A. Kis. Single-layer mos2 transistors. , 6:147–150, 2011.
Andrea Splendiani, Liang Sun, Yuanbo Zhang, Tianshu Li, Jonghwan Kim, Chi-Yung Chim, Giulia Galli, and Feng Wang. Emerging photoluminescence in monolayer mos$_2$. , 10(4):1271–1275, 2010. Du Xiang, Cheng Han, Jing Wu, Shu Zhong, Yiyang Liu, Jiadan Lin, Xue-Ao Zhang, Wen Ping Hu, Barbaros Özyilmaz, A. H. Castro Neto, Andrew Thye Shen Wee, and Wei Chen. Surface transfer doping induced effective modulation on ambipolar characteristics of few-layer black phosphorus. , 6(1), mar 2015. Likai Li, Yijun Yu, Guo Jun Ye, Qingqin Ge, Xuedong Ou, Hua Wu, Donglai Feng, Xian Hui Chen, and Yuanbo Zhang. Black phosphorus field-effect transistors. , 9(5):372–377, 2014. Vy Tran, Ryan Soklaski, Yufeng Liang, and Li Yang. Layer-controlled band gap and anisotropic excitons in few-layer black phosphorus. , 89(23), jun 2014. Marco Bernardi, Maurizia Palummo, and Jeffrey C. Grossman. Extraordinary sunlight absorption and one nanometer thick photovoltaics using two-dimensional monolayer materials. , 13(8):3664–3670, jul 2013. D. Hennig and C. Mulhern. Collective transport of coupled particles. , 85(1), jan 2012. Wenxu Zhang, Zhishuo Huang, Wanli Zhang, and Yanrong Li. Two-dimensional semiconductors with possible high room temperature mobility. , 7(12):1731–1737, sep 2014. Ting Cao, Gang Wang, Wenpeng Han, Huiqi Ye, Chuanrui Zhu, Junren Shi, Qian Niu, Pingheng Tan, Enge Wang, Baoli Liu, and Ji Feng. Valley-selective circular dichroism of monolayer molybdenum disulphide. , 3(1), jan 2012. Hualing Zeng, Junfeng Dai, Wang Yao, Di Xiao, and Xiaodong Cui. Valley polarization in mos2 monolayers by optical pumping. , 7(8):490–493, August 2012. Filip A. Rasmussen and Kristian S. Thygesen. Computational 2d materials database: Electronic structure of transition-metal dichalcogenides and oxides. , 119(23):13169–13183, jun 2015. Hiram J. Conley, Bin Wang, Jed I. Ziegler, Richard F. Haglund, Sokrates T. Pantelides, and Kirill I. Bolotin. Bandgap engineering of strained monolayer and bilayer [MoS]{}2.
, 13(8):3626–3630, jul 2013. Kin Fai Mak, Changgu Lee, James Hone, Jie Shan, and Tony F. Heinz. Atomically thin ${\mathrm{mos}}_{2}$: A new direct-gap semiconductor. , 105:136805, Sep 2010. Thomas G. Pedersen, Christian Flindt, Jesper Pedersen, Niels Asger Mortensen, Antti-Pekka Jauho, and Kjeld Pedersen. Graphene antidot lattices: Designed defects and spin qubits. , 100(13), apr 2008. Qihang Liu, Linze Li, Yafei Li, Zhengxiang Gao, Zhongfang Chen, and Jing Lu. Tuning electronic structure of bilayer [MoS]{}2 by vertical electric field: A first-principles investigation. , 116(40):21556–21562, sep 2012. K. S. Novoselov, A. Mishchenko, A. Carvalho, and A. H. Castro Neto. 2d materials and van der waals heterostructures. , 353(6298):aac9439, jul 2016. S. J. Haigh, A. Gholinia, R. Jalil, S. Romani, L. Britnell, D. C. Elias, K. S. Novoselov, L. A. Ponomarenko, A. K. Geim, and R. Gorbachev. Cross-sectional imaging of individual layers and buried interfaces of graphene-based heterostructures and superlattices. , 11(9):764–767, jul 2012. Wei-Ting Hsu, Zi-Ang Zhao, Lain-Jong Li, Chang-Hsiao Chen, Ming-Hui Chiu, Pi-Shan Chang, Yi-Chia Chou, and Wen-Hao Chang. Second harmonic generation from artificially stacked transition metal dichalcogenide twisted bilayers. , 8(3):2951–2958, feb 2014. Wenjing Zhang, Qixing Wang, Yu Chen, Zhuo Wang, and Andrew T S Wee. Van der waals stacked 2d layered materials for optoelectronics. , 3(2):022001, apr 2016. Humberto Terrones, Florentino L[ó]{}pez-Ur[í]{}as, and Mauricio Terrones. Novel hetero-layered materials with tunable direct band gaps by sandwiching different metal disulfides and diselenides. , 3(1), mar 2013. Pasqual Rivera, John R. Schaibley, Aaron M. Jones, Jason S. Ross, Sanfeng Wu, Grant Aivazian, Philip Klement, Kyle Seyler, Genevieve Clark, Nirmal J. Ghimire, Jiaqiang Yan, D. G. Mandrus, Wang Yao, and Xiaodong Xu. Observation of long-lived interlayer excitons in monolayer [MoSe]{}2[WSe]{}2 heterostructures. , 6:6242, feb 2015. 
Ming-Hui Chiu, Chendong Zhang, Hung-Wei Shiu, Chih-Piao Chuu, Chang-Hsiao Chen, Chih-Yuan S. Chang, Chia-Hao Chen, Mei-Yin Chou, Chih-Kang Shih, and Lain-Jong Li. Determination of band alignment in the single-layer [MoS]{}2/[WSe]{}2 heterojunction. , 6:7666, jul 2015. H. Fang, C. Battaglia, C. Carraro, S. Nemsak, B. Ozdol, J. S. Kang, H. A. Bechtel, S. B. Desai, F. Kronast, A. A. Unal, G. Conti, C. Conlon, G. K. Palsson, M. C. Martin, A. M. Minor, C. S. Fadley, E. Yablonovitch, R. Maboudian, and A. Javey. Strong interlayer coupling in van der waals heterostructures built from single-layer chalcogenides. , 111(17):6198–6202, apr 2014. Xiaoping Hong, Jonghwan Kim, Su-Fei Shi, Yu Zhang, Chenhao Jin, Yinghui Sun, Sefaattin Tongay, Junqiao Wu, Yanfeng Zhang, and Feng Wang. Ultrafast charge transfer in atomically thin [MoS]{}2/[WS]{}2 heterostructures. , 9(9):682–686, aug 2014. Hoseok Heo, Ji Ho Sung, Soonyoung Cha, Bo-Gyu Jang, Joo-Youn Kim, Gangtae Jin, Donghun Lee, Ji-Hoon Ahn, Myoung-Jae Lee, Ji Hoon Shim, Hyunyong Choi, and Moon-Ho Jo. Interlayer orientation-dependent light absorption and emission in monolayer semiconductor stacks. , 6(1), jun 2015. G. Kresse and J. Furthmüller. Efficient iterative schemes for *ab initio* total-energy calculations using a plane-wave basis set. , 54:11169–11186, 1996. John P. Perdew, Kieron Burke, and Matthias Ernzerhof. Generalized gradient approximation made simple. , 77:3865–3868, 1996. Stefan Grimme. Semiempirical gga-type density functional constructed with a long-range dispersion correction. , 27(15):1787–1799, 2006. Jochen Heyd, Gustavo E. Scuseria, and Matthias Ernzerhof. Hybrid functionals based on a screened coulomb potential. , 118(18):8207–8215, 2003. J. Heyd, G. E. Scuseria, and M. Ernzerhof. Erratum: “Hybrid functionals based on a screened coulomb potential” \[J. Chem. Phys. 118, 8207 (2003)\]. , 124:219906, 2006. Ling Tang, MengQiu Long, Dong Wang, and ZhiGang Shuai.
The role of acoustic phonon scattering in charge transport in organic semiconductors: a first-principles deformation-potential study. , 52(10):1646–1652, 2009. Y. Cai, G. Zhang, and Y.W Zhang. Polarity-reversed robust carrier mobility in monolayer mos$_2$ nanoribbons. , 136:6269–6275, 2014. Mengqiu Long, Ling Tang, Dong Wang, Yuliang Li, and Zhigang Shuai. Electronic structure and carrier mobility in graphdiyne sheet and nanoribbons: Theoretical predictions. , 5(4):2593–2600, 2011. J. Chen, J. Xi, D. Wang, and Z Shuai. Carrier mobility in graphyne should be even larger than that in graphene: A theoretical prediction. , 4:1443–1448, 2013. Yanli Wang and Yi Ding. Electronic structure and carrier mobilities of arsenene and antimonene nanoribbons: a first-principles study. , 10:254, 2015. J. Bardeen and W. Shockley. Deformation potentials and mobilities in non-polar crystals. , 80:72–80, 1950. W. Walukiewicz, H. E. Ruda, J. Lagowski, and H. C. Gatos. Electron mobility in modulation-doped heterostructures. , 30(8):4571–4582, oct 1984. Shin-ichi Takagi, Judy L. Hoyt, Jeffrey J. Welser, and James F. Gibbons. Comparative study of phonon-limited mobility of two-dimensional electrons in strained and unstrained si metal-oxide-semiconductor field-effect transistors. , 80(3):1567–1577, aug 1996. J.A. Wilson and A.D. Yoffe. The transition metal dichalcogenides discussion and interpretation of the observed optical, electrical and structural properties. , 18(73):193–335, may 1969. Ganesh R. Bhimanapati, Zhong Lin, Vincent Meunier, Yeonwoong Jung, Judy Cha, Saptarshi Das, Di Xiao, Youngwoo Son, Michael S. Strano, Valentino R. Cooper, Liangbo Liang, Steven G. Louie, Emilie Ringe, Wu Zhou, Steve S. Kim, Rajesh R. Naik, Bobby G. Sumpter, Humberto Terrones, Fengnian Xia, Yeliang Wang, Jun Zhu, Deji Akinwande, Nasim Alem, Jon A. Schuller, Raymond E. Schaak, Mauricio Terrones, and Joshua A. Robinson. . , 9(12):11509–11539, 2015. G. Burns and J. L. Birman. . Academic Press, 1977. J. H.
Rose, John Ferrante, and John R. Smith. Universal binding energy curves for metals and bimetallic interfaces. , 47(9):675–678, aug 1981. Jun Zhao, Yanle Li, and Jing Ma. Quantum spin hall insulators in functionalized arsenene (asx[,]{} x = f[,]{} oh and ch3) monolayers with pronounced light absorption. , 8:9657–9666, 2016. W.J. Schutte, J.L. De Boer, and F. Jellinek. Crystal structures of tungsten disulfide and diselenide. , 70(2):207–209, oct 1987. R. Coehoorn, C. Haas, J. Dijkstra, C. J. F. Flipse, R. A. de Groot, and A. Wold. Electronic structure of ${\mathrm{mose}}_{2}$, ${\mathrm{mos}}_{2}$, and ${\mathrm{wse}}_{2}$. i. band-structure calculations and photoelectron spectroscopy. , 35:6195–6202, Apr 1987. K. D. Bronsema, J. L. De Boer, and F. Jellinek. On the structure of molybdenum diselenide and disulfide. , 540(9-10):15–17, sep 1986. Ning Lu, Hongyan Guo, Lei Li, Jun Dai, Lu Wang, Wai-Ning Mei, Xiaojun Wu, and Xiao Cheng Zeng. [MX]{}2/[MX]{}2 heterobilayers: bandgap engineering via tensile strain or external electrical field. , 6(5):2879–2886, 2014. Ziheng Ji, Hao Hong, Jin Zhang, Qi Zhang, Wei Huang, Ting Cao, Ruixi Qiao, Can Liu, Jing Liang, Chuanhong Jin, Liying Jiao, Kebin Shi, Sheng Meng, and Kaihui Liu. Robust stacking-independent ultrafast charge transfer in [MoS]{}2/[WS]{}2 bilayers. , nov 2017. Hong Jiang. Electronic band structures of molybdenum and tungsten dichalcogenides by the [GW]{} approach. , 116(14):7664–7671, mar 2012. Jun Kang, Sefaattin Tongay, Jian Zhou, Jingbo Li, and Junqiao Wu. Band offsets and heterostructures of two-dimensional semiconductors. , 102(1):012111, jan 2013. Yi Ding, Yanli Wang, Jun Ni, Lin Shi, Siqi Shi, and Weihua Tang. First principles study of structural, vibrational and electronic properties of graphene-like [MX]{}2 (m=mo, nb, w, ta;x=s, se, te) monolayers. , 406(11):2254–2260, may 2011. K. Ko[ś]{}mider and J. Fern[á]{}ndez-Rossier. Electronic properties of the [MoS]{}2-[WS]{}2 heterojunction. , 87(7), feb 2013.
Wolfgang G. Zeier, Alex Zevalkink, Zachary M. Gibbs, Geoffroy Hautier, Mercouri G. Kanatzidis, and G. Jeffrey Snyder. Thinking like a chemist: Intuition in thermoelectric materials. , 55(24):6826–6841, 2016. R. L. Anderson. Germanium-gallium arsenide heterojunctions \[letter to the editor\]. , 4(3):283–287, jul 1960. N Vol. What is what in the nanoworld: A handbook on nanoscience and nanotechnology. , 7(12):49, dec 2004. H. Fang, C. Battaglia, C. Carraro, S. Nemsak, B. Ozdol, J. S. Kang, H. A. Bechtel, S. B. Desai, F. Kronast, A. A. Unal, G. Conti, C. Conlon, G. K. Palsson, M. C. Martin, A. M. Minor, C. S. Fadley, E. Yablonovitch, R. Maboudian, and A. Javey. Strong interlayer coupling in van der waals heterostructures built from single-layer chalcogenides. , 111(17):6198–6202, apr 2014. Shengli Zhang, Meiqiu Xie, Fengyu Li, Zhong Yan, Yafei Li, Erjun Kan, Wei Liu, Zhongfang Chen, and Haibo Zeng. Semiconducting group 15 monolayers: A broad range of band gaps and high carrier mobilities. , 55(5):1666–1669, 2016. Ming-Hui Chiu, Chendong Zhang, Hung-Wei Shiu, Chih-Piao Chuu, Chang-Hsiao Chen, Chih-Yuan S. Chang, Chia-Hao Chen, Mei-Yin Chou, Chih-Kang Shih, and Lain-Jong Li. Determination of band alignment in the single-layer [MoS]{}2/[WSe]{}2 heterojunction. , 6:7666, jul 2015. Bo Peng, Hao Zhang, Hezhu Shao, Ke Xu, Gang Ni, Jing Li, Heyuan Zhu, and Costas M. Soukoulis. Chemical intuition for high thermoelectric performance in monolayer black phosphorus, alpha-arsenene and aw-antimonene. , 6(5):2018–2033, 2018. Sverre Froyen and Walter A. Harrison. Elementary prediction of linear combination of atomic orbitals matrix elements. , 20:2420–2422, Sep 1979. Ning Lu, Hongyan Guo, Lei Li, Jun Dai, Lu Wang, Wai-Ning Mei, Xiaojun Wu, and Xiao Cheng Zeng. [MX]{}2/[MX]{}2 heterobilayers: bandgap engineering via tensile strain or external electrical field. , 6(5):2879–2886, 2014. F[é]{}lix Mouhat and Fran[ç]{}ois-Xavier Coudert.
Necessary and sufficient elastic stability conditions in various crystal systems. , 90(22), dec 2014. Bo Peng, Hao Zhang, Hezhu Shao, Zeyu Ning, Yuanfeng Xu, Gang Ni, Hongliang Lu, David Wei Zhang, and Heyuan Zhu. Stability and strength of atomically thin borophene from first principles calculations. , 5(6):399–407, November 2017. R. C. Andrew, R. E. Mapasha, A. M. Ukpong, and N. Chetty. Mechanical properties of graphene and boronitrene. , 85(12), mar 2012. Fan Zeng, Wei-Bing Zhang, and Bi-Yu Tang. Electronic structures and elastic properties of monolayer and bilayer transition metal dichalcogenides [MX]{}2 (m = mo, w; x = o, s, se, te): A comparative first-principles study. , 24(9):097103, sep 2015. Liping Yu, Qimin Yan, and Adrienn Ruzsinszky. Negative poisson’s ratio in 1t-type crystalline two-dimensional transition metal dichalcogenides. , 8:15224, may 2017. M. Gajdoš, K. Hummer, G. Kresse, J. Furthmüller, and F. Bechstedt. Linear optical properties in the projector-augmented wave methodology. , 73:045112, 2006. MS Dresselhaus. . Citeseer, 1999. Sonali Saha, T. P. Sinha, and Abhijit Mookerjee. Electronic structure, chemical bonding, and optical properties of paraelectric ${\mathrm{batio}}_{3}$. , 62:8828–8834, 2000. Bingcheng Luo, Xiaohui Wang, Enke Tian, Guowu Li, and Longtu Li. Electronic structure, optical and dielectric properties of batio$_3$/catio$_3$/srtio$_3$ ferroelectric superlattices from first-principles calculations. , 3:8625–8633, 2015. Ji-Hui Yang, Yueyu Zhang, Wan-Jian Yin, XG Gong, Boris I Yakobson, and Su-Huai Wei. Two-dimensional sis layers with promising electronic and optoelectronic properties: theoretical prediction. , 16(2):1110–1117, 2016. Yuanfeng Xu, Hao Zhang, Hezhu Shao, Gang Ni, Jing Li, Hongliang Lu, Rongjun Zhang, Bo Peng, Yongyuan Zhu, Heyuan Zhu, and Costas M. Soukoulis. First-principles study on the electronic, optical, and transport properties of monolayer alpha - and beta -[GeSe]{}. , 96(24), dec 2017.
--- abstract: | We validate the performance and accuracy of the current SEGUE (Sloan Extension for Galactic Understanding and Exploration) Stellar Parameter Pipeline (SSPP), which determines stellar atmospheric parameters (effective temperature, surface gravity, and metallicity) by comparing derived overall metallicities and radial velocities from selected likely members of three globular clusters (M 13, M 15, and M 2) and two open clusters (NGC 2420 and M 67) to the literature values. Spectroscopic and photometric data obtained during the course of the original Sloan Digital Sky Survey (SDSS-I) and its first extension (SDSS-II/SEGUE) are used to determine stellar radial velocities and atmospheric parameter estimates for stars in these clusters. Based on the scatter in the metallicities derived for the members of each cluster, we quantify the typical uncertainty of the SSPP values, $\sigma (\rm [Fe/H])$ = 0.13 dex for stars in the range of 4500 K $\le T_{\rm eff} \le 7500$ K and $2.0 \le \log g \le 5.0$, at least over the metallicity interval spanned by the clusters studied ($-2.3 \le {\rm [Fe/H]} < 0 $). The surface gravities and effective temperatures derived by the SSPP are also compared with those estimated from the comparison of the color-magnitude diagrams with stellar evolution models; we find satisfactory agreement. At present, the SSPP underestimates \[Fe/H\] for near-solar-metallicity stars, represented by members of M 67 in this study, by $\sim$ 0.3 dex. author: - 'Young Sun Lee, Timothy C. Beers, Thirupathi Sivarani' - 'Jennifer A. Johnson, Deokkeun An' - Ronald Wilhelm - 'Carlos Allende Prieto, Lars Koesterke' - Paola Re Fiorentin - 'Coryn A.L. Bailer-Jones' - 'John E. Norris' - Brian Yanny - Constance Rockosi - 'Heidi J. Newberg' - 'Kyle M. Cudworth' - Kaike Pan - title: 'The SEGUE Stellar Parameter Pipeline. II. 
Validation with Galactic Globular and Open Clusters' --- Introduction ============ The Sloan Extension for Galactic Understanding and Exploration (SEGUE) is one of three key projects (LEGACY, SUPERNOVA SURVEY, and SEGUE) in the current extension of the Sloan Digital Sky Survey, known collectively as SDSS-II. The SEGUE program is in the process of obtaining $ugriz$ imaging of some 3500 square degrees of sky outside of the SDSS-I footprint (Fukugita et al. 1996; Gunn et al. 1998, 2006; York et al. 2000; Stoughton et al. 2002; Abazajian et al. 2003, 2004, 2005; Pier et al. 2003), with special attention being given to scans of lower Galactic latitudes ($|b|$ $<$ 35$^{\circ}$) in order to better probe the disk/halo interface of the Milky Way. SEGUE is also obtaining $R$ $\simeq$ 2000 spectroscopy over the wavelength range 3800 $-$ 9200[Å]{} for some 250,000 stars in 200 selected areas over the sky available from Apache Point, New Mexico. The SEGUE Stellar Parameter Pipeline (hereafter, SSPP) processes the wavelength- and flux-calibrated spectra generated by the standard SDSS spectroscopic reduction pipeline (Stoughton et al. 2002), obtains equivalent widths and/or line indices for 77 atomic or molecular absorption lines, and estimates $T_{\rm eff}$, log $g$, and \[Fe/H\] through the application of a number of approaches. The current techniques employed by the SSPP include a minimum distance method (Allende Prieto et al. 2006), neural network analysis (Bailer-Jones 2000; Willemsen et al. 2005; Re Fiorentin et al. 2007), auto-correlation analysis (Beers et al. 1999), and a variety of line index calculations based on previous calibrations with respect to known standard stars (Beers et al. 1999; Cenarro et al. 2001a,b; Morrison et al. 2003). The SSPP employs five different methods for estimation of $T_{\rm eff}$, eight for estimation of log $g$, and nine for estimation of \[Fe/H\]. The methods used are discussed in detail by Lee et al. (2007a, hereafter Paper I).
The use of multiple methods allows for empirical determinations of the internal errors for each parameter, based on the range of reported values – typical internal errors for stars in the temperature range $4500$ K $\le$ $T_{\rm eff}$ $\le$ 7500 K are $\sim$ 73 K, $\sim$ 0.19 dex, and $\sim$ 0.10 dex, in $T_{\rm eff}$, log $g$, and \[Fe/H\], respectively. Allende Prieto et al. (2007, hereafter Paper III) point out that the internal uncertainties provided by the SSPP underestimate the typical random errors at high signal-to-noise ($S/N$) ratios because most methods in the SSPP make use of similar parameter indicators (e.g., hydrogen lines for effective temperature) and similar atmospheric models. Paper III empirically determines external uncertainties of $\sim$ 130 K, $\sim$ 0.21 dex, and $\sim$ 0.11 dex, for $T_{\rm eff}$, log $g$, and \[Fe/H\], respectively, by comparison with high-resolution spectroscopy ($7000 < R < 45,000$) of brighter SDSS-I/SEGUE stars obtained with 8m$-$10m class telescopes. Somewhat larger errors apply to stars with temperatures near the extremes of the range above. The present study of Galactic globular and open cluster stars tests the SSPP’s ability to derive accurate results for stars with a wide range of temperatures and gravities appropriate for metal-poor and near-solar-metallicity stellar populations in the Galaxy, and demonstrates that the derived metallicity scale is identical for dwarfs and giants. Although the SSPP will continue to evolve in the near future, it has been frozen for now at the version used for obtaining results for stars with suitable data from SDSS Data Release 6 (DR-6; Adelman-McCarthy et al. 2007b). Previous versions of the SSPP have already been used for the analysis of SDSS-I observations. For example, Allende Prieto et al. (2006) report on the application of one of the methods included in the SSPP to some 20,000 F- and G-type stars from SDSS-I DR-3 (Abazajian et al. 2005). Beers et al.
(2006) have compiled a list of over 6000 stars with \[Fe/H\] $< -2.0$ (including several hundred with \[Fe/H\] $< -3.0$), based on application of the present SSPP to some 200,000 stars from SDSS-I DR-5 (Adelman-McCarthy et al. 2007a). Carollo et al. (2007) report on an analysis of the kinematics of relatively bright stars from SDSS-I that have been used as calibration objects during the main survey. In this paper, the second in the SSPP series, we show that estimates of the atmospheric parameters and radial velocities obtained by the SSPP for stars with a reasonable likelihood of membership in previously studied Galactic globular and open clusters are sufficiently accurate to justify the use of the present SSPP parameters for carrying out detailed studies of the halo and thick-disk populations of the Milky Way. In deriving the overall iron abundance for each cluster, we assume it comprises a chemically homogeneous population. In §2, the photometric and spectroscopic data obtained for M 13, M 15, M 2, NGC 2420, and M 67 are described. Section 3 presents the methods used to separate likely cluster members from field stars in the directions toward these clusters. Best estimates of the overall \[Fe/H\] and radial velocity of each cluster are derived in §4. In §5 we compare the SSPP determinations of $T_{\rm eff}$ and log $g$ for selected member stars in each cluster with their expected positions on color-magnitude diagrams. A summary and brief conclusions are provided in §6. Photometric and Spectroscopic Data ================================== Galactic globular and open clusters are nearly ideal testbeds for validation of the stellar atmospheric parameters estimated by the SSPP. In most clusters, it is expected that their member stars were born simultaneously out of well-mixed, uniform-abundance gas at the same location in the Galaxy.
Therefore, with the exception of effects due to post main-sequence evolution, primordial variations in carbon and nitrogen, or contamination from binary companions that have transferred material, the member stars should exhibit very similar elemental abundance patterns. Three of the clusters in our study, M 13, M 15, and M 2, have well-known CN variations that extend to the main-sequence turnoffs (Smith & Briley 2006 for M13; Cohen, Briley, & Stetson 2005 for M15; Smith & Mateo 1990 for M2). However, these abundance variations can be ignored when deriving metallicities from regions of the spectra that do not include CH, CN, or NH features, as is the case with most of our techniques (those that may be affected by the presence of such features are automatically de-selected in the determination of the adopted \[Fe/H\]). True cluster members should exhibit small radial velocity differences with respect to their parent clusters. Furthermore, it is possible to examine theoretical predictions of temperatures and surface gravities for member stars that lie along the cluster main sequence (MS), red giant branch (RGB), or horizontal branch (HB) in color-magnitude diagrams (CMDs). As part of tests of the SEGUE star-selection algorithm (Adelman-McCarthy et al. 2007b) and the SSPP, and during normal SEGUE operation, we have obtained $ugriz$ photometry and medium-resolution (2.3 [Å]{}; $R$ = 2000) spectroscopy for large numbers of stars along lines of sight toward the globular clusters M 13, M 15, and M 2 and the open clusters NGC 2420 and M 67. Below we discuss these photometric and spectroscopic data in more detail. Photometric Data ---------------- The SDSS obtains scans of the sky using the ARC 2.5m telescope on Apache Point, New Mexico. These data are collected in five broad bands ($u, g, r, i, z$) with central wavelengths 3551, 4686, 6166, 7480, and 8932[Å]{} (Fukugita et al. 
1996), respectively, using an imaging array of 30 ($6 \times 5$) 2048 $\times$ 2048 Tektronix CCDs (Gunn et al. 1998). The pixel size is 24 $\mu$m, corresponding to $0.396{''}$ on the sky. A series of software procedures, collectively known as the SDSS PHOTO pipeline (Lupton et al. 2001), processes and reduces the scanned images shortly after data are obtained. As part of these procedures, the instrumental fluxes and astrometric positions (Pier et al. 2003), as well as a determination of whether an object is likely to be stellar (i.e., a *point source*), or not (an *extended* source) are obtained. Afterwards, the photometric data are further calibrated by matching to brighter known standards observed with a smaller calibration telescope on Apache Point (Hogg et al. 2001; Smith et al. 2002; Tucker et al. 2006). The processed photometric data have been shown to exhibit 2% relative and absolute errors (0.02 magnitudes) in $g$, $r$, and $i$, and $3 \%-5 \%$ errors in $u$ and $z$ for all stellar objects brighter than $g = 20$ (Stoughton et al. 2002; Abazajian et al. 2004, 2005; Ivezic et al. 2004). The first-pass photometric data for each of the clusters used in the present study were secured by querying the DR-3 (Abazajian et al. 2005), DR-5 (Adelman-McCarthy et al. 2007a), and DR-6 (Adelman-McCarthy et al. 2007b) releases from the SDSS Catalog Archive Server (CAS). Figure 1 illustrates one of the primary challenges in working with data for clusters obtained with SDSS – the automated PHOTO pipeline (Lupton et al. 2001) was not designed to adequately deal with crowded fields such as the central regions of globular clusters. As a result, essentially all of the stars in this region (which are by definition the most likely ones to be cluster members) do not have reported apparent magnitudes in the SDSS CAS.
To circumvent this limitation as much as possible, we have instead performed crowded field photometry for the center of the clusters, using the DAOPHOT/ALLFRAME suite of programs (Stetson 1987; Stetson 1994) in IRAF[^1]. A full description of the methods used and the photometric measures obtained is provided by Johnson et al. (2007). Briefly, DAOPHOT was run on each image, and the five images of each field (one for each filter) were then simultaneously run through ALLFRAME. DAOGROW (Stetson 1990) was used to derive aperture corrections to the point-spread-function photometry for the SDSS aperture radius of 7.4 arcsecs. Finally, the zeropoint term from the [*tsField*]{} files was applied to calibrate the data. This procedure also permits a check on the techniques used by the SDSS PHOTO pipeline in regions outside the cluster where the areal density of sources on the sky is sufficiently low that it may be used. After completing the above procedures, we combine the results from the PHOTO pipeline with those from the crowded-field photometry to obtain an almost complete catalog of $ugriz$ photometry for stars in the region of each of our program clusters. All photometric data are corrected for extinction and reddening by application of the Schlegel, Finkbeiner, $\&$ Davis (1998) maps. The average reddening ($E(B-V)$) for stars in the direction of these clusters is 0.017, 0.110, 0.045, 0.041, and 0.032 for M 13, M 15, M 2, NGC 2420, and M 67, respectively. Compared with the literature values listed in Table 1, most of the average reddenings of the clusters agree to within about 0.02 mag. Spectroscopic Data ------------------ The spectroscopy discussed in the present paper was obtained during the course of SEGUE tests and normal SEGUE observations. In normal SEGUE operation mode, a pair of plug-plates (referred to as the “bright” and “faint” plates) are obtained over the $3^{\circ}$ field of the ARC 2.5m.
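The extinction correction described above is a per-band scaling of the line-of-sight $E(B-V)$. A minimal sketch, assuming the standard SDSS extinction-to-reddening ratios $R_\lambda$ (the stellar magnitudes below are invented for illustration):

```python
# Dereddening sketch: m0 = m - R_lambda * E(B-V) for each ugriz band.
# The R_lambda values are the Schlegel et al. (1998) coefficients
# conventionally adopted for the SDSS bands; treat them as assumed inputs.
R = {'u': 5.155, 'g': 3.793, 'r': 2.751, 'i': 2.086, 'z': 1.479}

# Mean line-of-sight reddenings quoted in the text for each cluster.
ebv = {'M 13': 0.017, 'M 15': 0.110, 'M 2': 0.045,
       'NGC 2420': 0.041, 'M 67': 0.032}

def deredden(mags, e_bv):
    """Return extinction-corrected magnitudes for the bands present."""
    return {band: m - R[band] * e_bv for band, m in mags.items()}

# Hypothetical g, r magnitudes for a star toward M 15:
star = {'g': 18.50, 'r': 17.90}
star0 = deredden(star, ebv['M 15'])
print(star0)   # g brightens by 3.793 * 0.110 ~ 0.417 mag
```

The same correction applies star by star when the full Schlegel map value at each position is used instead of the cluster mean.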
A total of 640 optical fibers are employed to obtain $R=2000$ spectra for on the order of 600 program stars for each plate (the remaining fibers are used for spectrophotometric and reddening calibration objects and observations of the night sky). The exposure time depends on observation conditions. For a bright plate, exposures are set to achieve a total $(S/N)^{2} >$ 15/1 from the two blue-side CCDs on the SDSS spectrographs; the exposure for a faint plate is set such that a total $(S/N)^{2} >$ 50/1 for all four (red and blue CCDs) on the SDSS spectrographs is achieved. In order to identify and remove cosmic ray hits, each plate must have at least three exposures; the integration time for any single exposure is not longer than 30 minutes. For the purposes of targeting objects on these plates, the boundary between the bright and faint plates is set at $r \sim$ 18.0. The data thus obtained are processed through the SDSS spectroscopic pipeline software (SPECTRO2D and SPECTRO1D), which produces wavelength and flux-calibrated spectra, and also obtains estimates of radial velocities and line indices (Stoughton et al. 2002). Tests of the quality of stellar radial velocities from the SSPP (which uses initial estimates from the SDSS processing pipelines) indicate that precisions better than 5 km s$^{-1}$ are achieved for brighter stars, with zero-point offsets of no more than a few km s$^{-1}$ (Paper III). These errors degrade for fainter stars, as expected. An initial set of candidate member stars of the globular and open clusters studied in the present paper were selected on the basis of photometric and astrometric data (proper motions) from the literature. The central cores of the clusters were not targeted because the PHOTO pipeline does not resolve the very crowded fields into single star detections, and also due to limitations on the separations of the fibers during the spectroscopic follow-up stage.
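The exposure criterion above amounts to accumulating $(S/N)^2$ in quadrature over at least three exposures. A small sketch, with invented per-exposure values:

```python
import math

def total_snr(snr_per_exposure):
    """S/N of a coadd of independent exposures: (S/N)_tot^2 = sum (S/N)_i^2."""
    return math.sqrt(sum(s ** 2 for s in snr_per_exposure))

def meets_criterion(snr_per_exposure, snr2_target, min_exposures=3):
    # A plate needs at least three exposures (for cosmic-ray rejection)
    # and a total (S/N)^2 above the target.
    return (len(snr_per_exposure) >= min_exposures
            and total_snr(snr_per_exposure) ** 2 >= snr2_target)

# Hypothetical per-exposure S/N for a bright plate (target (S/N)^2 > 15):
exposures = [2.4, 2.6, 2.2]
print(total_snr(exposures) ** 2)          # ~ 17.4
print(meets_criterion(exposures, 15))     # True
```

Note that two deep exposures totaling $(S/N)^2 = 32$ would still fail, because the three-exposure minimum for cosmic-ray rejection is a separate requirement.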
Member candidates were primarily selected by plotting a photometric CMD for a given cluster, and choosing stars from regions of this diagram that correspond to locations on the MS turnoff or RGB of the cluster. An additional list of bright stars for M 15 and M 2 with previously available proper motions consistent with membership in the clusters was provided by Cudworth (1976 and private communication) and Cudworth $\&$ Rauscher (1987). Other stars in the fields of these clusters were used to fill spectroscopic fibers using the default SEGUE target selection algorithm (Adelman-McCarthy et al. 2007b). While many of these additional targets turned out to be stars from the general field populations, a significant fraction turned out serendipitously to be members of the clusters. For M 13, three specially designed plates were obtained. Two of the three plates followed the standard SEGUE target selection procedure (Adelman-McCarthy et al. 2007b) of sampling stars with a variety of spectral types based on the SDSS imaging and PHOTO processing. An additional set of likely M 13 members, including several stars that were saturated in the SDSS image ($r < 14.5$) and with coordinates from Cudworth $\&$ Monet (1979) and Cudworth (private communication), was added to the target list with high priority (bumping ordinary SEGUE targets), in order to obtain spectra of several likely giant-branch and horizontal-branch members. In the case of NGC 2420, the stars chosen for spectroscopy were primarily targeted from the SDSS photometry obtained by the PHOTO pipeline, using the normal SEGUE target selection algorithm. Additional stars with apparent magnitudes in the range 14.5 $< g <$ 20.5 that fell within 0.5 degrees from the center of NGC 2420 were also targeted for spectroscopy. However, due to crowding, if two objects were within 55${''}$ of one another, then only one received a fiber. Thus, not every star in the central region of NGC 2420 was targeted. 
There were about 480 objects selected in this way, including a number of non-cluster members that are located in the NGC 2420 field. For M 67, the initial targets came from the SDSS imaging data processed by the PHOTO pipeline. However, for this cluster, many candidate members with positions, magnitudes, and colors from the WEBDA (http://www.univie.ac.at/webda/) catalogs were added to the target lists. The bright targets (with $r < 14$) saturate the SDSS imaging camera, so these were added from the literature (Sanders 1989; Fan et al. 1996). Such bright stars normally saturate a regular SDSS spectroscopic exposure, so they were exposed for shorter times than normal. About 200 very bright stars with 12 $< g <$ 14 were targeted. In total, we obtained SDSS spectroscopy for 1920, 1280, 640, 1280, and 640 targets, including sky spectra and calibration object spectra, in the fields of M 13, M 15, M 2, NGC 2420, and M 67, respectively. The reduced spectra were then processed through the SSPP in order to estimate $T_{\rm eff}$, log $g$, and \[Fe/H\], among other quantities. Table 1 summarizes the global properties of the clusters under consideration in this paper, taken from the compilation of Harris (1996) for the globular clusters and from WEBDA or Gratton (2000) for the open clusters. Radial Velocities ----------------- There are two estimated radial velocities provided by the SDSS spectroscopic pipeline. One is an absorption redshift obtained by cross-correlating the spectra with templates that were obtained from SDSS commissioning spectra (Stoughton et al. 2002). Another comes from matching the spectra with ELODIE template spectra (Prugniel $\&$ Soubiran 2001). In most cases the velocity based on the ELODIE template matches appears to be the best available estimate, as spectra of “quality assurance” stars with multiple measurements show the most repeatable values for this estimator. However, this is not always the case. 
We proceed to select the best available velocity in the following manner. If the velocity determined by comparison with the ELODIE templates has a reported error of 20 km s$^{-1}$ or less, then this velocity is adopted. If the error from the ELODIE template comparison is larger than 20 km s$^{-1}$, and the relative offset between the two radial velocities is less than 40 km s$^{-1}$, we take an average of the two. If none of the criteria above are satisfied, which happens only rarely, and mainly for quite low $S/N$ spectra, or for hot/cool stars without adequate templates, we adopt the radial velocity calculated by a custom routine that examines the wavelengths of a number of prominent absorption features. If none of these methods yields a reasonable estimate of the radial velocity, or the estimate appears spurious (i.e., falls outside of the range $\pm 1000$ km s$^{-1}$), we simply ignore the star in subsequent analyses. A more detailed description of the procedures used for determination of the best available radial velocity, and of zero-point offsets of the radial velocities, can be found in Paper I. Membership Selection from the Spectroscopic Samples =================================================== Owing to an insufficient number of stars with available spectroscopy for each cluster, it is not possible to obtain a well-defined Color Magnitude Diagram (CMD) based solely on spectroscopically confirmed member stars. Thus, we make use of photometric data in the field of each cluster, and describe below how we obtain a relatively clean CMD for individual clusters, and select likely member stars from the spectroscopic data. Likely Member Star Selection for Globular Clusters -------------------------------------------------- One of the primary issues that one needs to address when creating a CMD, or selecting likely member stars, for a star cluster is removal of contamination from field stars. 
In order to approximately isolate the likely cluster members from the field stars we have made use of the CMD mask algorithm described by Grillmair et al. (1995). We illustrate the basic idea by application of this algorithm to the M 13 field shown in Figure 1. We first select all stars inside the estimated tidal radius (25.2$^{'}$; Harris 1996), shown as the innermost green circle in Figure 1. This is regarded as the cluster region. The red dots represent stars with available photometry from the SDSS PHOTO pipeline (Lupton et al. 2001); the black dots are stars with photometry obtained from DAOPHOT. The blue open circles indicate stars with available spectroscopy. We then choose an annulus outside the cluster region, indicated on the Figure as the region between the two black circles, as the field or background region. We next obtain CMDs of each region, spanning $-1.0 \le (g-r)_0 \le 1.5$ and $12 \le g_0 \le 22$, and then subdivide these diagrams such that the size of each sub-grid is 0.2 mag wide in $g_0$ and 0.05 mag wide in $(g-r)_0$ color. The total number of sub-grids for the CMDs in each region is thus 2500 (50$\times$50). Figure 2 shows the resulting CMDs of the cluster (left panel) and field (right panel) regions, overplotted with squares representing the selected sub-grids, obtained as described below. We first calculate the signal-to-noise ($s/n$) in each preliminary sub-grid by application of Eqn. (1) over the entire CMD region shown in Figure 2. Here we assume that the field stars outside the tidal radius are uniformly distributed throughout the annulus area. $$s/n(i,j) = \frac{n_{c}(i,j) - gn_{f}(i,j)}{\sqrt{n_{c}(i,j) + g^{2}n_{f}(i,j)}}. \label{eq1}$$ In the above, $n_{c}$ and $n_{f}$ refer to the number of stars in each sub-grid with color index $i$ and magnitude index $j$, counted within the cluster region and field region, respectively. The parameter $g$ represents the ratio of the cluster area to the field area. 
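As a small sketch of how Eqn. (1) can be evaluated in practice (Python/NumPy; the grid limits follow the text, but the function and variable names are our own illustration, not the authors' code):

```python
import numpy as np

def cell_sn(cluster_gr, cluster_g, field_gr, field_g, g_ratio):
    """Per-sub-grid s/n of Eqn. (1) over the CMD region
    -1.0 <= (g-r)_0 <= 1.5 and 12 <= g_0 <= 22, using cells
    0.05 mag wide in color and 0.2 mag wide in magnitude.
    g_ratio is g, the cluster-to-field area ratio."""
    color_edges = np.linspace(-1.0, 1.5, 51)   # 50 color bins, 0.05 mag each
    mag_edges = np.linspace(12.0, 22.0, 51)    # 50 magnitude bins, 0.2 mag each
    n_c, _, _ = np.histogram2d(cluster_gr, cluster_g,
                               bins=[color_edges, mag_edges])
    n_f, _, _ = np.histogram2d(field_gr, field_g,
                               bins=[color_edges, mag_edges])
    denom = np.sqrt(n_c + g_ratio**2 * n_f)
    with np.errstate(invalid="ignore", divide="ignore"):
        # Empty cells (denom = 0) are assigned s/n = 0.
        sn = np.where(denom > 0.0, (n_c - g_ratio * n_f) / denom, 0.0)
    return sn
```

For example, a cell holding 16 cluster-region stars and 4 field-region stars with $g = 1$ yields $s/n = 12/\sqrt{20} \approx 2.68$.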
The following procedures are followed in order to find the optimal range of colors and magnitudes that correspond to the likely members of each cluster. First, we sort the elements of $s/n(i,j)$ in descending order, so that we obtain a one-dimensional array of $s/n(i,j)$ with index $l$; the array element with the highest $s/n(i,j)$ corresponds to $l=1$. The next step is to obtain star counts in gradually larger regions of the CMDs. The accumulated area is represented as $a_{k} = ka_{l}$, where $a_{l} = 0.01$ mag$^{2}$ is the area of a single sub-grid in the CMD array (the same for all sub-grids), and $k$ is the number of sub-grids combined. Finally, the cumulative signal-to-noise ratio, $S/N(a_{k})$, as a function of $a_{k}$, is calculated from: $$S/N(a_{k}) = \frac{N_{c}(a_{k}) - gN_{f}(a_{k})}{\sqrt{N_{c}(a_{k}) + g^{2} N_{f}(a_{k})}} \label{eq3}$$ where $$N_{c}(a_{k}) = \sum_{l=1}^{k} n_{c}(l), ~~~N_{f}(a_{k}) = \sum_{l=1}^{k} n_{f}(l) \label{eq4}$$ The parameter $n_{c}(l)$ denotes the number of stars within the cluster region having ordered color-magnitude index $l$; $n_{f}(l)$ represents the same quantity for the field region. Based on the maximum value of $S/N(a_{k})$, a threshold value of $s/n$ is picked in order to select high-contrast surface-density areas (i.e., high $s/n$) between the cluster and field regions. These are considered to be the sub-grids that contain likely cluster members. After removing single-star events in areas of the CMDs where the field-star density is low, all stars in sub-grids with $s/n(i,j)$ greater than the threshold value of $s/n$ are selected. These stars are considered as the photometrically likely member stars for a given cluster. The red squares shown in the left panel of Figure 2 are the sub-grids with $s/n$ greater than the threshold value; the corresponding sub-grids in the field region are shown as green squares in the right panel of this Figure. 
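The ranking and cumulative $S/N(a_k)$ steps described above can be sketched as follows (Python/NumPy; the per-cell count arrays $n_c$ and $n_f$ are taken as inputs, and all names are our own, not from the authors' code):

```python
import numpy as np

def sn_threshold(n_c, n_f, g):
    """Sort sub-grids by the per-cell s/n of Eqn. (1) in descending
    order (index l), accumulate the counts in that order, and return
    the per-cell s/n value at which the cumulative S/N(a_k) peaks.
    Cells at or above this threshold hold the likely members."""
    with np.errstate(invalid="ignore", divide="ignore"):
        denom = np.sqrt(n_c + g**2 * n_f)
        # Empty cells sort to the end with s/n = -inf.
        sn = np.where(denom > 0.0, (n_c - g * n_f) / denom, -np.inf)
    order = np.argsort(sn.ravel())[::-1]      # highest s/n first
    N_c = np.cumsum(n_c.ravel()[order])       # N_c(a_k)
    N_f = np.cumsum(n_f.ravel()[order])       # N_f(a_k)
    with np.errstate(invalid="ignore", divide="ignore"):
        big_sn = (N_c - g * N_f) / np.sqrt(N_c + g**2 * N_f)
    k_best = int(np.nanargmax(big_sn))        # a_k that maximizes S/N
    return sn.ravel()[order][k_best]
```

Here the threshold is simply the $s/n$ of the last sub-grid included at the cumulative maximum; the single-star cleaning step described in the text is omitted from the sketch.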
Figure 3 depicts the CMD of the selected likely members of M 13 from the photometric data, shown as black dots. The same procedures are performed to differentiate the likely member stars of M 15 and M 2 from the photometric sample. We now proceed to select the stars that are likely members of the globular clusters from the available spectroscopic sample. This step begins by selection of the stars within the cluster tidal radii that pass the photometric criterion for membership, based on their location on the cluster CMDs according to the algorithm described above. Figure 3 displays the cleaned CMD of M 13, overplotted with the likely members from the spectroscopic sample (shown as red circles). The same procedures are carried out to identify likely member stars of M 15 and M 2 from their spectroscopic data. Based on these cuts, at this stage of the analysis there are 296 (338) likely members identified for M 13, 124 (160) for M 15, and 21 (22) for M 2. In the above, the first listed numbers indicate the stars with available estimates of \[Fe/H\] from the SSPP, while the quantities in parentheses represent the number of stars with available radial velocities (RVs). Additional cuts, based on the derived metallicity estimates and RVs, are described in §4. Likely Member Star Selection for Open Clusters ---------------------------------------------- Since the fields of nearby open clusters are not as crowded as those of globular clusters, the signal-to-noise ratio between the cluster region and the background region is not sufficiently high to select likely cluster members by means of the CMD mask algorithm. As an alternative, we first obtain a fiducial line for an open cluster (including its main sequence and sub-giant branch, if it exists) by use of a robust polynomial fitting procedure. As an example, Figure 4 shows the CMD of the NGC 2420 field inside a radius of 0.3 degrees from the center of the cluster. According to the open cluster catalog of Dias et al. 
(2002), the apparent diameter of this cluster is only 5$^{'}$ on the sky, but we prefer to adopt a 20$^{'}$ radius, in order to include as many member stars as possible. The red line is the fiducial line derived from the robust polynomial fit. The blue lines are the upper and lower limits (fiducial $\pm$ 0.06 mag in $(g-r)_{0}$), determined by eye. Stars from the spectroscopic sample that fall within the 20$^{'}$ radius and inside the blue limit lines in Figure 4 are identified. A similar procedure is applied to M 67, except that a 30$^{'}$ radius (apparent diameter of 25$^{'}$; Dias et al. 2002) and fiducial $\pm$ 0.10 mag in $(g-r)_{0}$ are used. Based on this selection method, there are 195 (234) likely members for NGC 2420 and 61 (64) for M 67, respectively. The first listed numbers indicate the stars with available estimates of \[Fe/H\] from the SSPP, while the quantities in parentheses represent the number of stars with available radial velocities (RVs). Additional cuts, based on the derived metallicity estimates and RVs, are described below. Determination of Overall Metallicities and Radial Velocities of the Clusters ============================================================================ In order to investigate the accuracy of our derived metallicities and RVs, we now consider the global distribution of these parameters obtained from the current version of the SSPP for the likely cluster members. In this section, we describe a method to best isolate “true member stars” from the spectroscopic samples described above. We then use these subsamples to determine our best estimates of the overall metallicities and RVs of the clusters considered in this study. Selection of True Members ------------------------- We establish the criteria for carrying out metallicity and RV cuts as follows. The left panel of Figure 5 illustrates the \[Fe/H\] distribution for three different subsamples of stars. 
The first, shown as the black dot-dashed line, represents the distribution of derived metallicities for the 1547 stars with available estimates of \[Fe/H\] along the line of sight to M 13. Note that this distribution includes numerous stars that cannot be considered members of the cluster, as they cover a much wider range of \[Fe/H\] than might be expected if they were drawn exclusively from the cluster member population. The dot-dashed line in the right panel of Figure 5 shows the RV distribution of these same stars. The red dashed line in the left panel of Figure 5 is the distribution of \[Fe/H\] for the 296 likely members selected from the spectroscopic sample as described above. We obtain a Gaussian fit to the [*highest peak*]{} of the distribution of these stars (solid blue line in Figure 5), and obtain an estimate of the mean ($<$\[Fe/H\]$>$) and standard deviation ($\sigma$) for this distribution. Similar fits are obtained for the distribution of RVs for the likely members shown in the right panel of Figure 5. On the basis of these fits, we now trim likely outliers by application of a 2-$\sigma$ clipping procedure, for example: $$\rm <[Fe/H]> - 2\sigma_{\rm [Fe/H]} \leq \rm [Fe/H]_{\star} \leq \rm<[Fe/H]> + 2\sigma_{\rm [Fe/H]} \label{eq6}$$ $$\rm <RV> - 2\sigma_{\rm RV} \leq \rm RV_{\star} \leq \rm<RV> + 2\sigma_{\rm RV} \label{eq7}$$ In the above, \[Fe/H\]$_{\star}$ and RV$_{\star}$ correspond to the values of these parameters for each star under consideration. The stars surviving both of these clips are considered true cluster members for the purpose of this study. Note that at no point have we considered the external “known” values of \[Fe/H\] and RV for the clusters as a whole. Based on the application of these membership cuts, we now have a total of 169 stars identified as true members of M 13, 63 stars as true members of M 15, 9 stars as true members of M 2, 130 stars as true members of NGC 2420, and 51 stars as true members of M 67. 
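A minimal sketch of the 2-$\sigma$ clips (Python/NumPy; as a simplification, the Gaussian fit to the highest peak is replaced here by the median and standard deviation of the likely-member sample, so this is only an approximation of the published procedure):

```python
import numpy as np

def two_sigma_members(feh, rv):
    """Keep stars whose [Fe/H] and RV both lie within 2 sigma of the
    sample peak.  The peak location and width of the Gaussian fit are
    approximated by the median and standard deviation of the input
    likely-member sample (a simplification of the paper's fit)."""
    feh = np.asarray(feh, dtype=float)
    rv = np.asarray(rv, dtype=float)
    ok_feh = np.abs(feh - np.median(feh)) <= 2.0 * np.std(feh)
    ok_rv = np.abs(rv - np.median(rv)) <= 2.0 * np.std(rv)
    return ok_feh & ok_rv   # boolean mask of "true members"
```

Stars failing either cut are dropped, mirroring the requirement that a true member survive both the metallicity and the RV clip.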
The distributions of \[Fe/H\] and RV for the surviving members of M 13 are shown as green histograms in the left and right panels of Figure 5, respectively. Similar plots for M 15, M 2, NGC 2420, and M 67 are shown in Figures 6, 7, 8, and 9, respectively. The distribution of \[Fe/H\] for the selected true member stars of each cluster, as a function of $T_{\rm eff}$, is shown in Figure 10. Table 2 summarizes the results of the above exercise. Column (1) lists the cluster name. Columns (2) and (3) are the lower and upper limits for the 2-$\sigma$ cuts on \[Fe/H\], respectively. Columns (4) and (5) are the corresponding adopted limits on RV used for these cuts. Tables 4$-$8 list the observed and derived quantities for all of the individual stars considered as true member stars in the analysis of each cluster. The columns are as defined in the table notes for Table 4. Determination of Overall Estimates of Mean Cluster Metallicity and Radial Velocity ---------------------------------------------------------------------------------- We now obtain final estimates of the cluster metallicities and RVs based on Gaussian fits to the surviving true member stars for each cluster, as shown by the blue curves in Figures 5$-$9. Table 2 summarizes these determinations. The mean metallicity and 1-$\sigma$ spread of the metallicities of the true member stars are listed in columns (6) and (7), respectively. Similar quantities for the RVs are listed in columns (8) and (9). Column (10) lists the total number of true member stars associated with each cluster, based on our analysis. External estimates of the metallicities and RVs for these clusters, adopted from the Harris (1996) compilation for M 13, M 15, and M 2 and from WEBDA (and references therein) for NGC 2420 and M 67, are listed in columns (11) and (12). 
Column (13) lists metallicity estimates for these clusters obtained from high-resolution spectroscopy of a limited number of brighter stars by Kraft & Ivans (2003) for M 15 and M 13, Ivans (private communication) for M 2, and Gratton (2000) for NGC 2420 and M 67. For M 13, our estimate of the mean abundance, $<$\[Fe/H\]$>$ = $-1.56$, is very close to the Harris (1996) estimate (\[Fe/H\]$_{\rm H}$ = $-$1.54). However, the recent study of Kraft & Ivans (2003) reported a revised cluster abundance for M 13, derived from high-resolution spectroscopy of 28 giants. Their value indicates a metallicity for M 13 that is a bit lower than that given by Harris, \[Fe/H\]$_{\rm HR}$ = $-$1.63, and is lower by 0.07 dex than our estimate. Cohen & Mel$\acute{\rm e}$ndez (2005) reported \[Fe/H\] = $-1.50$ from a high-resolution ($R$ = 35,000) analysis of a sample of 25 stars, consisting of stars from the giant branch to near the main-sequence turnoff. Our derived spread in the metallicities of the M 13 true member stars (0.16 dex) is also satisfyingly low, especially considering the wide range of temperatures for true members that are considered here. Our estimate of the mean radial velocity, $<$RV$>$ = $-$245.1 km s$^{-1}$, with a standard deviation of 8.7 km s$^{-1}$, is in good agreement with that given by Harris ($-$245.6 km s$^{-1}$). It is important to note that, as mentioned in Paper I, we have already added $+$7.3 km s$^{-1}$ to all DR-6 (Adelman-McCarthy et al. 2007b) stellar radial velocities. Before the adjustment of this offset, an average offset of $-$8.6 km s$^{-1}$ for M 13 and $-$6.8 km s$^{-1}$ for M 15 is obtained. Thus, together with an offset of $-$6.6 km s$^{-1}$ that was obtained from a preliminary result of a high-resolution spectroscopic analysis of SDSS-I/SEGUE stars (Paper III) before DR-6 (Adelman-McCarthy et al. 2007b), we derived an average offset of $-$7.3 km s$^{-1}$. 
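The averaging of the three zero-point offset estimates quoted above can be checked directly (values taken from the text):

```python
# Zero-point offset estimates (km/s) quoted in the text:
offsets = [-8.6,   # M 13
           -6.8,   # M 15
           -6.6]   # preliminary high-resolution analysis (Paper III)
mean_offset = sum(offsets) / len(offsets)
print(round(mean_offset, 1))   # -7.3, hence +7.3 km/s is added to the DR-6 RVs
```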
However, a recent high-resolution spectroscopic analysis of SDSS-I/SEGUE stars indicates an offset of about $-$6.9 km s$^{-1}$, resulting in an average of $-$7.4 km s$^{-1}$. Hence, in future data releases (e.g., DR-7), this very minor difference might be reflected. For the analysis of our clusters, all radial velocities have been corrected by $+$7.3 km s$^{-1}$, in order to be consistent with DR-6. Our estimate of the mean abundance of M 15, $<$\[Fe/H\]$>$ = $-2.12$, is close to the value listed by Harris (\[Fe/H\]$_{\rm H}$ = $-$2.26). While Kraft $\&$ Ivans (2003) obtained \[Fe/H\]$_{\rm HR}$ = $-$2.42, based on high-resolution spectroscopy for nine giants in this cluster, Otsuki et al. (2006) reported \[Fe/H\] = $-$2.29 from an analysis of high-resolution spectra for six giants belonging to this cluster. Our derived spread in the metallicities of true member stars in M 15 is quite low (0.14 dex). Our estimate of the mean radial velocity, $<$RV$>$ = $-$107.4 km s$^{-1}$, with a standard deviation of 10.5 km s$^{-1}$, agrees very well with the Harris (1996) value ($-$107.0 km s$^{-1}$). There are only a very small number of true member stars (9) for M 2. Their average metallicity, $<$\[Fe/H\]$>$ = $-$1.58, is similar to the Harris (1996) value (\[Fe/H\]$_{\rm H}$ = $-$1.62), and is in very good agreement with the value obtained by Ivans (private communication) (\[Fe/H\]$_{\rm HR}$ = $-$1.56). The estimated spread in our derived metallicities, 0.08 dex, is quite small. Our estimate of the mean radial velocity, $<$RV$>$ = $+$0.4 km s$^{-1}$, with a standard deviation of 7.7 km s$^{-1}$, is higher (by about 6 km s$^{-1}$) than that provided by Harris ($-$5.3 km s$^{-1}$). Clearly, for the purposes of validation of the SSPP, it would be highly desirable to obtain a larger number of member stars in M 2; a new plate (640 spectra) will be obtained in the near future. 
Note that M 2 presents a special challenge, since its mean metallicity is quite close to that expected for members of the field halo population, while its radial velocity is buried in the peak of foreground disk stars. Our stringent criterion for true cluster members should remain effective, however, since few non-cluster members will fulfill both the RV and metallicity criteria. There are 130 true member stars selected for the open cluster NGC 2420. The mean iron abundance of the selected true member stars is $<$\[Fe/H\]$>$ = $-$0.46, which is in excellent agreement with the value (\[Fe/H\] = $-$0.44) determined by Gratton (2000) from high-resolution spectroscopy of one member star. Friel $\&$ Janes (1993) reported \[Fe/H\] = $-$0.42 for nine member stars, based on medium- and low-resolution spectroscopic data. Friel et al. (2002) determined \[Fe/H\] = $-$0.38 $\pm$ 0.07, based on medium-resolution spectra of 20 member stars. Most of these literature values are within the spread of our derived value. The derived spread in the metallicities of true member stars (0.12 dex) is very low. The radial velocity for NGC 2420 listed in Table 2, ($+74.0$ km s$^{-1}$), is an average of the values $+84.0$ km s$^{-1}$, $+71.1$ km s$^{-1}$, and $+67.0$ km s$^{-1}$ from Friel (1989), Scott et al. (1995), and Rastorguev et al. (1999), respectively. This value agrees very well with our derived estimate of $+74.8$ km s$^{-1}$, with a standard deviation of $6.2$ km s$^{-1}$. For M 67, 51 stars are identified as true member stars. A mean metallicity of $<$\[Fe/H\]$> = -$0.35 is derived, with a small spread of 0.15 dex. This derived $<$\[Fe/H\]$>$ differs by 0.37 dex from that of Gratton (2000), \[Fe/H\] = $+$0.02, who derived this value from a high-resolution study of one member star. Randich et al. (2007) analyzed 10 member stars of this cluster, based on high-resolution ($R \sim 45,000$) spectroscopy, and derived \[Fe/H\] = $+0.03 \pm 0.01$. 
Yong et al. (2005) determined \[Fe/H\] = $+0.02 \pm 0.03$ from a high-resolution spectroscopic analysis of three member stars. However, based on medium-resolution spectra of 25 members, Friel et al. (2002) reported \[Fe/H\] = $-0.15 \pm 0.05$. Other catalogs of open clusters (e.g., Twarog et al. 1997; Chen et al. 2003) also report a solar metallicity for this cluster. The literature values based on high-resolution spectroscopic analyses clearly suggest that the present SSPP tends to underestimate \[Fe/H\] by about 0.3 dex for near-solar-metallicity stars. This is perhaps related to the difficulties arising from the strong atomic and molecular lines (and possibly unreliable synthetic spectra) for metal-rich stars. As mentioned by Gratton (2000), it might be desirable to re-calibrate the metallicity scale used for the analysis of medium-resolution spectra of stars with near-solar iron abundances, to better match the results obtained from high-resolution analyses. The radial velocity of $+32.9$ km s$^{-1}$ for M 67 from the literature listed in Table 2 (the average of the values from Scott et al. 1995, Friel $\&$ Janes 1993, and Rastorguev et al. 1999) agrees with our derived mean radial velocity of $+34.9$ km s$^{-1}$ within the standard deviation of our measurements, 5.6 km s$^{-1}$. Thus, taking into account only the scatter in the metallicities and radial velocities calculated from the members of each cluster, we are able to derive estimates of typical external uncertainties for the SSPP values, $\sigma (\rm [Fe/H])$ = 0.13 dex, and $\sigma (\rm RV)$ = 7.7 km s$^{-1}$ (after 5$\sigma$ clipping is applied). A Comparison of Derived $T_{\rm eff}$ and log $g$ for True Cluster Members with Color-Magnitude Diagrams ======================================================================================================== In the previous section, we have considered the accuracy with which the SSPP obtains estimates of metallicity and radial velocity. 
We now consider the accuracy with which the SSPP obtains estimates of effective temperatures, $T_{\rm eff}$, and surface gravities, log $g$. One excellent “global” test of these estimates is to examine the locations of the true member stars on the observed CMD (based on the totality of likely photometric member data) for each cluster. One can also compare with corresponding theoretical CMDs. Figures 11$-$15 show plots of the SSPP-estimated temperatures and gravities for true member stars superposed on the photometrically cleaned CMDs for each of our clusters. Note that in order to obtain the theoretical temperature scales (shown along the top of the left-hand panels in each Figure), we make use of a linear relation between $(g-r)_0$ color and $T_{\rm eff}$ by performing a least-squares fit in this plane to the theoretical models of Girardi et al. (2004). We choose the isochrones from this study that have the closest \[Fe/H\] to the derived metallicity of each cluster, and adopt an age of 13.5 Gyr for the globular clusters, 2.2 Gyr for NGC 2420, and 4.3 Gyr for M 67 (adopting the ages from WEBDA for the open clusters). A similar procedure is applied for transforming the $g_0$ magnitude to a theoretical log $g$ scale (shown along the far right axis in the right-hand panels of each Figure). Distance moduli from the Harris (1996) compilation for the globular clusters and WEBDA for the open clusters (also listed in Table 1 of this study) are used in order to compute apparent magnitudes. In these Figures, we plot the SSPP-estimated parameters for true member stars in different colors, corresponding to different ranges of temperature and surface gravity (as shown in the legend for each plot). Each color represents a range of 500 K in $T_{\rm eff}$ and 0.5 dex in log $g$. The effective temperatures estimated by the SSPP appear to be in excellent agreement for most of the true member stars, with only a few exceptions. 
Such stars could either reflect outright errors in the SSPP predictions, or could simply be foreground/background stars that survived the various membership cuts we have applied. Inspection of the Figures also reveals the presence of a few stars close to the main-sequence TO in M 13 and M 15 that appear to have slightly lower SSPP-estimated log $g$ than expected from the theoretical scale. The surface gravities of stars along the RGB appear to be very well estimated. Such behavior is perhaps to be expected, since the stars close to the TO region are at the low end of the $S/N$ range that we accept for analysis, and thus are subject to greater errors in the determination of their atmospheric parameters. The RGB stars are among the brightest, and hence are likely to have the best-determined estimates. Inspection of Figure 14 for NGC 2420 indicates that gravity estimates for most of the main-sequence stars are well-estimated by the SSPP, with the exception of the faintest stars. These stars have only low $S/N$ spectra available, resulting in higher uncertainties in determinations of their surface gravities. It should also be recalled that surface gravity is a difficult parameter to estimate, especially from spectra of the resolving power obtained by the SDSS. Overall, we are pleased to see as good a behavior in the estimates of this parameter as is demonstrated in Figures 11$-$15. In addition, using the derived relations between $(g-r)_0$ and $T_{\rm eff}$, and $g_0$ and log $g$ from the isochrones, we predicted $T_{\rm eff}$ and log $g$ from the observed $(g-r)_0$ and $g_0$, respectively. Table 3 lists the averages and standard deviations of the residuals of the effective temperatures and surface gravities between the SSPP estimates and the calculated values. Even though we have employed a simple relationship between $(g-r)_0$ and $T_{\rm eff}$, we see good agreement between the SSPP estimates and the theoretical values in $T_{\rm eff}$. 
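The simple linear color-temperature relation used above can be sketched as follows (Python/NumPy; the isochrone points in the example are invented for illustration and are not values from Girardi et al. 2004):

```python
import numpy as np

def linear_color_teff(iso_gr, iso_teff):
    """Least-squares linear fit T_eff = a*(g-r)_0 + b to isochrone
    points, as used to place a theoretical temperature scale on the
    CMD axis."""
    a, b = np.polyfit(np.asarray(iso_gr, dtype=float),
                      np.asarray(iso_teff, dtype=float), 1)
    return lambda gr0: a * gr0 + b

# Invented isochrone points, purely for illustration:
teff_of = linear_color_teff([0.2, 0.4, 0.6], [6500.0, 5900.0, 5300.0])
print(round(teff_of(0.3)))   # about 6200 K for (g-r)_0 = 0.3
```

An analogous one-dimensional fit of $g_0$ against isochrone log $g$ would supply the theoretical gravity scale.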
As expected, we notice a rather large offset and scatter in the gravity, indicating that a more complex function is needed; nevertheless, the scatter is still within each bin size (0.5 dex) in Figures 11$-$15. M 2 exhibits a larger scatter in both $T_{\rm eff}$ and log $g$ than the other clusters, owing to the small number of member stars selected. Summary and Conclusions ======================= Based on photometric and spectroscopic data reported in SDSS-I and SDSS-II/SEGUE, we have examined estimates of stellar atmospheric parameters and heliocentric radial velocities obtained by the SEGUE Stellar Parameter Pipeline (SSPP) for likely members of three Galactic globular clusters, M 13, M 15, and M 2, and two open clusters, NGC 2420 and M 67, and compared them with external estimates for each cluster as a whole. From the derived scatters in the metallicities and radial velocities obtained for the likely members of each cluster, we quantify the typical external uncertainties of the SSPP-determined values, $\sigma (\rm [Fe/H])$ = 0.13 dex, and $\sigma (\rm RV)$ = 7.7 km s$^{-1}$, respectively. These uncertainties apply for stars in the range of 4500 K $\le T_{\rm eff} \le 7500$ K and $2.0 \le \log g \le 5.0$, at least over the metallicity interval spanned by the clusters studied ($-2.3 \le {\rm [Fe/H]} < -0.4$). Therefore, the metallicities and radial velocities obtained by the SSPP appear sufficiently accurate to be used for studies of the kinematics and chemistry of the metal-poor and moderately metal-rich stellar populations in the Galaxy. We have also confirmed that $T_{\rm eff}$ and log $g$ are sufficiently well-determined by the SSPP to distinguish between different luminosity classes through a comparison with theoretical predictions. 
A comparison of the analysis of the available high-resolution spectroscopy of SDSS-I/SEGUE stars (Paper III) with the SSPP predictions indicates that the uncertainty in radial velocities adopted by the SSPP is no more than 5 km s$^{-1}$ (after adjusting for an empirical offset of +7.3 km s$^{-1}$). The empirically determined precisions in estimated atmospheric parameters are $\sim$ 130 K for effective temperature, $\sim$ 0.21 dex for surface gravity, and $\sim$ 0.11 dex for \[Fe/H\]. These errors apply to the brightest stars obtained by SDSS-I/SEGUE observations, in the range $14.0 \le g \le 15.5$, and are expected to degrade somewhat for fainter stars. We also found that the SSPP tends to underestimate \[Fe/H\] for near-solar-metallicity stars (represented by members of M 67 in this study), by $\sim$ 0.3 dex. In future papers we will compare the predictions of the SSPP with intermediate-metallicity clusters (\[Fe/H\] $\sim$ $-$0.7) and with additional near-solar-metallicity populations, as sampled by metal-rich globular clusters and nearby open clusters. Additional metal-poor clusters will also be examined. Further refinements in the SSPP, which hopefully will be better able to recover accurate abundances for near-solar-metallicity stars, are anticipated. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. 
The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. Y.S.L., T.C.B., and T.S. acknowledge partial funding of this work from grant PHY 02-16783: Physics Frontiers Center / Joint Institute for Nuclear Astrophysics (JINA), awarded by the U.S. National Science Foundation. NASA grants (NAG5-13057, NAG5-13147) to C.A.P. are gratefully acknowledged. J.E.N. acknowledges support from Australian Research Council Grant DP0663562. C.B.J. and P.R.F. acknowledge support from the Deutsche Forschungsgemeinschaft (DFG) grant BA2163.

Abazajian, K., et al. 2003, , 126, 2081
Abazajian, K., et al. 2004, , 128, 502
Abazajian, K., et al. 2005, , 129, 1755
Adelman-McCarthy, J. K., et al. 2007a, , in press
Adelman-McCarthy, J. K., et al. 2007b, , accepted
Allende Prieto, C., et al. 2007, , submitted (Paper III)
Allende Prieto, C., Beers, T. C., Wilhelm, R., et al. 2006, , 636, 804
Bailer-Jones, C. A. L. 2000, , 357, 197
Beers, T. C., et al. 2006, BAAS, 38, 168.08
Beers, T. C., Rossi, S., Norris, J. E., Ryan, S. G., $\&$ Shefler, T. 1999, , 506, 892
Carollo, D., et al. 2007, Nature, submitted (astro-ph/0706.3005)
Cenarro, A. J., Cardiel, N., Gorgas, J., Peletier, R. F., Vazdekis, A., $\&$ Prada, F. 2001a, , 326, 959
Cenarro, A.
J., Gorgas, J., Cardiel, N., Pedraz, S., Peletier, R. F., $\&$ Vazdekis, A. 2001b, , 326, 981
Chen, L., Hou, J. L., $\&$ Wang, J. J. 2003, , 125, 1397
Cohen, J. G., Briley, M. M., $\&$ Stetson, P. B. 2005, , 130, 1177
Cudworth, K. M. 1976, , 81, 519
Cudworth, K. M. $\&$ Monet, D. G. 1979, , 84, 774
Cudworth, K. M. $\&$ Rauscher, B. 1987, , 93, 856
Dias, W. S., Alessi, B. S., Moitinho, A., $\&$ Lepine, J. R. D. 2002, , 389, 871
Fan, X., et al. 1996, , 112, 628
Friel, E. D. 1989, , 101, 244
Friel, E. D. $\&$ Janes, K. A. 1993, , 267, 75
Friel, E. D., Janes, K. A., Tavarez, M., Scott, J., et al. 2002, , 124, 2693
Fukugita, M., Ichikawa, T., Gunn, J. E., Doi, M., Shimasaku, K., $\&$ Schneider, D. P. 1996, , 111, 1748
Girardi, L., Grebel, E. K., Odenkirchen, M., $\&$ Chiosi, C. 2004, , 422, 205
Gratton, R. 2000, in Stellar Clusters and Associations: Convection, Rotation, and Dynamos, ASP Conference Series (eds. R. Pallavicini, G. Micela, & S. Sciortino), 198, p. 225
Grillmair, C. J., Freeman, K. C., Irwin, M., $\&$ Quinn, P. J. 1995, , 109, 2553
Gunn, J. E., et al. 1998, , 116, 3040
Gunn, J. E., et al. 2006, , 131, 2332
Harris, W. E. 1996, , 112, 1487
Hogg, D. W., Finkbeiner, D. P., Schlegel, D. J., $\&$ Gunn, J. E. 2001, , 122, 2129
Ivezic, Z., et al. 2004, Astron. Nach., 325, 583
Johnson, J. A., et al. 2007, in preparation
Kraft, R. P. $\&$ Ivans, I. I. 2003, , 115, 143
Lee, Y. S., et al. 2007a, , submitted (Paper I)
Lupton, R., et al. 2001, in ASP Conf. Ser. 238, Astronomical Data Analysis Software and Systems X, ed. F. R. Harnden, Jr., F. A. Primini, and H. E. Payne (San Francisco: Astr. Soc. Pac.), p. 269
Morrison, H. L., Norris, J., Mateo, M., et al. 2003, , 125, 2502
Moultaka, J., Ilovaisky, S. A., Prugniel, P., $\&$ Soubiran, C. 2004, , 116, 693
Otsuki, K., Honda, S., Aoki, W., Kajino, T., $\&$ Mathews, G. 2006, , 641L, 117
Pier, J. R., Munn, J. A., Hindsley, R. B., Hennessy, G. S., Kent, S. M., Lupton, R. H., $\&$ Ivezic, Z. 2003, , 125, 1559
Prugniel, Ph.
$\&$ Soubiran, C. 2001, , 369, 1048
Randich, S., Sestito, P., Primas, F., Pallavicini, R., $\&$ Pasquini, L. 2006, , 450, 557
Rastorguev, A. S., Glushkova, E. V., Dambis, A. K., $\&$ Zabolotskikh, M. V. 1999, Astron. Letters, 25, 689
Re Fiorentin, P., Bailer-Jones, C. A. L., Lee, Y. S., et al. 2007, , 467, 1373
Sanders, W. L. 1989, Rev. Mex. Astron. Astrofis., 17, 31
Schlegel, D. J., Finkbeiner, D. P., $\&$ Davis, M. 1998, , 500, 525
Scott, J. E., Friel, E. D., $\&$ Janes, K. A. 1995, , 109, 1706
Smith, G. H. $\&$ Briley, M. M. 2006, , 118, 740
Smith, G. H. $\&$ Mateo, M. 1990, , 353, 533
Smith, J. A., et al. 2002, , 123, 2121
Stetson, P. B. 1987, , 99, 191
Stetson, P. B. 1990, , 102, 932
Stetson, P. B. 1994, , 106, 250
Stoughton, C., et al. 2002, , 123, 485
Tucker, D., et al. 2006, AN, 327, 821
Twarog, B. A., Ashman, K. M., $\&$ Anthony-Twarog, B. J. 1997, , 114, 2556
Willemsen, P. G., Hilker, M., Kayser, A., $\&$ Bailer-Jones, C. A. L. 2005, , 436, 379
Yong, D., Carney, B. W., $\&$ Teixera de Almeida, M. L. 2005, , 130, 597
York, D. G., et al. 2000, , 120, 1579

![Color-Magnitude Diagrams of the M 13 stars inside the tidal radius (left panel), and in the field region (right panel), shown as black dots. The selected sub-grids from the $S/N$ cut are shown as red squares in the left panel and green squares in the right panel. These selected sub-grids are used in the analysis.](f2.eps)

![\[Fe/H\] and radial velocity distributions for stars in the direction of M 13. Gaussian fits (blue solid curves) to the distribution of the selected true members, shown as green histograms, are over-plotted.](f5.eps)

![Same as Fig. 5 but for M 15.](f6.eps)

![Same as Fig. 5 but for M 2.](f7.eps)

![Same as Fig. 5 but for NGC 2420.](f8.eps)

![Same as Fig. 5 but for M 67.](f9.eps)

![Distribution of \[Fe/H\] as a function of effective temperature for selected true member stars of M 13, M 15, M 2, NGC 2420, and M 67.
The mean \[Fe/H\] determined for each cluster from these estimates is shown as the blue dashed line; the red solid line represents the adopted literature value in each panel.](f10.eps)

![Temperature and gravity distributions of the selected true member stars of M 13. Each color represents a temperature range of width 500 K in the left-hand panel and a surface gravity range of width 0.5 dex in the right-hand panel. The temperature scale at the top of the left-hand panel comes from a linear relation between $(g-r)_0$ color and $T_{\rm eff}$, obtained from a least-squares fit to the theoretical models of Girardi et al. (2004). A similar procedure is used to transform the $g_0$ magnitude to the theoretical log $g$ scale on the ordinate of the right-hand panel.](f11.eps)

![Same as Fig. 11 but for M 15.](f12.eps)

![Same as Fig. 11 but for M 2.](f13.eps)

![Same as Fig. 11 but for NGC 2420.](f14.eps)

![Same as Fig. 11 but for M 67.](f15.eps)

[^1]: IRAF is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc. under cooperative agreement with the National Science Foundation.
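The color-to-temperature mapping described in the caption of Figure 11 is a linear least-squares fit of $T_{\rm eff}$ against $(g-r)_0$ over points drawn from theoretical models. A minimal sketch of such a fit; the (color, temperature) pairs below are invented for illustration and are not values taken from Girardi et al. (2004):

```python
import numpy as np

# Hypothetical (color, temperature) pairs standing in for points read off
# a theoretical isochrone; the actual fit would use Girardi et al. (2004) models.
g_minus_r = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
teff      = np.array([6900.0, 6300.0, 5800.0, 5350.0, 4950.0])

# Linear least-squares fit: T_eff = a * (g-r)_0 + b
a, b = np.polyfit(g_minus_r, teff, deg=1)

def color_to_teff(color):
    """Map a dereddened (g-r)_0 color onto the fitted temperature scale."""
    return a * color + b

print(f"T_eff at (g-r)_0 = 0.4: {color_to_teff(0.4):.0f} K")
```

The same recipe, with $g_0$ magnitude in place of color, would give the log $g$ axis transformation mentioned in the caption.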